Testing Python AWS calls with Moto

In my previous post, Writing Tests For Your Python Project, I started writing tests for my Python code but then ran out of options because I had completed all of the tests that didn’t involve calls to the AWS API. Now we’ll begin testing Python AWS calls with Moto. You can find additional details about Moto in their documentation. You’ll also want to review the list of Implemented Services in the Moto documentation to make sure the API endpoints and methods you need are supported. The good news is that there are very few unsupported methods. The bad news is that describe_export_tasks for the logs endpoint is not supported, and we use it in our code. Because it isn’t supported, we won’t create a test for it in this example.

I’ve previously explained the purpose of this blog: I want to document my struggles and the random things I’ve been able to build. I personally dislike reading an article and seeing that someone just magically came up with the answer to their problems. This post shows the process I took in testing Python AWS calls with Moto, including some of the struggles I went through to get everything to test successfully.

Adding Moto

This is pretty easy as Moto is available via pip. You can just run pip install moto to install the module into your environment.
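For example, from inside the virtual environment or container where your tests run:

pip install moto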

In order to add Moto to your code, the syntax is pretty straightforward: you just import the modules you’d like to use in the form of

from moto import mock_<AWS_Service>

where <AWS_Service> is the AWS service you’d like to mock. You can figure out which service to use by looking at your boto3 definitions. In our create_export_task function, we have a boto3 client definition that looks like

def create_export_task(
    group_name, from_date, to_date, s3_bucket_name, s3_bucket_prefix
):
    logging.info("Creating CloudWatch Log Export Task to S3")
    try:
        client = boto3.client("logs")
        response = client.create_export_task(
...

In this case, we’re using the logs service from AWS, so we’ll need to mock the logs service in our import like this

from moto import mock_logs

Using Moto for Our Exception

Now that we know which service to mock, we’ll start simple by first mocking an exception from the API call. We add the following lines to our testing code.

...
from moto import mock_logs
import pytest
import re
...

    @mock_logs
    def test_create_export_task_exception(self):
        with pytest.raises(Exception, match="Failed to Create Export Task"):
            export_task_id = cloudwatch_to_s3.create_export_task(
                group_name="group_name",
                from_date="start_date_ms",
                to_date="end_date_ms",
                s3_bucket_name="s3_bucket_name",
                s3_bucket_prefix="s3_bucket_prefix_with_date",
            )

We’re telling this function to use @mock_logs so that the boto3 client will connect to Moto’s mock logs service. We also introduce a new item here, pytest.raises(), which tells the test to expect an Exception() whose message contains the substring Failed to Create Export Task. If we run pytest, we can see that our code executes successfully and we get the test result we hoped for. With the code coverage check in place, we can also see that we’ve now tested some new lines of code: we’re covering 51% of the code, up from our previous 43%.
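If pytest.raises() is new to you, here is a tiny, self-contained illustration of how its match argument behaves. The function names below are placeholders invented for this example; match is treated as a regular expression searched against the exception’s message, so a plain substring works.

import pytest

def create_task_and_fail():
    # Placeholder function for illustration only.
    raise Exception("Failed to Create Export Task: something went wrong")

def test_exception_message_matches():
    # match= is applied with re.search() against the exception text,
    # so any message containing the substring satisfies the test.
    with pytest.raises(Exception, match="Failed to Create Export Task"):
        create_task_and_fail()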

Using Moto to Test a Successful Submission

In order to check our successful request, we’ll steal the bulk of our run() function and simply reuse it as our test. This will also add in some end-to-end testing of some of the other functions we’ve already tested, and that’s OK.

    @mock_logs
    def test_create_export_task_success(self):
        date_dict = cloudwatch_to_s3.generate_date_dict(n_days=2)
        date_dict["start_date_ms"] = cloudwatch_to_s3.convert_from_date(
            date_time=date_dict["start_date"]
        )
        date_dict["end_date_ms"] = cloudwatch_to_s3.convert_from_date(date_time=date_dict["end_date"])
        s3_bucket_prefix_with_date = cloudwatch_to_s3.generate_prefix(
            s3_bucket_prefix="s3_bucket_prefix", start_date=date_dict["start_date"]
        )
        export_task_id = cloudwatch_to_s3.create_export_task(
            group_name="group_name",
            from_date=date_dict["start_date_ms"],
            to_date=date_dict["end_date_ms"],
            s3_bucket_name="s3_bucket_name",
            s3_bucket_prefix=s3_bucket_prefix_with_date,
        )

        assert re.search("[a-z0-9]{8}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{4}-[a-z0-9]{12}", export_task_id)

This is mostly straightforward, but I’ll quickly explain the assert being used. We won’t know the exact ID returned from the mock AWS API, but we do know its format, so I wrote a quick regex that makes sure the returned ID matches what we’d expect.
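If you’d rather not hand-write the regex, a stricter alternative (just a sketch; the test above keeps the regex) is to let Python’s built-in uuid module validate the ID for you:

import uuid
...
        # uuid.UUID() raises ValueError for anything that isn't a valid UUID,
        # so this assert fails the test for a malformed task ID.
        assert uuid.UUID(export_task_id)

With everything in place, we’ll save this and run pytest again to see the results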

root@cb5a579b89a2:/opt/app# pytest --cov=modules --cov=classes --cov-report term-missing
================================================================================= test session starts =================================================================================
platform linux -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0
rootdir: /opt/app
plugins: cov-4.0.0
collected 5 items                                                                                                                                                                     

modules/test_cloudwatch_to_s3.py ....F                                                                                                                                          [100%]
...
>           raise Exception("Failed to Create Export Task")
E           Exception: Failed to Create Export Task

modules/cloudwatch_to_s3.py:73: Exception
---------------------------------------------------------------------------------- Captured log call ----------------------------------------------------------------------------------
ERROR    root:cloudwatch_to_s3.py:72 You must specify a region.

---------- coverage: platform linux, python 3.10.6-final-0 -----------
Name                               Stmts   Miss  Cover   Missing
----------------------------------------------------------------
modules/cloudwatch_to_s3.py           83     54    35%   60-70, 83-115, 119-140, 149-174
modules/test_cloudwatch_to_s3.py      36      1    97%   66
----------------------------------------------------------------
TOTAL                                119     55    54%

=============================================================================== short test summary info ===============================================================================
FAILED modules/test_cloudwatch_to_s3.py::TestCloudWatch::test_create_export_task_success - Exception: Failed to Create Export Task
============================================================================= 1 failed, 4 passed in 1.10s =============================================================================

This looks really bad, but the important thing is that our new test failed: 1 failed, 4 passed in 1.10s. You can see that the call actually resulted in an Exception() and was not the success we were hoping for. If you look a little closer, you’ll see that the Captured log call section provides some details about the problem.

ERROR    root:cloudwatch_to_s3.py:72 You must specify a region.

This is exactly why we write test cases! The log tells us that we generated the noted ERROR from line 72 of our cloudwatch_to_s3.py file. If you look at the code, you’ll see that this is the exception catch for our create_export_task. We’re executing this on a machine that is not in AWS and therefore has no default region, which means we’ll need to update our code to account for this.

Correcting Our Code and Testing Again

We could choose to fix this in two different ways:

  1. Add the environment variable AWS_DEFAULT_REGION to our test code for this particular test (a sketch of this approach is shown just after this list)
  2. Change our code to accept a region_name as an argument to the functions that make use of boto3
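For reference, option #1 might look roughly like the following. This is just a sketch that uses unittest.mock.patch.dict to set the variable only for the duration of the test; it is not the approach we take below.

import os
from unittest import mock
...
    @mock_logs
    # Sketch of option #1: give boto3 a region to resolve while this test runs,
    # since the machine running the tests isn't in AWS. patch.dict restores the
    # environment once the test finishes.
    @mock.patch.dict(os.environ, {"AWS_DEFAULT_REGION": "us-east-1"})
    def test_create_export_task_success(self):
...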

I want to use this as an example of fixing our code in response to a test failure, so we’ll take approach #2 to resolve the problem. Using approach #2, I’m going to change the create_export_task and export_status_call functions to look like the following

...

def create_export_task(
    group_name, from_date, to_date, s3_bucket_name, s3_bucket_prefix, region_name="us-east-1"
):
    logging.info("Creating CloudWatch Log Export Task to S3")
    try:
        client = boto3.client("logs", region_name=region_name)

...

def export_status_call(task_id, region_name="us-east-1"):
    try:
        client = boto3.client("logs", region_name=region_name)

...

With these quick additions, I’ve now set a default region to be used when making the boto3 calls. Let’s run the test again

root@cb5a579b89a2:/opt/app# pytest --cov=modules --cov=classes --cov-report term-missing
================================================================================= test session starts =================================================================================
platform linux -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0
rootdir: /opt/app
plugins: cov-4.0.0
collected 5 items                                                                                                                                                                     

modules/test_cloudwatch_to_s3.py ....F                                                                                                                                          [100%]

>           raise Exception("Failed to Create Export Task")
E           Exception: Failed to Create Export Task

modules/cloudwatch_to_s3.py:73: Exception
---------------------------------------------------------------------------------- Captured log call ----------------------------------------------------------------------------------
ERROR    root:cloudwatch_to_s3.py:72 An error occurred (404) when calling the CreateExportTask operation: <?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>NoSuchBucket</Code>
    <Message>The specified bucket does not exist</Message>
    <BucketName>s3_bucket_name</BucketName>
    <RequestID>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestID>
</Error>

---------- coverage: platform linux, python 3.10.6-final-0 -----------
Name                               Stmts   Miss  Cover   Missing
----------------------------------------------------------------
modules/cloudwatch_to_s3.py           83     53    36%   67-70, 83-115, 119-140, 149-174
modules/test_cloudwatch_to_s3.py      36      1    97%   66
----------------------------------------------------------------
TOTAL                                119     54    55%

=============================================================================== short test summary info ===============================================================================
FAILED modules/test_cloudwatch_to_s3.py::TestCloudWatch::test_create_export_task_success - Exception: Failed to Create Export Task
============================================================================= 1 failed, 4 passed in 1.20s =============================================================================

We failed again! If you look closer, though, there’s an important lesson here about using Moto. This time, we’re getting a NoSuchBucket error.

ERROR    root:cloudwatch_to_s3.py:72 An error occurred (404) when calling the CreateExportTask operation: <?xml version="1.0" encoding="UTF-8"?>
<Error>
    <Code>NoSuchBucket</Code>
    <Message>The specified bucket does not exist</Message>
    <BucketName>s3_bucket_name</BucketName>
    <RequestID>7a62c49f-347e-4fc4-9331-6e8eEXAMPLE</RequestID>
</Error>

The important lesson here is that you also need to set up the AWS resources you’d like to access. Now we’ll need to make sure we create the S3 bucket in Moto before we attempt our export.

Creating the Mock Resources We Need For Testing

Fixing The S3 Error

Now we need to set up an S3 bucket to make this work. Let’s add our new import statements and some new code to our testing function.

from moto import mock_logs, mock_s3
import boto3 
...
    @mock_logs
    @mock_s3
    def test_create_export_task_success(self):
        conn = boto3.resource("s3", region_name="us-east-1")
        # We need to create the bucket since this is all in Moto's 'virtual' AWS account
        conn.create_bucket(Bucket="s3_bucket_name")
        date_dict = cloudwatch_to_s3.generate_date_dict(n_days=2)
        date_dict["start_date_ms"] = cloudwatch_to_s3.convert_from_date(
            date_time=date_dict["start_date"]
        )
...

You can see from the import statements that I’m importing mock_s3, and I’m also decorating my function with it. I also had to import boto3 so the test function could connect to the mock AWS instance provided by Moto and create our bucket. In our testing code, we call the bucket s3_bucket_name, so that’s what we’ll create, and then we’ll test again.

root@cb5a579b89a2:/opt/app# pytest --cov=modules --cov=classes --cov-report term-missing
================================================================================= test session starts =================================================================================
platform linux -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0
rootdir: /opt/app
plugins: cov-4.0.0
collected 5 items                                                                                                                                                                     

modules/test_cloudwatch_to_s3.py ....F                                                                                                                                          [100%]
====================================================================================== FAILURES =======================================================================================
>           raise Exception("Failed to Create Export Task")
E           Exception: Failed to Create Export Task

modules/cloudwatch_to_s3.py:73: Exception
---------------------------------------------------------------------------------- Captured log call ----------------------------------------------------------------------------------
ERROR    root:cloudwatch_to_s3.py:72 An error occurred (ResourceNotFoundException) when calling the CreateExportTask operation: The specified log group does not exist

---------- coverage: platform linux, python 3.10.6-final-0 -----------
Name                               Stmts   Miss  Cover   Missing
----------------------------------------------------------------
modules/cloudwatch_to_s3.py           83     53    36%   67-70, 83-115, 119-140, 149-174
modules/test_cloudwatch_to_s3.py      40      1    98%   70
----------------------------------------------------------------
TOTAL                                123     54    56%

=============================================================================== short test summary info ===============================================================================
FAILED modules/test_cloudwatch_to_s3.py::TestCloudWatch::test_create_export_task_success - Exception: Failed to Create Export Task
============================================================================= 1 failed, 4 passed in 1.32s =============================================================================

We still failed! The good news is that we failed with a new error this time: the log group we’re exporting from doesn’t exist. Let’s fix that next.

Fixing the Log Group Error

Next, we need to make sure Moto’s mock AWS account has the log group that we’re attempting to export from, group_name. This is another boto3 client setup that we add to the code

    @mock_logs
    @mock_s3
    def test_create_export_task_success(self):
        conn = boto3.resource("s3", region_name="us-east-1")
        # We need to create the bucket since this is all in Moto's 'virtual' AWS account
        conn.create_bucket(Bucket="s3_bucket_name")

        client = boto3.client("logs", region_name="us-east-1")
        client.create_log_group(logGroupName="group_name")
        date_dict = cloudwatch_to_s3.generate_date_dict(n_days=2)

Let’s now cross our fingers and test once again

root@cb5a579b89a2:/opt/app# pytest --cov=modules --cov=classes --cov-report term-missing
================================================================================= test session starts =================================================================================
platform linux -- Python 3.10.6, pytest-7.2.1, pluggy-1.0.0
rootdir: /opt/app
plugins: cov-4.0.0
collected 5 items                                                                                                                                                                     

modules/test_cloudwatch_to_s3.py .....                                                                                                                                          [100%]


---------- coverage: platform linux, python 3.10.6-final-0 -----------
Name                               Stmts   Miss  Cover   Missing
----------------------------------------------------------------
modules/cloudwatch_to_s3.py           83     51    39%   68-69, 83-115, 119-140, 149-174
modules/test_cloudwatch_to_s3.py      43      0   100%
----------------------------------------------------------------
TOTAL                                126     51    60%


================================================================================== 5 passed in 1.34s ==================================================================================

Conclusion

Success! It worked! We were able to complete most of our testing of Python AWS calls with Moto. Unfortunately, we can’t test our describe_export_tasks call, but otherwise our tests are in great shape. The code as it stands now can be accessed in my GitHub repo here.
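If you ever do need coverage for a call Moto doesn’t implement, one fallback (not something in this repo; the test name, task ID, and response values below are made up for illustration) is to patch the boto3 client with unittest.mock and return a canned response:

import boto3
from unittest import mock

def test_describe_export_tasks_with_mock():
    # Sketch only: the response shape mirrors what DescribeExportTasks returns,
    # but double-check the real API documentation before relying on it.
    fake_response = {
        "exportTasks": [
            {"taskId": "example-task-id", "status": {"code": "COMPLETED"}}
        ]
    }
    with mock.patch("boto3.client") as mock_client:
        mock_client.return_value.describe_export_tasks.return_value = fake_response
        # While patched, any code that builds a client via boto3.client("logs")
        # and calls describe_export_tasks() gets fake_response back.
        client = boto3.client("logs")
        response = client.describe_export_tasks(taskId="example-task-id")
        assert response["exportTasks"][0]["status"]["code"] == "COMPLETED"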