
[Question] warnings.filterwarnings("ignore", "No metrics to publish*"). Document metric/logger testing #161


Closed · patrickwerz opened this issue Sep 9, 2020 · 15 comments
Labels: documentation (Improvements or additions to documentation)

@patrickwerz

Hi guys,
thanks for this great tool. Unfortunately, I have some problems with the metrics functionality. I get the following exception:

```
[ERROR] SchemaValidationError: Invalid format. Error: data._aws.CloudWatchMetrics[0].Dimensions[0] must contain at least 1 items, Invalid item
```

The documentation states that you can suppress this error with `warnings.filterwarnings("ignore", "No metrics to publish*")`. Unfortunately, I still get this exception.
Where in my code do I have to put the filter warning to get my handler working?
Thanks in advance,
Patrick

@heitorlessa (Contributor)

Hey Patrick - Could you share a snippet of your code?

According to the error, you're neither adding metrics nor metric dimensions, and you're not defining the service param (or env var).

@bisoldi commented Sep 9, 2020

It might also help if the documentation included `import warnings` in the section that advises `warnings.filterwarnings("ignore", "No metrics to publish*")`.
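
For reference, the complete snippet the docs should show would be:

```python
import warnings

warnings.filterwarnings("ignore", "No metrics to publish*")
```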

@heitorlessa (Contributor)

Hey Brooks - We should definitely fix that, thanks for pointing that out.

I suspect this issue is unrelated, though, or it's a new use case we haven't accounted for (more docs updates).

When using the Metrics utility, we'd expect you to either add metrics, a namespace, and a dimension, or conditionally add metrics with the namespace and dimensions always being present - see the CloudWatch Embedded Metric Format (EMF) spec.

The warning is there to let customers know they didn't add any metrics while decorating the function handler with the log_metrics method - this gives you the opportunity to not fail fast when metrics aren't always added.

```python
metrics = Metrics(namespace="MyApp", service="myService")
```

This should be enough to avoid the validation exception; you'd only get the metrics warning.
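
To make the "conditionally add metrics" case concrete, here's a minimal sketch (the handler name, metric name, and event key are illustrative, not from Patrick's code):

```python
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="MyApp", service="myService")


@metrics.log_metrics
def handler(event, context):
    # Namespace and the service dimension are always present, so EMF
    # validation passes; metric-less invocations only trigger the
    # "No metrics to publish" warning
    if event.get("success"):
        metrics.add_metric(name="SuccessfulEvent", unit=MetricUnit.Count, value=1)
```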

Looking forward to hearing from Patrick

@patrickwerz (Author) commented Sep 9, 2020

Hi Heitor, thanks for your quick reply. Really appreciate it.

This is my handler:

```python
import os
import warnings
from typing import Any, Dict

from aws_lambda_powertools import Logger, Metrics
from aws_lambda_powertools.utilities.typing import LambdaContext

# parse_dynamodb_event and status_handling come from my own modules (paths elided)

warnings.filterwarnings("ignore", "No metrics to publish*")
logger = Logger(service="status_handling", level="DEBUG")
metrics = Metrics(namespace="hdm-iot-platform")  # I don't want service as a dimension
metrics.add_dimension(name="stage", value=os.environ['stage'])


@metrics.log_metrics
@logger.inject_lambda_context(log_event=True)
def process(event: Dict[str, Any], context: LambdaContext):
    parsed_event = parse_dynamodb_event(ddb_event=event, return_change_type=True, new_and_old_images=True)
    status_handling(parsed_event[0], metrics)
```

The metrics are then added (or not) in the status_handling function (imported from a module).

@patrickwerz (Author)

Guys,
do you have a best practice for patching the @metrics.log_metrics decorator in my handler unit tests?
I tried to patch the Metrics class, which I import from aws_lambda_powertools, but it doesn't work:

```python
from unittest.mock import patch

from path.to.handler import process


@patch('path.to.handler.Metrics')
def test_my_handler(mock_metrics):
    process({}, None)
```

@heitorlessa (Contributor)

Ah! That explains it - that code above the handler only executes once per execution environment (at cold start), and log_metrics clears metrics and dimensions after every flush, meaning your metrics wouldn't have dimensions and would always fail validation after the first invocation succeeds.

Bring that "add_dimension" inside your Lambda handler and it will work as expected.
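
Based on your snippet, only the add_dimension call needs to move (imports as in your original handler):

```python
metrics = Metrics(namespace="hdm-iot-platform")


@metrics.log_metrics
@logger.inject_lambda_context(log_event=True)
def process(event: Dict[str, Any], context: LambdaContext):
    # Runs on every invocation, so the dimension is re-added after
    # log_metrics clears the metric set on each flush
    metrics.add_dimension(name="stage", value=os.environ['stage'])
    parsed_event = parse_dynamodb_event(ddb_event=event, return_change_type=True, new_and_old_images=True)
    status_handling(parsed_event[0], metrics)
```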

As for patching, it looks like (in your pseudo code) that the patch isn't taking effect - you need to patch the Metrics class as it is imported in your handler module, and the patch must be active before the module (and thus the decorators) is loaded, not applied from within your process function.

I can work on an example tomorrow if that helps

@patrickwerz (Author) commented Sep 11, 2020

> I can work on an example tomorrow if that helps

That would be amazing. I figured out a way, but it is really cumbersome.
I had to write the following code in order to mock/patch the Logger, Tracer, and Metrics classes:

```python
import importlib
import unittest
from unittest.mock import patch

import handler  # the handler module under test


class TestPostSenderProvisioning(unittest.TestCase):
    def setUp(self):
        class MetricStub:
            def __init__(self, namespace, service):
                pass

            def add_dimension(self, name, value):
                pass

            def log_metrics(self, func):
                return func

        class TracerStub:
            def __init__(self, service, patch_modules):
                pass

            def capture_lambda_handler(self, func):
                return func

        class LoggerStub:
            def __init__(self, service, level):
                pass

            def inject_lambda_context(self, func):
                return func

            def info(self, stuff):
                pass

            def debug(self, stuff):
                pass

            def exception(self, stuff):
                pass

        def kill_patches():  # Create a cleanup callback that undoes our patches
            patch.stopall()  # Stops all patches started with start()
            importlib.reload(handler)  # Reload our UUT module, which restores the original decorators

        # We want to make sure this runs, so we do it in addCleanup instead of tearDown
        self.addCleanup(kill_patches)

        # Patch the decorators so they return everything you put into them
        patcher_metric = patch('aws_lambda_powertools.Metrics', MetricStub)
        self.mock_metrics = patcher_metric.start()
        patcher_logger = patch('aws_lambda_powertools.Logger', LoggerStub)
        self.mock_logger = patcher_logger.start()
        patcher_tracer = patch('aws_lambda_powertools.Tracer', TracerStub)
        self.mock_tracer = patcher_tracer.start()
        importlib.reload(handler)

    @patch('other.Provisioning')
    @patch('other.Configuration')
    def test_handler_201(self, mock_conf, mock_prov):
        ...
```

This is the handler I want to unit test:

```python
@metrics.log_metrics
@logger.inject_lambda_context
@tracer.capture_lambda_handler
def post_handler(event, context):
    pass
```

@heitorlessa (Contributor)

Sure, I'll work on an example and post it here later - one question though: are you monkeypatching because you want to know whether it was called, or for something else?

I've done a kitchen-sink example with common features used, plus a test that sets up the env variables to disable Tracer, sets a dummy Metrics namespace, and checks whether metrics were actually dumped to stdout: https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/example/tests/test_handler.py
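
The gist of that linked test, reduced to a self-contained sketch (names like SuccessfulEvent are illustrative; it assumes pytest's built-in capsys fixture and that log_metrics prints the serialized EMF JSON to stdout on flush):

```python
import json

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit


def test_metrics_flushed_to_stdout(capsys):
    metrics = Metrics(namespace="example_namespace", service="example_service")

    @metrics.log_metrics
    def handler(event, context):
        metrics.add_metric(name="SuccessfulEvent", unit=MetricUnit.Count, value=1)

    handler({}, None)

    emf_blob = json.loads(capsys.readouterr().out)  # one EMF JSON document per flush
    assert "SuccessfulEvent" in emf_blob  # metric names appear as top-level keys
```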

@patrickwerz (Author)

I want to test the business logic written in my handler method. Therefore I have to import it into my test_handler.py. What I don't want to test is the functionality of the aws_lambda_powertools lib.

```python
@metrics.log_metrics
@logger.inject_lambda_context
@tracer.capture_lambda_handler
```

But since they are decorators, loaded together with the handler function, I have to patch/deactivate them somehow. If I hadn't stubbed/monkeypatched them and had instead injected a Mock()/MagicMock(), the import of my handler would have failed.

@patrickwerz (Author)

I just checked the test in your link. Looks promising. I am not familiar with pytest (yet), so I have to investigate a little.

@heitorlessa (Contributor)

That's what I suspected :) You definitely don't need to mock/patch any of that - you only need two things to ensure your tests work correctly, based on the snippet you shared:

  1. Ensure Metrics has the minimum to work, or else it'll fail validation as it does now
    • Move your add_dimension inside the handler, or into a function that is called within the handler
    • Set POWERTOOLS_METRICS_NAMESPACE to any dummy value as part of your test setup
  2. Ensure Tracer is disabled
    • Set POWERTOOLS_TRACE_DISABLED=1 as part of your test setup

Pytest makes this easier - you'd only need the following, where autouse=True works like setUp in the unittest package, and monkeypatch.setenv sets an environment variable that will be available during your tests:

```python
import pytest


@pytest.fixture(autouse=True)
def env_vars(monkeypatch):
    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "example_namespace")
    monkeypatch.setenv("POWERTOOLS_TRACE_DISABLED", "1")
```

If stuck, either reply here OR come hang out with us on the #lambda-powertools channel on Slack - Public Invite

@patrickwerz (Author) commented Sep 11, 2020

Thanks for your efforts, Heitor. (By the way, your Slack invite is not working... it is not active anymore.)
This is my test:

```python
import pytest


@pytest.fixture(autouse=True)
def env_vars(monkeypatch):
    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "example_namespace")
    monkeypatch.setenv("POWERTOOLS_SERVICE_NAME", "example_service")
    monkeypatch.setenv("POWERTOOLS_TRACE_DISABLED", "1")
    monkeypatch.setenv("PROVISIONING_REQUESTS_TABLE_NAME", "mockTable")
    monkeypatch.setenv("aws_region", "mockington")


@pytest.fixture()
def lambda_handler(env_vars):
    from src.sender_provisioning.status_handling.handler import process
    return process


def test_lambda_handler(lambda_handler):
    test_event = {'test': 'event'}
    lambda_handler(test_event, {})
```

Unfortunately, I still get a metrics and logger error since they are active:

```
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/Users/patrick/.local/share/virtualenvs/iot-provisioning-service-j7qRQlTU/lib/python3.7/site-packages/aws_lambda_powertools/metrics/metrics.py:144: in decorate
    response = lambda_handler(event, context)
/Users/patrick/.local/share/virtualenvs/iot-provisioning-service-j7qRQlTU/lib/python3.7/site-packages/aws_lambda_powertools/logging/logger.py:238: in decorate
    lambda_context = build_lambda_context_model(context)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

context = {}

    def build_lambda_context_model(context: object) -> LambdaContextModel:
        """Captures Lambda function runtime info to be used across all log statements
    
        Parameters
        ----------
        context : object
            Lambda context object
    
        Returns
        -------
        LambdaContextModel
            Lambda context only with select fields
        """
    
        context = {
>           "function_name": context.function_name,
            "function_memory_size": context.memory_limit_in_mb,
            "function_arn": context.invoked_function_arn,
            "function_request_id": context.aws_request_id,
        }
E       AttributeError: 'dict' object has no attribute 'function_name'
```

When I delete the @logger.inject_lambda_context decorator, the test works.

@heitorlessa (Contributor)

Ahhh, that's right - it's because you don't have a LambdaContext object - my bad. Here's your test with a fake Lambda context object, which solves this with no changes to your handler:

```python
from collections import namedtuple

...

@pytest.fixture
def lambda_context():
    lambda_context = {
        "function_name": "test",
        "memory_limit_in_mb": 128,
        "invoked_function_arn": "arn:aws:lambda:eu-west-1:809313241:function:test",
        "aws_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
    }

    return namedtuple("LambdaContext", lambda_context.keys())(*lambda_context.values())


def test_lambda_handler(lambda_handler, lambda_context):
    test_event = {'test': 'event'}
    lambda_handler(test_event, lambda_context)  # this will now have a Context object populated
```

We launched the LambdaContext data class for type hints in 1.5.0 - I will update the docs with a testing guide to include these and the bits we discussed here.
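
For reference, a minimal sketch of those type hints (the import path follows the 1.5.0 release; the handler body is illustrative):

```python
from aws_lambda_powertools.utilities.typing import LambdaContext


def handler(event: dict, context: LambdaContext) -> dict:
    # The typed context enables IDE autocompletion for attributes
    # such as function_name and aws_request_id
    return {"request_id": context.aws_request_id}
```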

@patrickwerz (Author)

Yes, now it's working. Thanks, Heitor.

@heitorlessa heitorlessa added the documentation Improvements or additions to documentation label Sep 14, 2020
@heitorlessa heitorlessa changed the title [Question] warnings.filterwarnings("ignore", "No metrics to publish*") [Question] warnings.filterwarnings("ignore", "No metrics to publish*"). Document metric/logger testing Sep 21, 2020
heitorlessa referenced this issue in heitorlessa/aws-lambda-powertools-python Sep 22, 2020
@heitorlessa heitorlessa added the pending-release Fix or implementation already in dev waiting to be released label Sep 22, 2020
@to-mc (Contributor) commented Sep 23, 2020

Documentation improvements were added in 1.6.0 to address this issue. @patrickwerz, please let us know if you feel this issue should still be open!

@to-mc to-mc closed this as completed Sep 23, 2020
@heitorlessa heitorlessa removed the pending-release Fix or implementation already in dev waiting to be released label Oct 26, 2020