[Question] warnings.filterwarnings("ignore", "No metrics to publish*"). Document metric/logger testing #161
Hi guys,

Thanks for this great tool. Unfortunately, I have some problems with the metrics functionality; I get the following exception:

[ERROR] SchemaValidationError: Invalid format. Error: data._aws.CloudWatchMetrics[0].Dimensions[0] must contain at least 1 items, Invalid item

The documentation states that you can suppress this error with warnings.filterwarnings("ignore", "No metrics to publish*"). Unfortunately, I still get the exception.

Where in my code do I have to put the filterwarnings call to get my handler working?

Thanks in advance,
Patrick
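For reference: the suppression described in the docs targets the "No metrics to publish" UserWarning that log_metrics emits when no metric was added, and it needs to run at module level, before the handler is invoked - a minimal sketch, where the placement is the only point being made:

import warnings

# must run at import time, i.e. above the decorated handler
warnings.filterwarnings("ignore", "No metrics to publish*")

Note that this only silences the warning; it cannot suppress SchemaValidationError, which is an exception, as the discussion below works out.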
Hey Patrick - Could you share a snippet of your code? According to the error, you're neither adding metrics nor metric dimensions, nor defining the service param (or env var).
It might also help if the documentation included …
Hey Brooks - We should definitely fix that, thanks for pointing it out. I suspect this issue is unrelated, though, or it's a new use case we haven't accounted for (more docs updates). When using the Metrics utility, we'd expect you either to add metrics, a namespace, and a dimension, or to add metrics conditionally while the namespace and dimensions are always present - see the CloudWatch Embedded Metric Format (EMF) spec. The warning is there to let customers know they didn't add any metrics while decorating the function handler with the log_metrics method - this gives you the opportunity not to fail fast when metrics aren't always added.

metrics = Metrics(namespace="MyApp", service="myService")

should be enough to avoid any validation exception; you'd only get the metrics warning. Looking forward to hearing from you, Patrick
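A minimal sketch of that conditional pattern, assuming made-up metric and status names:

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

# namespace and service are always present, so validation passes
metrics = Metrics(namespace="MyApp", service="myService")

@metrics.log_metrics
def lambda_handler(event, context):
    # metrics are only added on this path; on every other path nothing is
    # added and log_metrics emits the "No metrics to publish" warning
    # instead of raising
    if event.get("status") == "SUCCEEDED":
        metrics.add_metric(name="SuccessfulRun", unit=MetricUnit.Count, value=1)
    return {"statusCode": 200}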
Hi Heitor, thanks for your quick reply, really appreciated. This is my handler:

…

The metrics are then added (or not) in the status-handling function (imported from a module).
Guys, …
Ah! That explains it - code above the handler only executes once, when the execution environment cold starts, meaning that on warm invocations your metrics wouldn't have dimensions and would always fail validation after the first invocation succeeds. Bring that add_dimension call inside your Lambda handler and it will work as expected. As for patching, it looks like (in your pseudo-code) the patch isn't being applied correctly - you need to patch the Metrics class where your code imports it, not from within your process function. I can work on an example tomorrow if that helps
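To make the first point concrete, a minimal sketch of the fix (module layout and names assumed):

from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit

metrics = Metrics(namespace="MyApp", service="myService")  # the instance itself can live at module level

@metrics.log_metrics
def lambda_handler(event, context):
    # log_metrics flushes and clears the metric set after every invocation,
    # so add_dimension has to run inside the handler to apply each time
    metrics.add_dimension(name="environment", value="prod")
    metrics.add_metric(name="ProcessedEvent", unit=MetricUnit.Count, value=1)
    return {"statusCode": 200}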
That would be amazing. I figured out a way, but it is really cumbersome.
This is the handler I want to unit test:

…
Sure, I'll work on an example and post it here later - one question, though: are you monkeypatching because you want to know whether it was called, or for something else? I've done a kitchen-sink example with commonly used features, and a test that sets up the env variables to disable Tracer, sets a dummy Metrics namespace, and checks whether metrics were actually dumped to stdout, here - https://github.com/awslabs/aws-lambda-powertools-python/blob/develop/example/tests/test_handler.py
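A stripped-down version of that stdout assertion - a sketch, assuming the handler lives in app.py, is decorated only with metrics.log_metrics, and a namespace is configured (for instance via the env fixture shown further below):

import json

from app import lambda_handler  # hypothetical module and handler names

def test_metrics_flushed_to_stdout(capsys):
    lambda_handler({"test": "event"}, None)  # context unused in this sketch
    # log_metrics serializes the EMF blob to stdout as a single JSON line
    emf_payload = json.loads(capsys.readouterr().out.strip().splitlines()[0])
    assert "_aws" in emf_payload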
I want to test the business logic written in my handler method, so I have to import it into my test_handler.py. What I don't want to test is the functionality of the aws_lambda_powertools lib.
But since they are decorators and are loaded together with the handler function, I have to patch/deactivate them somehow. If I hadn't stubbed/monkeypatched them but injected a Mock()/MagicMock() instead, the import of my handler would have failed.
I just checked the test in your link. Looks promising. I am not familiar with pytest (yet), so I have to investigate a little bit.
That's what I suspected :) You definitely don't need to mock/patch any of that - based on the snippet you shared, you only need two things to ensure your tests work correctly:
Pytest makes this easier - you'd only need the following:

import pytest

@pytest.fixture(autouse=True)
def env_vars(monkeypatch):
    monkeypatch.setenv("POWERTOOLS_METRICS_NAMESPACE", "example_namespace")
    monkeypatch.setenv("POWERTOOLS_TRACE_DISABLED", "1")

If you get stuck, either reply here or come hang out with us on …
Thanks for your efforts, Heitor.

Unfortunately, I still get a metrics and logger error since they are active:

…

When I delete the @logger.inject_lambda_context decorator, the test works.
Ahhh, that's right, it's because you don't have a LambdaContext object - my bad. Here's your test with a fake Lambda context object to solve this, with no changes to your handler:

from collections import namedtuple

@pytest.fixture
def lambda_context():
    lambda_context = {
        "function_name": "test",
        "memory_limit_in_mb": 128,
        "invoked_function_arn": "arn:aws:lambda:eu-west-1:809313241:function:test",
        "aws_request_id": "52fdfc07-2182-154f-163f-5f0f9a621d72",
    }
    return namedtuple("LambdaContext", lambda_context.keys())(*lambda_context.values())

def test_lambda_handler(lambda_handler, lambda_context):
    test_event = {'test': 'event'}
    lambda_handler(test_event, lambda_context)  # this will now have a context object populated

We launched the LambdaContext data class for type hints in 1.5.0 - I will update the docs with a testing guide covering these and the bits we discussed here.
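For completeness, the LambdaContext data class mentioned above lives in aws_lambda_powertools.utilities.typing and is purely a type hint - a minimal sketch:

from aws_lambda_powertools.utilities.typing import LambdaContext

def lambda_handler(event: dict, context: LambdaContext) -> dict:
    # only aids editor completion and type checking; no runtime changes
    return {"request_id": context.aws_request_id}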
Yes, now it's working. Thanks, Heitor.
Documentation improvements to address this issue were added in 1.6.0. @patrickwerz, please let us know if you feel this issue should still be open!