This issue was moved to a discussion.
You can continue the conversation there.
Provide timestamp per metric not per metrics set #166
Labels: feature-request
Comments
Hey Daniel,
Thanks for reaching out and raising this with the EMF team. I've added a comment there now.
I agree the timestamp should more accurately reflect when each metric was added. However, I disagree with logging multiple objects, because of the cost customers would pay for that in log ingestion and storage. Extending EMF to support this, along with things like Statistics, would be the best long-term solution.
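To make the trade-off above concrete, one way to bound the number of log records is to group metrics by the resolution window their timestamp falls into, so that metrics recorded within the same window share a single record. This is only an illustrative sketch, not powertools or EMF behaviour; the `bucket_metrics` helper and its parameters are invented for this example.

```python
from collections import defaultdict


def bucket_metrics(samples, resolution_ms=60_000):
    """Group (timestamp_ms, name, value) samples by resolution window.

    Metrics landing in the same window would share one EMF record,
    capping the number of log events (and thus ingestion cost) while
    keeping timestamps accurate to the window.
    """
    buckets = defaultdict(list)
    for ts, name, value in samples:
        buckets[ts - ts % resolution_ms].append((name, value))
    return dict(buckets)


# Two metrics 65 s apart land in two separate one-minute windows.
samples = [(0, "TestMetric1", 1), (65_000, "TestMetric2", 1)]
print(bucket_metrics(samples))
```

With a 60-second resolution this yields two buckets (at 0 ms and 60 000 ms), i.e. two records instead of one record per metric.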
…On Thu, 17 Sep 2020 at 09:04, Daniel Roschka ***@***.***> wrote:
Currently powertools doesn't track when a metric was added and uses the
time of flushing as the timestamp in the EMF metadata. This causes inaccurate
time information when the function decorated with log_metrics runs longer
than the minimal resolution for metrics in CloudWatch Metrics.
Here is sample code to reproduce this problem:
#!/usr/bin/env python3
import json
import time
from aws_lambda_powertools import Metrics
from aws_lambda_powertools.metrics import MetricUnit
metrics = Metrics(namespace="Test", service="Test")
@metrics.log_metrics
def handler(event, context):
metrics.add_metric(name="TestMetric1", unit=MetricUnit.Count, value=1)
time.sleep(65)
metrics.add_metric(name="TestMetric2", unit=MetricUnit.Count, value=1)
handler(None, None)
This code produces the following output:
{
"_aws": {
"Timestamp": 1600326158339,
"CloudWatchMetrics": [
{
"Namespace": "Test",
"Dimensions": [
[
"service"
]
],
"Metrics": [
{
"Name": "TestMetric1",
"Unit": "Count"
},
{
"Name": "TestMetric2",
"Unit": "Count"
}
]
}
]
},
"service": "Test",
"TestMetric1": 1.0,
"TestMetric2": 1.0
}
Instead of a single log record with a single timestamp, I'd expect to get
two distinct log records with timestamps 65 seconds apart.
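For illustration, the two distinct records described above can be sketched without powertools: each record carries its own `_aws.Timestamp`, captured when the metric is created rather than at flush time. The `emf_record` helper below is hypothetical, written only to mirror the output shape shown earlier (a short sleep stands in for the 65-second one in the report).

```python
import json
import time


def emf_record(namespace, service, name, value, unit="Count"):
    # Build one EMF blob with its own timestamp (milliseconds since epoch),
    # mirroring the structure of the output shown in the issue.
    return {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [
                {
                    "Namespace": namespace,
                    "Dimensions": [["service"]],
                    "Metrics": [{"Name": name, "Unit": unit}],
                }
            ],
        },
        "service": service,
        name: float(value),
    }


rec1 = emf_record("Test", "Test", "TestMetric1", 1)
time.sleep(0.05)  # stand-in for the 65 s sleep in the reproduction code
rec2 = emf_record("Test", "Test", "TestMetric2", 1)
print(json.dumps(rec1))
print(json.dumps(rec2))
```

Each metric gets its own record and its own timestamp, at the cost of one log event per metric, which is exactly the ingestion-cost concern raised above.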
I also already reported the same problem for aws-embedded-metrics-python:
awslabs/aws-embedded-metrics-python#53
The records I expected to be written were based on the current EMF specification. I wasn't aware that changes to the EMF specification are a possibility as well, and I agree that extending the specification might be a better solution. 👍