Propagate Step Function Trace Context through Managed Services #573
@@ -146,9 +146,7 @@ def parse_event_source(event: dict) -> _EventSource:
     if event.get("source") == "aws.events" or has_event_categories:
         event_source = _EventSource(EventTypes.CLOUDWATCH_EVENTS)
 
-    if (
-        "_datadog" in event and event.get("_datadog").get("serverless-version") == "v1"
-    ) or ("Execution" in event and "StateMachine" in event and "State" in event):
+    if is_step_function_event(event):
         event_source = _EventSource(EventTypes.STEPFUNCTIONS)
 
     event_record = get_first_record(event)
@@ -369,3 +367,29 @@ def extract_http_status_code_tag(trigger_tags, response):
         status_code = response.status_code
 
     return str(status_code)
+
+
+def is_step_function_event(event):
Review thread:
Comment: Is there a way we can memoize this function? It looks like it can potentially be called several times in the course of a single invocation.
Reply: Hmm, or it looks like the function can be called multiple times per invocation, but with different "events" each time? If that's true, then we can probably leave it.
Reply: That's a great idea! Correct me if I'm wrong, but does the layer only handle one event at a time? Just wondering to get an idea of how large to make the cache. I guess it can be pretty small anyway, since each event is new and we don't repeat them.
Reply: Each runtime instance will only ever handle one event at a time. It never handles two events concurrently.
Reply: Ah, just realized we can't memoize it because the event is a dict, which isn't hashable. We could serialize the dict and use that as the key, but I'm thinking that'd be much slower.
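As an illustration of the memoization discussion above (not part of this PR): functools.lru_cache hashes its arguments, so it cannot wrap a function that takes the raw event dict, and the serialize-first workaround carries the cost concern raised here. A minimal sketch; the helper names check_by_key and check_cached are hypothetical.

```python
import functools
import json


@functools.lru_cache(maxsize=1)
def check_by_key(event_key: str) -> bool:
    # The JSON string stands in for the event; real logic would run the same
    # membership checks as is_step_function_event in the diff.
    event = json.loads(event_key)
    return "Execution" in event and "StateMachine" in event and "State" in event


def check_cached(event: dict) -> bool:
    # Passing the dict straight to a cached function raises
    # "TypeError: unhashable type: 'dict'", so it has to be serialized first.
    # json.dumps on a large Lambda payload can easily cost more than the
    # handful of dict lookups it would save.
    return check_by_key(json.dumps(event, sort_keys=True, default=str))
```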
+    """
+    Check if the event is a step function that invoked the current lambda.
+
+    The whole event can be wrapped in "Payload" in Legacy Lambda cases. There may also be a
+    "_datadog" for JSONata style context propagation.
+
+    The actual event must contain "Execution", "StateMachine", and "State" fields.
+    """

Review comment (on the docstring above):
Really like these comments. For someone who hasn't worked on step functions in a while, they help me recollect the historical context. It'll help future maintenance of the code as well.
+    event = event.get("Payload", event)
+
+    # JSONPath style
+    if "Execution" in event and "StateMachine" in event and "State" in event:
+        return True
+
+    # JSONata style
+    dd_context = event.get("_datadog")
+    return (
+        dd_context
+        and "Execution" in dd_context
+        and "StateMachine" in dd_context
+        and "State" in dd_context
+        and "serverless-version" in dd_context
+    )
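For reference, a short usage sketch of the new helper with hand-written example payloads (field values are illustrative, not taken from real Step Functions events):

```python
# Assumes is_step_function_event from the diff above is importable.
jsonpath_event = {
    "Execution": {"Id": "example-execution-id"},
    "StateMachine": {"Name": "example-state-machine"},
    "State": {"Name": "InvokeLambda"},
}

jsonata_event = {
    "_datadog": {
        "Execution": {"Id": "example-execution-id"},
        "StateMachine": {"Name": "example-state-machine"},
        "State": {"Name": "InvokeLambda"},
        "serverless-version": "v1",
    }
}

# Legacy Lambda case: the whole event is wrapped under "Payload".
legacy_event = {"Payload": jsonpath_event}

assert is_step_function_event(jsonpath_event)       # JSONPath style
assert is_step_function_event(jsonata_event)        # JSONata style
assert is_step_function_event(legacy_event)         # unwrapped internally
assert not is_step_function_event({"Records": []})  # e.g. an SQS event
```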
@@ -45,7 +45,6 @@
     is_authorizer_response,
     tracer,
     propagator,
-    is_legacy_lambda_step_function,
 )
 from datadog_lambda.trigger import (
     extract_trigger_tags,
@@ -279,8 +278,6 @@ def _before(self, event, context):
         self.response = None
         set_cold_start(init_timestamp_ns)
         submit_invocations_metric(context)
-        if is_legacy_lambda_step_function(event):
-            event = event["Payload"]
         self.trigger_tags = extract_trigger_tags(event, context)
         # Extract Datadog trace context and source from incoming requests
         dd_context, trace_context_source, event_source = extract_dd_trace_context(

Review comment (on the removed lines above):
Moved this unwrapping to happen inside of is_step_function_event.
Review thread:
Comment: I am curious about how the concatenation of two queues (e.g., SFN → EventBridge → SQS → Lambda) is handled. Is it achieved by extracting two different contexts in the Python tracer? Does this mean that it also supports SFN → EventBridge → SQS → SNS → Lambda?
Reply: SFN → EventBridge → SQS → Lambda is handled with an explicit check in the SQS extractor for a nested EventBridge event. SFN → SNS → SQS → Lambda is handled very similarly, with another explicit check in the SQS extractor looking for nested SNS events. We don't handle SFN → SQS → SNS → Lambda AFAIK, and we wouldn't be able to handle SFN → EventBridge → SQS → SNS → Lambda out of the box either. But this is only because it's not explicitly handled. The current Python layer implementation is messy because it relies on explicit handling; a perfect solution would be one where everything is handled recursively, so customers can nest an arbitrary number of supported services without explicit handling. I think the AWS team would like to do something like this in bottlecap.
Reply: @avedmala Thanks for the explanation. Very informative. I am guessing that a recursive solution should not be that complicated? @purple4reina @joeyzhao2018
Reply: Regarding the "recursive solution", is it written down in any RFC? It sounds interesting and might be able to solve some other problems.
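To make the "recursive solution" idea concrete, here is a rough sketch only, not an implementation from any Datadog library or RFC; the function names (unwrap_once, extract_context_recursively) and the exact unwrapping rules are assumptions about how nested carriers could be peeled off generically.

```python
import json
from typing import Optional


def unwrap_once(event: dict) -> Optional[dict]:
    """Peel off one layer of nesting, if the event wraps an upstream event."""
    records = event.get("Records")
    if records:
        record = records[0]
        # SQS carries the upstream payload in "body"; SNS carries it in Sns.Message.
        body = record.get("body") or record.get("Sns", {}).get("Message")
        if isinstance(body, str):
            try:
                return json.loads(body)
            except ValueError:
                return None
        return body if isinstance(body, dict) else None
    # EventBridge carries the upstream payload under "detail".
    if "detail-type" in event and isinstance(event.get("detail"), dict):
        return event["detail"]
    return None


def extract_context_recursively(event: dict, max_depth: int = 10) -> Optional[dict]:
    """Walk inward until a trace-context carrier (here "_datadog") is found."""
    depth = 0
    while isinstance(event, dict) and depth < max_depth:
        if "_datadog" in event:
            return event["_datadog"]
        event = unwrap_once(event)
        depth += 1
    return None
```

With something like this, SFN → EventBridge → SQS → Lambda and deeper chains would fall out of the same loop instead of needing a dedicated extractor per combination.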