Closed
Description
Expected Behaviour
When using two Metrics objects and two nested calls to log_metrics, I expected both sets of metrics to be serialized. The reason for two Metrics objects is that every metric is output twice, each with a different set of dimensions. Using single_metric for everything works but seems sub-optimal from a performance standpoint.
Example:
app = ...  # FastAPI app

# API handler metrics with dimensions: Stage, service
api_metrics = Metrics()
api_metrics.set_default_dimensions(Stage=os.environ['STAGE'])

# API handler metrics with dimensions: Method, Resource, Stage, service
detailed_api_metrics = Metrics()
detailed_api_metrics.set_default_dimensions(Stage=os.environ['STAGE'])

handler = Mangum(app)
handler = logging.getLogger().inject_lambda_context(handler, clear_state=True)

# Add metrics last to properly flush metrics.
handler = api_metrics.log_metrics(handler, capture_cold_start_metric=True)
handler = detailed_api_metrics.log_metrics(handler)
Current Behaviour
Looking in the CloudWatch logs, only the JSON for one set of metrics (api_metrics above) is output.
Code snippet
from aws_lambda_powertools.metrics import Metrics, MetricUnit

# API handler metrics with dimensions: Stage, service
api_metrics = Metrics()
api_metrics.set_default_dimensions(Stage='Test')

# API handler metrics with dimensions: Method, Resource, Stage, service
detailed_api_metrics = Metrics()
detailed_api_metrics.set_default_dimensions(Stage='Test')

def handler(event, context):
    detailed_api_metrics.add_dimension(name='Method', value='GET')
    detailed_api_metrics.add_dimension(name='Resource', value='/some/path')
    api_metrics.add_metric(name='Count', unit=MetricUnit.Count, value=1)
    detailed_api_metrics.add_metric(name='Count', unit=MetricUnit.Count, value=1)

# Add metrics last to properly flush metrics.
handler = api_metrics.log_metrics(handler, capture_cold_start_metric=True)
handler = detailed_api_metrics.log_metrics(handler)
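For reference, the expected behaviour of stacking two flushing decorators can be illustrated with plain Python. This is a toy sketch, not Powertools code: FakeMetrics and the flushed list are invented names, and the real library serializes into the CloudWatch EMF format rather than this simplified shape.

```python
import functools
import json

flushed = []  # collects the serialized blob each decorator emits on exit

class FakeMetrics:
    """Toy stand-in for a per-object metrics buffer (NOT the Powertools implementation)."""
    def __init__(self, name):
        self.name = name
        self.buffer = []

    def add_metric(self, **metric):
        self.buffer.append(metric)

    def log_metrics(self, func):
        @functools.wraps(func)
        def wrapper(event, context):
            try:
                return func(event, context)
            finally:
                # Each decorator flushes only its own buffer on the way out.
                flushed.append(json.dumps({self.name: self.buffer}))
                self.buffer.clear()
        return wrapper

api, detailed = FakeMetrics("api"), FakeMetrics("detailed")

def handler(event, context):
    api.add_metric(name="Count", value=1)
    detailed.add_metric(name="Count", value=1)

# Nesting the two decorators, as in the snippet above.
handler = api.log_metrics(handler)
handler = detailed.log_metrics(handler)
handler({}, None)
print(flushed)  # two JSON blobs, one per metrics object
```

With independent per-instance buffers, both wrappers emit their own JSON blob; the bug report is that the real library emits only one.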
Possible Solution
No response
Steps to Reproduce
- Run the above code on Lambda.
- Look in the logs.
- See that the JSON from one set of metrics is missing.
AWS Lambda Powertools for Python version
1.25.10
AWS Lambda function runtime
3.9
Packaging format used
PyPI
Debugging logs
I do see the following two things in the log:
/opt/python/aws_lambda_powertools/metrics/metrics.py:189: UserWarning: No metrics to publish, skipping
warnings.warn("No metrics to publish, skipping")
Note that the two Metrics objects are used unconditionally, right after each other, so it would be strange for one to have metrics and the other to have none.
{
"_aws": {
"Timestamp": 1666944145372,
"CloudWatchMetrics": [
{
"Namespace": "Benetics/Backend",
"Dimensions": [
[
"Stage",
"service"
]
],
"Metrics": [
{
"Name": "Count",
"Unit": "Count"
},
{
"Name": "4xx",
"Unit": "Count"
},
{
"Name": "5xx",
"Unit": "Count"
},
{
"Name": "Latency",
"Unit": "Milliseconds"
}
]
}
]
},
"Stage": "prod",
"service": "Api",
"Count": [
1,
1
],
"4xx": [
0,
0
],
"5xx": [
0,
0
],
"Latency": [
184,
184
]
}
Activity
boring-cyborg commented on Oct 28, 2022
Thanks for opening your first issue here! We'll come back to you as soon as we can.
In the meantime, check out the #python channel on our AWS Lambda Powertools Discord: Invite link
tibbe commented on Oct 28, 2022
Here's a standalone example, with the logging turned up to 11, showing that only one set of metrics gets output.
Run it with pytest.
tibbe commented on Oct 28, 2022
The issue seems to be these 4 class attributes in Metrics:

This causes sharing across Metrics instances, which makes things not work out. Just making these instance attributes fixes the issue. That said, I don't know what we lose. Presumably, if people create Metrics instances all over the place instead of, e.g., using one instance per dimension combination, there will be extra allocation.
heitorlessa commented on Oct 28, 2022
We briefly discussed on Discord why this is expected behaviour, and I'll provide a proper answer as soon as I can.
I'll rephrase my last comment on that Discord thread as a proper response here and gather your ideas on what UX would be good to unlock this and other ISV use cases (e.g., multiple namespaces).
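The class-attribute sharing diagnosed above can be reproduced with plain Python. This is a hypothetical mimic of the suspected bug, not the actual Powertools code: when the metric buffer is a class attribute, every instance mutates the same dict, and whichever instance flushes first empties the buffer for the other, which then warns "No metrics to publish".

```python
class SharedMetrics:
    # Class-level attribute: every instance reads and writes the SAME dict,
    # mimicking the suspected sharing bug (names are illustrative).
    metric_set = {}

    def add_metric(self, name, value):
        self.metric_set[name] = value

    def flush(self):
        snapshot = dict(self.metric_set)
        self.metric_set.clear()  # clears the shared buffer for everyone
        return snapshot

a, b = SharedMetrics(), SharedMetrics()
a.add_metric("Count", 1)
print(b.metric_set)  # b already "has" a's metric: {'Count': 1}
print(a.flush())     # {'Count': 1}
print(b.flush())     # {} -- nothing left, hence "No metrics to publish"
```

Moving the assignment into `__init__` (`self.metric_set = {}`) gives each instance its own buffer, which matches the "just making these instance attributes fixes the issue" observation.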
heitorlessa commented on Oct 31, 2022
Confirming that we'll be working on this for this Friday's release. In that thread (Discord), we weren't able to find a better class name. Based on what we know, the most suitable candidate would be a flag on the existing Metrics class:

Metrics(singleton=False)

At first glance, the only "side effect" is that the set_default_dimensions method would have the same effect as add_dimension in this case -- when singleton=True (the default), set_default_dimensions ensures these dimensions are automatically added when metrics are flushed/cleared.
If you do have a better naming idea for a separate class altogether, please shout out!