Description
I am publishing metrics like this:
"""Logging metedata utils"""
import os
import time
from aws_lambda_powertools import Logger, Metrics
from aws_lambda_powertools.metrics import MetricUnit
METRICS = Metrics()
METRICS.add_dimension(name="environment", value=os.environ["NODE_ENV"])
LOGGER = Logger(child=True)
def log_processing_statistics(start_processing_timestamp, annotation_metadata):
"""
Logs some metrics on how long it took to process or receive items.
:param start_processing_timestamp: when we started processing an annotation
:param dict annotation_metadata: keys include end_timestamp and queue_arrival_timestamp
"""
now = time.time()
queue_arrival_timestamp = annotation_metadata["queue_arrival_timestamp"]
queue_to_processed_time = now - queue_arrival_timestamp
LOGGER.info("[META]:queue_to_processed_time: %.2f", queue_to_processed_time)
METRICS.add_metric(name="entrance_counter_queue_to_processed_time",
unit=MetricUnit.Seconds,
value=queue_to_processed_time)
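For context, a minimal sketch of how this helper might be called once per message during an invocation (the module name, event shape and body fields here are assumptions for illustration, not the actual code):

import json
import time

from logging_metadata_utils import log_processing_statistics  # hypothetical module name for the file above


def process_record(record):
    """Hypothetical per-message processing that feeds the helper above."""
    start_processing_timestamp = time.time()
    # Assumes a queue (SQS-style) record whose body carries queue_arrival_timestamp as epoch seconds
    annotation_metadata = json.loads(record["body"])
    # ... actual processing of the annotation happens here ...
    log_processing_statistics(start_processing_timestamp, annotation_metadata)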
My Lambda handler's top-level function is decorated like this:
METRICS = Metrics()
METRICS.add_dimension(name="environment", value=os.environ["NODE_ENV"])
LOGGER = Logger(datefmt="%Y-%m-%dT%H:%M:%SZ")
LOGGER._logger.handlers[0].formatter.converter = time.gmtime  # pylint: disable=protected-access
log.initialize_structured_logging()
TRACER = Tracer()  # Sets service via env var


@METRICS.log_metrics(capture_cold_start_metric=
                     os.environ.get("POWERTOOLS_METRICS_CAPTURE_COLD_START",
                                    "false").lower() == "true")
@TRACER.capture_lambda_handler(capture_error=True)
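For completeness, a sketch of the decorated handler itself, continuing the snippet above (the function name and event handling are assumptions; as I understand it, log_metrics serializes whatever was added via add_metric during the invocation and prints it as a single EMF JSON blob to CloudWatch Logs when the handler returns):

@METRICS.log_metrics(capture_cold_start_metric=True)  # cold-start flag shown as a literal here for brevity
@TRACER.capture_lambda_handler(capture_error=True)
def handler(event, context):
    """Hypothetical handler: each invocation that adds at least one metric
    should emit exactly one EMF blob on return."""
    for record in event.get("Records", []):
        process_record(record)  # adds one queue_to_processed_time value per item
    return {"status": "ok"}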
I expect to see CloudWatch metric datapoints every few minutes, but instead they show up only every 40 minutes or so, or at least at erratic times with a much larger interval than the rate at which they are being emitted.
I have checked my CloudWatch logs and found the following two entries, which suggest that at some level things are working as expected:
{
"_aws": {
"Timestamp": 1618965743152,
"CloudWatchMetrics": [
{
"Namespace": "ThriEntranceCounterLambdaGenesis",
"Dimensions": [
[
"service"
]
],
"Metrics": [
{
"Name": "entrance_counter_queue_to_processed_time",
"Unit": "Seconds"
}
]
}
]
},
"service": "ThriEntranceCounterLambdaGenesis",
"entrance_counter_queue_to_processed_time": [
60.368706703186035
]
}
and
{
"_aws": {
"Timestamp": 1618965868129,
"CloudWatchMetrics": [
{
"Namespace": "ThriEntranceCounterLambdaGenesis",
"Dimensions": [
[
"service"
]
],
"Metrics": [
{
"Name": "entrance_counter_queue_to_processed_time",
"Unit": "Seconds"
}
]
}
]
},
"service": "ThriEntranceCounterLambdaGenesis",
"entrance_counter_queue_to_processed_time": [
64.84194445610046
]
}
So I would expect to see metric datapoints at 1618965743152 and 1618965868129,
i.e. Wednesday, 21 April 2021 09:42:23.152 GMT+09:00 and Wednesday, 21 April 2021 09:44:28.129 GMT+09:00 respectively.
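For reference, a quick local conversion of those EMF Timestamp values (nothing Lambda-specific here):

from datetime import datetime, timedelta, timezone

JST = timezone(timedelta(hours=9))  # GMT+09:00

for ts_ms in (1618965743152, 1618965868129):
    print(datetime.fromtimestamp(ts_ms / 1000, tz=JST).strftime("%Y-%m-%d %H:%M:%S %Z"))
# 2021-04-21 09:42:23 UTC+09:00
# 2021-04-21 09:44:28 UTC+09:00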
But instead, I see the following when aggregating with a 1-second Sum:
Am I using this functionality incorrectly? Or is there some sort of built-in default aggregation over a larger time period somewhere?
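For reference, one way to cross-check where the datapoints actually land, independent of the console view (a sketch using boto3's get_metric_data; the namespace and dimension values are taken from the log entries above, and the time window is just an example around those timestamps):

from datetime import datetime, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
response = cloudwatch.get_metric_data(
    MetricDataQueries=[{
        "Id": "m1",
        "MetricStat": {
            "Metric": {
                "Namespace": "ThriEntranceCounterLambdaGenesis",
                "MetricName": "entrance_counter_queue_to_processed_time",
                "Dimensions": [{"Name": "service",
                                "Value": "ThriEntranceCounterLambdaGenesis"}],
            },
            "Period": 60,  # EMF metrics are standard resolution by default, so 60 s is the finest period
            "Stat": "Sum",
        },
    }],
    StartTime=datetime(2021, 4, 21, 0, 30, tzinfo=timezone.utc),
    EndTime=datetime(2021, 4, 21, 1, 30, tzinfo=timezone.utc),
)
for ts, value in zip(response["MetricDataResults"][0]["Timestamps"],
                     response["MetricDataResults"][0]["Values"]):
    print(ts.isoformat(), value)

If the datapoints returned here sit at the EMF timestamps above, the gap would seem to be on the console/aggregation side; if they only appear much later, something is apparently delaying the flush.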
Environment
- Powertools version used:
aws-lambda-powertools==1.12.0
- Packaging format (Layers, PyPi):
A layer created with:
aws-lambda-powertools==1.12.0
boto3==1.17.8
botocore==1.20.8
pytz==2021.1
sentry-sdk==0.20.2
typeguard==2.10.0
- AWS Lambda function runtime:
Python 3.7