Ship your AWS Lambda telemetry logs anywhere, with no code changes.
Add the layer to your Lambda function (read how):

`LAYER_ARN=arn:aws:lambda:<region>:114300393969:layer:lumigo-telemetry-shipper:1`
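
If you prefer to script this step, here is a minimal sketch using boto3; the function name, region, and layer version are placeholders:

```python
# Sketch: attach the telemetry-shipper layer to an existing function with boto3.
# "my-function" and the region/version in the ARN are placeholders - adjust them.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

layer_arn = "arn:aws:lambda:us-east-1:114300393969:layer:lumigo-telemetry-shipper:1"

# Keep any layers that are already attached and append the new one.
current = lambda_client.get_function_configuration(FunctionName="my-function")
existing_layers = [layer["Arn"] for layer in current.get("Layers", [])]

lambda_client.update_function_configuration(
    FunctionName="my-function",
    Layers=existing_layers + [layer_arn],
)
```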
(Optional) Choose a shipping method:
- S3 - Set the environment variable `LUMIGO_EXTENSION_LOG_S3_BUCKET` to your target bucket. Don't forget to grant the function's role the proper permissions (see the sketch below).
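
As a rough illustration of those permissions, here is a minimal boto3 sketch that grants the execution role write access to the bucket; the role name, bucket name, and the exact set of actions are assumptions:

```python
# Sketch: grant the function's execution role write access to the target bucket.
# The role name, bucket name, and the exact actions are assumptions - adapt them
# to your account and security requirements.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:PutObject"],
            "Resource": "arn:aws:s3:::my-telemetry-bucket/*",
        }
    ],
}

iam.put_role_policy(
    RoleName="my-function-role",
    PolicyName="lumigo-telemetry-shipper-s3",
    PolicyDocument=json.dumps(policy),
)
```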
(Optional) Choose other metrics handlers:
- `LUMIGO_EXTENSION_TIMEOUT_TARGET_METRIC` indicates the name of the CloudWatch metric to which we report timeouts. Note that the Lambda's role must have permission to put this metric (see the illustration below).

Please contribute or open a ticket for more integrations.
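
For illustration, this is roughly the kind of CloudWatch call the timeout handler would make; the namespace and metric name below are assumptions, and the role must allow `cloudwatch:PutMetricData`:

```python
# Illustration only: the kind of CloudWatch call used to report a timeout.
# The namespace and metric name are assumptions; set the metric name via
# LUMIGO_EXTENSION_TIMEOUT_TARGET_METRIC and make sure the Lambda's role
# allows cloudwatch:PutMetricData.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_data(
    Namespace="MyApp/Lambda",  # assumption
    MetricData=[
        {
            "MetricName": "my-timeouts-metric",  # the env var's value
            "Value": 1,
            "Unit": "Count",
        }
    ],
)
```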
(Optional) Choose shipping parameters:
- `LUMIGO_EXTENSION_LOG_BATCH_SIZE` (default `1000`) indicates the target size (in bytes) of each log file. We will aggregate logs to at least this size before shipping.
- `LUMIGO_EXTENSION_LOG_BATCH_TIME` (default `60000`) indicates the target time (in milliseconds) before closing a log file. We will aggregate logs for at least this period before shipping.

Note: We will ship the logs immediately when the container is shutting down. Therefore, log files can be smaller and more frequent than the above configuration implies. A rough sketch of the flush logic is shown below.
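
The sketch below illustrates the batching semantics described above; `LogAggregator` and `ship` are illustrative names, not the project's actual classes:

```python
# Simplified sketch of the batching behaviour described above - not the real code.
import os
import time
from typing import Callable, List

batch_size = int(os.environ.get("LUMIGO_EXTENSION_LOG_BATCH_SIZE", "1000"))      # bytes
batch_time_ms = int(os.environ.get("LUMIGO_EXTENSION_LOG_BATCH_TIME", "60000"))  # milliseconds


class LogAggregator:
    def __init__(self, ship: Callable[[List[bytes]], None]):
        self.ship = ship  # the chosen shipping method (e.g. S3)
        self.records: List[bytes] = []
        self.current_size = 0
        self.batch_started_at = time.time()

    def add(self, record: bytes) -> None:
        self.records.append(record)
        self.current_size += len(record)
        elapsed_ms = (time.time() - self.batch_started_at) * 1000
        # Ship once the batch is big enough *or* old enough.
        if self.current_size >= batch_size or elapsed_ms >= batch_time_ms:
            self.flush()

    def flush(self) -> None:
        # Also called on container shutdown, so batches can be smaller and more frequent.
        if self.records:
            self.ship(self.records)
        self.records, self.current_size = [], 0
        self.batch_started_at = time.time()
```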
(Optional) Disable the S3 logging prefix:
- `LUMIGO_EXTENSION_DISABLE_LOG_PREFIX` (default `False`) indicates that the target file name should not be prefixed with the timestamp and record type.
We use the new Lambda extensions feature to spawn a separate process that handles your logs. The Lambda service invokes this process with the function's logs, which are aggregated in memory and then transferred to your chosen shipping method.
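
For context, here is a rough sketch of how such an extension registers and subscribes to logs, based on AWS's Extensions and Logs APIs; the payload details and port are simplified and may differ from our actual implementation:

```python
# Rough sketch of how a Lambda extension receives logs - not this project's exact code.
# Endpoints are from AWS's Extensions/Logs APIs; payload details may differ.
import json
import os
import urllib.request

runtime_api = os.environ["AWS_LAMBDA_RUNTIME_API"]

# 1. Register as an external extension.
register_req = urllib.request.Request(
    f"http://{runtime_api}/2020-01-01/extension/register",
    data=json.dumps({"events": ["INVOKE", "SHUTDOWN"]}).encode(),
    headers={"Lambda-Extension-Name": "lumigo-telemetry-shipper"},
    method="POST",
)
with urllib.request.urlopen(register_req) as resp:
    extension_id = resp.headers["Lambda-Extension-Identifier"]

# 2. Subscribe to the Logs API, asking Lambda to POST logs to a local HTTP listener.
subscribe_req = urllib.request.Request(
    f"http://{runtime_api}/2020-08-15/logs",
    data=json.dumps(
        {
            "schemaVersion": "2020-08-15",
            "types": ["platform", "function"],
            "destination": {"protocol": "HTTP", "URI": "http://sandbox.localdomain:8080"},
        }
    ).encode(),
    headers={"Lambda-Extension-Identifier": extension_id, "Content-Type": "application/json"},
    method="PUT",
)
urllib.request.urlopen(subscribe_req)
# The extension then serves HTTP on the chosen port, aggregates the received
# log records in memory, and hands batches to the configured shipping method.
```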
Our extension is written in Python 3.7. The tests use pytest and moto to test the interaction with AWS services.
To add another telemetry-handling method, open a PR with 4 file changes:
- `src/lambda_telemetry_shipper/telemetry_handlers/your_new_handler.py` - Contains a class that extends `TelemetryHandler` and implements the methods `def should_handle(self, record: TelemetryRecord)` and `def handle(self, record: TelemetryRecord)` (see the sketch below).
- `test/lambda_telemetry_shipper/telemetry_handlers/test_your_new_handler.py` - Contains your tests.
- `src/lambda_telemetry_shipper/telemetry_handlers/__init__.py` - Update this file with an import of your class.
- `src/lambda_telemetry_shipper/configuration.py` - Contains your new configuration properties.
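
A minimal sketch of such a handler; the import paths and the filtering condition are assumptions, so mirror the existing handlers in the repository:

```python
# Sketch of a new telemetry handler; the import paths below are assumptions -
# mirror the existing handlers in src/lambda_telemetry_shipper/telemetry_handlers/.
from lambda_telemetry_shipper.telemetry_handlers.base_handler import TelemetryHandler  # assumed path
from lambda_telemetry_shipper.utils import TelemetryRecord  # assumed path


class MyNewHandler(TelemetryHandler):
    def should_handle(self, record: TelemetryRecord) -> bool:
        # Decide whether this record is relevant for this handler.
        return "timeout" in str(record).lower()  # illustrative condition

    def handle(self, record: TelemetryRecord) -> None:
        # React to the record, e.g. report it to an external system.
        print(f"handling record: {record}")
```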
Similarly, to add another log-shipping method, open a PR with 4 file changes:
- `src/lambda_telemetry_shipper/export_logs_handlers/your_new_handler.py` - Contains a class that extends `ExportLogsHandler` and implements the method `def handle_logs(self, records: List[LogRecord]) -> bool` (see the sketch below).
- `test/lambda_telemetry_shipper/export_logs_handlers/test_your_new_handler.py` - Contains your tests.
- `src/lambda_telemetry_shipper/export_logs_handlers/__init__.py` - Update this file with an import of your class.
- `src/lambda_telemetry_shipper/configuration.py` - Contains your new configuration properties.
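
A minimal sketch of such a handler; the import paths and the meaning of the return value are assumptions, so mirror the existing handlers in the repository:

```python
# Sketch of a new log-shipping handler; the import paths below are assumptions -
# mirror the existing handlers in src/lambda_telemetry_shipper/export_logs_handlers/.
from typing import List

from lambda_telemetry_shipper.export_logs_handlers.base_handler import ExportLogsHandler  # assumed path
from lambda_telemetry_shipper.utils import LogRecord  # assumed path


class MyNewLogsHandler(ExportLogsHandler):
    def handle_logs(self, records: List[LogRecord]) -> bool:
        # Ship the batch somewhere; returning True is assumed to signal that
        # the records were exported successfully.
        for record in records:
            print(record)
        return True
```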
You can run the tests with `./scripts/checks.sh`, which also checks linting and code conventions.
You can upload a private version of the extension with `./scripts/deploy.sh` for local testing. This script will output the ARN of your local layer.