Add "aws.ecs.task.id" detection to "resourcedetection" processor #8274
I have found a working workaround: I configure the collector to extract the task ID from the task ARN with a regex, and then reference the extracted attribute when configuring the exporter. Still, not having to do this would be the preferred way.
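The original config snippets did not survive the page extraction. Below is a hypothetical sketch of such a regex-based workaround using the `transform` processor with OTTL. Every statement here is a reconstruction from the discussion, not the poster's original config, and OTTL function availability varies by collector version:

```yaml
processors:
  transform:
    metric_statements:
      - context: resource
        statements:
          # Capture the last path segment of the task ARN into a temporary
          # "task_id" attribute; the pattern tolerates both the old
          # "task/<id>" and the new "task/<cluster>/<id>" ARN layouts.
          - merge_maps(attributes, ExtractPatterns(attributes["aws.ecs.task.arn"], "task/(?:[^/]+/)?(?P<task_id>[^/]+)$"), "upsert")
          - set(attributes["aws.ecs.task.id"], attributes["task_id"])
          - delete_key(attributes, "task_id")
```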
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping `@open-telemetry/collector-contrib-triagers`.
Bad, bad bot :P
We also have a similar setup with App Mesh + ECS Fargate and are seeing the same issue. Thanks for also including the workaround @mkielar.
Bad bot. The issue is still relevant and everything I wrote on Nov 16th still holds. Pinging @open-telemetry/collector-contrib-triagers, as suggested (hahah, it doesn't work :P).
Pinging code owners for processor/resourcedetection: @Aneurysm9 @dashpole. See Adding Labels via Comments if you do not have permissions to add labels yourself.
I guess I'm just gonna keep it alive then ;)
Bad bot :P
Code owners, are you able to provide feedback on this one?
I don't see any issues with this. @mkielar, would you be willing to contribute a PR for this? I would expect the implementation to extract the task ID out of the task ARN, similar to your regex.
Hi @bryan-aguilar, I finally found time to learn Go and made an attempt at implementing this feature. The PR is here: #29602, please have a look. Also, pinging code owners: @Aneurysm9 @dashpole
…r ECS Tasks (#8274) (#29602)

**Description:** The `resourcedetection` processor now populates the `aws.ecs.task.id` property (in addition to the other `aws.ecs.task.*` properties). This simplifies configuration of `awsemfexporter`, which internally searches for the `aws.ecs.task.id` property when using the `TaskId` placeholder in the `loggroup` / `logstream` name template.

**Link to tracking issue:** #8274

**Testing:** ECS task ARNs come in two versions. In the old one, the last part of the ARN contains only `task/<task-id>`. In the new one, it contains `task/<cluster-name>/<task-id>`. Implementation and unit tests have been added to handle both cases.

**Documentation:** `README.md` now also mentions `aws.ecs.task.id` as an inferred property for ECS.
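Both ARN layouts described in the testing notes can be handled by taking the last path segment after `task/`. A minimal sketch in Go (the `taskIDFromARN` helper name is made up for illustration; the merged PR's actual code may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// taskIDFromARN extracts the ECS task ID from a task ARN.
// Old-style ARNs end in "task/<task-id>"; new-style ARNs end in
// "task/<cluster-name>/<task-id>". In both layouts the task ID is
// the last "/"-separated segment.
func taskIDFromARN(arn string) string {
	idx := strings.Index(arn, ":task/")
	if idx == -1 {
		return "" // not a task ARN
	}
	segments := strings.Split(arn[idx+len(":task/"):], "/")
	return segments[len(segments)-1]
}

func main() {
	oldARN := "arn:aws:ecs:us-west-2:123456789012:task/8d68f140"
	newARN := "arn:aws:ecs:us-west-2:123456789012:task/my-cluster/8d68f140"
	fmt.Println(taskIDFromARN(oldARN)) // 8d68f140
	fmt.Println(taskIDFromARN(newARN)) // 8d68f140
}
```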
Can this issue be closed now? I see it has been implemented here: #29602 |
Is your feature request related to a problem? Please describe.
I'm using OTEL as a sidecar with ECS Services. I use it to parse and filter the StatsD metrics that AppMesh/Envoy produces, and then I use `emfexporter` to put the metrics to CloudWatch via a CloudWatch Log Stream. This mostly works. However, when my ECS Service scales to multiple instances, I often see an error in my logs. This is caused by a race condition: two nodes now write to the same log stream in CloudWatch, and they corrupt each other's `sequenceToken`, which the AWS API requires to put logs to CloudWatch.

Describe the solution you'd like
I was hoping to additionally configure the `resourcedetection` processor so that I would be able to use the `{TaskId}` dynamic field when configuring `emfexporter`. However, when I run my service, I can see that `resourcedetection` detects only the other `aws.ecs.task.*` attributes; `aws.ecs.task.id` is not among them.
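The exporter side of the desired setup would look roughly like this (a sketch assuming `awsemf`'s documented `{TaskId}` placeholder; the log group and stream names are made up):

```yaml
exporters:
  awsemf:
    log_group_name: "/ecs/my-service"   # hypothetical log group
    # One stream per task avoids the sequenceToken race between replicas.
    log_stream_name: "otel-{TaskId}"
```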
Describe alternatives you've considered
I tried to use `TaskARN`, but that just led to no LogStream being created at all. Most likely, the reason is that task ARNs contain characters that are illegal in a LogStream name, so `emfexporter` fails silently, not being able to create one.

Additional context
N/A.