I have generated 20000 spans, and the spans are received by the OpenTelemetry Collector. However, during export, the collector pod's logs show that it gets stuck at a particular span number every time, so I cannot see the complete span data in the backend.
This is the configuration I am using, with otel_version="v0.107.0". It is a customized binary containing plugins from both the core and contrib repositories.
Configuration:
```yaml
exporters:
  debug:
    verbosity: detailed
  opensearch:
    http:
      endpoint: ${env:SE_SERVER_URLS}
      tls:
        ca_file: ${env:ROOT_CA_CERT}
        cert_file: ${env:CLIENT_CRT}
        key_file: ${env:CLIENT_KEY}
extensions:
  memory_ballast: {}
  health_check:
    endpoint: ${env:MY_POD_IP}:13133
  jaegerremotesampling:
    source:
      reload_interval: 0s
      file: /etc/sampling/samplingstrategies.json
processors:
  batch: {}
  memory_limiter:
    # check_interval is the time between measurements of memory usage.
    check_interval: 5s
    # By default limit_mib is set to 80% of ".Values.resources.limits.memory".
    limit_percentage: 80
    # By default spike_limit_mib is set to 25% of ".Values.resources.limits.memory".
    spike_limit_percentage: 25
  probabilistic_sampler:
    sampling_percentage: 100
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:MY_POD_IP}:4317
        tls:
          cert_file: ${env:SERVER_CRT}
          key_file: ${env:SERVER_KEY}
      http:
        endpoint: ${env:MY_POD_IP}:4318
        tls:
          cert_file: ${env:SERVER_CRT}
          key_file: ${env:SERVER_KEY}
service:
  extensions:
    - memory_ballast
    - health_check
    - jaegerremotesampling
  pipelines:
    traces:
      exporters:
        - debug
        - opensearch
      processors:
        - memory_limiter
        - batch
        - probabilistic_sampler
      receivers:
        - otlp
```
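For context on how the spans were generated: below is a minimal sketch of an equivalent load generator using the OTel Go SDK. It is illustrative, not my exact generator; the `localhost:4317` endpoint, the `WithInsecure()` option, and the span names are placeholder assumptions (my real setup sends to the TLS endpoints configured above).

```go
package main

import (
	"context"
	"fmt"
	"log"

	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracegrpc"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
)

func main() {
	ctx := context.Background()

	// Placeholder endpoint; the real collector listens with TLS on ${MY_POD_IP}:4317.
	exp, err := otlptracegrpc.New(ctx,
		otlptracegrpc.WithEndpoint("localhost:4317"),
		otlptracegrpc.WithInsecure(),
	)
	if err != nil {
		log.Fatalf("failed to create OTLP exporter: %v", err)
	}

	tp := sdktrace.NewTracerProvider(sdktrace.WithBatcher(exp))
	defer func() { _ = tp.Shutdown(ctx) }() // flushes any remaining spans on exit
	otel.SetTracerProvider(tp)

	// Emit 20000 trivial spans; the batch span processor exports them over OTLP/gRPC.
	tracer := otel.Tracer("load-test")
	for i := 0; i < 20000; i++ {
		_, span := tracer.Start(ctx, fmt.Sprintf("span-%d", i))
		span.End()
	}
}
```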
Kubernetes resource specifications:
```yaml
resources:
  telemetry-collector:
    requests:
      memory: 64Mi
      cpu: 250m
    limits:
      memory: 128Mi
      cpu: 500m
```
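For reference, these are the thresholds the memory_limiter derives from the 128Mi container limit (they match the startup logs below; per my understanding of the memory_limiter docs, the soft limit, above which the processor starts refusing data, is limit_mib minus spike_limit_mib):

```
limit_mib       = 128 MiB * 0.80 ≈ 102 MiB   (hard limit)
spike_limit_mib = 128 MiB * 0.25 = 32 MiB
soft limit      = 102 MiB - 32 MiB = 70 MiB  (data refused above this)
```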
Server startup logs:
```
2024-10-30T12:15:48.331Z info memorylimiter/memorylimiter.go:151 Using percentage memory limiter {"kind": "processor", "name": "memory_limiter", "pipeline": "traces", "total_memory_mib": 128, "limit_percentage": 80, "spike_limit_percentage": 25}
2024-10-30T12:15:48.331Z info memorylimiter/memorylimiter.go:75 Memory limiter configured {"kind": "processor", "name": "memory_limiter", "pipeline": "traces", "limit_mib": 102, "spike_limit_mib": 32, "check_interval": 5}
2024-10-30T12:15:48.333Z info service@v0.107.0/service.go:195 Starting otelcol-custom... {"Version": "1.0.0", "NumCPU": 8}
```