An exporter is how data gets sent to different systems/back-ends. Generally, an exporter translates the internal format into another defined format.
Supported trace exporters (sorted alphabetically):
- Jaeger
- OpenCensus
- Zipkin
Supported metric exporters (sorted alphabetically):
- OpenCensus
- Prometheus
Supported local exporters (sorted alphabetically):
- File
- Logging
The contributors repository has more exporters that can be added to custom builds of the Collector.
Beyond standard YAML configuration as outlined in the sections that follow, exporters that leverage the net/http package (all do today) also respect the following proxy environment variables:
- HTTP_PROXY
- HTTPS_PROXY
- NO_PROXY
If these are set at Collector start time, exporters, regardless of protocol, will proxy traffic or bypass the proxy as defined by these environment variables.
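For illustration only, one way these variables might be supplied to a containerized Collector is sketched below; the service name, image, and proxy address are placeholders and not part of this documentation:

services:
  otel-collector:
    image: example/opentelemetry-collector:latest   # placeholder image
    environment:
      - HTTPS_PROXY=http://proxy.internal.example:3128   # placeholder proxy address
      - NO_PROXY=localhost,127.0.0.1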
When multiple exporters are configured to send the same data (for example, by configuring multiple exporters for the same pipeline, as sketched below), the exporters share access to that data. Exporters get access to this shared data when ConsumeTraceData/ConsumeMetricsData is called. Exporters MUST NOT modify the TraceData/MetricsData argument of these functions. If an exporter needs to modify the data while performing the export, it can clone the data and perform the modification on the clone, or use a copy-on-write approach for individual sub-parts of the TraceData/MetricsData argument. Any approach that does not mutate the original TraceData/MetricsData argument (including referenced data, such as Node, Resource, Spans, etc.) is allowed.
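For context, the shared-data situation described above arises with a configuration along the following lines; the component names are taken from the examples in this document, and the exact service/pipeline syntax may differ between Collector versions:

exporters:
  jaeger_grpc:
    endpoint: jaeger-all-in-one:14250
  zipkin:
    url: "http://some.url:9411/api/v2/spans"
service:
  pipelines:
    traces:
      receivers: [opencensus]
      # both exporters receive the same (shared) trace data
      exporters: [jaeger_grpc, zipkin]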
Exports trace data to Jaeger collectors accepting one of the following protocols:
- gRPC
- Thrift HTTP
Each supported protocol has its own configuration settings.
For the gRPC protocol, the following settings are required:
- endpoint (no default): target to which the exporter is going to send Jaeger trace data, using the gRPC protocol. The valid syntax is described at https://github.com/grpc/grpc/blob/master/doc/naming.md
The following settings can be optionally configured:
- cert_pem_file: certificate file for TLS credentials of the gRPC client. Should only be used if secure is set to true.
- keepalive: keepalive parameters for the gRPC client. See grpc.WithKeepaliveParams().
- secure: whether to enable client transport security for the exporter's gRPC connection. See grpc.WithInsecure().
- server_name_override: if set to a non-empty string, it will override the virtual host name of authority (e.g. the :authority header field) in requests (typically used for testing).
Example:
exporters:
  jaeger_grpc:
    endpoint: jaeger-all-in-one:14250
    cert_pem_file: /my-cert.pem
    server_name_override: opentelemetry.io
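As an additional, non-authoritative sketch, enabling transport security together with the certificate file could look like the following; the endpoint is a placeholder:

exporters:
  jaeger_grpc:
    endpoint: jaeger-collector.example.com:14250   # placeholder endpoint
    secure: true
    cert_pem_file: /my-cert.pem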
The full list of settings exposed for this exporter is documented here, with detailed sample configurations here.
For the Thrift HTTP protocol, the following settings are required:
- url (no default): target to which the exporter is going to send Jaeger trace data, using the Thrift HTTP protocol.
The following settings can be optionally configured:
- timeout (default = 5s): the maximum time to wait for an HTTP request to complete
- headers (no default): headers to be added to the HTTP request
Example:
exporters:
  jaeger:
    url: "http://some.other.location/api/traces"
    timeout: 2s
    headers:
      added-entry: "added value"
      dot.test: test
The full list of settings exposed for this exporter is documented here, with detailed sample configurations here.
Exports traces and/or metrics to another Collector via gRPC using OpenCensus format.
The following settings are required:
- endpoint: target to which the exporter is going to send traces or metrics, using the gRPC protocol. The valid syntax is described at https://github.com/grpc/grpc/blob/master/doc/naming.md.
The following settings can be optionally configured:
- cert_pem_file: certificate file for TLS credentials of the gRPC client. Should only be used if secure is set to true.
- compression: compression key for supported compression types within the Collector. Currently the only supported mode is gzip.
- headers: the headers associated with gRPC requests.
- keepalive: keepalive parameters for the gRPC client. See grpc.WithKeepaliveParams().
- num_workers (default = 2): number of workers that send the gRPC requests.
- reconnection_delay: time period between each reconnection performed by the exporter.
- secure: whether to enable client transport security for the exporter's gRPC connection. See grpc.WithInsecure().
Example:
exporters:
  opencensus:
    endpoint: localhost:14250
    reconnection_delay: 60s
    secure: false
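As a further hedged sketch, the optional compression, headers, and num_workers settings might be combined as follows; the endpoint and header values are placeholders:

exporters:
  opencensus:
    endpoint: otel-collector-backend:55678   # placeholder endpoint
    compression: "gzip"
    headers:
      x-api-key: "placeholder-value"   # placeholder header
    num_workers: 4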
The full list of settings exposed for this exporter is documented here, with detailed sample configurations here.
Exports trace data to a Zipkin back-end.
The following settings are required:
- format (default = JSON): the format to send events in. Can be set to JSON or proto.
- url (no default): URL to which the exporter is going to send Zipkin trace data.
The following settings can be optionally configured:
- export_resource_labels (temporary flag, default = true): whether Resource labels are going to be merged with span attributes. Note: this flag was added to aid the migration to the new (fixed and symmetric) behavior and is going to be removed soon. See open-telemetry#595 for more details.
- defaultservicename (no default): what to name services missing this information.
Example:
exporters:
  zipkin:
    url: "http://some.url:9411/api/v2/spans"
The full list of settings exposed for this exporter is documented here, with detailed sample configurations here.
The OpenCensus exporter supports both traces and metrics. Configuration information can be found under the trace section here.
Exports metric data to a Prometheus back-end.
The following settings are required:
- endpoint (no default): where to send metric data.
The following settings can be optionally configured:
- constlabels (no default): key/values that are applied for every exported metric.
- namespace (no default): if set, exports metrics under the provided value.
Example:
exporters:
  prometheus:
    endpoint: "1.2.3.4:1234"
    namespace: test-space
    const_labels:
      label1: value1
      "another label": spaced value
The full list of settings exposed for this exporter is documented here, with detailed sample configurations here.
Local exporters send data to a local endpoint such as the console or a log file.
This exporter writes the pipeline data to a JSON file. The data is written in Protobuf JSON encoding (https://developers.google.com/protocol-buffers/docs/proto3#json). Note that there are no compatibility guarantees for this format, since it is just a dump of internal structures which can change over time. It is intended primarily for debugging the Collector without setting up back-ends.
The following settings are required:
- path (no default): where to write information.
Example:
exporters:
  file:
    path: ./filename.json
The full list of settings exposed for this exporter is documented here, with detailed sample configurations here.
Exports traces and/or metrics to the console via zap.Logger.
The following settings can be configured:
- loglevel (default = info): the log level of the logging export (debug|info|warn|error).
Example:
exporters:
  logging:
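To change the verbosity, the loglevel setting described above can be added, for example:

exporters:
  logging:
    loglevel: debug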
The full list of settings exposed for this exporter is documented here, with detailed sample configurations here.