[chore] removing duplicated data in readme #20061

Merged · 1 commit · Mar 21, 2023
2 changes: 0 additions & 2 deletions exporter/pulsarexporter/README.md
@@ -9,8 +9,6 @@
 Pulsar exporter exports logs, metrics, and traces to Pulsar. This exporter uses a synchronous producer
 that blocks and is able to batch messages.

-Supported pipeline types: logs, metrics, traces
-
 ## Get Started

 The following settings can be optionally configured:
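
For orientation, a minimal collector configuration for this exporter might look like the sketch below. The `endpoint`, `topic`, and `encoding` fields are assumptions for illustration and are not part of this diff; the README's "Get Started" section lists the authoritative settings.

```yaml
exporters:
  pulsar:
    # Illustrative values only; consult the pulsarexporter README for the full settings list.
    endpoint: pulsar://localhost:6650   # Pulsar broker service URL
    topic: otlp_spans                   # topic the telemetry is published to
    encoding: otlp_proto                # payload encoding
```
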
2 changes: 0 additions & 2 deletions processor/deltatorateprocessor/README.md
@@ -8,8 +8,6 @@

 **Status: under development; Not recommended for production usage.**

-Supported pipeline types: metrics
-
 ## Description

 The delta to rate processor (`deltatorateprocessor`) converts delta sum metrics to rate metrics. This rate is a gauge.
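
A hedged sketch of how this processor might be configured; the `metrics` list field and the metric names are assumptions based on the description above (each listed delta sum would be converted to a rate gauge).

```yaml
processors:
  deltatorate:
    # Hypothetical metric names; each listed delta sum is converted to a rate gauge.
    metrics:
      - http.server.request.count
      - system.network.dropped
```
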
4 changes: 1 addition & 3 deletions processor/probabilisticsamplerprocessor/README.md
@@ -7,14 +7,12 @@
 | Supported pipeline types | traces, logs |
 | Distributions | [core], [contrib] |

-Supported pipeline types: traces, logs
-
 The probabilistic sampler supports two types of sampling for traces:

 1. `sampling.priority` [semantic
 convention](https://github.com/opentracing/specification/blob/master/semantic_conventions.md#span-tags-table)
 as defined by OpenTracing
-2. Trace ID hashing
+1. Trace ID hashing

 The `sampling.priority` semantic convention takes priority over trace ID hashing. As the name
 implies, trace ID hashing samples based on hash values determined by trace IDs. See [Hashing](#hashing) for more information.
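
To illustrate the trace ID hashing path described above, a minimal configuration sketch (the values are examples only):

```yaml
processors:
  probabilistic_sampler:
    sampling_percentage: 15   # keep roughly 15% of traces
    hash_seed: 22             # use the same seed on every collector for consistent decisions
```

Spans carrying a `sampling.priority` attribute bypass the hash-based decision, as noted in the README text above.
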
4 changes: 1 addition & 3 deletions receiver/googlecloudspannerreceiver/README.md
@@ -17,9 +17,7 @@ by exposing via [Total and Top N built-in tables](https://cloud.google.com/spann
 _Note_: Total and Top N built-in tables are used with 1 minute statistics granularity.

 The ultimate goal of Google Cloud Spanner Receiver is to collect and transform those statistics into metrics
-that would be convenient for further analysis by users.
-
-Supported pipeline types: metrics
+that would be convenient for further analysis by users.

 ## Getting Started

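
A rough sketch of what a configuration for this receiver might look like; the `projects`/`instances`/`databases` layout and the field names are assumptions for illustration, not taken from this diff.

```yaml
receivers:
  googlecloudspanner:
    collection_interval: 60s              # matches the 1 minute statistics granularity noted above
    projects:
      - project_id: my-project            # hypothetical project
        service_account_key: /etc/otel/spanner-sa.json
        instances:
          - instance_id: my-instance
            databases:
              - my-database
```
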
2 changes: 0 additions & 2 deletions receiver/haproxyreceiver/README.md
@@ -13,8 +13,6 @@

 The HAProxy receiver generates metrics by periodically polling the HAProxy process through a dedicated socket or HTTP URL.

-Supported pipeline types: metrics
-
 ## Getting Started

 ## Configuration
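
Since the receiver polls either a dedicated socket or an HTTP URL, a configuration sketch could look like the following; `endpoint` and `collection_interval` are assumed field names, not part of this diff.

```yaml
receivers:
  haproxy:
    endpoint: http://127.0.0.1:8404/stats   # or a stats socket path such as /var/run/haproxy.sock
    collection_interval: 30s
```
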
2 changes: 0 additions & 2 deletions receiver/oracledbreceiver/README.md
@@ -15,8 +15,6 @@ This receiver collects metrics from an Oracle Database.

 The receiver connects to a database host and periodically performs queries.

-Supported pipeline types: metrics
-
 ## Getting Started

 The following settings are required:
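
As a hedged example of the connection described above, assuming a single `datasource` connection-string setting (the field name and all credentials below are illustrative placeholders):

```yaml
receivers:
  oracledb:
    # user, password, host, port, and service name are placeholders
    datasource: oracle://otel:password@localhost:1521/XE
```
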
2 changes: 0 additions & 2 deletions receiver/otlpjsonfilereceiver/README.md
@@ -18,8 +18,6 @@ the receiver will read it in its entirety again.
 Please note that there is no guarantee that exact field names will remain stable.
 This is intended primarily for debugging the Collector without setting up backends.

-Supported pipeline types: traces, metrics, logs
-
 ## Getting Started

 The following settings are required:
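
A minimal sketch of pointing the receiver at OTLP/JSON files, assuming fileconsumer-style `include`/`exclude` globs (an assumption for illustration, not part of this diff):

```yaml
receivers:
  otlpjsonfile:
    include:
      - /var/log/otlp/*.json      # files to watch; each file is re-read in full when it changes
    exclude:
      - /var/log/otlp/ignore.json
```
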
2 changes: 0 additions & 2 deletions receiver/prometheusexecreceiver/README.md
@@ -27,8 +27,6 @@ also supports starting binaries with flags and environment variables,
 retrying them with exponential backoff if they crash, string templating, and
 random port assignments.

-Supported pipeline types: metrics
-
 > :information_source: If you do not need to spawn the binaries locally,
 please consider using the [core Prometheus
 receiver](../prometheusreceiver)
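
To make the spawning behaviour concrete, a sketch under the assumption that the receiver takes an `exec` command line, a `scrape_interval`, and an optional fixed `port` (the field names and the `{{port}}` template are assumptions based on the description above):

```yaml
receivers:
  prometheus_exec:
    exec: ./mysqld_exporter --web.listen-address=:{{port}}   # {{port}} is substituted at runtime
    scrape_interval: 60s
    port: 9104   # omit to let the receiver pick a random free port
```
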
2 changes: 0 additions & 2 deletions receiver/pulsarreceiver/README.md
@@ -8,8 +8,6 @@

 Pulsar receiver receives logs, metrics, and traces from Pulsar.

-Supported pipeline types: logs, metrics, traces
-
 ## Getting Started

 The following settings can be optionally configured:
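
Mirroring the exporter sketch earlier, a hedged receiver-side configuration (the field names are assumptions, not part of this diff):

```yaml
receivers:
  pulsar:
    endpoint: pulsar://localhost:6650   # broker to consume from
    topic: otlp_spans
    encoding: otlp_proto
```
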
2 changes: 0 additions & 2 deletions receiver/purefareceiver/README.md
@@ -8,8 +8,6 @@

 The Pure Storage FlashArray receiver receives metrics from Pure Storage internal services hosts.

-Supported pipeline types: metrics
-
 ## Configuration

 The following settings are required:
2 changes: 0 additions & 2 deletions receiver/purefbreceiver/README.md
@@ -8,8 +8,6 @@

 The Pure Storage FlashBlade receiver receives metrics from Pure Storage FlashBlade via the [Pure Storage FlashBlade OpenMetrics Exporter](https://github.com/PureStorage-OpenConnect/pure-fb-openmetrics-exporter)

-Supported pipeline types: metrics
-
 ## Configuration

 The following settings are required:
2 changes: 0 additions & 2 deletions receiver/simpleprometheusreceiver/README.md
@@ -11,8 +11,6 @@ receiver](../prometheusreceiver).
 This receiver provides a simple configuration interface to configure the
 prometheus receiver to scrape metrics from a single target.

-Supported pipeline types: metrics
-
 ## Configuration

 The following settings are required:
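
For a single scrape target, a configuration might look like the sketch below; the values are examples only and the field names are based on typical scraper settings rather than this diff.

```yaml
receivers:
  prometheus_simple:
    endpoint: localhost:9090     # the single target to scrape
    collection_interval: 10s
    metrics_path: /metrics
```
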