Added support for yaml-formatted configs #408

Merged · 1 commit · Apr 1, 2024
Added support for yaml-formatted configs
Marina Frank committed Apr 1, 2024
commit c2482cb298c713402a72db428cd487638a646731
3 changes: 2 additions & 1 deletion .dockerignore
Original file line number Diff line number Diff line change
Expand Up @@ -3,7 +3,8 @@ alpline
.*
tests
*-example.toml
*-example.y*ml
.golangci.yml
*.md
*.pc
dist
2 changes: 1 addition & 1 deletion Makefile
Original file line number Diff line number Diff line change
Expand Up @@ -3,7 +3,7 @@ OS_TYPE ?= $(shell uname -s | tr '[:upper:]' '[:lower:]')
ARCH_TYPE ?= $(subst x86_64,amd64,$(patsubst i%86,386,$(ARCH)))
GOOS ?= $(shell go env GOOS)
GOARCH ?= $(shell go env GOARCH)
VERSION ?= 0.5.2
VERSION ?= 0.6.0
LDFLAGS := -X main.Version=$(VERSION)
GOFLAGS := -ldflags "$(LDFLAGS) -s -w"
BUILD_ARGS = --build-arg VERSION=$(VERSION)
Expand Down
92 changes: 60 additions & 32 deletions README.md
Original file line number Diff line number Diff line change
Expand Up @@ -6,11 +6,13 @@

##### Table of Contents

[Description](#description)
[Installation](#installation)
[Running](#running)
[Grafana](#grafana)
[Troubleshooting](#troubleshooting)
[Description](#description)
[Installation](#installation)
[Running](#running)
[Usage](#usage)
[Grafana](#integration-with-grafana)
[Build](#build)
[Troubleshooting](#faqtroubleshooting)
[Operating principles](operating-principles.md)

## Description
Expand Down Expand Up @@ -101,9 +103,9 @@ Pre-compiled versions for Linux 64 bit and Mac OSX 64 bit can be found under [re
In order to run, you'll need the [Oracle Instant Client Basic](http://www.oracle.com/technetwork/database/features/instant-client/index-097480.html)
for your operating system. Only the basic version is required for execution.

#### Running
## Running
Ensure that the environment variable DATA_SOURCE_NAME is set correctly before starting.
DATA_SOURCE_NAME should be in Oracle Database connection string format:

```conn
oracle://user:pass@server/service_name[?OPTION1=VALUE1[&OPTIONn=VALUEn]...]
Expand Down Expand Up @@ -131,6 +133,7 @@ Version 0.5+ of the exporter uses a Go driver that doesn't need the binary
Basically, it consists of following this convention:
- Prepend `oracle://` to the string
- Replace the slash (`/`) between user and password with a colon (`:`)
- URL-escape special characters in the password, as in this Jinja example template: `{{ password|urlencode()|regex_replace('/','%2F') }}`

Here are some examples:

Expand All @@ -139,7 +142,7 @@ Here is some example:
| `system/password@oracle-sid` | `oracle://system:password@oracle-sid` |
| `user/password@myhost:1521/service` | `oracle://user:password@myhost:1521/service` |
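
The conversion above can be sketched in Go with the standard library, which takes care of URL-escaping special characters in the password automatically (the credentials below are hypothetical, for illustration only):

```go
package main

import (
	"fmt"
	"net/url"
)

// buildDSN assembles an oracle:// connection string from classic
// user/password@host:port/service credentials. net/url escapes special
// characters in the password (e.g. '/' becomes %2F, '@' becomes %40).
func buildDSN(user, password, hostport, service string) string {
	u := url.URL{
		Scheme: "oracle",
		User:   url.UserPassword(user, password),
		Host:   hostport,
		Path:   "/" + service,
	}
	return u.String()
}

func main() {
	// hypothetical credentials with special characters in the password
	fmt.Println(buildDSN("system", "p@ss/word", "myhost:1521", "service"))
	// → oracle://system:p%40ss%2Fword@myhost:1521/service
}
```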

## Default-metrics requirement
### Default-metrics requirement
Make sure to grant the monitoring user `SELECT` privileges on the following tables.
```
dba_tablespace_usage_metrics
Expand All @@ -154,15 +157,15 @@ v$session
v$resource_limit
```
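
As a sketch, these grants might be issued like so for a dedicated monitoring user (`exporter_user` is a hypothetical name; note that grants on `v$` views are made against the underlying `v_$` objects):

```sql
-- Sketch: grant read access to the monitored dictionary views.
-- "exporter_user" is a hypothetical user name.
GRANT SELECT ON dba_tablespace_usage_metrics TO exporter_user;
GRANT SELECT ON v_$session TO exporter_user;
GRANT SELECT ON v_$resource_limit TO exporter_user;
```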

#### Integration with System D
### Integration with System D

Create an `oracledb_exporter` user with login disabled and an `oracledb_exporter` group, then run the following commands:

```bash
mkdir /etc/oracledb_exporter
chown root:oracledb_exporter /etc/oracledb_exporter
chmod 775 /etc/oracledb_exporter
Put config files to **/etc/oracledb_exporter**
Put binary to **/usr/local/bin**
```
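
A minimal systemd unit for the layout above might look like this (a sketch; the unit name, paths, and the DSN value are assumptions):

```ini
# /etc/systemd/system/oracledb_exporter.service (hypothetical)
[Unit]
Description=Prometheus Oracle DB exporter
After=network.target

[Service]
User=oracledb_exporter
Group=oracledb_exporter
Environment=DATA_SOURCE_NAME=oracle://user:pass@server/service_name
ExecStart=/usr/local/bin/oracledb_exporter --custom.metrics /etc/oracledb_exporter/custom-metrics.toml
Restart=on-failure

[Install]
WantedBy=multi-user.target
```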

Expand Down Expand Up @@ -205,9 +208,9 @@ Usage of oracledb_exporter:
--log.level value
Only log messages with the given severity or above. Valid levels: [debug, info, warn, error, fatal].
--custom.metrics string
File that may contain various custom metrics in a TOML file.
File that may contain various custom metrics in TOML or YAML format.
--default.metrics string
Default TOML file metrics.
Default metrics file in TOML or YAML format.
--web.systemd-socket
Use systemd socket activation listeners instead of port listeners (Linux only).
--web.listen-address string
Expand All @@ -222,24 +225,30 @@ Usage of oracledb_exporter:
Connection string to a data source. (default "env: DATA_SOURCE_NAME")
--web.config.file
Path to configuration file that can enable TLS or authentication.
--query.timeout
Query timeout (in seconds). (default "5")
--scrape.interval
Interval between each scrape. The default "0s" means scrape on each collect request.
```

## Default metrics
### Default metrics config file

This exporter comes with a set of default metrics defined in **default-metrics.toml**. You can modify this file or
provide a different one using `default.metrics` option.
This exporter comes with a set of default metrics: [**default-metrics.toml**](./default-metrics.toml)/[**default-metrics.yaml**](./default-metrics.yaml).\
You can modify these files or provide a different one using the `default.metrics` option.

### Custom metrics
### Custom metrics config file

> NOTE: Do not put a `;` at the end of your SQL queries as this will **NOT** work.

This exporter does not have the metrics you want? You can provide new one using TOML file. To specify this file to the
Does this exporter lack the metrics you want? You can provide new ones in a custom metrics config file in TOML or YAML format. To specify this file to the
exporter, you can:

- Use `--custom.metrics` flag followed by the TOML file
- Export CUSTOM_METRICS variable environment (`export CUSTOM_METRICS=my-custom-metrics.toml`)
- Use the `--custom.metrics` flag followed by your custom config file
- Export the `CUSTOM_METRICS` environment variable (`export CUSTOM_METRICS=<path-to-custom-configfile>`)

This file must contain the following elements:
### Config file TOML syntax

The file must contain the following elements:

- One or several metric sections (`[[metric]]`)
- For each section: a context, a request, and a mapping from each field returned by the request to a description.
Expand Down Expand Up @@ -311,17 +320,36 @@ metricstype = { value_1 = "counter" }
This TOML file will produce the following result:

```
# HELP oracledb_test_value_1 Simple test example returning always 1 as counter.
# TYPE oracledb_test_value_1 counter
oracledb_test_value_1 1
# HELP oracledb_test_value_2 Same test but returning always 2 as gauge.
# TYPE oracledb_test_value_2 gauge
oracledb_test_value_2 2
# HELP oracledb_context_with_labels_value_1 Simple example returning always 1 as counter.
# TYPE oracledb_context_with_labels_value_1 counter
oracledb_context_with_labels_value_1{label_1="First label",label_2="Second label"} 1
# HELP oracledb_context_with_labels_value_2 Same but returning always 2 as gauge.
# TYPE oracledb_context_with_labels_value_2 gauge
oracledb_context_with_labels_value_2{label_1="First label",label_2="Second label"} 2

```

You can find [here](./custom-metrics-example/custom-metrics.toml) a working example of custom metrics for slow queries, big queries and top 100 tables.

# Customize metrics in a docker image
### Config file YAML syntax

The YAML format has the same requirements as above regarding optional and mandatory fields and their meaning, but needs a root element `metric`:
```yaml
metric:
- context: "context_with_labels"
labels: [label_1,label_2]
metricsdesc:
value_1: "Simple example returning always 1 as counter."
value_2: "Same but returning always 2 as gauge."
request: "SELECT 'First label' as label_1, 'Second label' as label_2,
1 as value_1, 2 as value_2
FROM DUAL"
metricstype:
value_1: "counter"
```

For more practical examples, see [custom-metrics.yaml](./custom-metrics-example/custom-metrics.yaml).

### Customize metrics in a docker image

If you run the exporter as a docker image and want to customize the metrics, you can use the following example:

Expand All @@ -333,7 +361,7 @@ COPY custom-metrics.toml /
ENTRYPOINT ["/oracledb_exporter", "--custom.metrics", "/custom-metrics.toml"]
```
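
A YAML variant of the same pattern would be (a sketch; the base image line is a placeholder for whatever image the example above builds from):

```dockerfile
# Hypothetical: same pattern as above, shipping a YAML custom metrics file.
FROM <exporter-base-image>
COPY custom-metrics.yaml /
ENTRYPOINT ["/oracledb_exporter", "--custom.metrics", "/custom-metrics.yaml"]
```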

## Using a multiple host data source name
### Using a multiple host data source name

> NOTE: This has been tested with v0.2.6a and will most probably work on later versions.

Expand Down Expand Up @@ -371,7 +399,7 @@ database =
- `TNS_ADMIN`: Path you choose for the tns admin folder (`/path/to/tns_admin` in the example file above)
- `DATA_SOURCE_NAME`: Datasource pointing to the `TNS_ENTRY` (`user:password@database` in the example file above)

## TLS connection to database
### TLS connection to database

First, set the following variables:

Expand Down Expand Up @@ -490,7 +518,7 @@ metricsdesc = { current_utilization= "Generic counter metric from v$resource_lim
request="SELECT resource_name,current_utilization,CASE WHEN TRIM(limit_value) LIKE 'UNLIMITED' THEN '-1' ELSE TRIM(limit_value) END as limit_value FROM v$resource_limit"
```

If the value of limite_value is 'UNLIMITED', the request send back the value -1.
If the value of `limit_value` is 'UNLIMITED', the query sends back the value -1.

You can increase the log level (`--log.level debug`) in order to get the statement generating this error.

Expand Down
54 changes: 39 additions & 15 deletions collector/collector.go
Original file line number Diff line number Diff line change
Expand Up @@ -19,6 +19,7 @@ import (
"github.com/go-kit/log"
"github.com/go-kit/log/level"
"github.com/prometheus/client_golang/prometheus"
"sigs.k8s.io/yaml"
)

// Exporter collects Oracle DB metrics. It implements prometheus.Collector.
Expand Down Expand Up @@ -73,7 +74,7 @@ type Metric struct {

// Metrics is a container structure for prometheus metrics
type Metrics struct {
Metric []Metric
Metric []Metric `json:"metrics"`
}

var (
Expand Down Expand Up @@ -281,22 +282,22 @@ func (e *Exporter) scrape(ch chan<- prometheus.Metric) {
defer wg.Done()

level.Debug(e.logger).Log("About to scrape metric: ")
level.Debug(e.logger).Log("- Metric MetricsDesc: ", metric.MetricsDesc)
level.Debug(e.logger).Log("- Metric MetricsDesc: ", fmt.Sprintf("%+v", metric.MetricsDesc))
level.Debug(e.logger).Log("- Metric Context: ", metric.Context)
level.Debug(e.logger).Log("- Metric MetricsType: ", metric.MetricsType)
level.Debug(e.logger).Log("- Metric MetricsBuckets: ", metric.MetricsBuckets, "(Ignored unless Histogram type)")
level.Debug(e.logger).Log("- Metric Labels: ", metric.Labels)
level.Debug(e.logger).Log("- Metric MetricsType: ", fmt.Sprintf("%+v", metric.MetricsType))
level.Debug(e.logger).Log("- Metric MetricsBuckets: ", fmt.Sprintf("%+v", metric.MetricsBuckets), "(Ignored unless Histogram type)")
level.Debug(e.logger).Log("- Metric Labels: ", fmt.Sprintf("%+v", metric.Labels))
level.Debug(e.logger).Log("- Metric FieldToAppend: ", metric.FieldToAppend)
level.Debug(e.logger).Log("- Metric IgnoreZeroResult: ", metric.IgnoreZeroResult)
level.Debug(e.logger).Log("- Metric IgnoreZeroResult: ", fmt.Sprintf("%+v", metric.IgnoreZeroResult))
level.Debug(e.logger).Log("- Metric Request: ", metric.Request)

if len(metric.Request) == 0 {
level.Error(e.logger).Log("Error scraping for ", metric.MetricsDesc, ". Did you forget to define request in your toml file?")
level.Error(e.logger).Log("Error scraping for ", metric.MetricsDesc, ". Did you forget to define request in your metrics config file?")
return
}

if len(metric.MetricsDesc) == 0 {
level.Error(e.logger).Log("Error scraping for query", metric.Request, ". Did you forget to define metricsdesc in your toml file?")
level.Error(e.logger).Log("Error scraping for query", metric.Request, ". Did you forget to define metricsdesc in your metrics config file?")
return
}

Expand All @@ -312,7 +313,7 @@ func (e *Exporter) scrape(ch chan<- prometheus.Metric) {

scrapeStart := time.Now()
if err = e.ScrapeMetric(e.db, ch, metric); err != nil {
level.Error(e.logger).Log("error scraping for", metric.Context, "_", metric.MetricsDesc, time.Since(scrapeStart), ":", err.Error())
level.Error(e.logger).Log("scrapeMetricContext", metric.Context, "ScrapeDuration", time.Since(scrapeStart), "msg", err.Error())
e.scrapeErrors.WithLabelValues(metric.Context).Inc()
} else {
level.Debug(e.logger).Log("successfully scraped metric: ", metric.Context, metric.MetricsDesc, time.Since(scrapeStart))
Expand Down Expand Up @@ -383,19 +384,42 @@ func (e *Exporter) reloadMetrics() {
// If custom metrics, load it
if strings.Compare(e.config.CustomMetrics, "") != 0 {
for _, _customMetrics := range strings.Split(e.config.CustomMetrics, ",") {
if _, err := toml.DecodeFile(_customMetrics, &additionalMetrics); err != nil {
level.Error(e.logger).Log(err)
panic(errors.New("Error while loading " + _customMetrics))
if strings.HasSuffix(_customMetrics, "toml") {
if err := loadTomlMetricsConfig(_customMetrics, &additionalMetrics); err != nil {
panic(err)
}
} else {
level.Info(e.logger).Log("Successfully loaded custom metrics from: " + _customMetrics)
if err := loadYamlMetricsConfig(_customMetrics, &additionalMetrics); err != nil {
panic(err)
}
}
level.Info(e.logger).Log("event", "Successfully loaded custom metrics from "+_customMetrics)
level.Debug(e.logger).Log("custom metrics parsed content", fmt.Sprintf("%+v", additionalMetrics))
e.metricsToScrape.Metric = append(e.metricsToScrape.Metric, additionalMetrics.Metric...)
}
} else {
level.Debug(e.logger).Log("No custom metrics defined.")
}
}

func loadYamlMetricsConfig(_metricsFileName string, metrics *Metrics) error {
yamlBytes, err := os.ReadFile(_metricsFileName)
if err != nil {
return fmt.Errorf("cannot read the metrics config %s: %w", _metricsFileName, err)
}
if err := yaml.Unmarshal(yamlBytes, metrics); err != nil {
return fmt.Errorf("cannot unmarshal the metrics config %s: %w", _metricsFileName, err)
}
return nil
}

func loadTomlMetricsConfig(_customMetrics string, metrics *Metrics) error {
if _, err := toml.DecodeFile(_customMetrics, metrics); err != nil {
return fmt.Errorf("cannot read the metrics config %s: %w", _customMetrics, err)
}
return nil
}

// ScrapeMetric is an interface method to call scrapeGenericValues using Metric struct values
func (e *Exporter) ScrapeMetric(db *sql.DB, ch chan<- prometheus.Metric, metricDefinition Metric) error {
level.Debug(e.logger).Log("calling function ScrapeGenericValues()")
Expand All @@ -420,8 +444,7 @@ func (e *Exporter) scrapeGenericValues(db *sql.DB, ch chan<- prometheus.Metric,
value, err := strconv.ParseFloat(strings.TrimSpace(row[metric]), 64)
// If not a float, skip current metric
if err != nil {
level.Error(e.logger).Log("Unable to convert current value to float (metric=" + metric +
",metricHelp=" + metricHelp + ",value=<" + row[metric] + ">)")
level.Error(e.logger).Log("msg", "Unable to convert current value to float", "metric", metric, "metricHelp", metricHelp, "value", row[metric])
continue
}
level.Debug(e.logger).Log("Query result looks like: ", value)
Expand Down Expand Up @@ -576,6 +599,7 @@ func getMetricType(metricType string, metricsType map[string]string) prometheus.

func cleanName(s string) string {
s = strings.ReplaceAll(s, " ", "_") // Remove spaces
s = strings.ReplaceAll(s, "-", "_") // Remove hyphens
s = strings.ReplaceAll(s, "(", "") // Remove open parenthesis
s = strings.ReplaceAll(s, ")", "") // Remove close parenthesis
s = strings.ReplaceAll(s, "/", "") // Remove forward slashes
Expand Down
18 changes: 12 additions & 6 deletions collector/default_metrics.go
Original file line number Diff line number Diff line change
Expand Up @@ -2,8 +2,7 @@ package collector

import (
"errors"
"fmt"
"path/filepath"
"strings"

"github.com/BurntSushi/toml"
"github.com/go-kit/log/level"
Expand Down Expand Up @@ -76,15 +75,22 @@ ORDER by tablespace
// DefaultMetrics is a somewhat hacky way to load the default metrics
func (e *Exporter) DefaultMetrics() Metrics {
var metricsToScrape Metrics
var err error
if e.config.DefaultMetricsFile != "" {
if _, err := toml.DecodeFile(filepath.Clean(e.config.DefaultMetricsFile), &metricsToScrape); err != nil {
level.Error(e.logger).Log(fmt.Sprintf("there was an issue while loading specified default metrics file at: "+e.config.DefaultMetricsFile+", proceeding to run with default metrics."), err)
if strings.HasSuffix(e.config.DefaultMetricsFile, "toml") {
err = loadTomlMetricsConfig(e.config.DefaultMetricsFile, &metricsToScrape)
} else {
err = loadYamlMetricsConfig(e.config.DefaultMetricsFile, &metricsToScrape)
}
return metricsToScrape
if err == nil {
return metricsToScrape
}
level.Error(e.logger).Log("defaultMetricsFile", e.config.DefaultMetricsFile, "msg", err)
level.Warn(e.logger).Log("msg", "proceeding to run with default metrics")
}

if _, err := toml.Decode(defaultMetricsConst, &metricsToScrape); err != nil {
level.Error(e.logger).Log(err)
level.Error(e.logger).Log("msg", err.Error())
panic(errors.New("Error while loading " + defaultMetricsConst))
}
return metricsToScrape
Expand Down