Using custom queries is deprecated in postgres_exporter #81

Closed
@jkuester

Description

Intro

So, apparently the custom queries functionality in postgres_exporter, which lets us collect metrics based on the data in the Postgres database (aka pretty much the only reason we are using postgres_exporter), has been deprecated. The maintainers of postgres_exporter view the main purpose of the project as providing metrics about the inner workings of the Postgres instance itself, and they recommend using a different exporter for collecting metrics from the actual data in the database.

Test setup

Pinning these to the top of the ticket (@mrjones-plip to keep up to date):

  1. Check out these repos
  2. Using the script/docker-helper-4.x directory in the CHT Core repo, start a Docker Helper instance of CHT Core. Note the URL and port.
  3. Assuming Docker Helper gave you a URL and port of 192-168-68-17.local-ip.medicmobile.org:10464, start your couch2pg instance by cd-ing into the cht-couch2pg directory and running:
     COUCH2PG_SLEEP_MINS=0.1 \
       COUCHDB_URL=https://medic:password@172-17-0-1.local-ip.medicmobile.org:10464/medic \
       docker compose up -d
  4. Optionally, connect with a Postgres client to localhost:5432 (username cht_couch2pg, password cht_couch2pg_password, database cht) to ensure the connection is working.
  5. cd into the watchdog repo directory and check out the 81-sql-exporter branch.
  6. Still in the watchdog repo, update your cht-instances.yml with the URL from step 2. In this example it's 192-168-68-17.local-ip.medicmobile.org:10464.
  7. Copy exporters/postgres/sql_servers_example.yml to exporters/postgres/sql_servers.yml. Update the 172-17-0-1.local-ip.medicmobile.org:10464 value to match the CHT Core URL from step 2 above.
  8. Still in the top-level cht-watchdog directory, run the restart script:
     ./development/kill.start.ips.sh
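Steps 6 and 7 both involve pasting the same URL into a YAML file, which is easy to get subtly wrong. The sketch below is illustrative only: it creates minimal stand-in files (the real cht-instances.yml and sql_servers.yml have more content) and greps them for the URL. Run the equivalent grep commands against your actual files before restarting the stack.

```shell
# Illustrative sanity check for steps 6-7. The file contents here are
# stand-ins, NOT the real config format; only the grep idea carries over.
# Replace CHT_URL with the URL Docker Helper gave you in step 2.
CHT_URL='192-168-68-17.local-ip.medicmobile.org:10464'
mkdir -p exporters/postgres
echo "- https://$CHT_URL" > cht-instances.yml
echo "host: $CHT_URL" > exporters/postgres/sql_servers.yml
for f in cht-instances.yml exporters/postgres/sql_servers.yml; do
  grep -q "$CHT_URL" "$f" && echo "$f: OK"
done
```

Both files should print `OK`; if either grep is silent, revisit that step before running the restart script.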

Test steps

  1. From step 8's output above, look for the service named *-sql_exporter-* and go to that URL (http://172.30.0.4:9399/metrics in this case).
     Services:
    
     cht-watchdog-grafana-1          http://172.30.0.3:3000
     cht-watchdog-prometheus-1       http://172.30.0.5:9090/targets?search=
     cht-watchdog-json-exporter-1    http://172.30.0.2:7979/metrics
     cht-watchdog-sql_exporter-1     http://172.30.0.4:9399/metrics
  2. Ensure the web page looks like this:
     # HELP couch2pg couch2pg backlog.
     # TYPE couch2pg gauge
     couch2pg{db="_users",job="db_targets",target="local-cht"} 1
     couch2pg{db="medic",job="db_targets",target="local-cht"} 186
     couch2pg{db="medic-logs",job="db_targets",target="local-cht"} 10
     couch2pg{db="medic-sentinel",job="db_targets",target="local-cht"} 79
     couch2pg{db="medic-users-meta",job="db_targets",target="local-cht"} 3
     # HELP scrape_duration_seconds How long it took to scrape the target in seconds
     # TYPE scrape_duration_seconds gauge
     scrape_duration_seconds{job="db_targets",target="local-cht"} 0.008943289
     # HELP up 1 if the target is reachable, or 0 if the scrape failed
     # TYPE up gauge
     up{job="db_targets",target="local-cht"} 1
    
  3. Log into the dev watchdog instance at http://localhost:3000 (user medic, password password) and go to the main "admin overview" dashboard. Ensure the "Couch2pg Backlog" panel is working; it should show a backlog of 0:
     [screenshot: "Couch2pg Backlog" panel showing a backlog of 0]
  4. Stop the couch2pg container, using the container name from step 3 of the test setup (e.g. docker stop cht-couch2pg-cht-couch2pg-1).
  5. Add a household to your CHT instance. After a few minutes you should see a backlog greater than zero.
  6. Start the couch2pg container again (e.g. docker start cht-couch2pg-cht-couch2pg-1); the backlog should go back to 0.
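Beyond eyeballing the /metrics page, the scrape output from test step 2 can be checked mechanically. This is an optional, illustrative sketch and not part of the official steps: it parses a saved copy of the metrics text (sample lines below are from step 2) and reports whether the target is up and how many per-database couch2pg gauges are exposed.

```shell
# Illustrative check of the sql_exporter scrape output. The heredoc uses
# sample lines from test step 2; against a live stack you would instead do
# something like: curl -s http://172.30.0.4:9399/metrics > metrics.txt
cat > metrics.txt <<'EOF'
couch2pg{db="medic",job="db_targets",target="local-cht"} 186
couch2pg{db="medic-sentinel",job="db_targets",target="local-cht"} 79
up{job="db_targets",target="local-cht"} 1
EOF
# up{...} 1 means the last scrape succeeded; each couch2pg{db=...} line is
# one per-database backlog gauge.
awk '/^up\{/ && $2 == 1 { up_ok = 1 }
     /^couch2pg\{/      { gauges++ }
     END { print (up_ok ? "target up" : "target DOWN") " with " gauges " couch2pg gauges" }' metrics.txt
```

For the sample above this prints `target up with 2 couch2pg gauges`; a full scrape of the local-cht target should report one gauge per database listed in step 2.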
