This example demonstrates how to create a custom metric for Stackdriver Monitoring. It estimates the number of IPs consumed in a CloudSQL private services subnet.
A Stackdriver log sink at the organization level populates BigQuery with logs whenever a CloudSQL instance is created or deleted. The metric app periodically queries BigQuery to determine which projects have CloudSQL instances with private networks. This way, the app avoids polling the organization for a list of projects and then iterating over each project to find relevant CloudSQL instances.
Once the projects with CloudSQL instances are known, the app queries the CloudSQL Admin API for more information on each instance. A list of instances is aggregated across the organization and a count is calculated.
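The BigQuery lookup can be sketched as below. The dataset name, wildcard table name, and column paths are assumptions based on the standard audit-log-to-BigQuery export schema; verify them against what your sink actually writes.

```python
# Sketch of the query the metric app might run against the sink's dataset to
# find projects with CloudSQL create events. The dataset name, wildcard table
# name, and column paths are assumptions; adjust them to match your export.
DATASET = "cloud_sql_sink"  # hypothetical dataset ID

QUERY = f"""
SELECT DISTINCT resource.labels.project_id AS project_id
FROM `{DATASET}.cloudaudit_googleapis_com_activity_*`
WHERE protopayload_auditlog.methodName = 'cloudsql.instances.create'
"""
```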
This information is then converted to a Stackdriver Monitoring TimeSeries, which is written to the Stackdriver Monitoring API.
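A minimal sketch of what that TimeSeries payload could look like when sent to the Monitoring API's `projects.timeSeries.create` method. The metric type name is a placeholder; custom metrics live under `custom.googleapis.com/`:

```python
import time

def build_time_series(project_id: str, ip_count: int) -> dict:
    """Build a REST payload for projects.timeSeries.create.

    The metric type below is a hypothetical name for this example; any
    name under custom.googleapis.com/ works for a custom metric.
    """
    end_time = time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime())
    return {
        "metric": {
            # Placeholder metric type; not taken from the app's source.
            "type": "custom.googleapis.com/cloudsql/private_ip_consumed",
        },
        "resource": {
            "type": "global",
            "labels": {"project_id": project_id},
        },
        "points": [{
            # A gauge point needs only an end time for its interval.
            "interval": {"endTime": end_time},
            "value": {"int64Value": str(ip_count)},
        }],
    }
```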
This code is a prototype and not engineered for production use. Error handling is incomplete or inappropriate for usage beyond a development sample.
This example requires GKE. The following scopes must be enabled on the nodes in the node pool:
- BigQuery - Enabled
- Logging - Write Only
- Monitoring - Full
- CloudSQL - Enabled
Log in to gcloud:

```shell
gcloud auth login
```
Enable the necessary APIs:

```shell
gcloud services enable bigquery-json.googleapis.com \
    cloudapis.googleapis.com \
    cloudtrace.googleapis.com \
    compute.googleapis.com \
    container.googleapis.com \
    containerregistry.googleapis.com \
    logging.googleapis.com \
    monitoring.googleapis.com \
    sourcerepo.googleapis.com \
    stackdriver.googleapis.com \
    stackdriverprovisioning.googleapis.com \
    sqladmin.googleapis.com
```
Export the dataset ID:

```shell
export DATASET=<your-data-set-id>
```
Create the BigQuery dataset:

```shell
bq --location=US mk -d --description "This is a dataset for the CloudSQL Sink." $DATASET
```
Export the organization ID:

```shell
export ORG_ID=<your-organization-id>
```
Export the project ID:

```shell
export PROJECT_ID=<your-project-id>
```
Create the logging sink at the organization level:

```shell
gcloud logging sinks create cloud-sql-sink \
    bigquery.googleapis.com/projects/$PROJECT_ID/datasets/$DATASET \
    --log-filter='resource.type="cloudsql_database" AND (protoPayload.methodName="cloudsql.instances.create" OR protoPayload.methodName="cloudsql.instances.delete")' \
    --include-children \
    --organization=$ORG_ID
```
The output of the command includes a service account. Grant this service account the `roles/bigquery.dataOwner` role on the dataset. However, if you are running this in GKE, this step isn't necessary.
Change the zonal and regional increments as desired, and adjust the schedule if needed. With the command below, the job runs every two minutes:

```shell
kubectl run cloud-sql-monitor \
    --schedule="0/2 * * * ?" \
    --image=gcr.io/$PROJECT_ID/cloud-sql-monitor:latest \
    --restart=Never \
    --env="ZONAL_INCREMENT=1" \
    --env="REGIONAL_INCREMENT=1" \
    -- --project $PROJECT_ID --dataset $DATASET --all
```
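The increments control how many IPs each instance is assumed to add to the estimate. A sketch of how the count might be calculated, assuming regional (HA) instances are weighted by `REGIONAL_INCREMENT` and all others by `ZONAL_INCREMENT` (check the app's source for the exact rule):

```python
import os

def estimate_ip_count(instances, zonal_increment=None, regional_increment=None):
    """Estimate IPs consumed by a list of CloudSQL instances.

    `instances` is a list of dicts shaped like the CloudSQL Admin API's
    instances.list response items; availabilityType lives under settings.
    The per-instance weighting here is an assumption for illustration.
    """
    if zonal_increment is None:
        zonal_increment = int(os.environ.get("ZONAL_INCREMENT", "1"))
    if regional_increment is None:
        regional_increment = int(os.environ.get("REGIONAL_INCREMENT", "1"))

    count = 0
    for inst in instances:
        # Regional (high-availability) instances are counted with the
        # regional increment; everything else with the zonal increment.
        if inst.get("settings", {}).get("availabilityType") == "REGIONAL":
            count += regional_increment
        else:
            count += zonal_increment
    return count
```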
- Open the Google Cloud Console in your browser
- Open the "hamburger menu" in the upper left
- Scroll down to Stackdriver
- Click on Monitoring
This will bring up the Stackdriver Monitoring console. If necessary, create a new Stackdriver workspace.
- Create a new dashboard or open an existing dashboard.
- In the upper right corner of the dashboard, click "Add Chart".
- Fill in the information as seen in the following graphic.
Note: It is recommended that you choose an alignment period larger than the CronJob period.