---
title: BigQuery rule
meta_description: "Stream realtime event data from Ably into Google BigQuery using the Firehose BigQuery rule. Configure the rule and analyze your data efficiently."
---

Stream events published to Ably directly into a table in "BigQuery":https://cloud.google.com/bigquery for analytical or archival purposes. General use cases include:

* Realtime analytics on message data.
* Centralized storage for raw event data, enabling downstream processing.
* Historical auditing of messages.

<aside data-type='note'>
<p>Ably's BigQuery integration rule for "Firehose":/integrations/streaming is currently in development.</p>
</aside>

h3(#create-rule). Create a BigQuery rule

Set up the necessary BigQuery resources, permissions, and authentication to enable Ably to securely write data to a BigQuery table:

* Create or select a BigQuery dataset in the Google Cloud Console.
* Create a BigQuery table in that dataset (see the example after this list):
** Use the "JSON schema":#schema.
** For large datasets, partition the table by ingestion time. Daily partitioning is recommended for optimal performance.
* Create a Google Cloud Platform (GCP) "service account":https://cloud.google.com/iam/docs/service-accounts-create with the minimal required BigQuery permissions.
* Grant the service account table-level access to the specific table, with the following permissions:
** @bigquery.tables.get@: to read table metadata.
** @bigquery.tables.updateData@: to insert records.
* Generate and securely store the JSON key file for the service account.
** Ably requires this key file to authenticate and write data to your table.
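
The table can also be created with SQL DDL instead of the Cloud Console. The following is a minimal sketch, assuming an ingestion-time partitioned table and using the illustrative @id@, @channel@, and @data@ columns referenced elsewhere on this page; replace the placeholder @project_id.dataset_id.table_id@ path and the column set with your own, based on the full "JSON schema":#schema.

```[sql]
-- Minimal sketch: an ingestion-time partitioned table with daily partitions.
-- The column set is illustrative, not the full Ably schema.
CREATE TABLE `project_id.dataset_id.table_id` (
  id STRING NOT NULL OPTIONS (description = 'Unique ID assigned by Ably to this message.'),
  channel STRING NOT NULL OPTIONS (description = 'The Ably channel the message was published on.'),
  data BYTES OPTIONS (description = 'The raw message payload.')
)
PARTITION BY DATE(_PARTITIONTIME);
```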

h3(#settings). BigQuery rule settings

|_. Section |_. Purpose |
| *Source* | Defines the type of event(s) for delivery. |
| *Channel filter* | A regular expression to filter which channels to capture. Only events on channels matching this regex are streamed into BigQuery. |
| *Table* | The full destination table path in BigQuery, typically in the format @project_id.dataset_id.table_id@. |
| *Service account key* | A JSON key file Ably uses to authenticate with Google Cloud. You must upload or provide the contents of this key file. |
| *Partitioning* | _(Optional)_ The table must be created with the desired partitioning settings in BigQuery before you create the rule in Ably. |
| *Advanced settings* | Any additional configuration or custom fields relevant to your BigQuery setup (for future enhancements). |

h4(#dashboard). Create a BigQuery rule in the dashboard

To create a BigQuery rule using the Ably dashboard:

* Log in to the "Ably dashboard":https://ably.com/accounts/any and select the application you want to stream data from.
* Navigate to the *Integrations* tab.
* Click *New integration rule*.
* Select *Firehose*.
* Choose *BigQuery* from the list of available Firehose integrations.
* Configure the "rule settings":#settings described above, then click *Create*.

h4(#api-rule). Create a BigQuery rule using the Control API

To create a BigQuery rule using the Control API:

* Use the "rules":/control-api#examples-rules endpoint to specify the following parameters:
** @ruleType@: Set this to @"bigquery"@ to define the rule as a BigQuery integration.
** @destinationTable@: Specify the BigQuery table where the data will be stored.
** @serviceAccountCredentials@: Provide the necessary GCP service account JSON key to authenticate and authorize data insertion.
** @channelFilter@ (optional): Use a regular expression to apply the rule to specific channels.
** @format@ (optional): Define the data format based on how you want messages to be structured.
* Make an HTTP request to the Control API to create the rule, as sketched after this list.
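
The exact payload structure is defined by the "Control API":/control-api#examples-rules; as a hypothetical illustration only, a rule body combining the parameters above might look like the following. All values are placeholders.

```[json]
{
  "ruleType": "bigquery",
  "channelFilter": "^mychannel.*",
  "destinationTable": "project_id.dataset_id.table_id",
  "serviceAccountCredentials": "<contents of the service account JSON key file>",
  "format": "json"
}
```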


h3(#schema). JSON schema

Create the destination table using Ably's JSON schema for message data. For example, the @id@ field is defined as follows:

```[json]
{
  "name": "id",
  "type": "STRING",
  "mode": "REQUIRED",
  "description": "Unique ID assigned by Ably to this message. Can optionally be assigned by the client."
}
```
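
The full schema contains additional fields. As a non-authoritative sketch, the @channel@ and @data@ columns used in the queries below might be declared along these lines:

```[json]
[
  {
    "name": "channel",
    "type": "STRING",
    "mode": "REQUIRED",
    "description": "The Ably channel the message was published on."
  },
  {
    "name": "data",
    "type": "BYTES",
    "mode": "NULLABLE",
    "description": "The raw message payload."
  }
]
```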

h3(#queries). Direct queries

Run queries directly against the Ably-managed table. For instance, to parse JSON payloads stored in @data@:

```[sql]
SELECT
  PARSE_JSON(CAST(data AS STRING)) AS parsed_payload
FROM `project_id.dataset_id.table_id`
WHERE channel = "my-channel"
```

The following explains the components of the query:

|_. Query function |_. Purpose |
| @CAST(data AS STRING)@ | Converts the @data@ column from BYTES (if applicable) into a STRING format. |
| @PARSE_JSON(...)@ | Parses the string into a structured JSON object for easier querying. |
| @WHERE channel = "my-channel"@ | Filters results to retrieve messages only from a specific Ably channel. |
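
Individual properties can also be extracted from the payload at query time. A minimal sketch, assuming a hypothetical @temperature@ property in the message payload:

```[sql]
-- Sketch: extract a hypothetical "temperature" property from each payload.
SELECT
  id,
  JSON_VALUE(CAST(data AS STRING), '$.temperature') AS temperature
FROM `project_id.dataset_id.table_id`
WHERE channel = "my-channel"
```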

<aside data-type='note'>
<p>Parsing JSON at query time can be computationally expensive for large datasets. If your queries need frequent JSON parsing, consider pre-processing and storing structured fields in a secondary table using an ETL pipeline for better performance.</p>
</aside>

h4(#etl). Extract, Transform, Load (ETL)

ETL is recommended for large-scale analytics and performance optimization, ensuring data is structured, deduplicated, and efficiently stored for querying. Transform raw data (JSON or BYTES) into a more structured format, remove duplicates, and write it into a secondary table optimized for analytics, as sketched in the example after this list:

* Convert data from raw (BYTES/JSON) into structured columns, for example geospatial fields or numeric data types, for detailed analysis.
* Write transformed records to a new optimized table tailored for query performance.
* Deduplicate records using the unique ID field to ensure data integrity.
* Automate the process using BigQuery scheduled queries or an external workflow to run transformations at regular intervals.
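
The following is a minimal sketch of such a transformation, assuming the raw table from the earlier examples and a hypothetical @events_structured@ destination table; adapt the extracted columns to your own payloads.

```[sql]
-- Sketch: keep one row per Ably message ID and materialize parsed
-- payloads into a hypothetical secondary table for analytics.
CREATE OR REPLACE TABLE `project_id.dataset_id.events_structured` AS
SELECT
  id,
  channel,
  PARSE_JSON(CAST(data AS STRING)) AS payload
FROM `project_id.dataset_id.table_id`
WHERE TRUE  -- BigQuery requires WHERE, GROUP BY, or HAVING alongside QUALIFY
QUALIFY ROW_NUMBER() OVER (PARTITION BY id) = 1;
```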