Commit 54b2537
EDU-1502: Adds bigQuery page
1 parent df0b4b1 commit 54b2537

1 file changed: +109 -0 lines changed

content/bigquery.textile

@@ -0,0 +1,109 @@
---
title: BigQuery rule
meta_description: "Stream realtime event data from Ably into Google BigQuery using the Firehose BigQuery rule. Configure and analyze your data efficiently."
---

Stream events published to Ably directly into a table in "BigQuery":https://cloud.google.com/bigquery for analytical or archival purposes. General use cases include:

* Realtime analytics on message data.
* Centralized storage for raw event data, enabling downstream processing.
* Historical auditing of messages.

<aside data-type='note'>
<p>Ably's BigQuery integration rule for "Firehose":/integrations/streaming is in development status.</p>
</aside>

h3(#create-rule). Create a BigQuery rule

Set up the necessary BigQuery resources, permissions, and authentication to enable Ably to securely write data to a BigQuery table:

* Create or select a BigQuery dataset in the Google Cloud Console.
* Create a BigQuery table in that dataset:
** Use the "JSON schema":#schema.
** For large datasets, partition the table by ingestion time, with daily partitioning recommended for optimal performance. See the example DDL after this list.
* Create a Google Cloud Platform (GCP) "service account":https://cloud.google.com/iam/docs/service-accounts-create with the minimal required BigQuery permissions.
* Grant the service account table-level access to the specific table, with the following permissions:
** @bigquery.tables.get@: to read table metadata.
** @bigquery.tables.updateData@: to insert records.
* Generate and securely store the JSON key file for the service account.
** Ably requires this key file to authenticate and write data to your table.

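For reference, the following is a minimal DDL sketch of such a table. It assumes placeholder @project_id.dataset_id.table_id@ names and shows only an illustrative subset of the columns defined in the "JSON schema":#schema section:

```[sql]
-- Illustrative sketch only: replace project_id, dataset_id and table_id with your own values.
-- The columns shown are an assumed subset of the full Ably JSON schema.
CREATE TABLE project_id.dataset_id.table_id (
  id STRING NOT NULL OPTIONS (description = 'Unique ID assigned by Ably to this message'),
  channel STRING,
  data BYTES
)
-- Daily ingestion-time partitioning, as recommended for large datasets.
PARTITION BY _PARTITIONDATE;
```

Partitioning by @_PARTITIONDATE@ creates daily ingestion-time partitions, matching the recommendation above.
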
h3(#settings). BigQuery rule settings

The following explains the components of the BigQuery rule settings:

|_. Section |_. Purpose |
| *Source* | Defines the type of event(s) for delivery. |
| *Channel filter* | A regular expression to filter which channels to capture. Only events on channels matching this regex are streamed into BigQuery. |
| *Table* | The full destination table path in BigQuery, typically in the format @project_id.dataset_id.table_id@. |
| *Service account key* | A JSON key file Ably uses to authenticate with Google Cloud. You must upload or provide the contents of this key file. |
| *Partitioning* | _(Optional)_ The table must be created with the desired partitioning settings in BigQuery before creating the rule in Ably. |
| *Advanced settings* | Any additional configuration or custom fields relevant to your BigQuery setup (for future enhancements). |

h4(#dashboard). Create a BigQuery rule in the dashboard

The following steps describe how to create a BigQuery rule using the Ably dashboard:

* Log in to the "Ably dashboard":https://ably.com/accounts/any and select the application you want to stream data from.
* Navigate to the *Integrations* tab.
* Click *New integration rule*.
* Select *Firehose*.
* Choose *BigQuery* from the list of available Firehose integrations.
* Configure the "rule settings":#settings described above. Then, click *Create*.

h4(#api-rule). Create a BigQuery rule using the Control API

Create a BigQuery rule using the "Control API":https://ably.com/docs/api#control-api with the following steps:

* Use the "rules":/control-api#examples-rules endpoint to specify the following parameters:
** @ruleType@: Set this to @bigquery@ to define the rule as a BigQuery integration.
** @destinationTable@: Specify the BigQuery table where the data will be stored.
** @serviceAccountCredentials@: Provide the necessary GCP service account JSON key to authenticate and authorize data insertion.
** @channelFilter@ (optional): Use a regular expression to apply the rule to specific channels.
** @format@ (optional): Define the data format based on how you want messages to be structured.
* Make an HTTP request to the Control API to create the rule.

h3(#schema). JSON schema

Create the destination table using the Ably JSON schema. For example, the @id@ field is defined as follows:

```[json]
{
  "name": "id",
  "type": "STRING",
  "mode": "REQUIRED",
  "description": "Unique ID assigned by Ably to this message. Can optionally be assigned by the client."
}
```

h3(#queries). Direct queries

Run queries directly against the Ably-managed table. For instance, to parse JSON payloads stored in @data@:

```[sql]
SELECT
  PARSE_JSON(CAST(data AS STRING)) AS parsed_payload
FROM project_id.dataset_id.table_id
WHERE channel = "my-channel"
```

The following explains the components of the query:

|_. Query function |_. Purpose |
| @CAST(data AS STRING)@ | Converts the @data@ column from BYTES (if applicable) into a STRING format. |
| @PARSE_JSON(...)@ | Parses the string into a structured JSON object for easier querying. |
| @WHERE channel = "my-channel"@ | Filters results to retrieve messages only from a specific Ably channel. |

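Once the payload is parsed, individual properties can be extracted with BigQuery's JSON functions. The following is a small sketch that assumes a hypothetical @temperature@ property in the message payload:

```[sql]
-- Illustrative sketch only: "temperature" is a hypothetical property of the message payload.
SELECT
  JSON_VALUE(CAST(data AS STRING), '$.temperature') AS temperature
FROM project_id.dataset_id.table_id
WHERE channel = "my-channel"
```
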
<aside data-type='note'>
<p>Parsing JSON at query time can be computationally expensive for large datasets. If your queries need frequent JSON parsing, consider pre-processing and storing structured fields in a secondary table using an ETL pipeline for better performance.</p>
</aside>

h4(#etl). Extract, Transform, Load (ETL)

ETL is recommended for large-scale analytics and performance optimization, ensuring data is structured, deduplicated, and efficiently stored for querying. Transform raw data (JSON or BYTES) into a more structured format, remove duplicates, and write it into a secondary table optimized for analytics, as shown in the sketch after this list:

* Convert data from raw (BYTES/JSON) into structured columns, for example geospatial fields or numeric data types, for detailed analysis.
* Write transformed records to a new, optimized table tailored for query performance.
* Deduplicate records using the unique ID field to ensure data integrity.
* Automate the process using BigQuery scheduled queries or an external workflow to run transformations at regular intervals.

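The following is a minimal sketch of such a transformation, assuming a hypothetical secondary table named @messages_structured@ and the same hypothetical @temperature@ payload property used above. It parses the raw payload, deduplicates on @id@, and writes the result to the secondary table:

```[sql]
-- Illustrative sketch only: table names and payload properties are placeholders.
CREATE OR REPLACE TABLE project_id.dataset_id.messages_structured AS
SELECT id, channel, temperature
FROM (
  SELECT
    id,
    channel,
    -- Convert the raw payload into a structured, typed column.
    SAFE_CAST(JSON_VALUE(CAST(data AS STRING), '$.temperature') AS FLOAT64) AS temperature,
    -- Number duplicate rows that share the same Ably-assigned message ID.
    ROW_NUMBER() OVER (PARTITION BY id ORDER BY id) AS row_num
  FROM project_id.dataset_id.table_id
)
-- Keep a single row per message ID to deduplicate.
WHERE row_num = 1;
```

Scheduling a statement like this with BigQuery scheduled queries, or an external workflow, keeps the secondary table up to date at regular intervals.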
