Commit da58e79

Merge branch 'next' of https://github.com/firebase/extensions into @invertase/@jwerner08/gcp-option

jauntybrain committed Jan 12, 2024
2 parents 5edec86 + 371be9c
Showing 72 changed files with 2,779 additions and 23,366 deletions.
75 changes: 75 additions & 0 deletions .github/workflows/readmes-updated.yml
@@ -0,0 +1,75 @@
name: Check READMEs are up to date

on:
pull_request:
types:
- opened
- synchronize
branches:
- "next"
- "master"

concurrency:
group:
${{ github.workflow }}-${{ github.event.pull_request.number || github.ref }}
cancel-in-progress: true

env:
BRANCH_NAME: ${{ github.head_ref || github.ref_name }}

jobs:
build:
runs-on: ubuntu-latest

steps:
- name: Checkout repository
uses: actions/checkout@v3
with:
fetch-depth: 0
token: ${{ secrets.GITHUB_TOKEN }}
ref: ${{ github.event.pull_request.head.ref }}
repository: ${{ github.event.pull_request.head.repo.full_name }}
- name: Set up Node.js
uses: actions/setup-node@v3
with:
node-version: 18
cache: "npm"
cache-dependency-path: "**/functions/package-lock.json"

- name: Set up global dependencies directory
id: global-deps-setup
run: |
mkdir -p ~/.npm-global
npm config set prefix '~/.npm-global'
echo "::set-output name=dir::$(npm config get prefix)"
- name: Cache global dependencies
uses: actions/cache@v2
with:
path: ${{ steps.global-deps-setup.outputs.dir }}
key:
${{ runner.os }}-npm-global-deps-${{
hashFiles('**/package-lock.json') }}
restore-keys: |
${{ runner.os }}-npm-global-deps-
- name: Install Firebase and Lerna
run: |
echo "${{ steps.global-deps-setup.outputs.dir }}/bin" >> $GITHUB_PATH
npm install -g firebase-tools lerna
- name: Install local dependencies
run: npm ci

- name: Run Lerna generate-readme
run: lerna run --parallel generate-readme

- name: Check READMEs are up to date and push changes if possible.
run: |
changed_files=$(git status -s -- '**/README.md' | cut -c4-)
if [[ ! -z "$changed_files" ]]; then
echo "Changes detected in README.md files:"
echo "$changed_files"
echo "Please run 'lerna run --parallel generate-readme' locally and commit the changes."
exit 1
fi
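The check step above extracts paths from `git status -s` porcelain output, trimming the two status columns plus separator with `cut -c4-`. A minimal Python sketch of the same parsing (the sample paths are hypothetical):

```python
def parse_porcelain(out: str) -> list[str]:
    # Each `git status -s` line is "XY path"; dropping the first three
    # characters mirrors the workflow's `cut -c4-`.
    return [line[3:] for line in out.splitlines() if line]

sample = " M docs/README.md\n?? new-ext/README.md\n"
print(parse_porcelain(sample))  # ['docs/README.md', 'new-ext/README.md']
```

A non-empty result is what makes the workflow step fail with the "please run generate-readme" message.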
8 changes: 3 additions & 5 deletions .github/workflows/test.yml
@@ -2,18 +2,16 @@ name: Testing

on:
push:
branches:
- "**"
branches: [next]
pull_request:
branches:
- "**"
branches: ["**"]

jobs:
nodejs:
runs-on: ubuntu-latest
strategy:
matrix:
node: ["14", "16"]
node: ["16", "18"]
name: node.js_${{ matrix.node }}_test
steps:
- uses: actions/checkout@v3
2 changes: 1 addition & 1 deletion .github/workflows/validate.yml
@@ -13,7 +13,7 @@ jobs:
- name: Setup node
uses: actions/setup-node@v3
with:
node-version: 14
node-version: 18
- name: NPM install
run: SKIP_POSTINSTALL=yes npm i
- name: Prettier Lint Check
3 changes: 2 additions & 1 deletion _emulator/extensions/firestore-send-email.env.local
@@ -7,4 +7,5 @@ DEFAULT_FROM=fakeemail@gmail.com
DEFAULT_REPLY_TO=fakeemail@gmail.com
TESTING=true
TTL_EXPIRE_TYPE=day
TTL_EXPIRE_VALUE=5
TTL_EXPIRE_VALUE=5
TLS_OPTIONS={}
5 changes: 3 additions & 2 deletions _emulator/extensions/storage-resize-images.env.local
@@ -1,10 +1,11 @@
LOCATION=europe-west2
IMG_BUCKET=${STORAGE_BUCKET}
IMG_SIZES=200x200
IMG_SIZES=300x300
DELETE_ORIGINAL_FILE=true
MAKE_PUBLIC=true
RESIZED_IMAGES_PATH=thumbnails
FAILED_IMAGES_PATH=failed
IMAGE_TYPE=webp
IS_ANIMATED=true
FUNCTION_MEMORY=1024
DO_BACKFILL=false
SHARP_OPTIONS='{"fit":"cover", "position": "top", "animated": false}'
4 changes: 1 addition & 3 deletions auth-mailchimp-sync/README.md
@@ -34,15 +34,13 @@ Usage of this extension also requires you to have a Mailchimp account. You are r

**Configuration Parameters:**

* Cloud Functions location: Where do you want to deploy the functions created for this extension?

* Mailchimp API key: What is your Mailchimp API key? To obtain a Mailchimp API key, go to your [Mailchimp account](https://admin.mailchimp.com/account/api/).

* Audience ID: What is the Mailchimp Audience ID to which you want to subscribe new users? To find your Audience ID: visit https://admin.mailchimp.com/lists, click on the desired audience or create a new audience, then select **Settings**. Look for **Audience ID** (for example, `27735fc60a`).

* Contact status: When the extension adds a new user to the Mailchimp audience, what is their initial status? This value can be `subscribed` or `pending`. `subscribed` means the user can receive campaigns; `pending` means the user still needs to opt-in to receive campaigns.

* Import existing users into Mailchimp audience: Do you want to add existing users to the Mailchimp audience?
* Import existing users into Mailchimp audience: Do you want to add existing users to the Mailchimp audience when you install or update this extension?



4 changes: 4 additions & 0 deletions delete-user-data/CHANGELOG.md
@@ -1,3 +1,7 @@
## Version 0.1.20

fix - update regex for RTDB instance param

## Version 0.1.19

chore(delete-user-data): remove firebase-tools dependency
2 changes: 1 addition & 1 deletion delete-user-data/README.md
@@ -43,7 +43,7 @@ For example, if you have the collections `users` and `admins`, and each collecti

* Cloud Firestore delete mode: (Only applicable if you use the `Cloud Firestore paths` parameter.) How do you want to delete Cloud Firestore documents? To also delete documents in subcollections, set this parameter to `recursive`.

* Realtime Database instance: From which Realtime Database instance do you want to delete data keyed on a user ID?
* Realtime Database instance: What is the ID of the Realtime Database instance from which you want to delete user data (keyed on user ID)?


* Realtime Database location: (Only applicable if you provided the `Realtime Database instance` parameter.) From which Realtime Database location do you want to delete data keyed on a user ID?
8 changes: 4 additions & 4 deletions delete-user-data/extension.yaml
@@ -13,7 +13,7 @@
# limitations under the License.

name: delete-user-data
version: 0.1.19
version: 0.1.20
specVersion: v1beta

displayName: Delete User Data
@@ -116,11 +116,11 @@ params:
- param: SELECTED_DATABASE_INSTANCE
label: Realtime Database instance
description: >
From which Realtime Database instance do you want to delete data keyed on a user ID?
What is the ID of the Realtime Database instance from which you want to delete user data (keyed on user ID)?
type: string
example: my-instance
validationRegex: ^([0-9a-z_.-]*)$
validationErrorMessage: Invalid database instance
validationRegex: ^[^\.\$\#\]\[\/\x00-\x1F\x7F]+$
validationErrorMessage: Invalid database instance. Make sure that you have entered just the instance ID, and not the entire database URL.
required: false

- param: SELECTED_DATABASE_LOCATION
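The tightened `validationRegex` above switches from an allow-list of characters to a deny-list of characters that are invalid in a Realtime Database instance ID. A quick Python check of the behavior change (the sample instance names are illustrative):

```python
import re

# Old and new patterns, copied from the extension.yaml diff above.
OLD = r"^([0-9a-z_.-]*)$"
NEW = r"^[^\.\$\#\]\[\/\x00-\x1F\x7F]+$"

# A plain instance ID still validates.
assert re.fullmatch(NEW, "my-instance")
# A full database URL is now rejected, matching the new error message.
assert not re.fullmatch(NEW, "https://my-instance.firebaseio.com")
# The old pattern (`*`) accepted an empty string; the new one (`+`) does not.
assert re.fullmatch(OLD, "")
assert not re.fullmatch(NEW, "")
```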
2 changes: 1 addition & 1 deletion docs/firestore-bigquery-export/Clustering.md
@@ -12,7 +12,7 @@ Through the extension, adding clustering is as simple as adding a comma-separate

Clustering allows a maximum of four fields and can be configured similarly to

`document_id, timestamp, event_id, data`
`document_id, document_name, timestamp, event_id, data`

![example](/docs/firestore-bigquery-export/media/clustering.png)

2 changes: 1 addition & 1 deletion docs/firestore-bigquery-export/get-started.md
@@ -89,7 +89,7 @@ During installation, you will be prompted to specify a number of configuration p

This parameter will allow you to set up Clustering for the BigQuery Table
created by the extension (for example: `data,document_id,timestamp` - no whitespace). You can select up to 4 comma-separated fields (order matters).
Available schema extensions table fields for clustering: `document_id, timestamp, event_id, operation, data`.
Available schema extensions table fields for clustering: `document_id, document_name, timestamp, event_id, operation, data`.

- **Backup Collection Name:**

24 changes: 24 additions & 0 deletions firestore-bigquery-export/CHANGELOG.md
@@ -1,3 +1,27 @@
## Version 0.1.43

fix - correctly partition when only "timestamp" is selected for partition options

## Version 0.1.42

fix - correctly extract timestamps from firestore fields to partition columns

## Version 0.1.41

fix - rollback backfill feature

## Version 0.1.40

fix - correct default value for use collection group query param

## Version 0.1.39

fix - rollback timestamp serialization

## Version 0.1.38

fix - backfill value mismatch

## Version 0.1.37

fix - serialize timestamps to date string
7 changes: 7 additions & 0 deletions firestore-bigquery-export/PREINSTALL.md
@@ -14,6 +14,13 @@ Enabling wildcard references will provide an additional STRING based column. The

`Partition` settings cannot be updated on a pre-existing table; if these options are required, a new table must be created.

Note: To enable partitioning for a BigQuery database, the following fields are required:

- Time partitioning option type
- Time partitioning column name
- Time partitioning table schema
- Firestore document field name

`Clustering` does not require creating or modifying a table; when clustering options are added, the table is updated automatically.


17 changes: 8 additions & 9 deletions firestore-bigquery-export/README.md
@@ -22,6 +22,13 @@ Enabling wildcard references will provide an additional STRING based column. The

`Partition` settings cannot be updated on a pre-existing table; if these options are required, a new table must be created.

Note: To enable partitioning for a BigQuery database, the following fields are required:

- Time partitioning option type
- Time partitioning column name
- Time partitioning table schema
- Firestore document field name

`Clustering` does not require creating or modifying a table; when clustering options are added, the table is updated automatically.


@@ -129,7 +136,7 @@ To install an extension, your project must be on the [Blaze (pay as you go) plan

* BigQuery SQL Time Partitioning table schema field(column) type: Parameter for BigQuery SQL schema field type for the selected Time Partitioning Firestore Document field option. Cannot be changed if Table is already partitioned.

* BigQuery SQL table clustering: This parameter will allow you to set up Clustering for the BigQuery Table created by the extension (for example: `data,document_id,timestamp` - no whitespace). You can select up to 4 comma-separated fields. The order of the specified columns determines the sort order of the data. Available schema extensions table fields for clustering: `document_id, timestamp, event_id, operation, data`.
* BigQuery SQL table clustering: This parameter will allow you to set up Clustering for the BigQuery Table created by the extension (for example: `data,document_id,timestamp` - no whitespace). You can select up to 4 comma-separated fields. The order of the specified columns determines the sort order of the data. Available schema extensions table fields for clustering: `document_id, document_name, timestamp, event_id, operation, data`.

* Maximum number of synced documents per second: This parameter will set the maximum number of synchronized documents per second with BigQuery. Please note, any other external updates to a BigQuery table will be included within this quota. Ensure that you have set a low enough number to compensate. Defaults to 10.

@@ -139,14 +146,6 @@

* Use new query syntax for snapshots: If enabled, snapshots will be generated with the new query syntax, which should be more performant and avoid potential resource limitations.

* Import existing Firestore documents into BigQuery?: Do you want to import existing documents from your Firestore collection into BigQuery? These documents will each have a special changelog with the operation of `IMPORT` and the timestamp of epoch. This ensures that any operation on an imported document supersedes the import record.

* Existing documents collection: What is the path of the Cloud Firestore Collection you would like to import from? (This may, or may not, be the same Collection for which you plan to mirror changes.) If you want to use a collectionGroup query, provide the collection name value here, and set 'Use Collection Group query' to true.

* Use Collection Group query: Do you want to use a [collection group](https://firebase.google.com/docs/firestore/query-data/queries#collection-group-query) query for importing existing documents? Warning: A collectionGroup query will target every collection in your Firestore project that matches the 'Existing documents collection'. For example, if you have 10,000 documents with a sub-collection named: landmarks, this will query every document in 10,000 landmarks collections.

* Docs per backfill: When importing existing documents, how many should be imported at once? The default value of 200 should be ok for most users. If you are using a transform function or have very large documents, you may need to set this to a lower number. If the lifecycle event function times out, lower this value.

* Cloud KMS key name: Instead of Google managing the key encryption keys that protect your data, you control and manage key encryption keys in Cloud KMS. If this parameter is set, the extension will specify the KMS key name when creating the BQ table. See the PREINSTALL.md for more details.


65 changes: 5 additions & 60 deletions firestore-bigquery-export/extension.yaml
@@ -13,7 +13,7 @@
# limitations under the License.

name: firestore-bigquery-export
version: 0.1.37
version: 0.1.43
specVersion: v1beta

displayName: Stream Firestore to BigQuery
@@ -66,7 +66,7 @@ resources:
Imports existing documents from the specified collection into BigQuery. Imported documents will have
a special changelog with the operation of `IMPORT` and the timestamp of epoch.
properties:
runtime: nodejs14
runtime: nodejs18
taskQueueTrigger: {}

- name: syncBigQuery
@@ -291,7 +291,7 @@ params:
description: >-
This parameter will allow you to set up Clustering for the BigQuery Table
created by the extension (for example: `data,document_id,timestamp` - no whitespace). You can select up to 4 comma-separated fields. The order of the specified columns determines the sort order of the data.
Available schema extensions table fields for clustering: `document_id, timestamp, event_id, operation, data`.
Available schema extensions table fields for clustering: `document_id, document_name, timestamp, event_id, operation, data`.
type: string
validationRegex: ^[^,\s]+(?:,[^,\s]+){0,3}$
validationErrorMessage: No whitespaces. Max 4 fields. e.g. `data,timestamp,event_id,operation`
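The clustering `validationRegex` above enforces the "no whitespace, max 4 comma-separated fields" rule described in the parameter text. A small Python sanity check (the field names are examples):

```python
import re

# Pattern copied from the clustering parameter above.
PAT = r"^[^,\s]+(?:,[^,\s]+){0,3}$"

assert re.fullmatch(PAT, "data,document_id,timestamp")
assert re.fullmatch(PAT, "document_id,document_name,timestamp,event_id")  # 4 fields
assert not re.fullmatch(PAT, "a,b,c,d,e")        # 5 fields: too many
assert not re.fullmatch(PAT, "data, timestamp")  # whitespace rejected
```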
@@ -304,8 +304,8 @@ params:
This parameter will set the maximum number of synchronized documents per second with BigQuery. Please note, any other external updates to a BigQuery table will be included within this quota.
Ensure that you have set a low enough number to compensate. Defaults to 10.
type: string
validationRegex: ^(?:[1-9]|\d{2,3}|[1-4]\d{3})$
validationErrorMessage: Please select a number between 1 and 100
validationRegex: ^([1-9]|[1-9][0-9]|[1-4][0-9]{2}|500)$
validationErrorMessage: Please select a number between 1 and 500
example: 10
required: false
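The replacement `validationRegex` above widens the accepted range from 1-100 to 1-500, in line with the updated error message. Verifying the boundaries in Python:

```python
import re

# New pattern copied from the diff above.
PAT = r"^([1-9]|[1-9][0-9]|[1-4][0-9]{2}|500)$"

assert re.fullmatch(PAT, "1")
assert re.fullmatch(PAT, "10")   # the documented default
assert re.fullmatch(PAT, "500")
assert not re.fullmatch(PAT, "0")
assert not re.fullmatch(PAT, "501")
```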

@@ -338,61 +338,6 @@ params:
value: no
default: no
required: true

- param: DO_BACKFILL
label: Import existing Firestore documents into BigQuery?
description: >-
Do you want to import existing documents from your Firestore collection into BigQuery? These documents
will each have a special changelog with the operation of `IMPORT` and the timestamp of epoch.
This ensures that any operation on an imported document supersedes the import record.
type: select
required: true
options:
- label: Yes
value: yes
- label: No
value: no

- param: IMPORT_COLLECTION_PATH
label: Existing documents collection
description: >-
What is the path of the Cloud Firestore Collection you would like to import from?
(This may, or may not, be the same Collection for which you plan to mirror changes.)
If you want to use a collectionGroup query, provide the collection name value here,
and set 'Use Collection Group query' to true.
type: string
validationRegex: "^[^/]+(/[^/]+/[^/]+)*$"
validationErrorMessage: Firestore collection paths must be an odd number of segments separated by slashes, e.g. "path/to/collection".
example: posts
required: false
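The collection-path `validationRegex` in the removed parameter above encodes the "odd number of slash-separated segments" rule from its error message. A quick Python check (the paths are examples):

```python
import re

# Pattern copied from the IMPORT_COLLECTION_PATH parameter above.
PAT = r"^[^/]+(/[^/]+/[^/]+)*$"

assert re.fullmatch(PAT, "posts")              # 1 segment: a collection
assert re.fullmatch(PAT, "users/alice/posts")  # 3 segments: a subcollection
assert not re.fullmatch(PAT, "users/alice")    # 2 segments: a document path
assert not re.fullmatch(PAT, "/posts")         # leading slash rejected
```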

- param: USE_COLLECTION_GROUP_QUERY
label: Use Collection Group query
description: >-
Do you want to use a [collection group](https://firebase.google.com/docs/firestore/query-data/queries#collection-group-query) query for importing existing documents?
Warning: A collectionGroup query will target every collection in your Firestore project that matches the 'Existing documents collection'.
For example, if you have 10,000 documents with a sub-collection named: landmarks, this will query every document in 10,000 landmarks collections.
type: select
default: false
options:
- label: Yes
value: true
- label: No
value: false
- param: DOCS_PER_BACKFILL
label: Docs per backfill
description: >-
When importing existing documents, how many should be imported at once?
The default value of 200 should be ok for most users.
If you are using a transform function or have very large documents, you may need to set this to a lower number.
If the lifecycle event function times out, lower this value.
type: string
example: 200
validationRegex: "^[1-9][0-9]*$"
validationErrorMessage: Must be a positive integer.
default: 200
required: true


- param: KMS_KEY_NAME
label: Cloud KMS key name