Merged

Commits
89 commits
82e904e
Add DLT destination capabilities tags to documentation files
alkaline-0 Sep 24, 2025
d19d6fe
Enhance documentation by adding destination capabilities sections
alkaline-0 Sep 29, 2025
459af5c
Add new script for inserting DLT destination capabilities
alkaline-0 Sep 24, 2025
9a1f1fd
Update package.json and package-lock.json to include new script for i…
alkaline-0 Sep 25, 2025
a4aec8a
Revert "Update package.json and package-lock.json to include new scri…
alkaline-0 Sep 25, 2025
847a092
Add script for inserting destination capabilities into documentation
alkaline-0 Sep 25, 2025
72d7679
Add destination capabilities execution
alkaline-0 Sep 25, 2025
1d8577c
Enhance destination capabilities insertion script
alkaline-0 Sep 29, 2025
1a367f2
Refactor destination capabilities insertion script
alkaline-0 Sep 29, 2025
2a4203c
Refactor and enhance destination capabilities insertion script
alkaline-0 Sep 29, 2025
8e0c5c6
Refactor and improve destination capabilities insertion script
alkaline-0 Sep 30, 2025
68db0b5
Remove destination capabilities sections from various documentation f…
alkaline-0 Sep 30, 2025
b6407c9
Add destination capabilities sections to various documentation files
alkaline-0 Sep 30, 2025
9e452b7
Update documentation for various destinations with formatting improve…
alkaline-0 Sep 30, 2025
cfd22c2
Remove destination capabilities sections from various documentation f…
alkaline-0 Sep 30, 2025
39c86db
Update destinations with capabilities marker
alkaline-0 Sep 30, 2025
34e0ba3
Added type guard to guard against Any
alkaline-0 Oct 1, 2025
712bc24
Temporarily commit preprocessed docs
alkaline-0 Oct 1, 2025
64d0ce4
Add new constants for documentation preprocessing and update requirem…
alkaline-0 Oct 6, 2025
41a96d8
Add tuba links processing script and remove unused line from constants
alkaline-0 Oct 6, 2025
327ab12
Refactor tuba link processing and extract utility function
alkaline-0 Oct 6, 2025
5f4b233
Add snippet processing functionality for documentation
alkaline-0 Oct 6, 2025
2422dac
Add example processing script for documentation generation
alkaline-0 Oct 6, 2025
3d07fc2
Enhance documentation preprocessing with Python integration and new s…
alkaline-0 Oct 6, 2025
7a2d9c4
Refactor documentation preprocessing scripts for improved async handl…
alkaline-0 Oct 7, 2025
18cec11
Refactor documentation preprocessing scripts for improved efficiency …
alkaline-0 Oct 7, 2025
222f4f5
Refactor file change handling in documentation preprocessing scripts
alkaline-0 Oct 7, 2025
3fa025a
Add destination capabilities processing and refactor related scripts
alkaline-0 Oct 7, 2025
882d000
Update package-lock.json and package.json for improved documentation …
alkaline-0 Oct 7, 2025
ce51fb6
Add processed docs entry to .gitignore
alkaline-0 Oct 7, 2025
874f020
Stop tracking docs_processed directory
alkaline-0 Oct 7, 2025
44a8788
Remove the `preprocess_docs.js` script, which handled documentation p…
alkaline-0 Oct 7, 2025
584c69c
Refactor destination capabilities processing script for type hinting …
alkaline-0 Oct 7, 2025
669b20c
Refactor documentation processing scripts by removing unnecessary arg…
alkaline-0 Oct 7, 2025
3e0c566
Update package.json to streamline documentation processing scripts
alkaline-0 Oct 7, 2025
a2884e7
Added dependency installation in start
alkaline-0 Oct 7, 2025
c9521e4
Refactor package.json scripts for improved documentation processing
alkaline-0 Oct 7, 2025
91de733
Add type checking configurations for additional modules in mypy.ini
alkaline-0 Oct 7, 2025
8f78c38
Enhance type hinting in preprocessing scripts for improved clarity
alkaline-0 Oct 7, 2025
8343d91
Update dependencies and refactor documentation processing scripts
alkaline-0 Oct 8, 2025
5c72fd3
Remove requirements.txt and clean up whitespace in preprocess_docs.py
alkaline-0 Oct 8, 2025
67a1355
Update documentation for Databricks and DuckLake destinations
alkaline-0 Oct 8, 2025
64d8bcb
Enhance documentation for various destinations and add requirements.t…
alkaline-0 Oct 8, 2025
f59698a
Fix typo in DuckDB documentation regarding spatial extension installa…
alkaline-0 Oct 8, 2025
2a24eae
Remove destination capabilities section from AWS Athena documentation
alkaline-0 Oct 8, 2025
e18e72b
Feat/adds workspace (#3171)
rudolfix Oct 8, 2025
ce77726
Fix build scripts for Cloudflare integration in package.json
alkaline-0 Oct 9, 2025
90cd6c6
Fix preprocess-docs:cloudflare script to use python directly instead …
alkaline-0 Oct 9, 2025
9efba85
Restore preprocess-docs scripts in package.json for consistency
alkaline-0 Oct 9, 2025
aa1a78a
Update preprocess-docs:cloudflare script to include requirements inst…
alkaline-0 Oct 9, 2025
8abc1b8
Update preprocess-docs:cloudflare script to include requirements inst…
alkaline-0 Oct 9, 2025
5c7d3ce
Add __init__.py file to tools directory
alkaline-0 Oct 9, 2025
f95d4e1
Refactor import statements to use relative imports in preprocessing s…
alkaline-0 Oct 9, 2025
2fe4c2c
Update import statements to use absolute paths for consistency across…
alkaline-0 Oct 9, 2025
991d9fc
Add mypy configuration for additional modules to ignore missing imports
alkaline-0 Oct 9, 2025
3e6c62a
Removed duplicated line
alkaline-0 Oct 13, 2025
c546d07
Add mypy configuration to ignore missing imports for tools module
alkaline-0 Oct 13, 2025
7fcdced
Update ducklake.md
alkaline-0 Oct 13, 2025
f40fedc
temporarily add netlify build command back
sh-rp Oct 14, 2025
7eccff6
fix typing in snippets and update mypy.ini a bit
sh-rp Oct 14, 2025
6bf1ac9
reverse build commands back to previous order
sh-rp Oct 14, 2025
47246b3
Fixed watch by changing the implementation to a queue and locks
alkaline-0 Oct 14, 2025
e8308ac
Refactor package.json for improved script organization and maintainab…
alkaline-0 Oct 14, 2025
9de656f
Add mypy configuration to ignore missing imports for additional modules
alkaline-0 Oct 14, 2025
5424307
Add mypy configuration to ignore missing imports for more modules
alkaline-0 Oct 14, 2025
f4b0e22
Remove mypy configuration for preprocess_examples to streamline settings
alkaline-0 Oct 14, 2025
aec87bd
Update mypy configuration: rename dlt hub section to dlt plus and rem…
alkaline-0 Oct 14, 2025
d546daf
Refactor import statements to remove 'tools' prefix, improving module…
alkaline-0 Oct 14, 2025
54ad02a
Refactor import statements in preprocessing scripts to use relative i…
alkaline-0 Oct 14, 2025
6c8ea12
Refactor import statements in preprocessing scripts to use absolute i…
alkaline-0 Oct 14, 2025
d213e49
Update mypy.ini
alkaline-0 Oct 14, 2025
b7e6d33
Fix formatting in _generate_doc_link function by removing unnecessary…
alkaline-0 Oct 14, 2025
6f1393d
fix linting and script execution
sh-rp Oct 20, 2025
66adebc
remove sleeping after preprocessing in favor of predictable processin…
sh-rp Oct 20, 2025
c092ff5
remove unnecessary whitespace in preprocess_docs.py for cleaner code
alkaline-0 Oct 20, 2025
7e9903c
Update deployment script in package.json and enhance file change hand…
alkaline-0 Oct 21, 2025
623d7d6
Refactor preprocess_docs.py to improve file change handling; replace …
alkaline-0 Oct 21, 2025
371ee5d
Enhance capabilities table generation in preprocess_destination_capab…
alkaline-0 Oct 21, 2025
c84c4e1
Remove destination capabilities sections from multiple destination do…
alkaline-0 Oct 21, 2025
9b2263e
Fix formatting in start script of package.json for improved readability
alkaline-0 Oct 21, 2025
f825a7b
Enhance capabilities table generation by improving destination name f…
alkaline-0 Oct 21, 2025
9460ae2
update files incrementally only when in watcher mode
sh-rp Oct 22, 2025
12c4f50
fix duplicate page at examples error
sh-rp Oct 22, 2025
009e565
remove outdated docs deploy action
sh-rp Oct 22, 2025
ed402e1
add build docs action for better debugability
sh-rp Oct 22, 2025
17fbf5c
revert unintentional change to md file
sh-rp Oct 22, 2025
10ca810
add info about where capabilities links should go
sh-rp Oct 22, 2025
2ff888f
refactor: improve documentation link generation for capabilities
alkaline-0 Oct 22, 2025
f803f83
fix: update documentation link for replace strategy and improve link …
alkaline-0 Oct 22, 2025
36 changes: 36 additions & 0 deletions .github/workflows/build_docs.yml
@@ -0,0 +1,36 @@
name: docs | build docs

on:
workflow_call:
workflow_dispatch:

jobs:
build_docs:
name: docs | build docs
runs-on: ubuntu-latest

steps:
- name: Check out
uses: actions/checkout@master

- uses: pnpm/action-setup@v2
with:
version: 9.13.2

- uses: actions/setup-node@v5
with:
node-version: '22'

- name: Setup Python
uses: actions/setup-python@v5
with:
python-version: "3.11"

- name: Install node dependencies
run: cd docs/website && npm install

- name: Install python dependencies
run: cd docs/website && pip install -r requirements.txt

- name: Build docs
run: cd docs/website && npm run build:cloudflare
5 changes: 5 additions & 0 deletions .github/workflows/main.yml
@@ -30,6 +30,11 @@ jobs:
test_docs_snippets:
name: test snippets in docs
uses: ./.github/workflows/test_docs_snippets.yml

# NOTE: we build docs the same way as on cloudflare, so we can catch problems early
build_docs:
name: build docs
uses: ./.github/workflows/build_docs.yml

lint:
name: lint on all python versions
20 changes: 0 additions & 20 deletions .github/workflows/tools_deploy_docs.yml

This file was deleted.

6 changes: 6 additions & 0 deletions docs/package-lock.json


2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/athena.md
@@ -8,6 +8,8 @@ keywords: [aws, athena, glue catalog]

The Athena destination stores data as Parquet files in S3 buckets and creates [external tables in AWS Athena](https://docs.aws.amazon.com/athena/latest/ug/creating-tables.html). You can then query those tables with Athena SQL commands, which will scan the entire folder of Parquet files and return the results. This destination works very similarly to other SQL-based destinations, with the exception that the merge write disposition is not supported at this time. The `dlt` metadata will be stored in the same bucket as the Parquet files, but as iceberg tables. Athena also supports writing individual data tables as Iceberg tables, so they may be manipulated later. A common use case would be to strip GDPR data from them.

<!--@@@DLT_DESTINATION_CAPABILITIES athena-->

## Install dlt with Athena
**To install the dlt library with Athena dependencies:**
```sh
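The `<!--@@@DLT_DESTINATION_CAPABILITIES athena-->` comment added above is a placeholder that the new preprocessing step expands into a capabilities table at build time. A minimal sketch of that kind of marker substitution, assuming a hypothetical `render_capabilities_table` helper rather than the actual logic of `preprocess_destination_capabilities`:

```py
import re
from pathlib import Path

# Matches markers such as <!--@@@DLT_DESTINATION_CAPABILITIES athena-->
MARKER_RE = re.compile(r"<!--@@@DLT_DESTINATION_CAPABILITIES\s+(\w+)-->")


def render_capabilities_table(destination_name: str) -> str:
    # Hypothetical helper: the real script derives this table from dlt's
    # destination capabilities; here it only returns a stub heading.
    return f"#### {destination_name} capabilities\n\n| capability | value |\n| --- | --- |\n"


def expand_markers(doc_path: Path) -> None:
    text = doc_path.read_text(encoding="utf-8")
    new_text = MARKER_RE.sub(lambda m: render_capabilities_table(m.group(1)), text)
    if new_text != text:
        doc_path.write_text(new_text, encoding="utf-8")


if __name__ == "__main__":
    # docs_processed is the output directory mentioned in the commit history
    for md_file in Path("docs_processed").rglob("*.md"):
        expand_markers(md_file)
```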
5 changes: 3 additions & 2 deletions docs/website/docs/dlt-ecosystem/destinations/bigquery.md
@@ -13,6 +13,7 @@ keywords: [bigquery, destination, data warehouse]
```sh
pip install "dlt[bigquery]"
```
<!--@@@DLT_DESTINATION_CAPABILITIES bigquery-->

## Setup guide

@@ -228,8 +229,8 @@ BigQuery supports the following [column hints](../../general-usage/schema#tables

:::warning
**Deprecation Notice:**
Per-column `cluster` hints are deprecated and will be removed in a future release.
**To migrate, use the `cluster` argument of the `bigquery_adapter` instead.**
See the [example below](#use-an-adapter-to-apply-hints-to-a-resource) for how to specify clustering columns with the adapter.
:::

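The deprecation notice in the bigquery hunk above points to the adapter as the migration path. A short sketch of passing clustering columns through `bigquery_adapter` (resource and column names are placeholders; check the adapter signature against the dlt docs):

```py
import dlt
from dlt.destinations.adapters import bigquery_adapter


@dlt.resource
def events():
    yield {"event_id": 1, "created_at": "2025-01-01", "payload": "ok"}


# Instead of a per-column `cluster` hint, pass the clustering columns to the adapter
clustered_events = bigquery_adapter(events, cluster=["event_id"])

pipeline = dlt.pipeline(pipeline_name="events_pipeline", destination="bigquery", dataset_name="raw")
pipeline.run(clustered_events)
```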
2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/clickhouse.md
@@ -14,6 +14,8 @@ keywords: [ clickhouse, destination, data warehouse ]
pip install "dlt[clickhouse]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES clickhouse-->

## Setup guide

### 1. Initialize the dlt project
2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/databricks.md
@@ -26,6 +26,8 @@ Databricks supports both **Delta** (default) and **Apache Iceberg** table format
pip install "dlt[databricks]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES databricks-->

## Set up your Databricks workspace

To use the Databricks destination, you need:
2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/destination.md
@@ -19,6 +19,8 @@ To install `dlt` without additional dependencies:
pip install dlt
```

<!--@@@DLT_DESTINATION_CAPABILITIES destination-->

## Set up a destination function for your pipeline

The custom destination decorator differs from other destinations in that you do not need to provide connection credentials; instead, you provide a function that gets called for all items loaded during a pipeline run or load operation. With the `@dlt.destination` decorator, you can convert any function that takes two arguments into a `dlt` destination.
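The custom destination described above boils down to decorating a two-argument function. A minimal sketch, with `batch_size` and the type imports taken from my reading of the dlt docs rather than from this PR:

```py
import dlt
from dlt.common.typing import TDataItems
from dlt.common.schema import TTableSchema


@dlt.destination(batch_size=10)
def print_sink(items: TDataItems, table: TTableSchema) -> None:
    # Called once per batch of items, for every table in the load package
    print(f"received {len(items)} rows for table {table['name']}")


pipeline = dlt.pipeline(pipeline_name="custom_sink_demo", destination=print_sink)
pipeline.run([{"id": 1}, {"id": 2}], table_name="numbers")
```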
2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/dremio.md
@@ -12,6 +12,8 @@ keywords: [dremio, iceberg, aws, glue catalog]
pip install "dlt[dremio,s3]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES dremio-->

## Setup guide
### 1. Initialize the dlt project

4 changes: 3 additions & 1 deletion docs/website/docs/dlt-ecosystem/destinations/duckdb.md
@@ -12,6 +12,8 @@ keywords: [duckdb, destination, data warehouse]
pip install "dlt[duckdb]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES duckdb-->

## Setup guide

**1. Initialize a project with a pipeline that loads to DuckDB by running:**
@@ -280,7 +282,7 @@ dest_ = dlt.destinations.duckdb(
DuckDbCredentials("duck.db", extensions=["spatial"], local_config={"errors_as_json": True})
)
```
The code above installs the **spatial** extension (`dlt` only loads the extension) and passes DuckDB credentials to the destination constructor. The database file is **duck.db**; logging and error messages as `json` are enabled.

## Data access after loading
After loading, it is available in **read/write** mode via `with pipeline.sql_client() as con:`, which is a wrapper over `DuckDBPyConnection`. See [duckdb docs](https://duckdb.org/docs/api/python/overview#persistent-storage) for details. If you want to **read** data, use [pipeline.dataset()](../../general-usage/dataset-access/dataset) instead of `sql_client`.
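The duckdb page above also documents post-load access via `pipeline.sql_client()` and `pipeline.dataset()`. A small sketch of both access paths (table and dataset names are placeholders; the method names reflect my understanding of the dlt API):

```py
import dlt

pipeline = dlt.pipeline(pipeline_name="quack", destination="duckdb", dataset_name="chess_data")
pipeline.run([{"id": 1, "name": "magnus"}], table_name="players")

# read/write access through the wrapped DuckDB connection
with pipeline.sql_client() as client:
    rows = client.execute_sql("SELECT id, name FROM players")
    print(rows)

# read-only access through the dataset interface (.df() needs pandas installed)
players_df = pipeline.dataset()["players"].df()
print(players_df.head())
```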
6 changes: 4 additions & 2 deletions docs/website/docs/dlt-ecosystem/destinations/ducklake.md
@@ -14,6 +14,8 @@ In order to use ducklake you must provide the following infrastructure:

If you are looking for a managed ducklake infrastructure, check the [Motherduck Ducklake support](motherduck.md#ducklake-setup). `dlt` is also able to set up a local ducklake with `sqlite` as the catalog fully automatically.

<!--@@@DLT_DESTINATION_CAPABILITIES ducklake-->

## Quick start

- Install dlt with DuckDB dependencies:
@@ -152,7 +154,7 @@ destination = dlt.destinations.ducklake(credentials=credentials)
import dlt
from dlt.sources.credentials import ConnectionStringCredentials

# set catalog name using connection string credentials
catalog_credentials = ConnectionStringCredentials()
# use duckdb with the default name
catalog_credentials.drivername = "duckdb"
@@ -215,7 +217,7 @@ with pipeline.sql_client() as client:
All write dispositions are supported. `upsert` is supported on **duckdb 1.4.x** (without hard deletes for now).

## Data loading
By default, Parquet files and the `COPY` command are used to move local files to the remote storage.

The **INSERT** format is also supported and will execute large INSERT queries directly into the remote database. This method is significantly slower and may exceed the maximum query size, so it is not advised.

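The ducklake hunk above mentions that `upsert` is supported on duckdb 1.4.x. A sketch of a resource opting into the upsert merge strategy (this follows the general dlt merge configuration, not code from this PR; names are placeholders):

```py
import dlt


@dlt.resource(
    primary_key="id",
    write_disposition={"disposition": "merge", "strategy": "upsert"},
)
def users():
    yield [{"id": 1, "email": "a@example.com"}, {"id": 2, "email": "b@example.com"}]


pipeline = dlt.pipeline(pipeline_name="lake", destination="ducklake", dataset_name="app_data")
pipeline.run(users())
```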
2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/filesystem.md
@@ -25,6 +25,8 @@ pip install s3fs
so pip does not fail on backtracking.
:::

<!--@@@DLT_DESTINATION_CAPABILITIES filesystem-->

## Initialize the dlt project

Let's start by initializing a new dlt project as follows:
5 changes: 4 additions & 1 deletion docs/website/docs/dlt-ecosystem/destinations/lancedb.md
@@ -9,6 +9,9 @@ keywords: [ lancedb, vector database, destination, dlt ]
[LanceDB](https://lancedb.com/) is an open-source, high-performance vector database. It allows you to store data objects and perform similarity searches over them.
This destination helps you load data into LanceDB from [dlt resources](../../general-usage/resource.md).


<!--@@@DLT_DESTINATION_CAPABILITIES lancedb-->

## Setup guide

### Choose a model provider
@@ -209,7 +212,7 @@ If you plan to use `merge` write disposition, remember to [enable load ids](../v

## Access loaded data

You can access the loaded data in many ways. You can create a LanceDB client yourself, pass it to the `dlt` pipeline
for loading and then use it for querying:
```py
import dlt
5 changes: 4 additions & 1 deletion docs/website/docs/dlt-ecosystem/destinations/motherduck.md
@@ -21,6 +21,9 @@ workers=3
or export the **LOAD__WORKERS=3** env variable. See more in [performance](../../reference/performance.md)
:::


<!--@@@DLT_DESTINATION_CAPABILITIES motherduck-->

## Setup guide

**1. Initialize a project with a pipeline that loads to MotherDuck by running**
@@ -69,7 +72,7 @@ python3 chess_pipeline.py
```

### DuckLake setup
DuckLake can be used to manage and persist your MotherDuck databases on external object storage like S3. This is especially useful if you want more control over where your data is stored or if you’re integrating with your own cloud infrastructure.
The steps below show how to set up a DuckLake-managed database backed by S3.

**1. Create the S3-Backed DuckLake Database**
2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/mssql.md
@@ -12,6 +12,8 @@ keywords: [mssql, sqlserver, destination, data warehouse]
pip install "dlt[mssql]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES mssql-->

## Setup guide

### Prerequisites
2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/postgres.md
@@ -12,6 +12,8 @@ keywords: [postgres, destination, data warehouse]
pip install "dlt[postgres]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES postgres-->

## Setup guide

**1. Initialize a project with a pipeline that loads to Postgres by running:**
3 changes: 3 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/qdrant.md
@@ -9,6 +9,9 @@ keywords: [qdrant, vector database, destination, dlt]
[Qdrant](https://qdrant.tech/) is an open-source, high-performance vector search engine/database. It deploys as an API service, providing a search for the nearest high-dimensional vectors.
This destination helps you load data into Qdrant from [dlt resources](../../general-usage/resource.md).


<!--@@@DLT_DESTINATION_CAPABILITIES qdrant-->

## Setup guide

1. To use Qdrant as a destination, make sure `dlt` is installed with the `qdrant` extra:
2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/redshift.md
@@ -12,6 +12,8 @@ keywords: [redshift, destination, data warehouse]
pip install "dlt[redshift]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES redshift-->

## Setup guide
### 1. Initialize the dlt project

3 changes: 3 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/snowflake.md
@@ -6,12 +6,15 @@ keywords: [Snowflake, destination, data warehouse]

# Snowflake


## Install `dlt` with Snowflake
**To install the `dlt` library with Snowflake dependencies, run:**
```sh
pip install "dlt[snowflake]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES snowflake-->

## Setup guide

**1. Initialize a project with a pipeline that loads to Snowflake by running:**
4 changes: 3 additions & 1 deletion docs/website/docs/dlt-ecosystem/destinations/sqlalchemy.md
@@ -26,6 +26,8 @@ pip install mysqlclient

Refer to the [SQLAlchemy documentation on dialects](https://docs.sqlalchemy.org/en/20/dialects/) for information about client libraries required for supported databases.

<!--@@@DLT_DESTINATION_CAPABILITIES sqlalchemy-->

### Create a pipeline

**1. Initialize a project with a pipeline that loads to MS SQL by running:**
@@ -140,7 +142,7 @@ Please report issues with particular dialects. We'll try to make them work.

### Trino limitations
* The Trino dialect does not case-fold identifiers. Use the `snake_case` naming convention only.
* Trino does not support the merge/scd2 write disposition (unless you somehow create PRIMARY KEYs on engine tables)
* JSON and BINARY types are cast to STRING (the dialect seems to have a conversion bug)
* Trino does not support PRIMARY/UNIQUE constraints

2 changes: 2 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/synapse.md
@@ -12,6 +12,8 @@ keywords: [synapse, destination, data warehouse]
pip install "dlt[synapse]"
```

<!--@@@DLT_DESTINATION_CAPABILITIES synapse-->

## Setup guide

### Prerequisites
3 changes: 3 additions & 0 deletions docs/website/docs/dlt-ecosystem/destinations/weaviate.md
@@ -9,6 +9,9 @@ keywords: [weaviate, vector database, destination, dlt]
[Weaviate](https://weaviate.io/) is an open-source vector database. It allows you to store data objects and perform similarity searches over them.
This destination helps you load data into Weaviate from [dlt resources](../../general-usage/resource.md).


<!--@@@DLT_DESTINATION_CAPABILITIES weaviate-->

## Setup guide

1. To use Weaviate as a destination, make sure dlt is installed with the 'weaviate' extra:
2 changes: 1 addition & 1 deletion docs/website/docs/walkthroughs/create-new-destination.md
@@ -183,7 +183,7 @@ We can quickly repurpose existing GitHub source and `secrets.toml` already prese
```py
import dlt

from github import github_repo_events
from github import github_repo_events # type: ignore[attr-defined]
from presto import presto # importing destination factory

def load_airflow_events() -> None: