diff --git a/docs/source/tutorial/namespacing_pipelines.md b/docs/source/tutorial/namespacing_pipelines.md
index d0f50f8d49..b1e1a6a25e 100644
--- a/docs/source/tutorial/namespacing_pipelines.md
+++ b/docs/source/tutorial/namespacing_pipelines.md
@@ -5,10 +5,10 @@ This section covers the following:
 * A brief introduction to namespaces and modular pipelines
 * How to convert the existing spaceflights project into a namespaced one
 
-Adding namespaces to [modular pipelines](https://kedro.readthedocs.io/en/stable/06_nodes_and_pipelines/03_modular_pipelines.html#modular-pipelines) unlocks some sophisticated functionality in Kedro
+Adding namespaces to [modular pipelines](../nodes_and_pipelines/modular_pipelines.md) unlocks some sophisticated functionality in Kedro:
 
-1. You are able to [instantiate the same pipeline structure multiple times](https://kedro.readthedocs.io/en/stable/06_nodes_and_pipelines/03_modular_pipelines.html#how-to-use-a-modular-pipeline-twice), but provide different inputs/outputs.
-2. You can unlock the full power of [micro-packaging](https://kedro.readthedocs.io/en/stable/06_nodes_and_pipelines/03_modular_pipelines.html#how-to-share-a-modular-pipeline).
+1. You can [instantiate the same pipeline structure multiple times](../nodes_and_pipelines/modular_pipelines.md#how-to-use-a-modular-pipeline-twice), but provide different inputs/outputs.
+2. You can unlock the full power of [micro-packaging](../nodes_and_pipelines/modular_pipelines.md#how-to-share-a-modular-pipeline).
 3. You can de-clutter your mental model with Kedro-Viz rendering collapsible components.
 
 ![collapsible](../meta/images/collapsible.gif)
diff --git a/docs/source/tutorial/package_a_project.md b/docs/source/tutorial/package_a_project.md
index 5aae3d9294..3407653297 100644
--- a/docs/source/tutorial/package_a_project.md
+++ b/docs/source/tutorial/package_a_project.md
@@ -44,4 +44,4 @@ There are several methods to deploy packaged pipelines via 1st party plugins and
 * [Kedro-Docker](https://github.com/kedro-org/kedro-plugins/tree/main/kedro-docker) plugin for packaging and shipping Kedro projects within [Docker](https://www.docker.com/) containers.
 * [Kedro-Airflow](https://github.com/kedro-org/kedro-plugins/tree/main/kedro-airflow) to convert your Kedro project into an [Airflow](https://airflow.apache.org/) project.
-* The [Deployment guide](../10_deployment/01_deployment_guide) touches on other deployment targets such as AWS Batch and Prefect.
+* The [Deployment guide](../deployment/deployment_guide.md) touches on other deployment targets such as AWS Batch and Prefect.
diff --git a/docs/source/tutorial/tutorial_template.md b/docs/source/tutorial/tutorial_template.md
index 8bc6e76937..81131b4b9d 100644
--- a/docs/source/tutorial/tutorial_template.md
+++ b/docs/source/tutorial/tutorial_template.md
@@ -55,7 +55,7 @@ wheel>=0.35, <0.37 # The reference implementation of the Python wheel packaging
 The dependencies above may be sufficient for some projects, but for the spaceflights project, you need to add some extra requirements.
 
 * In this tutorial, we work with different data formats including CSV, Excel and Parquet and want to visualise our pipeline so we will need to provide extra dependencies.
-* By running `kedro install` on a blank template we generate a new file at `src/requirements.in`. You can read more about the benefits of compiling dependencies [here](../04_kedro_project_setup/01_dependencies.md)
+* By running `kedro install` on a blank template we generate a new file at `src/requirements.in`. You can read more about the benefits of compiling dependencies [here](../kedro_project_setup/dependencies.md).
 * The most important point to learn here is that you should edit the `requirements.in` file going forward.
 
 Add the following requirements to your `src/requirements.in` lock file:
diff --git a/docs/source/tutorial/visualise_pipeline.md b/docs/source/tutorial/visualise_pipeline.md
index 856474b843..b6089fa656 100644
--- a/docs/source/tutorial/visualise_pipeline.md
+++ b/docs/source/tutorial/visualise_pipeline.md
@@ -104,7 +104,7 @@ And this will visualise the pipeline visualisation saved as `my_shareable_pipeli
 Kedro-Viz aims to help users communicate different aspects of their workflow through an interactive flowchart. With the Plotly integration, we take one step further in this direction to allow our users to effectively share their data insights while exploring the pipeline.
 
-We have also used the Plotly integration to allow users to [visualise metrics from experiments](https://kedro.readthedocs.io/en/stable/08_logging/02_experiment_tracking.html?highlight=experiment%20tracking).
+We have also used the Plotly integration to allow users to [visualise metrics from experiments](../logging/experiment_tracking.md).
 
 ```eval_rst
diff --git a/kedro/extras/datasets/json/json_dataset.py b/kedro/extras/datasets/json/json_dataset.py
index 8c596a305f..83bc67f7a1 100644
--- a/kedro/extras/datasets/json/json_dataset.py
+++ b/kedro/extras/datasets/json/json_dataset.py
@@ -21,9 +21,7 @@ class JSONDataSet(AbstractVersionedDataSet):
     """``JSONDataSet`` loads/saves data from/to a JSON file using an underlying
     filesystem (e.g.: local, S3, GCS). It uses native json to handle the JSON file.
 
-    Example adding a catalog entry with
-    `YAML API `_:
+    Example adding a catalog entry with the ``YAML API``:
 
     .. code-block:: yaml
diff --git a/kedro/extras/datasets/pandas/excel_dataset.py b/kedro/extras/datasets/pandas/excel_dataset.py
index 0a03191a48..9313aaf31d 100644
--- a/kedro/extras/datasets/pandas/excel_dataset.py
+++ b/kedro/extras/datasets/pandas/excel_dataset.py
@@ -26,10 +26,8 @@ class ExcelDataSet(AbstractVersionedDataSet):
     """``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying
     filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.
 
-    Example adding a catalog entry with
-    `YAML API `_:
-
+    Example adding a catalog entry with the ``YAML API``:
+
     .. code-block:: yaml
 
     >>> rockets:
diff --git a/kedro/extras/datasets/pandas/gbq_dataset.py b/kedro/extras/datasets/pandas/gbq_dataset.py
index 49777ff590..1599a66320 100644
--- a/kedro/extras/datasets/pandas/gbq_dataset.py
+++ b/kedro/extras/datasets/pandas/gbq_dataset.py
@@ -177,9 +177,7 @@ class GBQQueryDataSet(AbstractDataSet):
     internally to read from BigQuery table. Therefore it supports all allowed
     pandas options on ``read_gbq``.
 
-    Example adding a catalog entry with
-    `YAML API `_:
+    Example adding a catalog entry with the ``YAML API``:
 
     .. code-block:: yaml
diff --git a/tools/ipython/README.md b/tools/ipython/README.md
index 87c8a9c9cf..2031d13d32 100644
--- a/tools/ipython/README.md
+++ b/tools/ipython/README.md
@@ -5,4 +5,4 @@
 This script helps to locate `.ipython` directory and run IPython startup scripts in it when working with Jupyter Notebooks and IPython sessions. This script will automatically locate `.ipython/profile_default/startup` directory starting from the current working directory and going up the directory tree. If the directory was found, all Python scripts in it are executed.
 
-The details can be found in [the user guide](https://kedro.readthedocs.io/en/stable/04_user_guide/11_ipython.html#ipython-loader).
+The details can be found in [the user guide](../user_guide/ipython.html#ipython-loader).
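
Note for reviewers: the `namespacing_pipelines.md` links updated above point at the modular-pipeline docs, whose core pattern is instantiating one pipeline template several times under different namespaces. A minimal sketch of that pattern follows, assuming the `kedro.pipeline.modular_pipeline.pipeline` wrapper available in Kedro at this point in time; the node function, dataset names, and the two `*_modelling_pipeline` namespaces are illustrative, not taken from the docs being edited.

```python
# Minimal sketch: reuse one pipeline structure under two namespaces.
# All names are illustrative; assumes the modular-pipeline wrapper at
# kedro.pipeline.modular_pipeline (Kedro 0.17.x layout).
from kedro.pipeline import Pipeline, node
from kedro.pipeline.modular_pipeline import pipeline


def train_model(model_input):
    """Placeholder node function."""
    return model_input


# A pipeline "template" with free (un-namespaced) input/output names.
template = Pipeline(
    [node(train_model, inputs="model_input", outputs="model", name="train_model")]
)

# Instantiate the template twice. Mapping "model_input" onto the shared
# "model_input_table" keeps that dataset global, while the unmapped
# "model" output is prefixed per namespace, e.g.
# "active_modelling_pipeline.model".
active = pipeline(
    template,
    namespace="active_modelling_pipeline",
    inputs={"model_input": "model_input_table"},
)
candidate = pipeline(
    template,
    namespace="candidate_modelling_pipeline",
    inputs={"model_input": "model_input_table"},
)

data_science = active + candidate
```

On a Kedro-Viz flowchart the two instances then render as the collapsible `active_modelling_pipeline` and `candidate_modelling_pipeline` components, as in the `collapsible.gif` referenced by the namespacing tutorial.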