Fix broken links
Signed-off-by: datajoely <joel.schwarzmann@quantumblack.com>
datajoely committed Feb 8, 2022
1 parent f922899 commit 9b5e3ae
Showing 8 changed files with 11 additions and 17 deletions.
6 changes: 3 additions & 3 deletions docs/source/tutorial/namespacing_pipelines.md
@@ -5,10 +5,10 @@ This section covers the following:
 * A brief introduction to namespaces and modular pipelines
 * How to convert the existing spaceflights project into a namespaced one
 
-Adding namespaces to [modular pipelines](https://kedro.readthedocs.io/en/stable/06_nodes_and_pipelines/03_modular_pipelines.html#modular-pipelines) unlocks some sophisticated functionality in Kedro
+Adding namespaces to [modular pipelines](../nodes_and_pipelines/modular_pipelines.md) unlocks some sophisticated functionality in Kedro
 
-1. You are able to [instantiate the same pipeline structure multiple times](https://kedro.readthedocs.io/en/stable/06_nodes_and_pipelines/03_modular_pipelines.html#how-to-use-a-modular-pipeline-twice), but provide different inputs/outputs.
-2. You can unlock the full power of [micro-packaging](https://kedro.readthedocs.io/en/stable/06_nodes_and_pipelines/03_modular_pipelines.html#how-to-share-a-modular-pipeline).
+1. You are able to [instantiate the same pipeline structure multiple times](../nodes_and_pipelines/modular_pipelines.md#how-to-use-a-modular-pipeline-twice), but provide different inputs/outputs.
+2. You can unlock the full power of [micro-packaging](../nodes_and_pipelines/modular_pipelines.md#How-to-share-a-modular-pipeline).
 3. You can de-clutter your mental model with Kedro-Viz rendering collapsible components.
 
 ![collapsible](../meta/images/collapsible.gif)
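As a sketch of what the first point above looks like in code, using Kedro's modular `pipeline()` wrapper (node and dataset names here are hypothetical; in newer Kedro releases the helper is also importable directly from `kedro.pipeline`):

```python
from kedro.pipeline import Pipeline, node
from kedro.pipeline.modular_pipeline import pipeline


def train_model(model_input):
    """Hypothetical stand-in for a real modelling node."""
    return model_input


base = Pipeline([node(train_model, inputs="model_input", outputs="model")])

# The same structure instantiated twice: datasets not listed in `inputs`
# (here, "model") get prefixed per namespace, e.g. "active.model", while
# the mapped input stays shared between both instances.
active = pipeline(base, namespace="active", inputs={"model_input": "model_input_table"})
candidate = pipeline(base, namespace="candidate", inputs={"model_input": "model_input_table"})
```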
2 changes: 1 addition & 1 deletion docs/source/tutorial/package_a_project.md
@@ -44,4 +44,4 @@ There are several methods to deploy packaged pipelines via 1st party plugins and

 * [Kedro-Docker](https://github.com/kedro-org/kedro-plugins/tree/main/kedro-docker) plugin for packaging and shipping Kedro projects within [Docker](https://www.docker.com/) containers.
 * [Kedro-Airflow](https://github.com/kedro-org/kedro-plugins/tree/main/kedro-airflow) to convert your Kedro project into an [Airflow](https://airflow.apache.org/) project.
-* The [Deployment guide](../10_deployment/01_deployment_guide) touches on other deployment targets such as AWS Batch and Prefect.
+* The [Deployment guide](../deployment/deployment_guide) touches on other deployment targets such as AWS Batch and Prefect.
2 changes: 1 addition & 1 deletion docs/source/tutorial/tutorial_template.md
@@ -55,7 +55,7 @@ wheel>=0.35, <0.37 # The reference implementation of the Python wheel packaging
 The dependencies above may be sufficient for some projects, but for the spaceflights project, you need to add some extra requirements.
 
 * In this tutorial, we work with different data formats including CSV, Excel and Parquet and want to visualise our pipeline so we will need to provide extra dependencies.
-* By running `kedro install` on a blank template we generate a new file at `src/requirements.in`. You can read more about the benefits of compiling dependencies [here](../04_kedro_project_setup/01_dependencies.md)
+* By running `kedro install` on a blank template we generate a new file at `src/requirements.in`. You can read more about the benefits of compiling dependencies [here](../kedro_project_setup/dependencies.md)
 * The most important point to learn here is that you should edit the `requirements.in` file going forward.
 
 Add the following requirements to your `src/requirements.in` lock file:
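The concrete list sits in the collapsed portion of the diff. Purely as a hypothetical illustration (names and pins below are not taken from the commit), extra requirements for a project like spaceflights typically combine Kedro dataset extras for the formats mentioned above with visualisation tooling:

```text
# hypothetical example only - the actual pins come from the tutorial
kedro[pandas.CSVDataSet,pandas.ExcelDataSet,pandas.ParquetDataSet]==0.17.6
kedro-viz          # pipeline visualisation
scikit-learn~=1.0  # modelling steps later in the tutorial
```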
2 changes: 1 addition & 1 deletion docs/source/tutorial/visualise_pipeline.md
@@ -104,7 +104,7 @@ And this will visualise the pipeline visualisation saved as `my_shareable_pipeli

 Kedro-Viz aims to help users communicate different aspects of their workflow through an interactive flowchart. With the Plotly integration, we take one step further in this direction to allow our users to effectively share their data insights while exploring the pipeline.
 
-We have also used the Plotly integration to allow users to [visualise metrics from experiments](https://kedro.readthedocs.io/en/stable/08_logging/02_experiment_tracking.html?highlight=experiment%20tracking).
+We have also used the Plotly integration to allow users to [visualise metrics from experiments](../logging/experiment_tracking.md).
 
 
 ```eval_rst
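A minimal sketch of how the Plotly integration is typically wired up (function, dataset and column names are hypothetical): the node returns a plain `pandas` DataFrame, and a catalog entry of type `plotly.PlotlyDataSet` builds the figure that Kedro-Viz renders on the node's output.

```python
import pandas as pd


def aggregate_shuttle_prices(shuttles: pd.DataFrame) -> pd.DataFrame:
    """Hypothetical node: return only the data to plot.

    A catalog entry of type ``plotly.PlotlyDataSet`` (with its ``plotly_args``)
    then builds the chart from this DataFrame.
    """
    return shuttles.groupby("shuttle_type", as_index=False)["price"].mean()
```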
4 changes: 1 addition & 3 deletions kedro/extras/datasets/json/json_dataset.py
@@ -21,9 +21,7 @@ class JSONDataSet(AbstractVersionedDataSet):
"""``JSONDataSet`` loads/saves data from/to a JSON file using an underlying
filesystem (e.g.: local, S3, GCS). It uses native json to handle the JSON file.
Example adding a catalog entry with
`YAML API <https://kedro.readthedocs.io/en/stable/05_data/\
01_data_catalog.html#using-the-data-catalog-with-the-yaml-api>`_:
Example adding a catalog entry with the``YAML API``:
.. code-block:: yaml
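For reference, the dataset documented above can also be constructed directly through the Python API; a minimal sketch with a hypothetical filepath:

```python
from kedro.extras.datasets.json import JSONDataSet

data = {"col1": [1, 2], "col2": [4, 5]}
data_set = JSONDataSet(filepath="data/01_raw/cars.json")  # hypothetical path
data_set.save(data)
reloaded = data_set.load()
assert data == reloaded
```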
6 changes: 2 additions & 4 deletions kedro/extras/datasets/pandas/excel_dataset.py
@@ -26,10 +26,8 @@ class ExcelDataSet(AbstractVersionedDataSet):
"""``ExcelDataSet`` loads/saves data from/to a Excel file using an underlying
filesystem (e.g.: local, S3, GCS). It uses pandas to handle the Excel file.
Example adding a catalog entry with
`YAML API <https://kedro.readthedocs.io/en/stable/data/\
data_catalog.html#using-the-data-catalog-with-the-yaml-api>`_:
Example adding a catalog entry with the``YAML API``:
.. code-block:: yaml
>>> rockets:
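Likewise for `ExcelDataSet`, a minimal Python API sketch (the local path is hypothetical):

```python
import pandas as pd

from kedro.extras.datasets.pandas import ExcelDataSet

data = pd.DataFrame({"col1": [1, 2], "col2": [4, 5]})
data_set = ExcelDataSet(filepath="test.xlsx")  # hypothetical local path
data_set.save(data)
reloaded = data_set.load()
assert data.equals(reloaded)
```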
4 changes: 1 addition & 3 deletions kedro/extras/datasets/pandas/gbq_dataset.py
@@ -177,9 +177,7 @@ class GBQQueryDataSet(AbstractDataSet):
     internally to read from BigQuery table. Therefore it supports all allowed
     pandas options on ``read_gbq``.
 
-    Example adding a catalog entry with
-    `YAML API <https://kedro.readthedocs.io/en/stable/05_data/\
-    01_data_catalog.html#using-the-data-catalog-with-the-yaml-api>`_:
+    Example adding a catalog entry with the ``YAML API``:
 
     .. code-block:: yaml
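And for `GBQQueryDataSet`, a minimal Python API sketch (project and table names are hypothetical):

```python
from kedro.extras.datasets.pandas import GBQQueryDataSet

sql = "SELECT * FROM `my-project.my_dataset.my_table`"  # hypothetical query
data_set = GBQQueryDataSet(sql=sql, project="my-project")  # hypothetical GCP project
df = data_set.load()  # runs the query via pandas ``read_gbq`` and returns a DataFrame
```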
2 changes: 1 addition & 1 deletion tools/ipython/README.md
@@ -5,4 +5,4 @@

 This script helps to locate `.ipython` directory and run IPython startup scripts in it when working with Jupyter Notebooks and IPython sessions. This script will automatically locate `.ipython/profile_default/startup` directory starting from the current working directory and going up the directory tree. If the directory was found, all Python scripts in it are executed.
 
-The details can be found in [the user guide](https://kedro.readthedocs.io/en/stable/04_user_guide/11_ipython.html#ipython-loader).
+The details can be found in [the user guide](../user_guide/ipython.html#ipython-loader).
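The lookup the README describes amounts to walking up the directory tree from the current working directory; a minimal sketch of that logic (this is not the script's actual code):

```python
from pathlib import Path
from typing import Optional


def locate_ipython_startup_dir(start_dir: Optional[Path] = None) -> Optional[Path]:
    """Walk up from `start_dir` until a `.ipython/profile_default/startup` is found."""
    current = (start_dir or Path.cwd()).resolve()
    for directory in (current, *current.parents):
        candidate = directory / ".ipython" / "profile_default" / "startup"
        if candidate.is_dir():
            return candidate
    return None  # reached the filesystem root without finding the directory
```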

0 comments on commit 9b5e3ae
