From 07ca46cbf67114b81000caa0fbae70d098022021 Mon Sep 17 00:00:00 2001
From: Zain Patel <52913697+ZainPatelQB@users.noreply.github.com>
Date: Tue, 18 Feb 2020 21:12:17 +0000
Subject: [PATCH] [KED-1396] Switch default documentation version to master/stable (#438)
---
 CONTRIBUTING.md                              |  4 ++--
 README.md                                    | 22 +++++++++----------
 RELEASE.md                                   |  2 +-
 docs/README.md                               |  4 ++--
 docs/source/04_user_guide/04_data_catalog.md |  2 +-
 docs/source/04_user_guide/06_pipelines.md    |  2 +-
 extras/README.md                             |  2 +-
 kedro/contrib/io/pyspark/README.md           |  6 ++---
 kedro/extras/ipython/README.md               |  2 +-
 kedro/io/data_catalog.py                     |  2 +-
 kedro/io/partitioned_data_set.py             |  6 ++---
 .../{{ cookiecutter.repo_name }}/README.md   |  2 +-
 .../conf/README.md                           |  2 +-
 .../conf/base/catalog.yml                    |  4 ++--
 14 files changed, 31 insertions(+), 31 deletions(-)

diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 173eec63e6..17e18c0fc0 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -35,7 +35,7 @@ If you're unsure where to begin contributing to Kedro, please start by looking t
 We focus on three areas for contribution: `core`, [`contrib`](/kedro/contrib/) or `plugin`:
 - `core` refers to the primary Kedro library
 - [`contrib`](/kedro/contrib/) refers to features that could be added to `core` that do not introduce too many dependencies or require new Kedro CLI commands to be created e.g. adding a new dataset to the `io` data management module
-- [`plugin`](https://kedro.readthedocs.io/en/latest/04_user_guide/10_developing_plugins.html) refers to new functionality that requires a Kedro CLI command e.g. adding Airflow functionality
+- [`plugin`](https://kedro.readthedocs.io/en/stable/04_user_guide/10_developing_plugins.html) refers to new functionality that requires a Kedro CLI command e.g. adding Airflow functionality

 Typically, we only accept small contributions for the `core` Kedro library but accept new features as `plugin`s or additions to the [`contrib`](/kedro/contrib/) module. We regularly review [`contrib`](/kedro/contrib/) and may migrate modules to `core` if they prove to be essential for the functioning of the framework or if we believe that they are used by most projects.

@@ -109,7 +109,7 @@ You can add new work to `contrib` if you do not need to create a new Kedro CLI c
 ## `plugin` contribution process

-See the [`plugin` development documentation](https://kedro.readthedocs.io/en/latest/04_user_guide/10_developing_plugins.html) for guidance on how to design and develop a Kedro `plugin`.
+See the [`plugin` development documentation](https://kedro.readthedocs.io/en/stable/04_user_guide/10_developing_plugins.html) for guidance on how to design and develop a Kedro `plugin`.

 ## CI / CD and running checks locally
 To run E2E tests you need to install the test requirements, which include `behave`.
diff --git a/README.md b/README.md
index f0611e4823..81927b40f5 100644
--- a/README.md
+++ b/README.md
@@ -32,7 +32,7 @@ Kedro is a development workflow framework that implements software engineering b
 pip install kedro
 ```

-See more detailed installation instructions, including how to set up Python virtual environments, in our [installation guide](https://kedro.readthedocs.io/en/latest/02_getting_started/02_install.html) and get started with our ["Hello World"](https://kedro.readthedocs.io/en/latest/02_getting_started/04_hello_world.html) example.
+See more detailed installation instructions, including how to set up Python virtual environments, in our [installation guide](https://kedro.readthedocs.io/en/stable/02_getting_started/02_install.html) and get started with our ["Hello World"](https://kedro.readthedocs.io/en/stable/02_getting_started/04_hello_world.html) example.

 ## Why does Kedro exist?
@@ -66,20 +66,20 @@ Kedro was originally designed by [Aris Valtazanos](https://github.com/arisvqb) a
 ## How do I use Kedro?

-Our [documentation](https://kedro.readthedocs.io/en/latest/) explains:
+Our [documentation](https://kedro.readthedocs.io/en/stable/) explains:

-- Best practice on how to [get started using Kedro](https://kedro.readthedocs.io/en/latest/02_getting_started/01_prerequisites.html)
-- A ["Hello World" data and ML pipeline example](https://kedro.readthedocs.io/en/latest/02_getting_started/04_hello_world.html) based on the **Iris dataset**
-- A two-hour [Spaceflights tutorial](https://kedro.readthedocs.io/en/latest/03_tutorial/01_workflow.html) that teaches you beginner to intermediate functionality
-- How to [use the CLI](https://kedro.readthedocs.io/en/latest/06_resources/03_commands_reference.html) offered by `kedro_cli.py` (`kedro new`, `kedro run`, ...)
-- An overview of [Kedro architecture](https://kedro.readthedocs.io/en/latest/06_resources/02_architecture_overview.html)
-- [Frequently asked questions (FAQs)](https://kedro.readthedocs.io/en/latest/06_resources/01_faq.html)
+- Best practice on how to [get started using Kedro](https://kedro.readthedocs.io/en/stable/02_getting_started/01_prerequisites.html)
+- A ["Hello World" data and ML pipeline example](https://kedro.readthedocs.io/en/stable/02_getting_started/04_hello_world.html) based on the **Iris dataset**
+- A two-hour [Spaceflights tutorial](https://kedro.readthedocs.io/en/stable/03_tutorial/01_workflow.html) that teaches you beginner to intermediate functionality
+- How to [use the CLI](https://kedro.readthedocs.io/en/stable/06_resources/03_commands_reference.html) offered by `kedro_cli.py` (`kedro new`, `kedro run`, ...)
+- An overview of [Kedro architecture](https://kedro.readthedocs.io/en/stable/06_resources/02_architecture_overview.html)
+- [Frequently asked questions (FAQs)](https://kedro.readthedocs.io/en/stable/06_resources/01_faq.html)

-Documentation for the latest stable release can be found [here](https://kedro.readthedocs.io/en/latest/). You can also run `kedro docs` from your CLI and open the documentation for your current version of Kedro in a browser.
+Documentation for the latest stable release can be found [here](https://kedro.readthedocs.io/en/stable/). You can also run `kedro docs` from your CLI and open the documentation for your current version of Kedro in a browser.

 > *Note:* The CLI is a convenient tool for running `kedro` commands, but you can also invoke the Kedro CLI as a Python module with `python -m kedro`

-*Note:* Read our [FAQs](https://kedro.readthedocs.io/en/latest/06_resources/01_faq.html#how-does-kedro-compare-to-other-projects) to learn how we differ from workflow managers like Airflow and Luigi.
+*Note:* Read our [FAQs](https://kedro.readthedocs.io/en/stable/06_resources/01_faq.html#how-does-kedro-compare-to-other-projects) to learn how we differ from workflow managers like Airflow and Luigi.

 ## Can I contribute?

@@ -89,7 +89,7 @@ Yes! Want to help build Kedro? Check out our guide to [contributing](https://git
 ## Where can I learn more?

-There is a growing community around Kedro. Have a look at our [FAQs](https://kedro.readthedocs.io/en/latest/06_resources/01_faq.html#where-can-i-learn-more) to find projects using Kedro and links to articles, podcasts and talks.
+There is a growing community around Kedro. Have a look at our [FAQs](https://kedro.readthedocs.io/en/stable/06_resources/01_faq.html#where-can-i-learn-more) to find projects using Kedro and links to articles, podcasts and talks.

 ## What licence do you use?
diff --git a/RELEASE.md b/RELEASE.md
index 0dd5dae16b..e30b585047 100644
--- a/RELEASE.md
+++ b/RELEASE.md
@@ -18,7 +18,7 @@
 - `IncrementalDataSet` dataset, which inherits from `PartitionedDataSet` and also remembers the last processed partition.
 * Enabled loading a particular version of a dataset in Jupyter Notebooks and IPython, using `catalog.load("dataset_name", version="<2019-12-13T15.08.09.255Z>")`.
 * Added http(s) protocol support for `JSONDataSet`.
-* Added property `run_id` on `ProjectContext`, used for versioning using the [`Journal`](https://kedro.readthedocs.io/en/latest/04_user_guide/13_journal.html). To customise your journal `run_id` you can override the private method `_get_run_id()`.
+* Added property `run_id` on `ProjectContext`, used for versioning using the [`Journal`](https://kedro.readthedocs.io/en/stable/04_user_guide/13_journal.html). To customise your journal `run_id` you can override the private method `_get_run_id()`.
 * Added the ability to install all optional kedro dependencies via `pip install "kedro[all]"`.
 * `JSONDataSet`, `CSVBlobDataSet`, `JSONBlobDataSet`, `SQLQueryDataSet` and `SQLTableDataSet` datasets copied to `kedro.extras.datasets.pandas`.
 * `SparkDataSet`, `SparkHiveDataSet` and `SparkJDBCDataSet` datasets copied to `kedro.extras.datasets.spark`.
diff --git a/docs/README.md b/docs/README.md
index 5e4074656e..15f9b48a5d 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -2,7 +2,7 @@
 # Kedro documentation style guide

-This is the style guide we have used to create [documentation about Kedro](https://kedro.readthedocs.io/en/latest/).
+This is the style guide we have used to create [documentation about Kedro](https://kedro.readthedocs.io/en/stable/).

 When you are writing documentation for your own project, you may find it useful to follow these rules. We will also ask anyone kind enough to contribute to the Kedro documentation to follow our preferred style to maintain consistency and simplicity. However, we are not over-prescriptive and are happy to take contributions regardless, as long as you are happy for us to edit your text to follow these rules.
@@ -24,7 +24,7 @@ If you are in doubt, take a look at how we've written the Kedro documentation. I
 ## How do I build your documentation?

-If you have installed Kedro, the documentation can be found by running `kedro docs` from the command line or following [this link](https://kedro.readthedocs.io/en/latest/).
+If you have installed Kedro, the documentation can be found by running `kedro docs` from the command line or following [this link](https://kedro.readthedocs.io/en/stable/).
 If you make changes to our documentation, which is stored in the `docs/` folder of your Kedro installation, you can rebuild it within a Unix-like environment (with `pandoc` installed) with:
diff --git a/docs/source/04_user_guide/04_data_catalog.md b/docs/source/04_user_guide/04_data_catalog.md
index 3308850b43..26e01fdc3a 100644
--- a/docs/source/04_user_guide/04_data_catalog.md
+++ b/docs/source/04_user_guide/04_data_catalog.md
@@ -306,7 +306,7 @@ for loading, so the first node should output a `pyspark.sql.DataFrame`, while th
 Transformers intercept the load and save operations on Kedro `DataSet`s. Use cases that transformers enable include:
 - Performing data validation,
 - Tracking operation performance,
-- Converting a data format (although we would recommend [Transcoding](https://kedro.readthedocs.io/en/latest/04_user_guide/04_data_catalog.html#transcoding-datasets) for this).
+- Converting a data format (although we would recommend [Transcoding](https://kedro.readthedocs.io/en/stable/04_user_guide/04_data_catalog.html#transcoding-datasets) for this).

 ### Applying built-in transformers
diff --git a/docs/source/04_user_guide/06_pipelines.md b/docs/source/04_user_guide/06_pipelines.md
index 5b1d955625..6f9d34df3c 100644
--- a/docs/source/04_user_guide/06_pipelines.md
+++ b/docs/source/04_user_guide/06_pipelines.md
@@ -173,7 +173,7 @@ pipeline1 = mp1.create_pipeline()
 Here is a list of recommendations for developing a modular pipeline:
 * A modular pipeline should include a `README.md`, with all the information regarding the execution of the pipeline for the end users
-* A modular pipeline _may_ have external dependencies specified in `requirements.txt`. These dependencies are _not_ currently installed by the [`kedro install`](https://kedro.readthedocs.io/en/latest/06_resources/03_commands_reference.html#kedro-install) command, so the users of your pipeline would have to run `pip install -r src//pipelines//requirements.txt`
+* A modular pipeline _may_ have external dependencies specified in `requirements.txt`. These dependencies are _not_ currently installed by the [`kedro install`](https://kedro.readthedocs.io/en/stable/06_resources/03_commands_reference.html#kedro-install) command, so the users of your pipeline would have to run `pip install -r src//pipelines//requirements.txt`
 * To ensure portability, modular pipelines should use relative imports when accessing their own objects and absolute imports otherwise. Look at an example from `src/new_kedro_project/pipelines/modular_pipeline_1/pipeline.py` below:

 ```python
diff --git a/extras/README.md b/extras/README.md
index 90afb11513..2ac135845b 100644
--- a/extras/README.md
+++ b/extras/README.md
@@ -6,4 +6,4 @@ WARNING: This script will be deprecated in future releases. Please refer to repl
 This script helps to locate the `.ipython` directory and run IPython startup scripts in it when working with Jupyter Notebooks and IPython sessions. This script will automatically locate the `.ipython/profile_default/startup` directory, starting from the current working directory and going up the directory tree. If the directory is found, all Python scripts in it are executed.

-The details can be found in [the user guide](https://kedro.readthedocs.io/en/latest/04_user_guide/11_ipython.html#ipython-loader).
+The details can be found in [the user guide](https://kedro.readthedocs.io/en/stable/04_user_guide/11_ipython.html#ipython-loader).
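
The transformer hunk above lists the use cases transformers enable, such as tracking operation performance. As a rough sketch of that use case, assuming the `AbstractTransformer` interface that `kedro.io` exposed at this point in Kedro's history (the class name and log messages below are illustrative, not part of this patch):

```python
# A sketch of a performance-tracking transformer, assuming kedro.io exposes
# AbstractTransformer with load/save hooks as in Kedro 0.15.x.
import logging
import time
from typing import Any, Callable

from kedro.io import AbstractTransformer

log = logging.getLogger(__name__)


class ProfileTimeTransformer(AbstractTransformer):
    """Log how long each dataset load and save operation takes."""

    def load(self, data_set_name: str, load: Callable[[], Any]) -> Any:
        start = time.time()
        data = load()  # delegate to the underlying dataset
        log.info("Loading %s took %0.3fs", data_set_name, time.time() - start)
        return data

    def save(self, data_set_name: str, save: Callable[[Any], None], data: Any) -> None:
        start = time.time()
        save(data)  # delegate to the underlying dataset
        log.info("Saving %s took %0.3fs", data_set_name, time.time() - start)
```

A transformer like this would typically be attached catalog-wide, e.g. `catalog.add_transformer(ProfileTimeTransformer())`, after which every load and save operation passes through the hooks above.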
diff --git a/kedro/contrib/io/pyspark/README.md b/kedro/contrib/io/pyspark/README.md
index b8b7c942a5..e83dd022c3 100644
--- a/kedro/contrib/io/pyspark/README.md
+++ b/kedro/contrib/io/pyspark/README.md
@@ -5,7 +5,7 @@ In this tutorial we talk about how Kedro integrates with `pyspark` using the `Sp
 We also present brief instructions on how to set up pyspark to read from `AWS S3` and `Azure Blob storage`.

 Relevant API:
-[`SparkDataSet`](https://kedro.readthedocs.io/en/latest/kedro.contrib.io.pyspark.html)
+[`SparkDataSet`](https://kedro.readthedocs.io/en/stable/kedro.contrib.io.pyspark.html)

 ## Install Spark
@@ -53,7 +53,7 @@ spark = SparkSession.builder\
 ## `SparkDataSet`

-Loading and saving Spark `DataFrame`s using Kedro can be easily done using the [`SparkDataSet`](https://kedro.readthedocs.io/en/latest/kedro.contrib.io.pyspark.html) class, as shown below:
+Loading and saving Spark `DataFrame`s using Kedro can be easily done using the [`SparkDataSet`](https://kedro.readthedocs.io/en/stable/kedro.contrib.io.pyspark.html) class, as shown below:

 ### Load a CSV from your local disk
@@ -83,7 +83,7 @@ parquet.save(df)
 ## Using `SparkDataSet` with the `DataCatalog`

-Since `SparkDataSet` is a concrete implementation of [`AbstractDataSet`](https://kedro.readthedocs.io/en/latest/kedro.io.AbstractDataSet.html), it integrates nicely with the `DataCatalog` and with Kedro's pipelines.
+Since `SparkDataSet` is a concrete implementation of [`AbstractDataSet`](https://kedro.readthedocs.io/en/stable/kedro.io.AbstractDataSet.html), it integrates nicely with the `DataCatalog` and with Kedro's pipelines.

 Similarly to all other datasets, you can specify your Spark datasets in `catalog.yml` as follows:
diff --git a/kedro/extras/ipython/README.md b/kedro/extras/ipython/README.md
index 5848c67561..87c8a9c9cf 100644
--- a/kedro/extras/ipython/README.md
+++ b/kedro/extras/ipython/README.md
@@ -5,4 +5,4 @@
 This script helps to locate the `.ipython` directory and run IPython startup scripts in it when working with Jupyter Notebooks and IPython sessions. This script will automatically locate the `.ipython/profile_default/startup` directory, starting from the current working directory and going up the directory tree. If the directory is found, all Python scripts in it are executed.

-The details can be found in [the user guide](https://kedro.readthedocs.io/en/latest/04_user_guide/11_ipython.html#ipython-loader).
+The details can be found in [the user guide](https://kedro.readthedocs.io/en/stable/04_user_guide/11_ipython.html#ipython-loader).
diff --git a/kedro/io/data_catalog.py b/kedro/io/data_catalog.py
index eb5ceb6107..47997a1242 100644
--- a/kedro/io/data_catalog.py
+++ b/kedro/io/data_catalog.py
@@ -77,7 +77,7 @@ def _get_credentials(
         raise KeyError(
             "Unable to find credentials '{}': check your data "
             "catalog and credentials configuration. See "
-            "https://kedro.readthedocs.io/en/latest/kedro.io.DataCatalog.html "
+            "https://kedro.readthedocs.io/en/stable/kedro.io.DataCatalog.html "
             "for an example.".format(credentials_name)
         )
diff --git a/kedro/io/partitioned_data_set.py b/kedro/io/partitioned_data_set.py
index 1111b8aa75..1ed977290c 100644
--- a/kedro/io/partitioned_data_set.py
+++ b/kedro/io/partitioned_data_set.py
@@ -134,7 +134,7 @@ def __init__(  # pylint: disable=too-many-arguments
                 **Note:** The ``dataset_credentials`` key has now been deprecated and should not be specified.
                All possible credentials management scenarios are documented here:
-               https://kedro.readthedocs.io/en/latest/04_user_guide/08_advanced_io.html#partitioned-dataset-credentials
+               https://kedro.readthedocs.io/en/stable/04_user_guide/08_advanced_io.html#partitioned-dataset-credentials
             load_args: Keyword arguments to be passed into the ``find()`` method of
                 the filesystem implementation.
@@ -357,7 +357,7 @@ def __init__(
                 with the corresponding dataset definition including ``filepath``
                 (unlike the ``dataset`` argument). Checkpoint configuration is
                 described here:
-                https://kedro.readthedocs.io/en/latest/04_user_guide/08_advanced_io.html#checkpoint-configuration
+                https://kedro.readthedocs.io/en/stable/04_user_guide/08_advanced_io.html#checkpoint-configuration
                 Credentials for the checkpoint can be explicitly specified in this configuration.
             filepath_arg: Underlying dataset initializer argument that will
@@ -372,7 +372,7 @@ def __init__(
                 the dataset or the checkpoint configuration contains an explicit
                 credentials spec, then such spec will take precedence.
                 All possible credentials management scenarios are documented here:
-                https://kedro.readthedocs.io/en/latest/04_user_guide/08_advanced_io.html#partitioned-dataset-credentials
+                https://kedro.readthedocs.io/en/stable/04_user_guide/08_advanced_io.html#partitioned-dataset-credentials
             load_args: Keyword arguments to be passed into the ``find()`` method of
                 the filesystem implementation.
diff --git a/kedro/template/{{ cookiecutter.repo_name }}/README.md b/kedro/template/{{ cookiecutter.repo_name }}/README.md
index d864d722e5..cc6c3c44f9 100644
--- a/kedro/template/{{ cookiecutter.repo_name }}/README.md
+++ b/kedro/template/{{ cookiecutter.repo_name }}/README.md
@@ -14,7 +14,7 @@ Take a look at the [documentation](https://kedro.readthedocs.io) to get started.
 In order to get the best out of the template:
 * Please don't remove any lines from the `.gitignore` file provided
-* Make sure your results can be reproduced by following a data engineering convention, e.g. the one we suggest [here](https://kedro.readthedocs.io/en/latest/06_resources/01_faq.html#what-is-data-engineering-convention)
+* Make sure your results can be reproduced by following a data engineering convention, e.g. the one we suggest [here](https://kedro.readthedocs.io/en/stable/06_resources/01_faq.html#what-is-data-engineering-convention)
 * Don't commit any data to your repository
 * Don't commit any credentials or local configuration to your repository
 * Keep all credentials or local configuration in `conf/local/`
diff --git a/kedro/template/{{ cookiecutter.repo_name }}/conf/README.md b/kedro/template/{{ cookiecutter.repo_name }}/conf/README.md
index 0234d27e41..bc9d103ca4 100644
--- a/kedro/template/{{ cookiecutter.repo_name }}/conf/README.md
+++ b/kedro/template/{{ cookiecutter.repo_name }}/conf/README.md
@@ -23,4 +23,4 @@ WARNING: Please do not put access credentials in the base configuration folder.
 # Find out more

-You can find out more about configuration from the [user guide documentation](https://kedro.readthedocs.io/en/latest/04_user_guide/03_configuration.html).
+You can find out more about configuration from the [user guide documentation](https://kedro.readthedocs.io/en/stable/04_user_guide/03_configuration.html).
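
The `PartitionedDataSet` docstring hunks above mention lazy loading and credentials propagation. A minimal usage sketch, assuming the `kedro.io` API of this release; the folder path, dataset choice and partition layout are hypothetical:

```python
# A sketch of PartitionedDataSet usage; the path and dataset are hypothetical.
from kedro.io import CSVLocalDataSet, PartitionedDataSet

data_set = PartitionedDataSet(
    path="data/01_raw/daily_sales",  # hypothetical folder with one CSV per partition
    dataset=CSVLocalDataSet,
)

# load() is lazy: it returns a mapping of partition id to a callable that
# materialises that partition only when invoked.
partitions = data_set.load()
for partition_id, load_partition in sorted(partitions.items()):
    df = load_partition()
    print(partition_id, len(df))
```

Credentials passed to the top-level `credentials` argument are forwarded to both the filesystem and the underlying dataset, and, as the docstring notes, an explicit credentials spec in the dataset (or checkpoint) configuration takes precedence.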
diff --git a/kedro/template/{{ cookiecutter.repo_name }}/conf/base/catalog.yml b/kedro/template/{{ cookiecutter.repo_name }}/conf/base/catalog.yml
index bd3ebb7e0f..39c5356424 100644
--- a/kedro/template/{{ cookiecutter.repo_name }}/conf/base/catalog.yml
+++ b/kedro/template/{{ cookiecutter.repo_name }}/conf/base/catalog.yml
@@ -1,7 +1,7 @@
 # Here you can define all your data sets by using simple YAML syntax.
 #
 # Documentation for this file format can be found in "The Data Catalog"
-# Link: https://kedro.readthedocs.io/en/latest/04_user_guide/04_data_catalog.html
+# Link: https://kedro.readthedocs.io/en/stable/04_user_guide/04_data_catalog.html
 #
 # We support interacting with a variety of data stores including local file systems, cloud, network and HDFS
 #
@@ -37,7 +37,7 @@
 #
 # The Data Catalog supports referencing the same file using two different DataSet implementations
 # (transcoding), templating and a way to reuse arguments that are frequently repeated. See more here:
-# https://kedro.readthedocs.io/en/latest/04_user_guide/04_data_catalog.html
+# https://kedro.readthedocs.io/en/stable/04_user_guide/04_data_catalog.html
 {% if cookiecutter.include_example == "True" %}
 #
 # This is a data set used by the "Hello World" example pipeline provided with the project
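
For orientation, the `catalog.yml` entries described in the comments above are the declarative equivalent of configuring a `DataCatalog` in code. A sketch, assuming the `DataCatalog.from_config` API of this release; the dataset name and filepath mirror the template's "Hello World" iris example but are illustrative:

```python
# A sketch of the Python equivalent of a catalog.yml entry.
from kedro.io import DataCatalog

catalog = DataCatalog.from_config(
    {
        "example_iris_data": {
            "type": "CSVLocalDataSet",  # resolved against Kedro's dataset classes
            "filepath": "data/01_raw/iris.csv",
        }
    }
)

df = catalog.load("example_iris_data")  # reads the CSV into a pandas DataFrame
catalog.save("example_iris_data", df)   # writes it back through the same entry
```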