[KED-1396] Switch default documentation version to master/stable (#438)
Zain Patel committed Feb 18, 2020
1 parent f8bb1e0 commit 07ca46c
Showing 14 changed files with 31 additions and 31 deletions.
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -35,7 +35,7 @@ If you're unsure where to begin contributing to Kedro, please start by looking t
We focus on three areas for contribution: `core`, [`contrib`](/kedro/contrib/) or `plugin`:
- `core` refers to the primary Kedro library
- [`contrib`](/kedro/contrib/) refers to features that could be added to `core` that do not introduce too many dependencies or require new Kedro CLI commands to be created e.g. adding a new dataset to the `io` data management module
- - [`plugin`](https://kedro.readthedocs.io/en/latest/04_user_guide/10_developing_plugins.html) refers to new functionality that requires a Kedro CLI command e.g. adding in Airflow functionality
+ - [`plugin`](https://kedro.readthedocs.io/en/stable/04_user_guide/10_developing_plugins.html) refers to new functionality that requires a Kedro CLI command e.g. adding in Airflow functionality

Typically, we only accept small contributions for the `core` Kedro library but accept new features as `plugin`s or additions to the [`contrib`](/kedro/contrib/) module. We regularly review [`contrib`](/kedro/contrib/) and may migrate modules to `core` if they prove to be essential for the functioning of the framework or if we believe that they are used by most projects.

@@ -109,7 +109,7 @@ You can add new work to `contrib` if you do not need to create a new Kedro CLI c
## `plugin` contribution process

- See the [`plugin` development documentation](https://kedro.readthedocs.io/en/latest/04_user_guide/10_developing_plugins.html) for guidance on how to design and develop a Kedro `plugin`.
+ See the [`plugin` development documentation](https://kedro.readthedocs.io/en/stable/04_user_guide/10_developing_plugins.html) for guidance on how to design and develop a Kedro `plugin`.

## CI / CD and running checks locally
To run the E2E tests, you need to install the test requirements, which include `behave`.
22 changes: 11 additions & 11 deletions README.md
@@ -32,7 +32,7 @@ Kedro is a development workflow framework that implements software engineering b
```
pip install kedro
```

- See more detailed installation instructions, including how to set up Python virtual environments, in our [installation guide](https://kedro.readthedocs.io/en/latest/02_getting_started/02_install.html) and get started with our ["Hello World"](https://kedro.readthedocs.io/en/latest/02_getting_started/04_hello_world.html) example.
+ See more detailed installation instructions, including how to set up Python virtual environments, in our [installation guide](https://kedro.readthedocs.io/en/stable/02_getting_started/02_install.html) and get started with our ["Hello World"](https://kedro.readthedocs.io/en/stable/02_getting_started/04_hello_world.html) example.


## Why does Kedro exist?
@@ -66,20 +66,20 @@ Kedro was originally designed by [Aris Valtazanos](https://github.com/arisvqb) a

## How do I use Kedro?

- Our [documentation](https://kedro.readthedocs.io/en/latest/) explains:
+ Our [documentation](https://kedro.readthedocs.io/en/stable/) explains:

- - Best-practice on how to [get started using Kedro](https://kedro.readthedocs.io/en/latest/02_getting_started/01_prerequisites.html)
- - A ["Hello World" data and ML pipeline example](https://kedro.readthedocs.io/en/latest/02_getting_started/04_hello_world.html) based on the **Iris dataset**
- - A two-hour [Spaceflights tutorial](https://kedro.readthedocs.io/en/latest/03_tutorial/01_workflow.html) that teaches you beginner to intermediate functionality
- - How to [use the CLI](https://kedro.readthedocs.io/en/latest/06_resources/03_commands_reference.html) offered by `kedro_cli.py` (`kedro new`, `kedro run`, ...)
- - An overview of [Kedro architecture](https://kedro.readthedocs.io/en/latest/06_resources/02_architecture_overview.html)
- - [Frequently asked questions (FAQs)](https://kedro.readthedocs.io/en/latest/06_resources/01_faq.html)
+ - Best-practice on how to [get started using Kedro](https://kedro.readthedocs.io/en/stable/02_getting_started/01_prerequisites.html)
+ - A ["Hello World" data and ML pipeline example](https://kedro.readthedocs.io/en/stable/02_getting_started/04_hello_world.html) based on the **Iris dataset**
+ - A two-hour [Spaceflights tutorial](https://kedro.readthedocs.io/en/stable/03_tutorial/01_workflow.html) that teaches you beginner to intermediate functionality
+ - How to [use the CLI](https://kedro.readthedocs.io/en/stable/06_resources/03_commands_reference.html) offered by `kedro_cli.py` (`kedro new`, `kedro run`, ...)
+ - An overview of [Kedro architecture](https://kedro.readthedocs.io/en/stable/06_resources/02_architecture_overview.html)
+ - [Frequently asked questions (FAQs)](https://kedro.readthedocs.io/en/stable/06_resources/01_faq.html)

- Documentation for the latest stable release can be found [here](https://kedro.readthedocs.io/en/latest/). You can also run `kedro docs` from your CLI and open the documentation for your current version of Kedro in a browser.
+ Documentation for the latest stable release can be found [here](https://kedro.readthedocs.io/en/stable/). You can also run `kedro docs` from your CLI and open the documentation for your current version of Kedro in a browser.

> *Note:* The CLI is a convenient tool for running `kedro` commands, but you can also invoke the Kedro CLI as a Python module with `python -m kedro`.
- *Note:* Read our [FAQs](https://kedro.readthedocs.io/en/latest/06_resources/01_faq.html#how-does-kedro-compare-to-other-projects) to learn how we differ from workflow managers like Airflow and Luigi.
+ *Note:* Read our [FAQs](https://kedro.readthedocs.io/en/stable/06_resources/01_faq.html#how-does-kedro-compare-to-other-projects) to learn how we differ from workflow managers like Airflow and Luigi.


## Can I contribute?
@@ -89,7 +89,7 @@ Yes! Want to help build Kedro? Check out our guide to [contributing](https://git

## Where can I learn more?

- There is a growing community around Kedro. Have a look at our [FAQs](https://kedro.readthedocs.io/en/latest/06_resources/01_faq.html#where-can-i-learn-more) to find projects using Kedro and links to articles, podcasts and talks.
+ There is a growing community around Kedro. Have a look at our [FAQs](https://kedro.readthedocs.io/en/stable/06_resources/01_faq.html#where-can-i-learn-more) to find projects using Kedro and links to articles, podcasts and talks.


## What licence do you use?
2 changes: 1 addition & 1 deletion RELEASE.md
@@ -18,7 +18,7 @@
- `IncrementalDataSet` dataset, which inherits from `PartitionedDataSet` and also remembers the last processed partition.
* Enabled loading a particular version of a dataset in Jupyter Notebooks and IPython, using `catalog.load("dataset_name", version="<2019-12-13T15.08.09.255Z>")` (see the sketch after this list).
* Added http(s) protocol support for `JSONDataSet`.
- * Added property `run_id` on `ProjectContext`, used for versioning using the [`Journal`](https://kedro.readthedocs.io/en/latest/04_user_guide/13_journal.html). To customise your journal `run_id` you can override the private method `_get_run_id()`.
+ * Added property `run_id` on `ProjectContext`, used for versioning using the [`Journal`](https://kedro.readthedocs.io/en/stable/04_user_guide/13_journal.html). To customise your journal `run_id` you can override the private method `_get_run_id()`.
* Added the ability to install all optional kedro dependencies via `pip install "kedro[all]"`.
* `JSONDataSet`, `CSVBlobDataSet`, `JSONBlobDataSet`, `SQLQueryDataSet` and `SQLTableDataSet` datasets copied to `kedro.extras.datasets.pandas`.
* `SparkDataSet`, `SparkHiveDataSet` and `SparkJDBCDataSet` datasets copied to `kedro.extras.datasets.spark`.
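
Below is a minimal sketch of the versioned load noted in the list above. It assumes an interactive session started with `kedro jupyter notebook` or `kedro ipython`, where the `catalog` variable is already populated, and a hypothetical dataset named `cars`:

```python
# Load the snapshot of a versioned dataset saved at a given timestamp.
# `catalog` is provided by the Kedro session; the dataset name and
# timestamp here are hypothetical.
df = catalog.load("cars", version="2019-12-13T15.08.09.255Z")
```
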
4 changes: 2 additions & 2 deletions docs/README.md
@@ -2,7 +2,7 @@

# Kedro documentation style guide

- This is the style guide we have used to create [documentation about Kedro](https://kedro.readthedocs.io/en/latest/).
+ This is the style guide we have used to create [documentation about Kedro](https://kedro.readthedocs.io/en/stable/).

When you are writing documentation for your own project, you may find it useful to follow these rules. We will also ask anyone kind enough to contribute to the Kedro documentation to follow our preferred style to maintain consistency and simplicity. However, we are not over-prescriptive and are happy to take contributions regardless, as long as you are happy for us to edit your text to follow these rules.

@@ -24,7 +24,7 @@ If you are in doubt, take a look at how we've written the Kedro documentation. I

## How do I build your documentation?

- If you have installed Kedro, the documentation can be found by running `kedro docs` from the command line or following [this link](https://kedro.readthedocs.io/en/latest/).
+ If you have installed Kedro, the documentation can be found by running `kedro docs` from the command line or following [this link](https://kedro.readthedocs.io/en/stable/).

If you make changes to our documentation, which is stored in the `docs/` folder of your Kedro installation, you can rebuild it within a Unix-like environment (with `pandoc` installed) with:

2 changes: 1 addition & 1 deletion docs/source/04_user_guide/04_data_catalog.md
@@ -306,7 +306,7 @@ for loading, so the first node should output a `pyspark.sql.DataFrame`, while th
Transformers intercept the load and save operations on Kedro `DataSet`s. Use cases that transformers enable include:
- Performing data validation
- Tracking operation performance
- - Converting a data format (although we would recommend [Transcoding](https://kedro.readthedocs.io/en/latest/04_user_guide/04_data_catalog.html#transcoding-datasets) for this)
+ - Converting a data format (although we would recommend [Transcoding](https://kedro.readthedocs.io/en/stable/04_user_guide/04_data_catalog.html#transcoding-datasets) for this)
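
As a hedged illustration of the performance-tracking use case above, the sketch below assumes the `AbstractTransformer` interface exposed by `kedro.io` in this release (a hypothetical timing transformer, not the project's own code):

```python
import logging
import time

from kedro.io import AbstractTransformer


class ProfileTimeTransformer(AbstractTransformer):
    """Log how long a dataset's load and save operations take."""

    def load(self, data_set_name, load):
        start = time.time()
        data = load()  # delegate to the dataset's own load
        logging.getLogger(__name__).info(
            "Loading %s took %.3fs", data_set_name, time.time() - start
        )
        return data

    def save(self, data_set_name, save, data):
        start = time.time()
        save(data)  # delegate to the dataset's own save
        logging.getLogger(__name__).info(
            "Saving %s took %.3fs", data_set_name, time.time() - start
        )
```

How transformers are attached to a catalog is covered in the sections below.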

### Applying built-in transformers

2 changes: 1 addition & 1 deletion docs/source/04_user_guide/06_pipelines.md
@@ -173,7 +173,7 @@ pipeline1 = mp1.create_pipeline()
Here is a list of recommendations for developing a modular pipeline:

* A modular pipeline should include a `README.md`, with all the information regarding the execution of the pipeline for the end users
- * A modular pipeline _may_ have external dependencies specified in `requirements.txt`. These dependencies are _not_ currently installed by the [`kedro install`](https://kedro.readthedocs.io/en/latest/06_resources/03_commands_reference.html#kedro-install) command, so the users of your pipeline would have to run `pip install -r src/<python_package>/pipelines/<pipeline_name>/requirements.txt`
+ * A modular pipeline _may_ have external dependencies specified in `requirements.txt`. These dependencies are _not_ currently installed by the [`kedro install`](https://kedro.readthedocs.io/en/stable/06_resources/03_commands_reference.html#kedro-install) command, so the users of your pipeline would have to run `pip install -r src/<python_package>/pipelines/<pipeline_name>/requirements.txt`
* To ensure portability, modular pipelines should use relative imports when accessing their own objects and absolute imports otherwise. Look at an example from `src/new_kedro_project/pipelines/modular_pipeline_1/pipeline.py` below:

```python
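# The original example is collapsed in this view. This is a hypothetical
# sketch (package and node names assumed) of the convention described
# above: relative imports for the pipeline's own objects, absolute
# imports for everything else.
from kedro.pipeline import Pipeline, node  # absolute: external package

from .nodes import split_data  # relative: this pipeline's own module


def create_pipeline(**kwargs):
    return Pipeline(
        [node(split_data, inputs="example_iris_data", outputs=["train", "test"])]
    )
```
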
2 changes: 1 addition & 1 deletion extras/README.md
@@ -6,4 +6,4 @@ WARNING: This script will be deprecated in future releases. Please refer to repl

This script helps to locate the `.ipython` directory and run the IPython startup scripts in it when working with Jupyter Notebooks and IPython sessions. It automatically locates the `.ipython/profile_default/startup` directory, starting from the current working directory and going up the directory tree. If the directory is found, all Python scripts in it are executed.

- The details can be found in [the user guide](https://kedro.readthedocs.io/en/latest/04_user_guide/11_ipython.html#ipython-loader).
+ The details can be found in [the user guide](https://kedro.readthedocs.io/en/stable/04_user_guide/11_ipython.html#ipython-loader).
6 changes: 3 additions & 3 deletions kedro/contrib/io/pyspark/README.md
@@ -5,7 +5,7 @@ In this tutorial we talk about how Kedro integrates with `pyspark` using the `Sp
We also present brief instructions on how to set up pyspark to read from `AWS S3` and `Azure Blob storage`.

Relevant API:
- [`SparkDataSet`](https://kedro.readthedocs.io/en/latest/kedro.contrib.io.pyspark.html)
+ [`SparkDataSet`](https://kedro.readthedocs.io/en/stable/kedro.contrib.io.pyspark.html)


## Install spark
@@ -53,7 +53,7 @@ spark = SparkSession.builder\

## `SparkDataSet`

- Loading and saving spark `DataFrame`s using Kedro can be easily done using the [`SparkDataSet`](https://kedro.readthedocs.io/en/latest/kedro.contrib.io.pyspark.html) class, as shown below:
+ Loading and saving spark `DataFrame`s using Kedro can be easily done using the [`SparkDataSet`](https://kedro.readthedocs.io/en/stable/kedro.contrib.io.pyspark.html) class, as shown below:

### Load a csv from your local disk
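
The original snippet for this section is collapsed in this view; the sketch below is a hypothetical stand-in (file path and load arguments assumed) showing the pattern:

```python
from kedro.contrib.io.pyspark import SparkDataSet

# Point the dataset at a local csv and let Spark infer the schema.
cars = SparkDataSet(
    filepath="data/01_raw/cars.csv",
    file_format="csv",
    load_args={"header": True, "inferSchema": True},
)
df = cars.load()  # returns a pyspark.sql.DataFrame
```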

@@ -83,7 +83,7 @@ parquet.save(df)

## Using `SparkDataSet` with the `DataCatalog`

- Since `SparkDataSet` is a concrete implementation of [`AbstractDataSet`](https://kedro.readthedocs.io/en/latest/kedro.io.AbstractDataSet.html), it integrates nicely with the `DataCatalog` and with Kedro's pipelines.
+ Since `SparkDataSet` is a concrete implementation of [`AbstractDataSet`](https://kedro.readthedocs.io/en/stable/kedro.io.AbstractDataSet.html), it integrates nicely with the `DataCatalog` and with Kedro's pipelines.

Similarly to all other datasets, you can specify your spark datasets in `catalog.yml` as follows:
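
(The original entry is collapsed in this view; below is a hypothetical sketch, with the dataset name and file path assumed.)

```yaml
weather:
  type: kedro.contrib.io.pyspark.SparkDataSet
  filepath: data/01_raw/weather.csv
  file_format: csv
  load_args:
    header: True
    inferSchema: True
```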

2 changes: 1 addition & 1 deletion kedro/extras/ipython/README.md
@@ -5,4 +5,4 @@

This script helps to locate the `.ipython` directory and run the IPython startup scripts in it when working with Jupyter Notebooks and IPython sessions. It automatically locates the `.ipython/profile_default/startup` directory, starting from the current working directory and going up the directory tree. If the directory is found, all Python scripts in it are executed.

- The details can be found in [the user guide](https://kedro.readthedocs.io/en/latest/04_user_guide/11_ipython.html#ipython-loader).
+ The details can be found in [the user guide](https://kedro.readthedocs.io/en/stable/04_user_guide/11_ipython.html#ipython-loader).
2 changes: 1 addition & 1 deletion kedro/io/data_catalog.py
@@ -77,7 +77,7 @@ def _get_credentials(
raise KeyError(
"Unable to find credentials '{}': check your data "
"catalog and credentials configuration. See "
"https://kedro.readthedocs.io/en/latest/kedro.io.DataCatalog.html "
"https://kedro.readthedocs.io/en/stable/kedro.io.DataCatalog.html "
"for an example.".format(credentials_name)
)

6 changes: 3 additions & 3 deletions kedro/io/partitioned_data_set.py
@@ -134,7 +134,7 @@ def __init__( # pylint: disable=too-many-arguments
**Note:** ``dataset_credentials`` key has now been deprecated
and should not be specified.
All possible credentials management scenarios are documented here:
- https://kedro.readthedocs.io/en/latest/04_user_guide/08_advanced_io.html#partitioned-dataset-credentials
+ https://kedro.readthedocs.io/en/stable/04_user_guide/08_advanced_io.html#partitioned-dataset-credentials
load_args: Keyword arguments to be passed into ``find()`` method of
the filesystem implementation.
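
As a hedged illustration of the credentials scenarios referenced in this docstring (the bucket, dataset class, and keys below are all hypothetical):

```python
from kedro.io import PartitionedDataSet

# Credentials are used for the underlying filesystem and, depending on
# configuration, the wrapped dataset; see the linked guide for the full
# set of scenarios.
reviews = PartitionedDataSet(
    path="s3://my-bucket/reviews",
    dataset="kedro.extras.datasets.pandas.CSVDataSet",
    credentials={"key": "YOUR_KEY", "secret": "YOUR_SECRET"},
)
partitions = reviews.load()  # dict of partition id -> lazy load callable
```
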
@@ -357,7 +357,7 @@ def __init__(
with the corresponding dataset definition including ``filepath``
(unlike ``dataset`` argument). Checkpoint configuration is
described here:
- https://kedro.readthedocs.io/en/latest/04_user_guide/08_advanced_io.html#checkpoint-configuration
+ https://kedro.readthedocs.io/en/stable/04_user_guide/08_advanced_io.html#checkpoint-configuration
Credentials for the checkpoint can be explicitly specified
in this configuration.
filepath_arg: Underlying dataset initializer argument that will
Expand All @@ -372,7 +372,7 @@ def __init__(
the dataset or the checkpoint configuration contains explicit
credentials spec, then such spec will take precedence.
All possible credentials management scenarios are documented here:
- https://kedro.readthedocs.io/en/latest/04_user_guide/08_advanced_io.html#partitioned-dataset-credentials
+ https://kedro.readthedocs.io/en/stable/04_user_guide/08_advanced_io.html#partitioned-dataset-credentials
load_args: Keyword arguments to be passed into ``find()`` method of
the filesystem implementation.
2 changes: 1 addition & 1 deletion kedro/template/{{ cookiecutter.repo_name }}/README.md
@@ -14,7 +14,7 @@ Take a look at the [documentation](https://kedro.readthedocs.io) to get started.

In order to get the best out of the template:
* Please don't remove any lines from the `.gitignore` file provided
- * Make sure your results can be reproduced by following a data engineering convention, e.g. the one we suggest [here](https://kedro.readthedocs.io/en/latest/06_resources/01_faq.html#what-is-data-engineering-convention)
+ * Make sure your results can be reproduced by following a data engineering convention, e.g. the one we suggest [here](https://kedro.readthedocs.io/en/stable/06_resources/01_faq.html#what-is-data-engineering-convention)
* Don't commit any data to your repository
* Don't commit any credentials or local configuration to your repository
* Keep all credentials or local configuration in `conf/local/`
2 changes: 1 addition & 1 deletion kedro/template/{{ cookiecutter.repo_name }}/conf/README.md
@@ -23,4 +23,4 @@ WARNING: Please do not put access credentials in the base configuration folder.


# Find out more
- You can find out more about configuration from the [user guide documentation](https://kedro.readthedocs.io/en/latest/04_user_guide/03_configuration.html).
+ You can find out more about configuration from the [user guide documentation](https://kedro.readthedocs.io/en/stable/04_user_guide/03_configuration.html).
4 changes: 2 additions & 2 deletions kedro/template/{{ cookiecutter.repo_name }}/conf/base/catalog.yml
@@ -1,7 +1,7 @@
# Here you can define all your data sets by using simple YAML syntax.
#
# Documentation for this file format can be found in "The Data Catalog"
- # Link: https://kedro.readthedocs.io/en/latest/04_user_guide/04_data_catalog.html
+ # Link: https://kedro.readthedocs.io/en/stable/04_user_guide/04_data_catalog.html
#
# We support interacting with a variety of data stores including local file systems, cloud, network and HDFS
#
@@ -37,7 +37,7 @@
#
# The Data Catalog supports being able to reference the same file using two different DataSet implementations
# (transcoding), templating and a way to reuse arguments that are frequently repeated. See more here:
- # https://kedro.readthedocs.io/en/latest/04_user_guide/04_data_catalog.html
+ # https://kedro.readthedocs.io/en/stable/04_user_guide/04_data_catalog.html
{% if cookiecutter.include_example == "True" %}
#
# This is a data set used by the "Hello World" example pipeline provided with the project
