
FIX Update/remove references to master in docs
dillon-cullinan committed Aug 20, 2020
1 parent 4459ec2 commit 6a4bc1d
Showing 31 changed files with 56 additions and 56 deletions.
2 changes: 1 addition & 1 deletion CHANGELOG.md
@@ -2054,7 +2054,7 @@
- PR #1433 Fix NVStrings/categories includes
- PR #1432 Update NVStrings to 0.7.* to coincide with 0.7 development
- PR #1483 Modify CSV reader to avoid cropping blank quoted characters in non-string fields
- - PR #1446 Merge 1275 hotfix from master into branch-0.7
+ - PR #1446 Merge 1275 hotfix from main into branch-0.7
- PR #1447 Fix legacy groupby apply docstring
- PR #1451 Fix hash join estimated result size is not correct
- PR #1454 Fix local build script improperly change directory permissions
8 changes: 4 additions & 4 deletions CONTRIBUTING.md
@@ -276,7 +276,7 @@ For detailed information on usage of this script, see [here](ci/local/README.md)

## Automated Build in Docker Container

- A Dockerfile is provided with a preconfigured conda environment for building and installing cuDF from source based off of the master branch.
+ A Dockerfile is provided with a preconfigured conda environment for building and installing cuDF from source based off of the main branch.

### Prerequisites

@@ -313,7 +313,7 @@ flag. Below is a list of the available arguments and their purpose:
| `LINUX_VERSION` | ubuntu16.04 | ubuntu18.04 | set Ubuntu version |
| `CC` & `CXX` | 5 | 7 | set gcc/g++ version; **NOTE:** gcc7 requires Ubuntu 18.04 |
| `CUDF_REPO` | This repo | Forks of cuDF | set git URL to use for `git clone` |
- | `CUDF_BRANCH` | master | Any branch name | set git branch to checkout of `CUDF_REPO` |
+ | `CUDF_BRANCH` | main | Any branch name | set git branch to checkout of `CUDF_REPO` |
| `NUMBA_VERSION` | newest | >=0.40.0 | set numba version |
| `NUMPY_VERSION` | newest | >=1.14.3 | set numpy version |
| `PANDAS_VERSION` | newest | >=0.23.4 | set pandas version |
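
For concreteness, here is a minimal sketch of overriding two of the arguments in the table above at build time (a Python `subprocess` wrapper around `docker build`; it assumes Docker is installed, that the command runs from the repository root containing the Dockerfile, and that the `cudf-dev` image tag and the pinned pandas version are arbitrary placeholders):

```python
import subprocess

# Build the development image, overriding two of the documented build
# arguments; any other row in the table can be overridden the same way
# with an additional --build-arg flag.
subprocess.run(
    [
        "docker", "build",
        "--build-arg", "CUDF_BRANCH=main",       # branch to checkout of CUDF_REPO
        "--build-arg", "PANDAS_VERSION=0.25.3",  # hypothetical version pin
        "-t", "cudf-dev",                        # placeholder image tag
        ".",
    ],
    check=True,
)
```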
@@ -325,5 +325,5 @@ flag. Below is a list of the available arguments and their purpose:
---

## Attribution
- Portions adopted from https://github.com/pytorch/pytorch/blob/master/CONTRIBUTING.md
- Portions adopted from https://github.com/dask/dask/blob/master/docs/source/develop.rst
+ Portions adopted from https://github.com/pytorch/pytorch/blob/main/CONTRIBUTING.md
+ Portions adopted from https://github.com/dask/dask/blob/main/docs/source/develop.rst
4 changes: 2 additions & 2 deletions README.md
@@ -2,7 +2,7 @@

[![Build Status](https://gpuci.gpuopenanalytics.com/job/rapidsai/job/gpuci/job/cudf/job/branches/job/cudf-branch-pipeline/badge/icon)](https://gpuci.gpuopenanalytics.com/job/rapidsai/job/gpuci/job/cudf/job/branches/job/cudf-branch-pipeline/)

- **NOTE:** For the latest stable [README.md](https://github.com/rapidsai/cudf/blob/master/README.md) ensure you are on the `master` branch.
+ **NOTE:** For the latest stable [README.md](https://github.com/rapidsai/cudf/blob/main/README.md) ensure you are on the `main` branch.

Built based on the [Apache Arrow](http://arrow.apache.org/) columnar memory format, cuDF is a GPU DataFrame library for loading, joining, aggregating, filtering, and otherwise manipulating data.

@@ -13,7 +13,7 @@ For example, the following snippet downloads a CSV, then uses the GPU to parse it
import cudf, io, requests
from io import StringIO

- url = "https://github.com/plotly/datasets/raw/master/tips.csv"
+ url = "https://github.com/plotly/datasets/raw/main/tips.csv"
content = requests.get(url).content.decode('utf-8')

tips_df = cudf.read_csv(StringIO(content))
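
As a quick follow-on, a sketch of what one might do with the parsed frame (it continues the snippet above and assumes the standard columns of the plotly tips dataset, such as `tip`, `total_bill`, and `size`):

```python
# derive a new column and aggregate it on the GPU
tips_df['tip_percentage'] = tips_df['tip'] / tips_df['total_bill'] * 100

# average tip percentage for each dining-party size
print(tips_df.groupby('size').tip_percentage.mean())
```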
2 changes: 1 addition & 1 deletion conda/recipes/dask-cudf/run_test.sh
@@ -8,7 +8,7 @@ function logger() {
echo -e "\n>>>> $@\n"
}

- # Install the master version of dask and distributed
+ # Install the main version of dask and distributed
logger "pip install git+https://github.com/dask/distributed.git --upgrade --no-deps"
pip install "git+https://github.com/dask/distributed.git" --upgrade --no-deps

6 changes: 3 additions & 3 deletions cpp/docs/TRANSITIONGUIDE.md
@@ -504,7 +504,7 @@ For example `is_numeric<T>()` can be used to specialize for any numeric type.
# Testing
- Unit tests in libcudf are written using [Google Test](https://github.com/google/googletest/blob/master/googletest/docs/primer.md).
+ Unit tests in libcudf are written using [Google Test](https://github.com/google/googletest/blob/main/googletest/docs/primer.md).
**Important:** Instead of including `gtest/gtest.h` directly, use the custom header in `cpp/tests/utilities/cudf_gtest.hpp`.
@@ -521,7 +521,7 @@ This is because `nvcc` is generally slower than `gcc` in compiling host code.
## Base Fixture
- All libcudf unit tests should make use of a GTest ["Test Fixture"](https://github.com/google/googletest/blob/master/googletest/docs/primer.md#test-fixtures-using-the-same-data-configuration-for-multiple-tests-same-data-multiple-tests).
+ All libcudf unit tests should make use of a GTest ["Test Fixture"](https://github.com/google/googletest/blob/main/googletest/docs/primer.md#test-fixtures-using-the-same-data-configuration-for-multiple-tests-same-data-multiple-tests).
Even if the fixture is empty, it should inherit from the base fixture `cudf::test::BaseFixture` found in `cudf/cpp/tests/utilities/base_fixture.hpp`.
This is to ensure that RMM is properly initialized/finalized.
`cudf::test::BaseFixture` already inherits from `::testing::Test` and therefore it is not necessary for your test fixtures to inherit from it.
@@ -534,7 +534,7 @@ class MyTestFiture : public cudf::test::BaseFixture {...};
## Typed Tests

In libcudf we must ensure that features work across all of the types we support.
- In order to automate the process of running the same tests across multiple types, we make use of GTest's [Typed Tests](https://github.com/google/googletest/blob/master/googletest/docs/advanced.md#typed-tests).
+ In order to automate the process of running the same tests across multiple types, we make use of GTest's [Typed Tests](https://github.com/google/googletest/blob/main/googletest/docs/advanced.md#typed-tests).
Typed tests allow you to write a test once and run it across all types in a list of types.

For example:
2 changes: 1 addition & 1 deletion cpp/doxygen/Doxyfile
@@ -1326,7 +1326,7 @@ CHM_FILE =
HHC_LOCATION =

# The GENERATE_CHI flag controls if a separate .chi index file is generated
- # (YES) or that it should be included in the master .chm file (NO).
+ # (YES) or that it should be included in the main .chm file (NO).
# The default value is: NO.
# This tag requires that the tag GENERATE_HTMLHELP is set to YES.

2 changes: 1 addition & 1 deletion cpp/include/cudf/detail/nvtx/nvtx3.hpp
@@ -70,7 +70,7 @@
* Systems:
*
* \image html
- * https://raw.githubusercontent.com/jrhemstad/nvtx_wrappers/master/docs/example_range.png
+ * https://raw.githubusercontent.com/jrhemstad/nvtx_wrappers/main/docs/example_range.png
*
* Alternatively, use the \ref MACROS like `NVTX3_FUNC_RANGE()` to add
* ranges to your code that automatically use the name of the enclosing function
2 changes: 1 addition & 1 deletion cpp/include/cudf/detail/utilities/hash_functions.cuh
@@ -247,7 +247,7 @@ void CUDA_DEVICE_CALLABLE MD5Hash::operator()<string_view>(column_device_view co
} // namespace cudf

// MurmurHash3_32 implementation from
- // https://github.com/aappleby/smhasher/blob/master/src/MurmurHash3.cpp
+ // https://github.com/aappleby/smhasher/blob/main/src/MurmurHash3.cpp
//-----------------------------------------------------------------------------
// MurmurHash3 was written by Austin Appleby, and is placed in the public
// domain. The author hereby disclaims copyright to this source code.
4 changes: 2 additions & 2 deletions cpp/libcudf_kafka/include/cudf_kafka/kafka_consumer.hpp
@@ -44,7 +44,7 @@ class kafka_consumer : public cudf::io::datasource {
* operations. This is useful when the need for delayed partition and topic assignment
* is not known ahead of time and needs to be delayed to as late as possible.
* Documentation for librdkafka configurations can be found at
- * https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
+ * https://github.com/edenhill/librdkafka/blob/main/CONFIGURATION.md
*
* @param configs key/value pairs of librdkafka configurations that will be
* passed to the librdkafka client
@@ -53,7 +53,7 @@

/**
* @brief Instantiate a Kafka consumer object. Documentation for librdkafka configurations can be
- * found at https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
+ * found at https://github.com/edenhill/librdkafka/blob/main/CONFIGURATION.md
*
* @param configs key/value pairs of librdkafka configurations that will be
* passed to the librdkafka client
2 changes: 1 addition & 1 deletion cpp/src/io/comp/cpu_unbz2.cpp
@@ -22,7 +22,7 @@
*
* bzip2 license information is available at
* https://spdx.org/licenses/bzip2-1.0.6.html
- * https://github.com/asimonov-im/bzip2/blob/master/LICENSE
+ * https://github.com/asimonov-im/bzip2/blob/main/LICENSE
* original source code available at
* http://www.sourceware.org/bzip2/
*
2 changes: 1 addition & 1 deletion cpp/src/io/comp/snap.cu
@@ -247,7 +247,7 @@ static __device__ uint32_t Match60(const uint8_t *src1,

/**
* @brief Snappy compression kernel
- * See http://github.com/google/snappy/blob/master/format_description.txt
+ * See http://github.com/google/snappy/blob/main/format_description.txt
*
* blockDim {128,1,1}
*
2 changes: 1 addition & 1 deletion cpp/src/io/comp/unbz2.h
@@ -18,7 +18,7 @@
*
* bzip2 license information is available at
* https://spdx.org/licenses/bzip2-1.0.6.html
- * https://github.com/asimonov-im/bzip2/blob/master/LICENSE
+ * https://github.com/asimonov-im/bzip2/blob/main/LICENSE
* original source code available at
* http://www.sourceware.org/bzip2/
*
2 changes: 1 addition & 1 deletion cpp/src/io/comp/unsnap.cu
@@ -593,7 +593,7 @@ __device__ void snappy_process_symbols(unsnap_state_s *s, int t)

/**
* @brief Snappy decompression kernel
- * See http://github.com/google/snappy/blob/master/format_description.txt
+ * See http://github.com/google/snappy/blob/main/format_description.txt
*
* blockDim {128,1,1}
*
2 changes: 1 addition & 1 deletion cpp/src/io/parquet/page_data.cu
@@ -85,7 +85,7 @@ struct page_state_s {
* @brief Computes a 32-bit hash when given a byte stream and range.
*
* MurmurHash3_32 implementation from
- * https://github.com/aappleby/smhasher/blob/master/src/MurmurHash3.cpp
+ * https://github.com/aappleby/smhasher/blob/main/src/MurmurHash3.cpp
*
* MurmurHash3 was written by Austin Appleby, and is placed in the public
* domain. The author hereby disclaims copyright to this source code.
4 changes: 2 additions & 2 deletions cpp/src/io/parquet/reader_impl.cu
@@ -169,7 +169,7 @@ std::string name_from_path(const std::vector<std::string> &path_in_schema)
// For the case of lists, we will see a schema that looks like:
// a.list.element.list.element
// where each (list.item) pair represents a level of nesting. According to the parquet spec,
- // https://github.com/apache/parquet-format/blob/master/LogicalTypes.md
+ // https://github.com/apache/parquet-format/blob/main/LogicalTypes.md
// the initial field must be named "list" and the inner element must be named "element".
// If we are dealing with a list, we want to return the topmost name of the group ("a").
//
@@ -357,7 +357,7 @@ class aggregate_metadata {
auto &pfm = per_file_metadata[0];

// see : the "Nested Types" section here
- // https://github.com/apache/parquet-format/blob/master/LogicalTypes.md
+ // https://github.com/apache/parquet-format/blob/main/LogicalTypes.md
int index = get_column_leaf_schema_index(col_index);
int depth = 0;

10 changes: 5 additions & 5 deletions docs/cudf/source/conf.py
@@ -60,8 +60,8 @@
# source_suffix = ['.rst', '.md']
source_suffix = {".rst": "restructuredtext", ".md": "markdown"}

- # The master toctree document.
- master_doc = "index"
+ # The main toctree document.
+ main_doc = "index"

# General information about the project.
project = "cudf"
@@ -157,7 +157,7 @@
# author, documentclass [howto, manual, or own class]).
latex_documents = [
(
- master_doc,
+ main_doc,
"cudf.tex",
"cudf Documentation",
"Continuum Analytics",
@@ -170,7 +170,7 @@

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
- man_pages = [(master_doc, "cudf", "cudf Documentation", [author], 1)]
+ man_pages = [(main_doc, "cudf", "cudf Documentation", [author], 1)]


# -- Options for Texinfo output -------------------------------------------
@@ -180,7 +180,7 @@
# dir menu entry, description, category)
texinfo_documents = [
(
- master_doc,
+ main_doc,
"cudf",
"cudf Documentation",
author,
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -16,4 +16,4 @@ exclude = '''
build |
dist
)/
- '''
+ '''
2 changes: 1 addition & 1 deletion python/cudf/cudf/_version.py
@@ -200,7 +200,7 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
# "stabilization", as well as "HEAD" and "main".
tags = set([r for r in refs if re.search(r"\d", r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
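
The digit filter in the hunk above is worth a worked example; a minimal sketch with hypothetical refnames:

```python
import re

# refnames as versioneer might see them, stripped of the refs/heads/ and
# refs/tags/ prefixes that would otherwise distinguish branches from tags
refs = {"HEAD", "main", "release", "stabilization", "v0.15.0", "0.14.1"}

# keeping only names that contain a digit leaves the likely version tags
tags = set(r for r in refs if re.search(r"\d", r))
print(sorted(tags))  # ['0.14.1', 'v0.15.0']
```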
2 changes: 1 addition & 1 deletion python/cudf/cudf/core/dataframe.py
@@ -6294,7 +6294,7 @@ def select_dtypes(self, include=None, exclude=None):
"""

# code modified from:
- # https://github.com/pandas-dev/pandas/blob/master/pandas/core/frame.py#L3196
+ # https://github.com/pandas-dev/pandas/blob/main/pandas/core/frame.py#L3196

if not isinstance(include, (list, tuple)):
include = (include,) if include is not None else ()
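
For reference, a small usage sketch of the method touched above (hypothetical data; exact dtype labels can vary by cuDF version):

```python
import cudf

df = cudf.DataFrame(
    {"a": [1, 2, 3], "b": [1.0, 2.0, 3.0], "c": ["x", "y", "z"]}
)

# keep only floating-point columns
print(df.select_dtypes(include=["float64"]))

# or drop string/object columns instead
print(df.select_dtypes(exclude=["object"]))
```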
2 changes: 1 addition & 1 deletion python/cudf/cudf/tests/test_array_function.py
@@ -11,7 +11,7 @@
missing_arrfunc_reason = "NEP-18 support is not available in NumPy"

# Test implementation based on dask array test
- # https://github.com/dask/dask/blob/master/dask/array/tests/test_array_function.py
+ # https://github.com/dask/dask/blob/main/dask/array/tests/test_array_function.py


@pytest.mark.skipif(missing_arrfunc_cond, reason=missing_arrfunc_reason)
2 changes: 1 addition & 1 deletion python/cudf/cudf/utils/ioutils.py
@@ -864,7 +864,7 @@
----------
kafka_configs : dict, key/value pairs of librdkafka configuration values.
The complete list of valid configurations can be found at
- https://github.com/edenhill/librdkafka/blob/master/CONFIGURATION.md
+ https://github.com/edenhill/librdkafka/blob/main/CONFIGURATION.md
topic : string, case sensitive name of the Kafka topic that contains the
source data.
partition : int,
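
To ground the `kafka_configs` parameter described above, a sketch of such a mapping (the keys are standard librdkafka settings from the linked CONFIGURATION.md; the broker address and group id are placeholders):

```python
# key/value pairs passed through unchanged to the librdkafka client
kafka_configs = {
    "metadata.broker.list": "localhost:9092",  # placeholder broker
    "group.id": "cudf-example-consumer",       # placeholder consumer group
    "auto.offset.reset": "earliest",           # start from the oldest offset
}
```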
2 changes: 1 addition & 1 deletion python/cudf/cudf/utils/utils.py
@@ -263,7 +263,7 @@ def get_null_series(size, dtype=np.bool):


# taken from dask array
- # https://github.com/dask/dask/blob/master/dask/array/utils.py#L352-L363
+ # https://github.com/dask/dask/blob/main/dask/array/utils.py#L352-L363
def _is_nep18_active():
class A:
def __array_function__(self, *args, **kwargs):
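
The helper above is truncated here, but the detection pattern it uses is simple enough to sketch in full (a reconstruction of the idea, not the verbatim elided code): hand NumPy an object that implements `__array_function__` and see whether dispatch happens.

```python
import numpy as np

def is_nep18_active():
    # stand-in object whose __array_function__ hook returns a sentinel
    class A:
        def __array_function__(self, func, types, args, kwargs):
            return True

    try:
        # with NEP-18 active, np.concatenate dispatches to A and returns
        # the sentinel; without it, NumPy fails to coerce A to an array
        return np.concatenate([A()]) is True
    except ValueError:
        return False

print(is_nep18_active())
```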
8 changes: 4 additions & 4 deletions python/cudf/versioneer.py
@@ -14,7 +14,7 @@
(https://pypip.in/version/versioneer/badge.svg?style=flat)
](https://pypi.python.org/pypi/versioneer/)
* [![Build Status]
- (https://travis-ci.org/warner/python-versioneer.png?branch=master)
+ (https://travis-ci.org/warner/python-versioneer.png?branch=main)
](https://travis-ci.org/warner/python-versioneer)
This is a tool for managing a recorded version number in distutils-based
@@ -175,7 +175,7 @@
* Source trees which contain multiple subprojects, such as
[Buildbot](https://github.com/buildbot/buildbot), which contains both
"master" and "slave" subprojects, each with their own `setup.py`,
"main" and "slave" subprojects, each with their own `setup.py`,
`setup.cfg`, and `tox.ini`. Projects like these produce multiple PyPI
distributions (and upload multiple independently-installable tarballs).
* Source trees whose main purpose is to contain a C library, but which also
@@ -623,7 +623,7 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
# "stabilization", as well as "HEAD" and "main".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%%s', no digits" %% ",".join(refs - tags))
@@ -1015,7 +1015,7 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
# "stabilization", as well as "HEAD" and "main".
tags = set([r for r in refs if re.search(r"\d", r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
2 changes: 1 addition & 1 deletion python/cudf_kafka/cudf_kafka/_version.py
@@ -200,7 +200,7 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
# "stabilization", as well as "HEAD" and "main".
tags = set([r for r in refs if re.search(r"\d", r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
8 changes: 4 additions & 4 deletions python/cudf_kafka/versioneer.py
@@ -14,7 +14,7 @@
(https://pypip.in/version/versioneer/badge.svg?style=flat)
](https://pypi.python.org/pypi/versioneer/)
* [![Build Status]
- (https://travis-ci.org/warner/python-versioneer.png?branch=master)
+ (https://travis-ci.org/warner/python-versioneer.png?branch=main)
](https://travis-ci.org/warner/python-versioneer)
This is a tool for managing a recorded version number in distutils-based
@@ -175,7 +175,7 @@
* Source trees which contain multiple subprojects, such as
[Buildbot](https://github.com/buildbot/buildbot), which contains both
"master" and "slave" subprojects, each with their own `setup.py`,
"main" and "slave" subprojects, each with their own `setup.py`,
`setup.cfg`, and `tox.ini`. Projects like these produce multiple PyPI
distributions (and upload multiple independently-installable tarballs).
* Source trees whose main purpose is to contain a C library, but which also
@@ -623,7 +623,7 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
# "stabilization", as well as "HEAD" and "main".
tags = set([r for r in refs if re.search(r'\d', r)])
if verbose:
print("discarding '%%s', no digits" %% ",".join(refs - tags))
@@ -1015,7 +1015,7 @@ def git_versions_from_keywords(keywords, tag_prefix, verbose):
# refs/heads/ and refs/tags/ prefixes that would let us distinguish
# between branches and tags. By ignoring refnames without digits, we
# filter out many common branch names like "release" and
# "stabilization", as well as "HEAD" and "master".
# "stabilization", as well as "HEAD" and "main".
tags = set([r for r in refs if re.search(r"\d", r)])
if verbose:
print("discarding '%s', no digits" % ",".join(refs - tags))
2 changes: 1 addition & 1 deletion python/custreamz/README.md
@@ -1,4 +1,4 @@
1. cuStreamz is a GPU-accelerated Streaming Library, which uses cuDF with Streamz for stream data processing on GPUs.
2. cuStreamz has its own conda metapackage which makes it as simple as possible to install the set of dependencies necessary to process streaming workloads on GPUs.
3. A series of tests for use in a cuDF gpuCI instance have been included ensuring that changes continuously rolled out as part of cuDF don't break its integration with Streamz.
- 4. You can find [example](https://github.com/rapidsai/notebooks-contrib/blob/master/getting_started_notebooks/basics/hello_streamz.ipynb) [notebooks](https://github.com/rapidsai/notebooks-contrib/blob/master/getting_started_notebooks/basics/streamz_weblogs.ipynb) on how to write cuStreamz jobs in the RAPIDS [notebooks-contrib repository](https://github.com/rapidsai/notebooks-contrib).
+ 4. You can find [example](https://github.com/rapidsai/notebooks-contrib/blob/main/getting_started_notebooks/basics/hello_streamz.ipynb) [notebooks](https://github.com/rapidsai/notebooks-contrib/blob/main/getting_started_notebooks/basics/streamz_weblogs.ipynb) on how to write cuStreamz jobs in the RAPIDS [notebooks-contrib repository](https://github.com/rapidsai/notebooks-contrib).