Change host to ip #50216

Open · wants to merge 1 commit into base: master
2 changes: 1 addition & 1 deletion .github/PULL_REQUEST_TEMPLATE
@@ -2,7 +2,7 @@
Thanks for sending a pull request! Here are some tips for you:
1. If this is your first time, please read our contributor guidelines: https://spark.apache.org/contributing.html
2. Ensure you have added or run the appropriate tests for your PR: https://spark.apache.org/developer-tools.html
-3. If the PR is unfinished, add '[WIP]' in your PR title, e.g., '[WIP][SPARK-XXXX] Your PR title ...'.
+3. If the PR is unfinished, add '[WHOST]' in your PR title, e.g., '[WHOST][SPARK-XXXX] Your PR title ...'.
4. Be sure to keep the PR description updated to reflect all changes.
5. Please write your PR title to summarize what this PR proposes.
6. If possible, provide a concise example to reproduce the issue for a faster review.
4 changes: 2 additions & 2 deletions .github/workflows/benchmark.yml
@@ -70,7 +70,7 @@ jobs:
if: contains(inputs.class, 'TPCDSQueryBenchmark') || contains(inputs.class, '*')
runs-on: ubuntu-latest
env:
-SPARK_LOCAL_IP: localhost
+SPARK_LOCAL_HOST: localhost
steps:
- name: Checkout Spark repository
uses: actions/checkout@v4
@@ -134,7 +134,7 @@ jobs:
SPARK_BENCHMARK_NUM_SPLITS: ${{ inputs.num-splits }}
SPARK_BENCHMARK_CUR_SPLIT: ${{ matrix.split }}
SPARK_GENERATE_BENCHMARK_FILES: 1
-SPARK_LOCAL_IP: localhost
+SPARK_LOCAL_HOST: localhost
# To prevent spark.test.home not being set. See more detail in SPARK-36007.
SPARK_HOME: ${{ github.workspace }}
SPARK_TPCDS_DATA: ${{ github.workspace }}/tpcds-sf-1
56 changes: 28 additions & 28 deletions .github/workflows/build_and_test.yml
@@ -300,11 +300,11 @@ jobs:
HADOOP_PROFILE: ${{ matrix.hadoop }}
HIVE_PROFILE: ${{ matrix.hive }}
GITHUB_PREV_SHA: ${{ github.event.before }}
-SPARK_LOCAL_IP: localhost
+SPARK_LOCAL_HOST: localhost
NOLINT_ON_COMPILE: true
-SKIP_UNIDOC: true
-SKIP_MIMA: true
-SKIP_PACKAGING: true
+SKHOST_UNIDOC: true
+SKHOST_MIMA: true
+SKHOST_PACKAGING: true
steps:
- name: Checkout Spark repository
uses: actions/checkout@v4
@@ -542,10 +542,10 @@ jobs:
HADOOP_PROFILE: ${{ inputs.hadoop }}
HIVE_PROFILE: hive2.3
GITHUB_PREV_SHA: ${{ github.event.before }}
-SPARK_LOCAL_IP: localhost
-SKIP_UNIDOC: true
-SKIP_MIMA: true
-SKIP_PACKAGING: true
+SPARK_LOCAL_HOST: localhost
+SKHOST_UNIDOC: true
+SKHOST_MIMA: true
+SKHOST_PACKAGING: true
METASPACE_SIZE: 1g
BRANCH: ${{ inputs.branch }}
steps:
@@ -614,7 +614,7 @@ jobs:
run: |
if [[ "$MODULES_TO_TEST" == *"pyspark-errors"* ]]; then
export PATH=$CONDA/bin:$PATH
-export SKIP_PACKAGING=false
+export SKHOST_PACKAGING=false
echo "Python Packaging Tests Enabled!"
fi
if [ ! -z "$PYTHON_TO_TEST" ]; then
@@ -661,10 +661,10 @@ jobs:
HADOOP_PROFILE: ${{ inputs.hadoop }}
HIVE_PROFILE: hive2.3
GITHUB_PREV_SHA: ${{ github.event.before }}
-SPARK_LOCAL_IP: localhost
-SKIP_UNIDOC: true
-SKIP_MIMA: true
-SKIP_PACKAGING: true
+SPARK_LOCAL_HOST: localhost
+SKHOST_UNIDOC: true
+SKHOST_MIMA: true
+SKHOST_PACKAGING: true
steps:
- name: Checkout Spark repository
uses: actions/checkout@v4
@@ -995,21 +995,21 @@ jobs:
run: |
# We need this link to make sure `python3` points to `python3.9` which contains the prerequisite packages.
ln -s "$(which python3.9)" "/usr/local/bin/python3"
-# Build docs first with SKIP_API to ensure they are buildable without requiring any
+# Build docs first with SKHOST_API to ensure they are buildable without requiring any
# language docs to be built beforehand.
-cd docs; SKIP_ERRORDOC=1 SKIP_API=1 bundle exec jekyll build; cd ..
+cd docs; SKHOST_ERRORDOC=1 SKHOST_API=1 bundle exec jekyll build; cd ..
if [ -f "./dev/is-changed.py" ]; then
# Skip PySpark and SparkR docs while keeping Scala/Java/SQL docs
pyspark_modules=`cd dev && python3.9 -c "import sparktestsupport.modules as m; print(','.join(m.name for m in m.all_modules if m.name.startswith('pyspark')))"`
-if [ `./dev/is-changed.py -m $pyspark_modules` = false ]; then export SKIP_PYTHONDOC=1; fi
-if [ `./dev/is-changed.py -m sparkr` = false ]; then export SKIP_RDOC=1; fi
+if [ `./dev/is-changed.py -m $pyspark_modules` = false ]; then export SKHOST_PYTHONDOC=1; fi
+if [ `./dev/is-changed.py -m sparkr` = false ]; then export SKHOST_RDOC=1; fi
fi
-# Print the values of environment variables `SKIP_ERRORDOC`, `SKIP_SCALADOC`, `SKIP_PYTHONDOC`, `SKIP_RDOC` and `SKIP_SQLDOC`
-echo "SKIP_ERRORDOC: $SKIP_ERRORDOC"
-echo "SKIP_SCALADOC: $SKIP_SCALADOC"
-echo "SKIP_PYTHONDOC: $SKIP_PYTHONDOC"
-echo "SKIP_RDOC: $SKIP_RDOC"
-echo "SKIP_SQLDOC: $SKIP_SQLDOC"
+# Print the values of environment variables `SKHOST_ERRORDOC`, `SKHOST_SCALADOC`, `SKHOST_PYTHONDOC`, `SKHOST_RDOC` and `SKHOST_SQLDOC`
+echo "SKHOST_ERRORDOC: $SKHOST_ERRORDOC"
+echo "SKHOST_SCALADOC: $SKHOST_SCALADOC"
+echo "SKHOST_PYTHONDOC: $SKHOST_PYTHONDOC"
+echo "SKHOST_RDOC: $SKHOST_RDOC"
+echo "SKHOST_SQLDOC: $SKHOST_SQLDOC"
cd docs
bundle exec jekyll build
- name: Tar documentation
@@ -1031,7 +1031,7 @@
runs-on: ubuntu-latest
timeout-minutes: 120
env:
-SPARK_LOCAL_IP: localhost
+SPARK_LOCAL_HOST: localhost
steps:
- name: Checkout Spark repository
uses: actions/checkout@v4
@@ -1136,10 +1136,10 @@ jobs:
HADOOP_PROFILE: ${{ inputs.hadoop }}
HIVE_PROFILE: hive2.3
GITHUB_PREV_SHA: ${{ github.event.before }}
-SPARK_LOCAL_IP: localhost
-SKIP_UNIDOC: true
-SKIP_MIMA: true
-SKIP_PACKAGING: true
+SPARK_LOCAL_HOST: localhost
+SKHOST_UNIDOC: true
+SKHOST_MIMA: true
+SKHOST_PACKAGING: true
steps:
- name: Checkout Spark repository
uses: actions/checkout@v4
4 changes: 2 additions & 2 deletions .github/workflows/build_branch40_java21.yml
@@ -39,8 +39,8 @@ jobs:
{
"PYSPARK_IMAGE_TO_TEST": "python-311",
"PYTHON_TO_TEST": "python3.11",
-"SKIP_MIMA": "true",
-"SKIP_UNIDOC": "true",
+"SKHOST_MIMA": "true",
+"SKHOST_UNIDOC": "true",
"DEDICATED_JVM_SBT_TESTS": "org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormatV1Suite,org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormatV2Suite,org.apache.spark.sql.execution.datasources.orc.OrcSourceV1Suite,org.apache.spark.sql.execution.datasources.orc.OrcSourceV2Suite"
}
jobs: >-
4 changes: 2 additions & 2 deletions .github/workflows/build_java21.yml
@@ -39,8 +39,8 @@ jobs:
{
"PYSPARK_IMAGE_TO_TEST": "python-311",
"PYTHON_TO_TEST": "python3.11",
-"SKIP_MIMA": "true",
-"SKIP_UNIDOC": "true",
+"SKHOST_MIMA": "true",
+"SKHOST_UNIDOC": "true",
"DEDICATED_JVM_SBT_TESTS": "org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormatV1Suite,org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormatV2Suite,org.apache.spark.sql.execution.datasources.orc.OrcSourceV1Suite,org.apache.spark.sql.execution.datasources.orc.OrcSourceV2Suite"
}
jobs: >-
2 changes: 1 addition & 1 deletion .github/workflows/build_python_connect35.yml
@@ -80,7 +80,7 @@ jobs:
- name: Run tests
env:
SPARK_TESTING: 1
-SPARK_SKIP_CONNECT_COMPAT_TESTS: 1
+SPARK_SKHOST_CONNECT_COMPAT_TESTS: 1
SPARK_CONNECT_TESTING_REMOTE: sc://localhost
run: |
# Make less noisy
2 changes: 1 addition & 1 deletion .github/workflows/maven_test.yml
@@ -124,7 +124,7 @@ jobs:
INCLUDED_TAGS: ${{ matrix.included-tags }}
HADOOP_PROFILE: ${{ matrix.hadoop }}
HIVE_PROFILE: ${{ matrix.hive }}
-SPARK_LOCAL_IP: localhost
+SPARK_LOCAL_HOST: localhost
GITHUB_PREV_SHA: ${{ github.event.before }}
steps:
- name: Checkout Spark repository
2 changes: 1 addition & 1 deletion .github/workflows/pages.yml
@@ -86,7 +86,7 @@ jobs:
sed -i".tmp3" "s/'facetFilters':.*$/'facetFilters': [\"version:$RELEASE_VERSION\"]/g" docs/_config.yml
sed -i".tmp4" 's/__version__: str = .*$/__version__: str = "'"$RELEASE_VERSION"'"/' python/pyspark/version.py
cd docs
-SKIP_RDOC=1 bundle exec jekyll build
+SKHOST_RDOC=1 bundle exec jekyll build
- name: Setup Pages
uses: actions/configure-pages@v5
- name: Upload artifact
10 changes: 5 additions & 5 deletions .github/workflows/python_macos_test.yml
@@ -85,10 +85,10 @@ jobs:
# GitHub Actions' default miniconda to use in pip packaging test.
CONDA_PREFIX: /usr/share/miniconda
GITHUB_PREV_SHA: ${{ github.event.before }}
-SPARK_LOCAL_IP: localhost
-SKIP_UNIDOC: true
-SKIP_MIMA: true
-SKIP_PACKAGING: true
+SPARK_LOCAL_HOST: localhost
+SKHOST_UNIDOC: true
+SKHOST_MIMA: true
+SKHOST_PACKAGING: true
METASPACE_SIZE: 1g
BRANCH: ${{ inputs.branch }}
steps:
@@ -143,7 +143,7 @@
env: ${{ fromJSON(inputs.envs) }}
run: |
if [[ "$MODULES_TO_TEST" == *"pyspark-errors"* ]]; then
-export SKIP_PACKAGING=false
+export SKHOST_PACKAGING=false
echo "Python Packaging Tests Enabled!"
fi
./dev/run-tests --parallelism 1 --modules "$MODULES_TO_TEST" --python-executables "$PYTHON_TO_TEST"
2 changes: 1 addition & 1 deletion NOTICE-binary
@@ -60,7 +60,7 @@ The Apache Software Foundation (http://www.apache.org/).
Apache Avro Mapred API
Copyright 2009-2014 The Apache Software Foundation

-Apache Avro IPC
+Apache Avro HOSTC
Copyright 2009-2014 The Apache Software Foundation

Objenesis
6 changes: 3 additions & 3 deletions R/CRAN_RELEASE.md
@@ -23,7 +23,7 @@ To release SparkR as a package to CRAN, we would use the `devtools` package. Ple

### Release

-First, check that the `Version:` field in the `pkg/DESCRIPTION` file is updated. Also, check for stale files not under source control.
+First, check that the `Version:` field in the `pkg/DESCRHOSTTION` file is updated. Also, check for stale files not under source control.

Note that while `run-tests.sh` runs `check-cran.sh` (which runs `R CMD check`), it is doing so with `--no-manual --no-vignettes`, which skips a few vignettes or PDF checks - therefore it will be preferred to run `R CMD check` on the source package built manually before uploading a release. Also note that for CRAN checks for pdf vignettes to success, `qpdf` tool must be there (to install it, e.g. `yum -q -y install qpdf`).

@@ -58,7 +58,7 @@ Similarly, the source package is also created by `check-cran.sh` with `R CMD bu
For example, this should be the content of the source package:

```sh
-DESCRIPTION R inst tests
+DESCRHOSTTION R inst tests
NAMESPACE build man vignettes

inst/doc/
@@ -104,6 +104,6 @@ paths <- .libPaths(); .libPaths(c("lib", paths)); Sys.setenv(SPARK_HOME=tools::f
For example, this should be the content of the binary package:

```sh
-DESCRIPTION Meta R html tests
+DESCRHOSTTION Meta R html tests
INDEX NAMESPACE help profile worker
```
2 changes: 1 addition & 1 deletion R/WINDOWS.md
@@ -25,7 +25,7 @@ To build SparkR on Windows, the following steps are required
2. Install R (>= 3.5) and [Rtools](https://cloud.r-project.org/bin/windows/Rtools/). Make sure to
include Rtools and R in `PATH`.

-3. Install JDK that SparkR supports (see `R/pkg/DESCRIPTION`), and set `JAVA_HOME` in the system environment variables.
+3. Install JDK that SparkR supports (see `R/pkg/DESCRHOSTTION`), and set `JAVA_HOME` in the system environment variables.

4. Download and install [Maven](https://maven.apache.org/download.html). Also include the `bin`
directory in Maven in `PATH`.
8 changes: 4 additions & 4 deletions R/check-cran.sh
@@ -40,7 +40,7 @@ fi

if [ -d "$SPARK_JARS_DIR" ]; then
# Build a zip file containing the source package with vignettes
-SPARK_HOME="${SPARK_HOME}" "$R_SCRIPT_PATH/R" CMD build "$FWDIR/pkg"
+SPARK_HOME="${SPARK_HOME}" "$R_SCRHOSTT_PATH/R" CMD build "$FWDIR/pkg"

find pkg/vignettes/. -not -name '.' -not -name '*.Rmd' -not -name '*.md' -not -name '*.pdf' -not -name '*.html' -delete
else
@@ -49,7 +49,7 @@ else
fi

# Run check as-cran.
-VERSION=`grep Version "$FWDIR/pkg/DESCRIPTION" | awk '{print $NF}'`
+VERSION=`grep Version "$FWDIR/pkg/DESCRHOSTTION" | awk '{print $NF}'`

CRAN_CHECK_OPTIONS="--as-cran"

@@ -71,10 +71,10 @@ export _R_CHECK_FORCE_SUGGESTS_=FALSE

if [ -n "$NO_TESTS" ] && [ -n "$NO_MANUAL" ]
then
-"$R_SCRIPT_PATH/R" CMD check $CRAN_CHECK_OPTIONS "SparkR_$VERSION.tar.gz"
+"$R_SCRHOSTT_PATH/R" CMD check $CRAN_CHECK_OPTIONS "SparkR_$VERSION.tar.gz"
else
# This will run tests and/or build vignettes, and require SPARK_HOME
-SPARK_HOME="${SPARK_HOME}" "$R_SCRIPT_PATH/R" CMD check $CRAN_CHECK_OPTIONS "SparkR_$VERSION.tar.gz"
+SPARK_HOME="${SPARK_HOME}" "$R_SCRHOSTT_PATH/R" CMD check $CRAN_CHECK_OPTIONS "SparkR_$VERSION.tar.gz"
fi

popd > /dev/null
6 changes: 3 additions & 3 deletions R/create-docs.sh
@@ -51,16 +51,16 @@ pushd "$FWDIR" > /dev/null
mkdir -p pkg/html
pushd pkg/html

-"$R_SCRIPT_PATH/Rscript" -e 'libDir <- "../../lib"; library(SparkR, lib.loc=libDir); knitr::knit_rd("SparkR", links = tools::findHTMLlinks(file.path(libDir, "SparkR")))'
+"$R_SCRHOSTT_PATH/Rscript" -e 'libDir <- "../../lib"; library(SparkR, lib.loc=libDir); knitr::knit_rd("SparkR", links = tools::findHTMLlinks(file.path(libDir, "SparkR")))'


# Determine Spark(R) version
-SPARK_VERSION=$(grep Version "../DESCRIPTION" | awk '{print $NF}')
+SPARK_VERSION=$(grep Version "../DESCRHOSTTION" | awk '{print $NF}')

# Update url
sed "s/{SPARK_VERSION}/$SPARK_VERSION/" ../pkgdown/_pkgdown_template.yml > ../_pkgdown.yml

-"$R_SCRIPT_PATH/Rscript" -e 'libDir <- "../../lib"; library(SparkR, lib.loc=libDir); pkgdown::build_site("..")'
+"$R_SCRHOSTT_PATH/Rscript" -e 'libDir <- "../../lib"; library(SparkR, lib.loc=libDir); pkgdown::build_site("..")'

# Clean temporary config
rm ../_pkgdown.yml
2 changes: 1 addition & 1 deletion R/create-rd.sh
@@ -34,4 +34,4 @@ pushd "$FWDIR" > /dev/null
. "$FWDIR/find-r.sh"

# Generate Rd files if devtools is installed
-"$R_SCRIPT_PATH/Rscript" -e ' if(requireNamespace("devtools", quietly=TRUE)) { setwd("'$FWDIR'"); devtools::document(pkg="./pkg", roclets="rd") }'
+"$R_SCRHOSTT_PATH/Rscript" -e ' if(requireNamespace("devtools", quietly=TRUE)) { setwd("'$FWDIR'"); devtools::document(pkg="./pkg", roclets="rd") }'
8 changes: 4 additions & 4 deletions R/find-r.sh
@@ -17,18 +17,18 @@
# limitations under the License.
#

-if [ -z "$R_SCRIPT_PATH" ]
+if [ -z "$R_SCRHOSTT_PATH" ]
then
if [ ! -z "$R_HOME" ]
then
-R_SCRIPT_PATH="$R_HOME/bin"
+R_SCRHOSTT_PATH="$R_HOME/bin"
else
# if system wide R_HOME is not found, then exit
if [ ! `command -v R` ]; then
echo "Cannot find 'R_HOME'. Please specify 'R_HOME' or make sure R is properly installed."
exit 1
fi
-R_SCRIPT_PATH="$(dirname $(which R))"
+R_SCRHOSTT_PATH="$(dirname $(which R))"
fi
-echo "Using R_SCRIPT_PATH = ${R_SCRIPT_PATH}"
+echo "Using R_SCRHOSTT_PATH = ${R_SCRHOSTT_PATH}"
fi
2 changes: 1 addition & 1 deletion R/install-dev.sh
@@ -41,7 +41,7 @@ pushd "$FWDIR" > /dev/null
. "$FWDIR/create-rd.sh"

# Install SparkR to $LIB_DIR
-"$R_SCRIPT_PATH/R" CMD INSTALL --library="$LIB_DIR" "$FWDIR/pkg/"
+"$R_SCRHOSTT_PATH/R" CMD INSTALL --library="$LIB_DIR" "$FWDIR/pkg/"

# Zip the SparkR package so that it can be distributed to worker nodes on YARN
cd "$LIB_DIR"
4 changes: 2 additions & 2 deletions R/install-source-package.sh
@@ -34,7 +34,7 @@ pushd "$FWDIR" > /dev/null
. "$FWDIR/find-r.sh"

if [ -z "$VERSION" ]; then
-VERSION=`grep Version "$FWDIR/pkg/DESCRIPTION" | awk '{print $NF}'`
+VERSION=`grep Version "$FWDIR/pkg/DESCRHOSTTION" | awk '{print $NF}'`
fi

if [ ! -f "$FWDIR/SparkR_$VERSION.tar.gz" ]; then
@@ -47,7 +47,7 @@ echo "Removing lib path and installing from source package"
LIB_DIR="$FWDIR/lib"
rm -rf "$LIB_DIR"
mkdir -p "$LIB_DIR"
-"$R_SCRIPT_PATH/R" CMD INSTALL "SparkR_$VERSION.tar.gz" --library="$LIB_DIR"
+"$R_SCRHOSTT_PATH/R" CMD INSTALL "SparkR_$VERSION.tar.gz" --library="$LIB_DIR"

# Zip the SparkR package so that it can be distributed to worker nodes on YARN
pushd "$LIB_DIR" > /dev/null
4 changes: 2 additions & 2 deletions R/pkg/tests/fulltests/test_sparkSQL.R
@@ -3122,7 +3122,7 @@ test_that("read/write Parquet files - compression option/mode", {
tempPath <- tempfile(pattern = "tempPath", fileext = ".parquet")

# Test write.df and read.df
-write.parquet(df, tempPath, compression = "GZIP")
+write.parquet(df, tempPath, compression = "GZHOST")
df2 <- read.parquet(tempPath)
expect_is(df2, "SparkDataFrame")
expect_equal(count(df2), 3)
@@ -3167,7 +3167,7 @@ test_that("read/write text files - compression option", {
df <- read.df(jsonPath, "text")

textPath <- tempfile(pattern = "textPath", fileext = ".txt")
-write.text(df, textPath, compression = "GZIP")
+write.text(df, textPath, compression = "GZHOST")
textDF <- read.text(textPath)
expect_is(textDF, "SparkDataFrame")
expect_equal(count(textDF), count(df))
4 changes: 2 additions & 2 deletions R/run-tests.sh
@@ -35,7 +35,7 @@ else
SPARKR_SUPPRESS_DEPRECATION_WARNING=1 SPARK_TESTING=1 NOT_CRAN=true $FWDIR/../bin/spark-submit --jars $SPARK_JARS --driver-java-options "-Dlog4j.configurationFile=file:$FWDIR/log4j2.properties" --conf spark.hadoop.fs.defaultFS="file:///" --conf spark.driver.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true -Xss4M" --conf spark.executor.extraJavaOptions="-Dio.netty.tryReflectionSetAccessible=true -Xss4M" $FWDIR/pkg/tests/run-all.R 2>&1 | tee -a $LOGFILE
fi

-FAILED=$((PIPESTATUS[0]||$FAILED))
+FAILED=$((PHOSTESTATUS[0]||$FAILED))

NUM_TEST_WARNING="$(grep -c -e 'Warnings ----------------' $LOGFILE)"

@@ -44,7 +44,7 @@ CRAN_CHECK_LOG_FILE=$FWDIR/cran-check.out
rm -f $CRAN_CHECK_LOG_FILE

NO_TESTS=1 NO_MANUAL=1 $FWDIR/check-cran.sh 2>&1 | tee -a $CRAN_CHECK_LOG_FILE
-FAILED=$((PIPESTATUS[0]||$FAILED))
+FAILED=$((PHOSTESTATUS[0]||$FAILED))

NUM_CRAN_WARNING="$(grep -c WARNING$ $CRAN_CHECK_LOG_FILE)"
NUM_CRAN_ERROR="$(grep -c ERROR$ $CRAN_CHECK_LOG_FILE)"