Commit 909231d

[SPARK-17003][BUILD][BRANCH-1.6] release-build.sh is missing hive-thriftserver for scala 2.11

## What changes were proposed in this pull request?

hive-thriftserver works with Scala 2.11 (https://issues.apache.org/jira/browse/SPARK-8013), so let's publish the Scala 2.11 artifacts with the `-Phive-thriftserver` flag. This change also fixes the docs.

Author: Yin Huai <yhuai@databricks.com>

Closes #14586 from yhuai/SPARK-16453-branch-1.6.
1 parent b3ecff6 commit 909231d

File tree: 3 files changed (+5, -9 lines)

dev/create-release/release-build.sh (4 additions, 6 deletions)

@@ -80,7 +80,7 @@ NEXUS_PROFILE=d63f592e7eac0 # Profile for Spark staging uploads
 BASE_DIR=$(pwd)
 
 MVN="build/mvn --force"
-PUBLISH_PROFILES="-Pyarn -Phive -Phadoop-2.2"
+PUBLISH_PROFILES="-Pyarn -Phive -Phive-thriftserver -Phadoop-2.2"
 PUBLISH_PROFILES="$PUBLISH_PROFILES -Pspark-ganglia-lgpl -Pkinesis-asl"
 
 rm -rf spark
@@ -187,7 +187,7 @@ if [[ "$1" == "package" ]]; then
   # We increment the Zinc port each time to avoid OOM's and other craziness if multiple builds
   # share the same Zinc server.
   make_binary_release "hadoop1" "-Psparkr -Phadoop-1 -Phive -Phive-thriftserver" "3030" &
-  make_binary_release "hadoop1-scala2.11" "-Psparkr -Phadoop-1 -Phive -Dscala-2.11" "3031" &
+  make_binary_release "hadoop1-scala2.11" "-Psparkr -Phadoop-1 -Phive -Phive-thriftserver -Dscala-2.11" "3031" &
   make_binary_release "cdh4" "-Psparkr -Phadoop-1 -Phive -Phive-thriftserver -Dhadoop.version=2.0.0-mr1-cdh4.2.0" "3032" &
   make_binary_release "hadoop2.3" "-Psparkr -Phadoop-2.3 -Phive -Phive-thriftserver -Pyarn" "3033" &
   make_binary_release "hadoop2.4" "-Psparkr -Phadoop-2.4 -Phive -Phive-thriftserver -Pyarn" "3034" &
@@ -256,8 +256,7 @@ if [[ "$1" == "publish-snapshot" ]]; then
   # Generate random point for Zinc
   export ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
 
-  $MVN -DzincPort=$ZINC_PORT --settings $tmp_settings -DskipTests $PUBLISH_PROFILES \
-    -Phive-thriftserver deploy
+  $MVN -DzincPort=$ZINC_PORT --settings $tmp_settings -DskipTests $PUBLISH_PROFILES deploy
   ./dev/change-scala-version.sh 2.11
   $MVN -DzincPort=$ZINC_PORT -Dscala-2.11 --settings $tmp_settings \
     -DskipTests $PUBLISH_PROFILES clean deploy
@@ -293,8 +292,7 @@ if [[ "$1" == "publish-release" ]]; then
   # Generate random point for Zinc
   export ZINC_PORT=$(python -S -c "import random; print random.randrange(3030,4030)")
 
-  $MVN -DzincPort=$ZINC_PORT -Dmaven.repo.local=$tmp_repo -DskipTests $PUBLISH_PROFILES \
-    -Phive-thriftserver clean install
+  $MVN -DzincPort=$ZINC_PORT -Dmaven.repo.local=$tmp_repo -DskipTests $PUBLISH_PROFILES clean install
 
   ./dev/change-scala-version.sh 2.11
 
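The net effect of the first hunk is that `-Phive-thriftserver` now rides along in the shared `PUBLISH_PROFILES` list, so the later `$MVN ... deploy` / `clean install` call sites no longer append it themselves. A minimal sketch of the resulting assembly (variable name from the script; the `echo` is illustrative only, not part of release-build.sh):

```shell
# Base publish profiles as defined after this commit;
# -Phive-thriftserver is now part of the base list, so every
# publish step (snapshot and release, 2.10 and 2.11) inherits it.
PUBLISH_PROFILES="-Pyarn -Phive -Phive-thriftserver -Phadoop-2.2"
PUBLISH_PROFILES="$PUBLISH_PROFILES -Pspark-ganglia-lgpl -Pkinesis-asl"
echo "$PUBLISH_PROFILES"
```

Folding the profile into the base list also removes the asymmetry where the Scala 2.11 deploy path previously lacked the flag that the 2.10 path received ad hoc.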

docs/building-spark.md (0 additions, 2 deletions)

@@ -129,8 +129,6 @@ To produce a Spark package compiled with Scala 2.11, use the `-Dscala-2.11` prop
     ./dev/change-scala-version.sh 2.11
     mvn -Pyarn -Phadoop-2.4 -Dscala-2.11 -DskipTests clean package
 
-Spark does not yet support its JDBC component for Scala 2.11.
-
 # Spark Tests in Maven
 
 Tests are run by default via the [ScalaTest Maven plugin](http://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin).

python/pyspark/sql/functions.py (1 addition, 1 deletion)

@@ -1299,7 +1299,7 @@ def regexp_extract(str, pattern, idx):
     >>> df = sqlContext.createDataFrame([('100-200',)], ['str'])
     >>> df.select(regexp_extract('str', '(\d+)-(\d+)', 1).alias('d')).collect()
     [Row(d=u'100')]
-    >>> df = spark.createDataFrame([('aaaac',)], ['str'])
+    >>> df = sqlContext.createDataFrame([('aaaac',)], ['str'])
     >>> df.select(regexp_extract('str', '(a+)(b)?(c)', 2).alias('d')).collect()
     [Row(d=u'')]
     """
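The doctest fix keeps the example on `sqlContext`, since branch-1.6 predates the `spark` session entry point used on master. The regex behavior the second doctest relies on can be checked with plain Python `re`, independent of Spark (a sketch, not Spark's implementation): the optional group `(b)?` never participates in the match against `'aaaac'`, and `regexp_extract` renders a non-participating group as an empty string.

```python
import re

# Same pattern as the doctest. Greedy (a+) backs off one 'a'
# so that (c) can match the trailing 'c'; (b)? matches nothing.
m = re.match(r'(a+)(b)?(c)', 'aaaac')
print(m.group(1))  # first group: the run of a's
print(m.group(2))  # None: regexp_extract maps this to u''
print(m.group(3))  # the trailing 'c'
```

This is why the expected doctest output is `[Row(d=u'')]` rather than an error: the group exists in the pattern but captured nothing.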
