
SPARK-2686 Add Length and OctetLen support to Spark SQL #1586


Closed
wants to merge 23 commits into from

Conversation

javadba
Contributor

@javadba javadba commented Jul 25, 2014

Syntactic, parsing, and operational support have been added for LEN(GTH) and OCTET_LEN functions.
Examples:
SQL:
import org.apache.spark.sql._
case class TestData(key: Int, value: String)
val sqlc = new SQLContext(sc)
import sqlc._
val testData: SchemaRDD = sqlc.sparkContext.parallelize(
(1 to 100).map(i => TestData(i, i.toString)))
testData.registerAsTable("testData")
sqlc.sql("select length(key) as key_len from testData order by key_len desc limit 5").collect
res12: Array[org.apache.spark.sql.Row] = Array([3], [2], [2], [2], [2])
HQL:
val hc = new org.apache.spark.sql.hive.HiveContext(sc)
import hc._
hql("select length(grp) from simplex").collect
res14: Array[org.apache.spark.sql.Row] = Array([6], [6], [6], [6])
As far as codebase changes: they have been purposely made similar to the ones made for adding SUBSTR(ING) on July 17:
SQLParser, Optimizer, Expression, stringOperations, and HiveQL were the main classes changed. The testing suites affected are ConstantFolding and ExpressionEvaluation.
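As a rough illustration of the evaluation semantics described above (a simplified standalone sketch, not the PR's actual Catalyst expression classes), the core of a LENGTH-style expression is a null-preserving map over the child value:

```scala
// Simplified sketch of LENGTH evaluation semantics (hypothetical helper,
// not the actual UnaryExpression implementation from this PR).
def evalLength(child: Any): Any = child match {
  case null      => null                  // SQL semantics: length(NULL) is NULL
  case s: String => s.length              // character count for strings
  case other     => other.toString.length // other datatypes via their string form
}
```

This mirrors the example above, where length(key) on an Int column yields the digit count of the value.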

@AmplabJenkins

Can one of the admins verify this patch?

override def foldable = child.foldable
def nullable = child.nullable

override def eval(input: Row): EvaluatedType = {
Contributor

I'd just put this in Strlen since that is the only place it is used.

@marmbrus
Contributor

test this please

Substring(nodeToExpr(string), nodeToExpr(pos), nodeToExpr(length))
case Token("TOK_FUNCTION", Token(LENGTH(), Nil) :: string :: Nil) =>
Length(nodeToExpr(string))
// case Token("TOK_FUNCTION", Token(STRLEN(), Nil) :: string :: Nil) =>
Contributor

remove

@marmbrus
Contributor

Thanks for doing this! A few minor comments.

@SparkQA

SparkQA commented Jul 25, 2014

QA tests have started for PR 1586. This patch merges cleanly.
View progress: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17181/consoleFull

@SparkQA

SparkQA commented Jul 25, 2014

QA results for PR 1586:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
trait LengthExpression {
case class Length(child: Expression) extends UnaryExpression with LengthExpression {
case class Strlen(child: Expression, encoding : Expression) extends UnaryExpression with LengthExpression {

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17181/consoleFull

@marmbrus
Contributor

BTW, there are also a few style errors. You can find them locally by running sbt scalastyle.

@javadba
Contributor Author

javadba commented Jul 25, 2014

Thanks for the review Michael! I agree with / will apply all of your comments and will re-run with sbt scalastyle. Question: from https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17181/consoleFull there is a message in the Jenkins output saying that unit tests failed, but I cannot find any information on which tests failed. (I had run and re-run the sql/core and sql/catalyst tests before submitting the PR and they were passing.)

@marmbrus
Contributor

It doesn't get to unit tests if the style check fails.

@javadba
Contributor Author

javadba commented Jul 27, 2014

After a fair bit of struggling with testing inconsistencies and maven and git, I have the updates in place. Please take a look whenever you have a chance - no rush ;)

@marmbrus
Contributor

test this please

@SparkQA

SparkQA commented Jul 27, 2014

QA tests have started for PR 1586. This patch merges cleanly.
View progress: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17246/consoleFull

@SparkQA

SparkQA commented Jul 27, 2014

QA results for PR 1586:
- This patch FAILED unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
case class Length(child: Expression) extends UnaryExpression {
case class Strlen(child: Expression, encoding : Expression) extends UnaryExpression {

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17246/consoleFull

@javadba
Contributor Author

javadba commented Jul 27, 2014

The updated code got caught by one of the cases in the Hive compatibility suite.

The Hive UDF length calculation appears to differ from the newly implemented one, presumably due to differences in character-encoding handling. For the fix: I will make the length() function use the same character encoding as Hive to keep it compatible. The strlen() method will be the "outlet" permitting flexible handling of multi-byte character sets in the general RDD (no strlen method is defined in Hive proper).
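To illustrate the encoding point (a minimal sketch; the sample string and the UTF-8 choice are assumptions for illustration, not taken from the PR's test data): character count and octet count diverge as soon as a string contains multi-byte characters.

```scala
// Character length vs. octet length for a string with multi-byte characters.
val s = "résumé"                          // contains two non-ASCII characters
val charLen = s.length                    // counts characters
val utf8Len = s.getBytes("UTF-8").length  // counts octets ("é" is 2 bytes in UTF-8)
```

A character-based length() agrees with Hive on this input (6), while a byte-oriented count (8) does not, which is consistent with the udf_length failure above.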

I am going to roll back just the hive portion of the commit, and will report back end of evening.

udf_length *** FAILED ***
[info] Results do not match for udf_length:
[info] SELECT length(dest1.name) FROM dest1
[info] == Logical Plan ==
[info] Project [Length(name#41188) AS c_0#41186]
[info] MetastoreRelation default, dest1, None
[info]
[info] == Optimized Logical Plan ==
[info] Project [Length(name#41188) AS c_0#41186]
[info] MetastoreRelation default, dest1, None
[info]
[info] == Physical Plan ==
[info] Project [Length(name#41188:0) AS c_0#41186]
[info] HiveTableScan [name#41188], (MetastoreRelation default, dest1, None), None
[info] c_0
[info] !== HIVE - 1 row(s) == == CATALYST - 1 row(s) ==
[info] !2 6 (HiveComparisonTest.scala:366)

override def eval(input: Row): EvaluatedType = {
val string = child.eval(input)
if (string == null) {
null.asInstanceOf[DataType]
Member

Hi, I think asInstanceOf[DataType] is not needed.
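A quick check supports this point (a minimal sketch): casting null to a reference type is a no-op at runtime because of erasure, so the bare null is equivalent and clearer.

```scala
// Casting null yields the same null value; the cast adds nothing at runtime.
val withCast: Any = null.asInstanceOf[String]
val bare: Any = null
```

Both values compare equal, so eval can simply return null for null input.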

Contributor Author

Hi Ueshin, I will try it the way you suggest here.

@chenghao-intel
Contributor

That's a very useful feature: getting the string length for different character sets. Since most of the code is quite similar between Length and StrLen, can we eliminate Length and use StrLen with a default character set name instead?
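A sketch of the consolidation being suggested (hypothetical API; the choice that the default reproduces character-count semantics is an assumption): one function where omitting the character set behaves like plain length(), and supplying one gives the octet count.

```scala
// Hypothetical merged function: no charset behaves like length(),
// an explicit charset behaves like strlen(s, charset).
def strlen(s: String, charset: Option[String] = None): Any =
  if (s == null) null
  else charset match {
    case None     => s.length              // character count (length() behavior)
    case Some(cs) => s.getBytes(cs).length // octet count in the given charset
  }
```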

@javadba
Contributor Author

javadba commented Jul 28, 2014

@chenghao-intel Let us keep both length and strlen: they serve different purposes. The length operation may be applied to any datatype. The tests include examples such as "select max(length(s)) from testData".

@javadba
Contributor Author

javadba commented Jul 29, 2014

All tests passing

mvn -Pyarn -Phadoop-2.3 -Phive test

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Spark Project Parent POM .......................... SUCCESS [2.065s]
[INFO] Spark Project Core ................................ SUCCESS [17:30.416s]
[INFO] Spark Project Bagel ............................... SUCCESS [21.431s]
[INFO] Spark Project GraphX .............................. SUCCESS [2:13.008s]
[INFO] Spark Project ML Library .......................... SUCCESS [5:29.677s]
[INFO] Spark Project Streaming ........................... SUCCESS [7:17.728s]
[INFO] Spark Project Tools ............................... SUCCESS [3.675s]
[INFO] Spark Project Catalyst ............................ SUCCESS [8.714s]
[INFO] Spark Project SQL ................................. SUCCESS [1:56.384s]
[INFO] Spark Project Hive ................................ SUCCESS [2:44:50.515s]
[INFO] Spark Project REPL ................................ SUCCESS [1:09.897s]
[INFO] Spark Project YARN Parent POM ..................... SUCCESS [2.720s]
[INFO] Spark Project YARN Stable API ..................... SUCCESS [9.891s]
[INFO] Spark Project Assembly ............................ SUCCESS [0.628s]
[INFO] Spark Project External Twitter .................... SUCCESS [9.825s]
[INFO] Spark Project External Kafka ...................... SUCCESS [10.803s]
[INFO] Spark Project External Flume ...................... SUCCESS [24.332s]
[INFO] Spark Project External ZeroMQ ..................... SUCCESS [9.918s]
[INFO] Spark Project External MQTT ....................... SUCCESS [9.112s]
[INFO] Spark Project Examples ............................ SUCCESS [13.890s]
[INFO] ------------------------------------------------------------------------

@marmbrus
Contributor

test this please

@SparkQA

SparkQA commented Jul 29, 2014

QA tests have started for PR 1586. This patch merges cleanly.
View progress: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17323/consoleFull

if (string == null) {
null
} else if (!string.isInstanceOf[String]) {
throw new IllegalArgumentException(s"Non-string value [$string] provided to strlen")
Contributor

@marmbrus do you think it would be more reasonable to put the child data type checking into resolved? I am not sure we do the right thing for the existing expressions.

Contributor

@javadba I think you probably also need to update the rules in HiveTypeCoercion, which will insert a Cast expression if the child expression types are not satisfied; then you won't need the child data type checking here.
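The coercion idea can be sketched on a toy expression tree (this is not Catalyst's actual HiveTypeCoercion, just an illustration of the rule shape, with all names hypothetical): a rewrite pass wraps non-string children in a cast node, so the expression's eval never needs its own type check.

```scala
// Toy expression tree and a coercion rule that inserts a cast node.
sealed trait Expr
case class Lit(v: Any) extends Expr
case class CastToString(child: Expr) extends Expr
case class Length(child: Expr) extends Expr

def coerce(e: Expr): Expr = e match {
  // Non-string literal child: wrap it so Length only ever sees strings.
  case Length(Lit(v)) if !v.isInstanceOf[String] => Length(CastToString(Lit(v)))
  case other => other
}
```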

@javadba
Contributor Author

javadba commented Aug 4, 2014

I have narrowed the problem down to the SQLParser. I will update when the precise cause is determined, likely within the hour.

@javadba
Contributor Author

javadba commented Aug 4, 2014

Surprising result here: the following change makes this work:

StackOverflowError:

  protected val OCTET_LENGTH = Keyword("OCTET_LENGTH")

Works fine:

 protected val OCTET_LENGTH = Keyword("OCTET_LEN")

Also works fine:

 protected val OCTET_LENGTH = Keyword("OCTET_LENG")

Let's double check - make sure this really repeatably fails on "OCTET_LENGTH":

And Yes! It does fail again with OCTET_LENGTH. We have a clear test failure scenario.

So for now we need to use OCTET/CHAR_LEN and NOT OCTET/CHAR_LENGTH - until the root cause of this unrelated parser bug is found!

Should I open a separate JIRA for the parser bug?

BTW my theory is that something goes wrong when one KEYWORD contains another KEYWORD as a prefix. But OTOH the LEN keyword is not causing an issue, so this is a subtle case to understand.
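A toy analogy for that prefix-keyword theory (ordinary regex alternation, not the actual Scala parser-combinator grammar, and not claimed to be the root cause of the SOF): when one keyword is a prefix of another in an ordered alternation, the shorter one can win and leave the tail unconsumed.

```scala
// Ordered alternation: "OCTET_LEN" is tried first and matches as a prefix
// of "OCTET_LENGTH", leaving "GTH(x)" unconsumed for the rest of the parse.
val alts = "(OCTET_LEN|OCTET_LENGTH)".r
val matched = alts.findPrefixOf("OCTET_LENGTH(x)")
```

In a combinator grammar the analogous fix is usually to order alternatives longest-first (or use a longest-match combinator) so the longer keyword is tried before its prefix.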

@javadba javadba changed the title SPARK-2686 Add Length and Strlen support to Spark SQL SPARK-2686 Add Length and OctetLen support to Spark SQL Aug 4, 2014
@javadba
Contributor Author

javadba commented Aug 4, 2014

The rename change was committed/pushed and the most germane tests pass. I am re-running full regression. One thing I have noticed already: the flume-sink external project is failing - looks to be unrelated to any of my work. But I am looking into it.

@javadba
Contributor Author

javadba commented Aug 4, 2014

Hi,
For some reason the CORE module testing has ballooned in overall testing time: it took over 7.5 hours to run. There was one timeout error out of 736 tests - and it is quite unlikely to have anything to do with the code added in this PR.

Here is the test that failed and then the overall results:

DriverSuite:
 Spark assembly has been built with Hive, including Datanucleus jars on classpath
 - driver should exit after finishing *** FAILED ***
   TestFailedDueToTimeoutException was thrown during property evaluation.    (DriverSuite.scala:40)
     Message: The code passed to failAfter did not complete within 60 seconds.
     Location: (DriverSuite.scala:41)
      Occurred at table row 0 (zero based, not counting headings), which had values (
       master = local
     )

 Tests: succeeded 723, failed 1, canceled 0, ignored 7, pending 0
 *** 1 TEST FAILED ***
 [INFO] ------------------------------------------------------------------------
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Spark Project Parent POM ........................... SUCCESS [  1.180 s]
 [INFO] Spark Project Core ................................. FAILURE [  07:35 h]

So I am not presently in a position to run regression tests, given that the overall runtime would be double-digit hours. Would someone please run Jenkins on this code?

@marmbrus
Contributor

marmbrus commented Aug 5, 2014

test this please

@SparkQA

SparkQA commented Aug 5, 2014

QA tests have started for PR 1586. This patch merges cleanly.
View progress: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17902/consoleFull

@SparkQA

SparkQA commented Aug 5, 2014

QA results for PR 1586:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
case class Length(child: Expression) extends UnaryExpression {
case class OctetLength(child: Expression, encoding : Expression) extends UnaryExpression

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/17902/consoleFull

@ueshin
Member

ueshin commented Aug 5, 2014

Hi @javadba, I tested org.apache.spark.sql.SQLQuerySuite and org.apache.spark.sql.hive.execution.HiveQuerySuite locally, and they worked fine even if I reverted the last commit 22eddbc.

@javadba
Contributor Author

javadba commented Aug 5, 2014

@ueshin I repeatably verified that simply changing "OCTET_LEN" to "OCTET_LENGTH" ended up causing an SOF (StackOverflowError). By "repeatably" I mean:

  Set the 'constant'  val OCTET_LENGTH="OCTET_LENGTH"
  observe the error
  change to something like val OCTET_LENGTH="OCTET_LEN" or  val OCTET_LENGTH="OCTET_LENG"
  observe the error has gone away
  Rinse, cleanse, repeat

I have been able to demonstrate this multiple times. Now the regression tests have been run against the modified and reliable code.

Please re-run your tests in a fresh area. I will do the same, but I am hesitant to revert because we have positive test results now with the latest commit (as well as my results of the problem before the commit).

@javadba
Contributor Author

javadba commented Aug 5, 2014

@ueshin

I have git clone'd to a completely new area, and I reverted my last commit.

git clone https://github.com/javadba/spark.git strlen2 
cd strlen2
git checkout strlen
git log
 # Note the Hash of the last commit: in this case it is 22eddbce6a201c8f5b5c31859ceb972e60657377
 mvn -DskipTests  -Pyarn -Phive -Phadoop-2.3 clean compile package
 mvn  -Pyarn -Phive -Phadoop-2.3 test -DwildcardSuites=org.apache.spark.sql.hive.execution.HiveQuerySuite,org.apache.spark.sql.SQLQuerySuite,org.apache.spark.sql.catalyst.expressions.ExpressionEvaluationSuite

I get precisely the same error:

HiveQuerySuite:
21:03:31.120 WARN org.apache.spark.util.Utils: Your hostname, mithril resolves to a loopback address: 127.0.1.1; using 10.0.0.33 instead (on interface eth0)
21:03:31.121 WARN org.apache.spark.util.Utils: Set SPARK_LOCAL_IP if you need to bind to another address
21:03:37.294 WARN org.apache.hadoop.util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21:03:40.045 WARN com.jolbox.bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
21:03:49.464 WARN com.jolbox.bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
21:03:49.487 WARN org.apache.hadoop.hive.metastore.ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 0.12.0
21:03:57.157 WARN com.jolbox.bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
21:03:57.593 WARN com.jolbox.bonecp.BoneCPConfig: Max Connections < 1. Setting to 20
- single case
- double case
- case else null
- having no references
- boolean = number
- CREATE TABLE AS runs once
- between
- div
- division
*** RUN ABORTED ***
  java.lang.StackOverflowError:
  at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
  at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
  at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
  at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
  at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
  at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
  at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
  at scala.util.parsing.combinator.Parsers$$anon$3.apply(Parsers.scala:222)
  at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)
  at scala.util.parsing.combinator.Parsers$Parser$$anonfun$append$1.apply(Parsers.scala:254)

Now, let's revert the revert :
git log
commit db09cd132c2d7e995287eea54f3415726934138c
Author: Stephen Boesch
Date: Mon Aug 4 20:54:24 2014 -0700

  Revert "Use Octet/Char_Len instead of Octet/Char_length due to apparent preexisting spark ParserCombinator bug."

This reverts commit 22eddbce6a201c8f5b5c31859ceb972e60657377.
git revert db09cd132c2d7e995287eea54f3415726934138c
mvn  -Pyarn -Phive -Phadoop-2.3 test -DwildcardSuites=org.apache.spark.sql.hive.execution.HiveQuerySuite,org.apache.spark.sql.SQLQuerySuite,org.apache.spark.sql.catalyst.expressions.ExpressionEvaluationSuite

Now (with the latest commit re-applied) those three test suites pass again (specifically, HiveQuerySuite did not fail).

And .. just to be extra sure here- that we can toggle between pass/fail arbitrary # of times:

commit 602adedc9ca58d99957eb12bd91098ffe904604c
Author: Stephen Boesch <javadba>
Date:   Mon Aug 4 21:18:53 2014 -0700

    Revert "Revert "Use Octet/Char_Len instead of Octet/Char_length due to apparent preexisting spark ParserCombinator bug.""

git revert 602adedc9ca58d99957eb12bd91098ffe904604c    

And once again (with the last commit reverted) the HiveQuerySuite fails with the same error.

Oh ok, let's revert yet again.. So I did yet another revert, and yes the tests PASS again.

BTW after each of these reverts, I manually viewed the SparkSQL.scala file to ensure we are seeing the expected version of "OCTET_LENGTH" vs "OCTET_LEN". Specifically, "OCTET_LENGTH" always results in the SOF failure and "OCTET_LEN" always results in HiveQuerySuite passing.

So I have established clearly the following:
the strlen branch on my fork fails with SOF if we roll back the commit that changes OCTET/CHAR_LENGTH -> OCTET/CHAR_LEN.

The steps shown above should be repeatable by anyone who wants to see for themselves.

@ueshin
Member

ueshin commented Aug 5, 2014

@javadba Thanks for the detail.
Let me replay the sequence.

@ueshin
Member

ueshin commented Aug 5, 2014

@javadba, @marmbrus
I have seen the SOF case sometimes, though not with @javadba's sequence.
I can't identify the exact reason yet, but I suspect this is not related to @javadba's commits and is a more general problem.
I'll file a new issue on JIRA with some sequences to reproduce the problem.

I'd like to know the result of a Jenkins build without the last commit 22eddbc.
If it succeeds, we should use OCTET/CHAR_LENGTH; if it fails, we could use OCTET/CHAR_LEN for now, I think.

@javadba
Contributor Author

javadba commented Aug 8, 2014

I have been waiting here for @ueshin and @marmbrus to decide on next steps. From @ueshin's last comment I have been waiting for a Jenkins build to be run (based off my branch without the last commit). Please clarify if/when that build is going to be run, and what the next steps are.

@marmbrus
Contributor

marmbrus commented Aug 8, 2014

Sorry for the delay. We are a little swamped with the 1.1 release. I will trigger Jenkins. If we are still having issues with the SQL parser, it's probably okay to leave that part out. We are hoping to overhaul that codepath in the near future anyway.

Also, we are going to need to block this PR on https://issues.apache.org/jira/browse/SPARK-2863

@marmbrus
Contributor

marmbrus commented Aug 8, 2014

add to whitelist

@SparkQA

SparkQA commented Aug 8, 2014

QA tests have started for PR 1586. This patch merges cleanly.
View progress: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18209/consoleFull

@javadba
Contributor Author

javadba commented Aug 8, 2014

@marmbrus I am fine with delays on this - I just was unclear whether there was some expectation of action on my part. Overall this is a minor enhancement, but it has generated a non-negligible amount of interaction and effort on your part and the reviewers'. I can understand if this is put on hold.

@SparkQA

SparkQA commented Aug 8, 2014

QA results for PR 1586:
- This patch PASSES unit tests.
- This patch merges cleanly
- This patch adds the following public classes (experimental):
case class Length(child: Expression) extends UnaryExpression {
case class OctetLength(child: Expression, encoding : Expression) extends UnaryExpression

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/18209/consoleFull

@marmbrus
Contributor

Hey @javadba, thanks for all the work on this, especially the time figuring out the surprisingly complicated semantics for this function. Also, sorry for the delay with review/merging! I'd love to add this, but right now I'm concerned that the way we are adding UDFs is unsustainable. I've written up some thoughts on the right way to proceed in SPARK-4867.

Since I'm trying really hard to keep the PR queue small (mostly to help avoid PRs that languish like this one has been), I propose we close this issue for now and reopen once the UDF framework has been updated. I've linked your issue to that one so you or others can use this code as a starting point.

@javadba
Contributor Author

javadba commented Dec 17, 2014

OK Michael thanks for the update.


@marmbrus
Contributor

Mind closing this manually? Our script seems to be missing it.

@SparkQA

SparkQA commented Dec 30, 2014

QA tests have started for PR 1586. This patch DID NOT merge cleanly!
View progress: https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24880/consoleFull

@SparkQA

SparkQA commented Dec 30, 2014

QA results for PR 1586:
- This patch FAILED unit tests.

For more information see test output:
https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/24880/consoleFull

@AmplabJenkins

Test FAILed.
Refer to this link for build results (access rights to CI server needed):
https://amplab.cs.berkeley.edu/jenkins//job/SparkPullRequestBuilder/24880/
Test FAILed.

@javadba javadba closed this Jan 16, 2015
sunchao pushed a commit to sunchao/spark that referenced this pull request Jun 2, 2023

6 participants