SPARK-1387. Update build plugins, avoid plugin version warning, centralize versions #291
Conversation
Merged build triggered.
Merged build started.
Merged build finished. All automated tests passed.
All automated tests passed.
…aven/SBT to define dependency versions that should stay in step.
Merged build triggered.
Merged build started.
Merged build finished. All automated tests passed.
All automated tests passed.
  </issueManagement>

  <prerequisites>
-   <maven>3.0.0</maven>
+   <maven>3.0.4</maven>
Just wondering - why is this needed? Does this mean that users with Maven 3.0.X (X < 4) will need to upgrade?
The Maven versions plugin claimed that some plugin already in use requires Maven >= 3.0.4. I presume it would warn or fail if run with an earlier version, which would suggest that no active devs are running an earlier version, but I can't be sure. This was just a bit of tidiness.

Is anyone out there on Maven < 3.0.4? In general I think it's easy to upgrade; on Linux Maven is just a package, and AFAIK people on OS X usually use brew to update things like this.

If there's a hint that it might cause pain, this can be reverted, but I'm presuming we would already know if someone were using < 3.0.4?
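For reference, a minimum Maven version can be declared in the parent POM in two ways: the `<prerequisites>` element changed in the diff above, or the maven-enforcer-plugin, which fails the build outright on an older Maven. A minimal sketch of the enforcer alternative — the plugin version here is illustrative, not necessarily what Spark pins:

```xml
<!-- Option 1: prerequisites element (what this PR changes) -->
<prerequisites>
  <maven>3.0.4</maven>
</prerequisites>

<!-- Option 2: maven-enforcer-plugin; fails the build on Maven < 3.0.4 -->
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-enforcer-plugin</artifactId>
      <version>1.3.1</version> <!-- illustrative version -->
      <executions>
        <execution>
          <id>enforce-versions</id>
          <goals>
            <goal>enforce</goal>
          </goals>
          <configuration>
            <rules>
              <requireMavenVersion>
                <!-- version range: 3.0.4 or newer -->
                <version>[3.0.4,)</version>
              </requireMavenVersion>
            </rules>
          </configuration>
        </execution>
      </executions>
    </plugin>
  </plugins>
</build>
```

The enforcer approach is stricter: `<prerequisites>` is only consulted by some tooling, while the enforcer rule hard-fails any build run with an older Maven.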
Okay - mind updating the building-with-maven doc then? It currently says:

    Building Spark using Maven Requires Maven 3 (the build process is tested with Maven 3.0.4) and Java 1.6 or newer.

But maybe we can just say:

    Building Spark using Maven requires Maven 3.0.4 or newer and Java 1.6 or newer.
Looks great! Two very minor comments/questions.
Okay I'm just gonna merge this with a minor doc change.
Thanks @pwendell for finishing it off with the doc update -- would have done it if I weren't asleep here!
…alize versions

Another handful of small build changes to organize and standardize a bit, and avoid warnings:

- Update Maven plugin versions for good measure
- Since plugins need Maven 3.0.4 already, require it explicitly (<3.0.4 had some bugs anyway)
- Use variables to define versions across dependencies where they should move in lock step
- ... and make this consistent between Maven/SBT

OK, I also updated the JIRA URL while I was at it here.

Author: Sean Owen <sowen@cloudera.com>

Closes apache#291 from srowen/SPARK-1387 and squashes the following commits:

461eca1 [Sean Owen] Couldn't resist also updating JIRA location to new one
c2d5cc5 [Sean Owen] Update plugins and Maven version; use variables consistently across Maven/SBT to define dependency versions that should stay in step.
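The "use variables to define versions" item refers to Maven properties: a dependency version is declared once in `<properties>` and referenced with `${...}` everywhere it appears, so a coordinated group of artifacts can be bumped by editing a single line. A minimal sketch — the `akka.version` property name and the version value are illustrative, not necessarily the exact names this PR introduced:

```xml
<properties>
  <!-- single source of truth for all Akka artifacts -->
  <akka.version>2.2.3</akka.version>
</properties>

<dependencies>
  <dependency>
    <groupId>com.typesafe.akka</groupId>
    <artifactId>akka-actor</artifactId>
    <version>${akka.version}</version>
  </dependency>
  <dependency>
    <groupId>com.typesafe.akka</groupId>
    <artifactId>akka-remote</artifactId>
    <version>${akka.version}</version>
  </dependency>
</dependencies>
```

On the SBT side, the equivalent is a single `val` (e.g. `val akkaVersion = "2.2.3"`) in the build definition, referenced by each dependency, which is how the commit keeps the two builds in lock step.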