#Scala #akka @ApacheSpark
- Python job support! (mostly @mattinbits, some @CalebFenton)
- Some settings like job-jar-paths renamed to job-bin-paths (@nikkiSproutling)
- New Scala API! The expanded API includes passing in a `JobEnvironment`, which provides access to the job ID and context information; it also offers more type safety for the job return type, validation, etc. (@velvia, others) (see the sketch after this list)
- Spark 2.0 support (see spark-2.0-preview branch) (@f1yegor, @addisonj)
- Changed minor HTTP responses to be all JSON (@roarking)
- Deprecate JobFileDAO. H2 is now the default. Furthermore, H2 and JobFileDAO cannot be used with context-per-jvm. (@noorul)
- Context creation API (POST /contexts) now takes a config in the POST body! (@casey-green) (see the example after this list)
- Make the create-job API return more information (@noorul)
- Upgrade Slick to 3.1.1 (@hntd187)
- Fix broken links to EC2 scripts, #369 (@noorul)
- EC2 VPC deploy fixes (@mcnels)
- Only set spark.executor.uri if env var is set (@addisonj)
- Add Scala version to Docker image (@mariussoutier)
- Integrate coverage into CI using the codecov.io service (@hntd187)
- Remove Akka dependency from the api module (@f1yegor)
- Eliminate POST and DELETE /job race conditions (@addisonj)
- Improve the Chinese documentation (@stanzhai)
- Return error if data file can't be deleted (@CalebFenton)
- Make dbcp optional, default: disabled (@noorul)
- Fix for logging issue in dev mode, #475 (@noorul)
- Fix flaky tests (@TimMaltGermany)
- Increase size of config/input that can be submitted via custom Akka serializer (@rmettu-rms)
- Update build plugins (@hntd187)
- README fix (@Vincibean)
- Ensure Scala compiler dependency has the correct version (@aganim-chariot)
- Docs for YARN queue config option (@ash211)
- Make JMX port configurable (@casey-green)
- Update test description, PR #481 (@oranda)
- Per-user authenticated contexts, PR #469 (@TimMaltGermany)
- Fix for Delete file API issue, #507 (@noorul)
- Fix UI not showing running jobs when completed jobs fill the limit, #547 (@TianLangStudio)
- Fix Jar name issue on Windows (@TianLangStudio)
- Forked JVM processes must have their own JobDAO, #353; first step towards HA (@noorul)
- Change cluster status of removed contexts to down (@derSascha)
- Include Python exception stacktrace on failure (@CalebFenton)
- UI: use relative paths so it works when running under a context path (@sjoerdmulder)
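
Below is a minimal, illustrative sketch of a job written against the new Scala API mentioned above. The trait and member names used here (`SparkJob`, `JobEnvironment.jobId`, `SingleProblem`, and the Scalactic `Or`/`Every` validation types) are assumptions based on the 0.7.x `api` module and may differ slightly from the released code; treat it as a sketch rather than the definitive API.

```scala
import com.typesafe.config.Config
import org.apache.spark.SparkContext
import org.scalactic._
import scala.collection.JavaConverters._
import spark.jobserver.api.{JobEnvironment, SingleProblem, SparkJob, ValidationProblem}

// A simple word-count job against the (assumed) new typed API.
object WordCountJob extends SparkJob {
  type JobData = Seq[String]         // validated input extracted from the job config
  type JobOutput = Map[String, Long] // typed result returned to the caller

  // runJob receives a JobEnvironment, which exposes e.g. the job ID and context info.
  def runJob(sc: SparkContext, runtime: JobEnvironment, data: JobData): JobOutput = {
    println(s"Running job ${runtime.jobId}")
    sc.parallelize(data).countByValue().toMap
  }

  // validate turns the raw config into typed JobData, or reports a validation problem.
  def validate(sc: SparkContext, runtime: JobEnvironment,
               config: Config): JobData Or Every[ValidationProblem] = {
    if (config.hasPath("input.strings"))
      Good(config.getStringList("input.strings").asScala.toSeq)
    else
      Bad(One(SingleProblem("config parameter input.strings is missing")))
  }
}
```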
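
And here is one way to exercise the new context-creation behaviour (config in the POST body) from plain Scala, using only the JDK's `HttpURLConnection`. The host/port, context name, query parameters, and HOCON-style body below are illustrative defaults, not taken from this changelog; adjust them for your deployment.

```scala
import java.net.{HttpURLConnection, URL}
import java.nio.charset.StandardCharsets

object CreateContextWithConfig extends App {
  // POST /contexts/<name>; extra Spark settings go in the request body rather than
  // only as query parameters.
  val url = new URL("http://localhost:8090/contexts/my-context?num-cpu-cores=2&memory-per-node=512m")
  val body =
    """spark.scheduler.mode = FAIR
      |spark.executor.memory = 512m
      |""".stripMargin

  val conn = url.openConnection().asInstanceOf[HttpURLConnection]
  conn.setRequestMethod("POST")
  conn.setDoOutput(true)
  conn.getOutputStream.write(body.getBytes(StandardCharsets.UTF_8))

  println(s"HTTP ${conn.getResponseCode} ${conn.getResponseMessage}")
  conn.disconnect()
}
```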