Comparing changes
base repository: databricks/spark-redshift
base: master
head repository: databricks/spark-redshift
compare: branch-1.x
- 11 commits
- 15 files changed
- 7 contributors
Commits on Aug 4, 2016
-
Custom JDBC column types back-port
Back-port JDBC column types to 1.x branch. Author: Marc-Andre Tremblay <marcandre.tr@gmail.com> Closes #247 from nrstott/feature/custom-jdbc-column-types.
Commit: 5ed63ad
-
Author: Mathias Bogaert <mathias.bogaert@gmail.com> Closes #251 from analytically/patch-1.
Commit: 237f947
Commits on Aug 20, 2016
-
Commit: ebe5e1a
-
Handle invalid S3 hostname exceptions with older aws-java-sdk versions
We've seen a lot of messages lately regarding the "Invalid S3 URI: hostname does not appear to be a valid S3 endpoint" exception, so we thought we should contribute our two cents and the code changes that worked for us. We've tried many of the approaches listed in that thread, including using the `spark.executor.extraClassPath` and `spark.driver.extraClassPath` settings to prepend to the classpath, and including the newer SDK in the assembled jar or as a shaded jar. Unfortunately, many of these approaches failed, mainly because the machines themselves carry the older aws-java-sdk jar, which usually takes precedence. We ended up going with what Josh mentioned earlier about changing the S3 URL in the spark-redshift code to add the endpoint to the host (`*.s3.amazonaws.com`). This logic will try to instantiate a new AmazonS3URI and, if that fails, it'll try to add the default S3 Amazon domain to the host. Author: James Hou <jameshou@data101.udemy.com> Author: James Hou <james.hou@gmail.com> Author: Josh Rosen <joshrosen@databricks.com> Closes #254 from jameshou/feature/add-s3-full-endpoint-v1.
Commit: 8782802
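The host-rewriting fallback described above can be sketched roughly as follows. This is an illustrative reconstruction, not the patch itself: the helper name `addEndpointToUrl` and the use of `java.net.URI` (in place of the SDK's `AmazonS3URI`, which would require aws-java-sdk on the classpath) are assumptions.

```scala
import java.net.URI

object S3UriFix {
  // Rewrite a bare-bucket S3 URL such as s3n://my-bucket/path so that the
  // host carries the default Amazon S3 endpoint: s3n://my-bucket.s3.amazonaws.com/path.
  // Intended as the fallback for when AmazonS3URI parsing fails.
  def addEndpointToUrl(url: String, domain: String = "s3.amazonaws.com"): String = {
    val uri = new URI(url)
    val hostWithEndpoint = uri.getHost + "." + domain
    new URI(uri.getScheme, uri.getUserInfo, hostWithEndpoint, uri.getPort,
      uri.getPath, uri.getQuery, uri.getFragment).toString
  }
}
```

In this sketch, only the host component is changed; scheme, path, query, and fragment pass through untouched, so `s3n`/`s3a` URLs keep their scheme.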
Commits on Aug 21, 2016
-
Commit: 34d03e9
-
Commit: 33fa626
-
Commit: 6e4ecd4
Commits on Sep 8, 2016
-
Commit: ad2498d
Commits on Sep 21, 2016
-
Fix reading of NaN and Infinity
This patch fixes a bug which caused `spark-redshift` to throw `NumberFormatException` when reading NaN or Infinity from Redshift. This patch fixes the bug by adding special-case handling of the string constants `nan`, `inf`, and `-inf`, which are the values sent back by Redshift during unloads. Note that we still do not support loads of `NaN` to Redshift since Redshift itself does not seem to support this yet (https://forums.aws.amazon.com/thread.jspa?threadID=236367). Fixes #261. Author: Josh Rosen <joshrosen@databricks.com> Closes #269 from JoshRosen/fix-nan.
Commit: 0abd27d
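The special-case handling described above can be sketched as a small conversion function: map the string constants Redshift emits during unloads to the corresponding `Double` values before falling back to a plain numeric parse. The object and method names here are illustrative, not the patch's own.

```scala
object RedshiftDoubleParser {
  // Redshift UNLOAD represents non-finite doubles as the strings
  // "nan", "inf", and "-inf"; plain String.toDouble rejects these
  // and throws NumberFormatException, hence the special cases.
  def parseDouble(s: String): Double = s match {
    case "nan"  => Double.NaN
    case "inf"  => Double.PositiveInfinity
    case "-inf" => Double.NegativeInfinity
    case other  => other.toDouble
  }
}
```

Any other non-numeric string still throws `NumberFormatException`, matching the behavior for genuinely malformed input.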
Commits on Oct 18, 2016
-
Updated README.md with sample code to read a Redshift table in SparkR
Added sample code for reading a Redshift table with SparkR. Author: Ganesh Chand <ganeshchand@gmail.com> Author: Josh Rosen <joshrosen@databricks.com> Closes #282 from ganeshchand/patch-1.
Commit: 9f12f3f
Commits on Nov 16, 2016
-
Wrap and re-throw Await.result exceptions in order to capture full stacktrace
Exceptions thrown from Scala's `Await.result` don't include the waiting thread's stacktrace, making it hard to figure out where errors occur. Similar to the fix implemented in Spark in apache/spark#12433, this patch modifies our `Await.result` usages to wrap and rethrow exceptions to capture the calling thread's stack. Author: Josh Rosen <joshrosen@databricks.com> Closes #299 from JoshRosen/better-error-reporting. (cherry picked from commit b4c6053) Signed-off-by: Josh Rosen <joshrosen@databricks.com>
Commit: 647f678
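The wrap-and-rethrow pattern described above can be sketched as follows, modeled on the `ThreadUtils.awaitResult` approach from apache/spark#12433. The `AwaitUtils` object and the wrapping exception type are assumptions for illustration, not necessarily this patch's exact code.

```scala
import scala.concurrent.{Await, Awaitable}
import scala.concurrent.duration.Duration
import scala.util.control.NonFatal

object AwaitUtils {
  // Await.result rethrows the future's failure on the waiting thread,
  // but that exception carries only the stack from where it was
  // originally thrown. Wrapping it in a new exception records the
  // calling thread's stack too, with the original kept as the cause.
  def awaitResult[T](awaitable: Awaitable[T], atMost: Duration): T = {
    try {
      Await.result(awaitable, atMost)
    } catch {
      case NonFatal(t) =>
        throw new RuntimeException("Exception thrown in awaitResult: ", t)
    }
  }
}
```

Callers inspect `getCause` on the wrapped exception to recover the original failure; both stack traces then appear in logs.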
This comparison is taking too long to generate.
Unfortunately it looks like we can’t render this comparison for you right now. It might be too big, or there might be something weird with your repository.
You can try running this command locally to see the comparison on your machine:
git diff master...branch-1.x