[SPARK-31655][BUILD][3.0] Upgrade snappy-java to 1.1.7.5 #28508

Closed
Changes from all commits
699 commits
a44880b
Preparing Spark release v2.4.2-rc1
cloud-fan Apr 18, 2019
7a8efc8
Preparing development version 2.4.3-SNAPSHOT
cloud-fan Apr 18, 2019
7f64963
[MINOR][TEST] Expand spark-submit test to allow python2/3 executable
srowen Apr 18, 2019
eaa88ae
[SPARK-25079][PYTHON][BRANCH-2.4] update python3 executable to 3.6.x
shaneknapp Apr 19, 2019
6f394a2
[SPARK-24601][SPARK-27051][BACKPORT][CORE] Update to Jackson 2.9.8
srowen Apr 21, 2019
33864a8
[SPARK-27496][CORE] Fatal errors should also be sent back to the sender
zsxwing Apr 22, 2019
3ba71e9
[SPARK-27419][FOLLOWUP][DOCS] Add note about spark.executor.heartbeat…
srowen Apr 22, 2019
4472a9f
[SPARK-27469][BUILD][BRANCH-2.4] Unify commons-beanutils deps to late…
srowen Apr 22, 2019
42cb4a2
[SPARK-27539][SQL] Fix inaccurate aggregate outputRows estimation wit…
pengbo Apr 23, 2019
b615f22
[SPARK-27544][PYTHON][TEST][BRANCH-2.4] Fix Python test script to wor…
dongjoon-hyun Apr 23, 2019
34fd79d
[SPARK-27550][TEST][BRANCH-2.4] Fix `test-dependencies.sh` not to use…
dongjoon-hyun Apr 24, 2019
ca32108
[MINOR][TEST] switch from 2.4.1 to 2.4.2 in HiveExternalCatalogVersio…
cloud-fan Apr 25, 2019
705507f
[SPARK-27494][SS] Null values don't work in Kafka source v2
uncleGen Apr 26, 2019
ed0739a
add missing import and fix compilation
cloud-fan Apr 26, 2019
29a4e04
[SPARK-27563][SQL][TEST] automatically get the latest Spark versions …
cloud-fan Apr 26, 2019
ec53a19
[SPARK-26891][BACKPORT-2.4][YARN] Fixing flaky test in YarnSchedulerB…
attilapiros Apr 26, 2019
fce9b2b
[SPARK-25535][CORE][BRANCH-2.4] Work around bad error handling in com…
Apr 27, 2019
ba9e12d
[SPARK-26745][SQL][TESTS] JsonSuite test case: empty line -> 0 record…
Feb 6, 2019
3d49bd4
[SPARK-24935][SQL][FOLLOWUP] support INIT -> UPDATE -> MERGE -> FINIS…
cloud-fan Apr 30, 2019
1323ddc
Revert "[SPARK-24601][SPARK-27051][BACKPORT][CORE] Update to Jackson …
gatorsmile Apr 30, 2019
c3e32bf
Preparing Spark release v2.4.3-rc1
Apr 30, 2019
5ac2014
Preparing development version 2.4.4-SNAPSHOT
Apr 30, 2019
e417168
[SPARK-26048][SPARK-24530][2.4] Cherrypick all the missing commits to…
cloud-fan May 1, 2019
f29da6b
[SPARK-27626][K8S] Fix `docker-image-tool.sh` to be robust in non-bas…
dongjoon-hyun May 3, 2019
d4eddce
[SPARK-27621][ML] Linear Regression - validate training related param…
May 3, 2019
771da83
[SPARK-27596][SQL] The JDBC 'query' option doesn't work for Oracle da…
dilipbiswal May 6, 2019
b3d30a8
[SPARK-27577][MLLIB] Correct thresholds downsampled in BinaryClassifi…
shishaochen May 7, 2019
2111b59
[SPARK-27624][CORE] Fix CalenderInterval to show an empty interval co…
dongjoon-hyun May 7, 2019
b15866c
[MINOR][DOCS] Fix invalid documentation for StreamingQueryManager Class
asaf400 May 8, 2019
2f16255
[SPARK-25139][SPARK-18406][CORE][2.4] Avoid NonFatals to kill the Exe…
jiangxb1987 May 8, 2019
3726ece
[MINOR][TEST] Fix schema mismatch error
ericl May 10, 2019
4df491c
[SPARK-27672][SQL] Add `since` info to string expressions
HyukjinKwon May 10, 2019
f2cd16f
[SPARK-27673][SQL] Add `since` info to random, regex, null expressions
HyukjinKwon May 10, 2019
95c55f6
[SPARK-27347][MESOS] Fix supervised driver retry logic for outdated t…
samvantran May 10, 2019
aae03ef
[SPARK-27671][SQL] Fix error when casting from a nested null in a struct
viirya May 13, 2019
c50261b
[SPARK-26812][SQL][BACKPORT-2.4] Report correct nullability for compl…
mgaido91 May 14, 2019
fbd2eac
[MINOR][SS] Remove duplicate 'add' in comment of `StructuredSessioniz…
beliefer May 15, 2019
41b0529
[SPARK-27735][SS] Parsing interval string should be case-insensitive …
zsxwing May 16, 2019
046af44
[SPARK-27771][SQL] Add SQL description for grouping functions (cube, …
HyukjinKwon May 20, 2019
4463027
[MINOR][EXAMPLES] Don't use internal Spark logging in user examples
srowen May 20, 2019
694ebb4
[MINOR][DOCS] Fix Spark hive example.
ScrapCodes May 21, 2019
1e2b60f
[SPARK-27726][CORE] Fix performance of ElementTrackingStore deletes w…
May 21, 2019
4d687a5
[SPARK-27800][SQL][DOC] Fix wrong answer of example for BitwiseXor
alex-lx May 22, 2019
fa7c319
[SPARK-27800][SQL][HOTFIX][FOLLOWUP] Fix wrong answer on BitwiseXor t…
dongjoon-hyun May 22, 2019
e0e8a6d
Revert "[SPARK-27539][SQL] Fix inaccurate aggregate outputRows estima…
HyukjinKwon May 23, 2019
e69ad46
Revert "[SPARK-27351][SQL] Wrong outputRows estimation after Aggregat…
HyukjinKwon May 23, 2019
d6ab7e6
[SPARK-26045][BUILD] Leave avro, avro-ipc dependendencies as compile …
srowen May 23, 2019
ec6a08b
Revert "Revert "[SPARK-27351][SQL] Wrong outputRows estimation after …
HyukjinKwon May 23, 2019
fb60066
Revert "Revert "[SPARK-27539][SQL] Fix inaccurate aggregate outputRow…
HyukjinKwon May 23, 2019
80fe1ed
[MINOR][DOC] ForeachBatch doc fix.
gaborgsomogyi May 24, 2019
a287110
[SPARK-27711][CORE] Unset InputFileBlockHolder at the end of tasks
jose-torres May 26, 2019
7223c0e
[SPARK-27441][SQL][TEST] Add read/write tests to Hive serde tables
wangyum May 26, 2019
0d9be28
[SPARK-27858][SQL] Fix for avro deserialization on union types with m…
May 28, 2019
a4bbe02
[SPARK-27657][ML] Fix the log format of ml.util.Instrumentation.logFai…
May 28, 2019
456ecb5
[SPARK-27863][SQL][BACKPORT-2.4] Metadata files and temporary files s…
wangyum May 29, 2019
b876c14
[SPARK-27869][CORE] Redact sensitive information in System Properties…
aaruna May 29, 2019
84bd808
[SPARK-27868][CORE] Better default value and documentation for socket…
May 29, 2019
2adf548
[SPARK-26192][MESOS][2.4] Retrieve enableFetcherCache option from sub…
May 31, 2019
f41ba2a
[SPARK-27794][R][DOCS][BACKPORT] Use https URL for CRAN repo
srowen May 31, 2019
16f2ceb
[SPARK-27896][ML] Fix definition of clustering silhouette coefficient…
srowen May 31, 2019
ee46b0f
Revert "[SPARK-27896][ML] Fix definition of clustering silhouette coe…
dongjoon-hyun Jun 1, 2019
6baed83
[SPARK-27907][SQL] HiveUDAF should return NULL in case of 0 rows
ajithme Jun 2, 2019
6715135
[MINOR][BRANCH-2.4] Avoid hardcoded py4j-0.10.7-src.zip in Scala
HyukjinKwon Jun 3, 2019
880cb7b
[SPARK-27873][SQL][BRANCH-2.4] columnNameOfCorruptRecord should not b…
viirya Jun 4, 2019
9d307dd
[MINOR][DOC] Avro data source documentation change
dmatrix Jun 4, 2019
1a86eb3
[MINOR][SQL] Skip warning if JOB_SUMMARY_LEVEL is set to NONE
jmsanders Jun 5, 2019
ad23006
[SPARK-27798][SQL][BRANCH-2.4] from_avro shouldn't produces same valu…
viirya Jun 8, 2019
89ca658
[SPARK-27973][MINOR] [EXAMPLES]correct DirectKafkaWordCount usage tex…
cnZach Jun 7, 2019
c961e7c
[SPARK-27917][SQL][BACKPORT-2.4] canonical form of CaseWhen object is…
sandeep-katta Jun 11, 2019
29a39e8
[SPARK-28031][PYSPARK][TEST] Improve doctest on over function of Column
viirya Jun 13, 2019
f94410e
[SPARK-21882][CORE] OutputMetrics doesn't count written bytes correct…
srowen Jun 14, 2019
9ddb6b5
[SPARK-24898][DOC] Adding spark.checkpoint.compress to the docs
sandeepvja Jun 17, 2019
f4efcbf
[SPARK-28058][DOC] Add a note to doc of mode of CSV for column pruning
viirya Jun 18, 2019
e4f5d84
[SPARK-28081][ML] Handle large vocab counts in word2vec
srowen Jun 19, 2019
ba7f61e
[SPARK-26555][SQL][BRANCH-2.4] make ScalaReflection subtype checking …
Jun 20, 2019
f9105c0
[MINOR][DOC] Fix python variance() documentation
tools4origins Jun 20, 2019
4990be9
[SPARK-28093][SPARK-28109][SQL][2.4] Fix TRIM/LTRIM/RTRIM function pa…
wangyum Jun 20, 2019
a71e90a
[SPARK-26038][BRANCH-2.4] Decimal toScalaBigInt/toJavaBigInteger for …
juliuszsompolski Jun 21, 2019
d1a3e4d
[SPARK-27018][CORE] Fix incorrect removal of checkpointed file in Per…
zhengruifeng Jun 24, 2019
e5cc11d
Revert "[SPARK-28093][SPARK-28109][SQL][2.4] Fix TRIM/LTRIM/RTRIM fun…
wangyum Jun 24, 2019
eb97f95
[SPARK-28154][ML][FOLLOWUP] GMM fix double caching
zhengruifeng Jun 25, 2019
680c1b6
[SPARK-27100][SQL][2.4] Use `Array` instead of `Seq` in `FilePartitio…
parthchandra Jun 26, 2019
eb66d3b
[SPARK-28164] Fix usage description of `start-slave.sh`
shivusondur Jun 26, 2019
b477194
[SPARK-28157][CORE][2.4] Make SHS clear KVStore `LogInfo`s for the bl…
dongjoon-hyun Jun 27, 2019
9f9bf13
[SPARK-28160][CORE] Fix a bug that callback function may hang when un…
LantaoJin Jun 30, 2019
d57b392
[SPARK-28170][ML][PYTHON] Uniform Vectors and Matrix documentation
mgaido91 Jul 1, 2019
ec6d0c9
[MINOR] Add requestHeaderSize debug log
gaborgsomogyi Jul 3, 2019
19487cb
[SPARK-28261][CORE] Fix client reuse test
gaborgsomogyi Jul 8, 2019
072e0eb
[SPARK-28308][CORE] CalendarInterval sub-second part should be padded…
dongjoon-hyun Jul 9, 2019
55f92a3
[SPARK-28302][CORE] Make sure to generate unique output file for Spar…
Ngone51 Jul 9, 2019
1abac14
[SPARK-28335][DSTREAMS][TEST] DirectKafkaStreamSuite wait for Kafka a…
gaborgsomogyi Jul 10, 2019
17974e2
[SPARK-28015][SQL] Check stringToDate() consumes entire input for the…
MaxGekk Jul 11, 2019
094a20c
[SPARK-28357][CORE][TEST] Fix Flaky Test - FileAppenderSuite.rollingf…
dongjoon-hyun Jul 12, 2019
1a6a67f
[SPARK-28361][SQL][TEST] Test equality of generated code with id in c…
gatorsmile Jul 12, 2019
98aebf4
[SPARK-28371][SQL] Make Parquet "StartsWith" filter null-safe
Jul 13, 2019
35d5886
[SPARK-28378][PYTHON] Remove usage of cgi.escape
viirya Jul 14, 2019
c9c9eac
[SPARK-28404][SS] Fix negative timeout value in RateStreamContinuousP…
gaborgsomogyi Jul 15, 2019
72f547d
[SPARK-27485] EnsureRequirements.reorder should handle duplicate expr…
hvanhovell Jul 16, 2019
3f5a114
[SPARK-28247][SS][BRANCH-2.4] Fix flaky test "query without test harn…
HeartSaVioR Jul 16, 2019
63898cb
Revert "[SPARK-27485] EnsureRequirements.reorder should handle duplic…
gatorsmile Jul 16, 2019
198f2f3
[SPARK-27485][BRANCH-2.4] EnsureRequirements.reorder should handle du…
hvanhovell Jul 17, 2019
76251c3
[SPARK-28418][PYTHON][SQL] Wait for event process in 'test_query_exec…
HyukjinKwon Jul 17, 2019
5b8b9fb
[SPARK-28430][UI] Fix stage table rendering when some tasks' metrics …
JoshRosen Jul 18, 2019
a7e2de8
[SPARK-28464][DOC][SS] Document Kafka source minPartitions option
Jul 21, 2019
b26e82f
[SPARK-27416][SQL][BRANCH-2.4] UnsafeMapData & UnsafeArrayData Kryo s…
pengbo Jul 22, 2019
c01c294
[SPARK-28468][INFRA][2.4] Upgrade pip to fix `sphinx` install error
dongjoon-hyun Jul 22, 2019
4336d1c
[SPARK-27159][SQL] update mssql server dialect to support binary type
lipzhu Mar 16, 2019
73bb605
[SPARK-27168][SQL][TEST] Add docker integration test for MsSql server
lipzhu Mar 19, 2019
366519d
[SPARK-28496][INFRA] Use branch name instead of tag during dry-run
dongjoon-hyun Jul 24, 2019
98ba2f6
[SPARK-28152][SQL][2.4] Mapped ShortType to SMALLINT and FloatType to…
shivsood Jul 25, 2019
771db3b
[SPARK-28156][SQL][BACKPORT-2.4] Self-join should not miss cached view
bersprockets Jul 25, 2019
59137e2
[SPARK-26995][K8S][2.4] Make ld-linux-x86-64.so.2 visible to snappy n…
LucaCanali Jul 25, 2019
a285c0d
[SPARK-28421][ML] SparseVector.apply performance optimization
zhengruifeng Jul 24, 2019
2c2b102
[MINOR][SQL] Fix log messages of DataWritingSparkTask
dongjoon-hyun Jul 26, 2019
afb7492
[SPARK-28489][SS] Fix a bug that KafkaOffsetRangeCalculator.getRanges…
zsxwing Jul 26, 2019
2e0763b
[SPARK-28535][CORE][TEST] Slow down tasks to de-flake JobCancellation…
Jul 27, 2019
8934560
[SPARK-28545][SQL] Add the hash map size to the directional log of Ob…
dongjoon-hyun Jul 28, 2019
5f4feeb
[SPARK-25474][SQL][2.4] Support `spark.sql.statistics.fallBackToHdfs`…
shahidki31 Jul 29, 2019
9d9c5a5
[SPARK-26152][CORE][2.4] Synchronize Worker Cleanup with Worker Shutdown
ajithme Jul 30, 2019
992b1bb
[MINOR][CORE][DOCS] Fix inconsistent description of showConsoleProgress
beliefer Jul 31, 2019
6a361d4
[SPARK-28564][CORE] Access history application defaults to the last a…
cxzl25 Jul 31, 2019
93f5fb8
[SPARK-24352][CORE][TESTS] De-flake StandaloneDynamicAllocationSuite …
Aug 1, 2019
9c8c8ba
[SPARK-28153][PYTHON][BRANCH-2.4] Use AtomicReference at InputFileBlo…
HyukjinKwon Aug 1, 2019
dc09a02
[SPARK-28582][PYSPARK] Fix flaky test DaemonTests.do_termination_test…
WeichenXu123 Aug 2, 2019
a065a50
Revert "[SPARK-28582][PYSPARK] Fix flaky test DaemonTests.do_terminat…
HyukjinKwon Aug 2, 2019
20e46ef
[SPARK-28582][PYSPARK] Fix flaky test DaemonTests.do_termination_test…
WeichenXu123 Aug 2, 2019
dad1cd6
[MINOR][DOC][SS] Correct description of minPartitions in Kafka option
HeartSaVioR Aug 2, 2019
fe0f53a
Revert "[SPARK-28582][PYSPARK] Fix flaky test DaemonTests.do_terminat…
dongjoon-hyun Aug 2, 2019
be52903
[SPARK-28606][INFRA] Update CRAN key to recover docker image generation
dongjoon-hyun Aug 2, 2019
6c61321
[SPARK-28582][PYTHON] Fix flaky test DaemonTests.do_termination_test …
WeichenXu123 Aug 3, 2019
04d9f8f
[SPARK-28609][DOC] Fix broken styles/links and make up-to-date
dongjoon-hyun Aug 4, 2019
a2bbbf8
[SPARK-28649][INFRA] Add Python .eggs to .gitignore
rvesse Aug 7, 2019
c37abba
[SPARK-28638][WEBUI] Task summary should only contain successful task…
gengliangwang Aug 12, 2019
dfcebca
[SPARK-28713][BUILD][2.4] Bump checkstyle from 8.2 to 8.23
Fokko Aug 13, 2019
5a06584
[SPARK-27234][SS][PYTHON][BRANCH-2.4] Use InheritableThreadLocal for …
HyukjinKwon Aug 15, 2019
97471bd
[MINOR][DOC] Use `Java 8` instead of `Java 8+` as a running environment
dongjoon-hyun Aug 15, 2019
0246f48
[SPARK-28766][R][DOC] Fix CRAN incoming feasibility warning on invali…
dongjoon-hyun Aug 17, 2019
b98a372
[SPARK-28647][WEBUI][2.4] Recover additional metric feature
sarutak Aug 18, 2019
73032a0
Revert "[SPARK-25474][SQL][2.4] Support `spark.sql.statistics.fallBac…
dongjoon-hyun Aug 18, 2019
13f2465
Preparing Spark release v2.4.4-rc1
dongjoon-hyun Aug 19, 2019
5a558a4
Preparing development version 2.4.5-SNAPSHOT
dongjoon-hyun Aug 19, 2019
154b325
[SPARK-28749][TEST][BRANCH-2.4] Fix PySpark tests not to require kafk…
mattf-apache Aug 19, 2019
7e4825c
[SPARK-28775][CORE][TESTS] Skip date 8633 in Kwajalein due to changes…
srowen Aug 20, 2019
75076ff
[SPARK-28777][PYTHON][DOCS] Fix format_string doc string with the cor…
darrentirto Aug 20, 2019
aff5e2b
[SPARK-28650][SS][DOC] Correct explanation of guarantee for ForeachWr…
HeartSaVioR Aug 20, 2019
fd2fe15
[SPARK-26895][CORE][2.4] prepareSubmitEnvironment should be called wi…
abellina Aug 21, 2019
21bba9c
[SPARK-28699][SQL] Disable using radix sort for ShuffleExchangeExec i…
xuanyuanking Aug 21, 2019
ff9339d
[SPARK-28844][SQL] Fix typo in SQLConf FILE_COMRESSION_FACTOR
triplesheep Aug 22, 2019
001e32a
[SPARK-28780][ML][2.4] deprecate LinearSVCModel.setWeightCol
zhengruifeng Aug 22, 2019
e468576
[SPARK-28699][CORE][2.4] Fix a corner case for aborting indeterminate…
xuanyuanking Aug 22, 2019
0415d9d
[SPARK-28642][SQL][2.4] Hide credentials in show create table
wangyum Aug 23, 2019
b913abd
[SPARK-27330][SS][2.4] support task abort in foreach writer
Aug 23, 2019
0a5efc3
[SPARK-28025][SS][2.4] Fix FileContextBasedCheckpointFileManager leak…
HeartSaVioR Aug 23, 2019
e66f9d5
[SPARK-28868][INFRA] Specify Jekyll version to 3.8.6 in release docke…
dongjoon-hyun Aug 25, 2019
b7a15b6
Preparing Spark release v2.4.4-rc2
dongjoon-hyun Aug 25, 2019
3f2eea3
Preparing development version 2.4.5-SNAPSHOT
dongjoon-hyun Aug 25, 2019
2c13dc9
[SPARK-28871][MINOR][DOCS] WaterMark doc fix
Aug 27, 2019
0d0686e
[SPARK-28642][SQL][TEST][FOLLOW-UP] Test spark.sql.redaction.options.…
wangyum Aug 26, 2019
c4bb486
[SPARK-27992][SPARK-28881][PYTHON][2.4] Allow Python to join with con…
HyukjinKwon Aug 27, 2019
7955b39
Preparing Spark release v2.4.4-rc3
dongjoon-hyun Aug 27, 2019
449f319
Preparing development version 2.4.5-SNAPSHOT
dongjoon-hyun Aug 27, 2019
5b6e56b
[SPARK-28778][MESOS][2.4] Fixed executors advertised address in virtu…
Aug 31, 2019
6dc209f
[SPARK-28903][STREAMING][PYSPARK][TESTS] Fix AWS JDK version conflict…
srowen Aug 31, 2019
b4a4616
[SPARK-28951][INFRA] Add release announce template
dongjoon-hyun Sep 2, 2019
446ffb1
[SPARK-28921][BUILD][K8S][2.4] Update kubernetes client to 4.4.2
andygrove Sep 3, 2019
1a5858f
[SPARK-22955][DSTREAMS] - graceful shutdown shouldn't lead to job gen…
Aug 27, 2019
3f3f524
[SPARK-28709][DSTREAMS] Fix StreamingContext leak through Streaming
Aug 26, 2019
0566fd0
[SPARK-28921][K8S][FOLLOWUP] Also bump K8S client version in integrat…
srowen Sep 5, 2019
a1471f9
[SPARK-28977][DOCS][SQL] Fix DataFrameReader.json docs to doc that pa…
srowen Sep 5, 2019
2654c33
[SPARK-28912][STREAMING] Fixed MatchError in getCheckpointFiles()
avkgh Sep 7, 2019
0a4b356
Revert "[SPARK-28912][STREAMING] Fixed MatchError in getCheckpointFil…
gatorsmile Sep 7, 2019
483dcf5
[SPARK-28912][BRANCH-2.4] Fixed MatchError in getCheckpointFiles()
avkgh Sep 9, 2019
9ef48f7
[SPARK-29011][BUILD] Update netty-all from 4.1.30-Final to 4.1.39-Final
n-marion Sep 9, 2019
df55f3c
[SPARK-28657][CORE] Fix currentContext Instance failed sometimes
hddong Sep 9, 2019
92e5216
Revert "[SPARK-28657][CORE] Fix currentContext Instance failed someti…
srowen Sep 9, 2019
75b902f
[SPARK-23519][SQL][2.4] Create view should work from query with dupli…
hemanthmeka Sep 10, 2019
9e6a1b8
[SPARK-28906][BUILD] Fix incorrect information in bin/spark-submit --…
kiszk Sep 11, 2019
ecb2052
[MINOR][DOCS] Fix few typos in the java docs
Sep 12, 2019
56ef752
[SPARK-29073][INFRA][2.4] Add GitHub Action to branch-2.4 for `Scala-…
dongjoon-hyun Sep 13, 2019
c269b19
[SPARK-29075][BUILD] Add enforcer rule to ban duplicated pom dependency
dongjoon-hyun Sep 13, 2019
637a6c2
[SPARK-24663][STREAMING][TESTS] StreamingContextSuite: Wait until slo…
HeartSaVioR Sep 11, 2019
339b0f2
[SPARK-29045][SQL][TESTS] Drop table to avoid test failure in SQLMetr…
LantaoJin Sep 12, 2019
ac10d73
[SPARK-29079][INFRA] Enable GitHub Action on PR
dongjoon-hyun Sep 13, 2019
58ad3e6
[SPARK-26989][CORE][TEST][2.4] DAGSchedulerSuite: ensure listeners ar…
HeartSaVioR Sep 15, 2019
21649e3
[SPARK-27122][CORE][2.4] Jetty classes must not be return via getters…
ajithme Sep 15, 2019
b41795a
[SPARK-29087][CORE][STREAMING] Use DelegatingServletContextHandler to…
dongjoon-hyun Sep 15, 2019
1c57da3
[SPARK-25277][YARN] YARN applicationMaster metrics should not registe…
LucaCanali Dec 13, 2018
68e29ba
[SPARK-29046][SQL][2.4] Fix NPE in SQLConf.get when active SparkConte…
HeartSaVioR Sep 17, 2019
4dedd39
[SPARK-26713][CORE][2.4] Interrupt pipe IO threads in PipedRDD when t…
advancedxy Sep 18, 2019
00589bd
[SPARK-29104][CORE][TESTS] Fix PipedRDDSuite to use `eventually` to c…
dongjoon-hyun Sep 17, 2019
cc0f659
[SPARK-29124][CORE] Use MurmurHash3 `bytesHash(data, seed)` instead o…
dongjoon-hyun Sep 18, 2019
89a065d
[MINOR][SS][DOCS] Adapt multiple watermark policy comment to the reality
bartosz25 Sep 18, 2019
efcca57
[SPARK-29042][CORE][BRANCH-2.4] Sampling-based RDD with unordered inp…
viirya Sep 18, 2019
0770037
[SPARK-28616][INFRA] Improve merge-spark-pr script to warn WIP PRs an…
dongjoon-hyun Aug 5, 2019
60600c8
[SPARK-28857][INFRA] Clean up the comments of PR template during merging
dongjoon-hyun Aug 23, 2019
f146853
[SPARK-28683][BUILD][2.4] Upgrade Scala to 2.12.10
wangyum Sep 19, 2019
92189f2
[SPARK-28683][BUILD][FOLLOW-UP][2.4] Fix javadoc generation issue aft…
wangyum Sep 19, 2019
0e63603
[SPARK-29159][BUILD] Increase ReservedCodeCacheSize to 1G
dongjoon-hyun Sep 19, 2019
267d318
[SPARK-29165][SQL][TEST] Set log level of log generated code as ERROR…
HeartSaVioR Sep 19, 2019
7ea3195
[SPARK-29101][SQL][2.4] Fix count API for csv file when DROPMALFORMED…
sandeep-katta Sep 19, 2019
71b0562
[MINOR][BUILD][2.4] Fix 4 misc build warnings
srowen Sep 20, 2019
026e789
[SPARK-26003][SQL][2.4] Improve SQLAppStatusListener.aggregateMetrics…
mgaido91 Sep 20, 2019
0d26cfc
[SPARK-27460][TESTS][2.4] Running slowest test suites in their own fo…
gengliangwang Sep 20, 2019
25d4b3a
[MINOR][INFRA] Use java-version instead of version for GitHub Action
wangyum Sep 20, 2019
e21c52b
[SPARK-29199][INFRA] Add linters and license/dependency checkers to G…
dongjoon-hyun Sep 21, 2019
1b939ea
[SPARK-19147][CORE] Gracefully handle error in task after executor is…
Sep 21, 2019
c835ccb
[CORE][MINOR] Correct a log message in DAGScheduler
Sep 22, 2019
5fbb65d
[SPARK-29201][INFRA][2.4] Add Hadoop 2.6 combination to GitHub Action
dongjoon-hyun Sep 22, 2019
56cf17e
[SPARK-29177][CORE] fix zombie tasks after stage abort
adrian-wang Sep 23, 2019
bc78f98
[SPARK-29053][WEBUI][2.4] Sort does not work on some columns
amanomer Sep 23, 2019
866f763
fix compilation
cloud-fan Sep 23, 2019
328c4ec
[SPARK-25903][CORE] TimerTask should be synchronized on ContextBarrie…
viirya Sep 23, 2019
03079cd
[SPARK-28599][SQL][2.4] Fix `Duration` column sorting for ThriftServe…
wangyum Sep 23, 2019
05a32ca
[SPARK-28678][DOC] Specify that array indices start at 1 for function…
sheepstop Sep 24, 2019
5267e6e
[SPARK-29229][SQL] Change the additional remote repository in Isolate…
xuanyuanking Sep 24, 2019
e052cd5
[SPARK-23197][STREAMING][TESTS][2.4] Fix ReceiverSuite."receiver_life…
HeartSaVioR Sep 25, 2019
64bb083
[SPARK-29203][SQL][TESTS][2.4] Reduce shuffle partitions in SQLQueryT…
wangyum Sep 26, 2019
3dbe065
[SPARK-29213][SQL] Generate extra IsNotNull predicate in FilterExec
wangshuo128 Sep 27, 2019
361b605
[SPARK-29240][PYTHON] Pass Py4J column instance to support PySpark co…
HyukjinKwon Sep 27, 2019
99e503c
[SPARK-29263][SCHEDULER] Update `availableSlots` in `resourceOffers()…
juliuszsompolski Sep 27, 2019
9ae7393
[SPARK-29263][CORE][TEST][FOLLOWUP][2.4] Fix build failure of `TaskSc…
jiangxb1987 Sep 27, 2019
e12398c
[SPARK-29247][SQL] Redact sensitive information in when construct Hiv…
AngersZhuuuu Sep 29, 2019
7ea4b9f
[SPARK-29186][SQL] AliasIdentifier should be converted to Json in pre…
viirya Sep 30, 2019
332f9da
[SPARK-29186][SQL][2.4][FOLLOWUP] AliasIdentifier should be converted…
viirya Sep 30, 2019
3173439
[SPARK-29055][CORE] Update driver/executors' storage memory when bloc…
HeartSaVioR Oct 1, 2019
9cf7ea6
[SPARK-29244][CORE] Prevent freed page in BytesToBytesMap free again
viirya Oct 1, 2019
66c1d50
[SPARK-29244][CORE][FOLLOWUP] Fix compilation
dongjoon-hyun Oct 1, 2019
fd01c9e
[SPARK-29244][CORE][FOLLOWUP] Fix java lint error due to line length
dongjoon-hyun Oct 1, 2019
1560f6f
[SPARK-29203][TESTS][MINOR][FOLLOW UP] Add access modifier for sparkC…
xuanyuanking Oct 4, 2019
5992e29
[SPARK-29286][PYTHON][TESTS] Uses UTF-8 with 'replace' on errors at P…
HyukjinKwon Oct 4, 2019
daa1749
[SPARK-25753][CORE][2.4] Fix reading small files via BinaryFileRDD
10110346 Oct 4, 2019
008ee63
[SPARK-28938][K8S][2.4] Move to supported OpenJDK docker image for Ku…
viirya Oct 7, 2019
04b3e0e
[MINOR][BUILD] Fix an incorrect path in license file
beliefer Oct 8, 2019
4f46e8f
[SPARK-28917][CORE] Synchronize access to RDD mutable state
squito Oct 8, 2019
80cded3
[SPARK-29410][BUILD] Update commons-beanutils to 1.9.4
peter-toth Oct 12, 2019
b2f96a5
[SPARK-29445][CORE] Bump netty-all from 4.1.39.Final to 4.1.42.Final
Fokko Oct 12, 2019
90139f6
[SPARK-27259][CORE] Allow setting -1 as length for FileBlock
prasha2 Oct 16, 2019
65c0a78
[SPARK-27812][K8S][2.4] Bump K8S client version to 4.6.1
igorcalabria Oct 18, 2019
4d476ed
[SPARK-29494][SQL] Fix for ArrayOutofBoundsException while converting…
rahulsmahadev Oct 18, 2019
b094774
Revert "[SPARK-29494][SQL] Fix for ArrayOutofBoundsException while co…
zsxwing Oct 18, 2019
3d334ac
[SPARK-29494][SQL][2.4] Fix for ArrayOutofBoundsException while conve…
rahulsmahadev Oct 19, 2019
c0101de
[SPARK-28963][BUILD] Fall back to archive.apache.org in build/mvn for…
srowen Sep 4, 2019
92b9706
[SPARK-29556][CORE] Avoid putting request path in error response in E…
srowen Oct 22, 2019
7c9bdd7
[SPARK-29560][BUILD] Add typesafe bintray repo for sbt-mima-plugin
dongjoon-hyun Oct 22, 2019
b1ba6fa
[SPARK-21492][SQL][2.4] Fix memory leak in SortMergeJoin
xuanyuanking Oct 23, 2019
9838df2
[SPARK-21492][SQL][FOLLOW UP] Reimplement UnsafeExternalRowSorter in …
xuanyuanking Oct 24, 2019
ac72b0e
[SPARK-21287][SQL] Remove requirement of fetch_size>=0 from JDBCOptions
fuwhu Oct 24, 2019
be323d2
[SPARK-29530][SQL][2.4] Make SQLConf in SQL parse process thread safe
AngersZhuuuu Oct 25, 2019
f42a40e
[SPARK-29498][SQL][2.4] CatalogTable to HiveTable should not change t…
wangyum Oct 25, 2019
70fc9d9
[SPARK-31655][BUILD][2.4] Upgrade snappy-java to 1.1.7.5
AngersZhuuuu May 12, 2020
The table of contents is too big for display.
60 changes: 60 additions & 0 deletions .github/workflows/branch-2.4.yml
@@ -0,0 +1,60 @@
name: branch-2.4

on:
  push:
    branches:
    - branch-2.4
  pull_request:
    branches:
    - branch-2.4

jobs:
  build:

    runs-on: ubuntu-latest
    strategy:
      matrix:
        scala: [ '2.11', '2.12' ]
        hadoop: [ 'hadoop-2.6', 'hadoop-2.7' ]
    name: Build Spark with Scala ${{ matrix.scala }} / Hadoop ${{ matrix.hadoop }}

    steps:
    - uses: actions/checkout@master
    - name: Set up JDK 8
      uses: actions/setup-java@v1
      with:
        java-version: '1.8'
    - name: Change to Scala ${{ matrix.scala }}
      run: |
        dev/change-scala-version.sh ${{ matrix.scala }}
    - name: Build with Maven
      run: |
        export MAVEN_OPTS="-Xmx2g -XX:ReservedCodeCacheSize=512m -Dorg.slf4j.simpleLogger.defaultLogLevel=WARN"
        export MAVEN_CLI_OPTS="--no-transfer-progress"
        ./build/mvn $MAVEN_CLI_OPTS -DskipTests -Pyarn -Pmesos -Pkubernetes -Phive -Phive-thriftserver -Pscala-${{ matrix.scala }} -P${{ matrix.hadoop }} -Phadoop-cloud package


  lint:
    runs-on: ubuntu-latest
    name: Linters
    steps:
    - uses: actions/checkout@master
    - uses: actions/setup-java@v1
      with:
        java-version: '1.8'
    - uses: actions/setup-python@v1
      with:
        python-version: '3.x'
        architecture: 'x64'
    - name: Scala
      run: ./dev/lint-scala
    - name: Java
      run: ./dev/lint-java
    - name: Python
      run: |
        pip install flake8 sphinx numpy
        ./dev/lint-python
    - name: License
      run: ./dev/check-license
    - name: Dependencies
      run: ./dev/test-dependencies.sh
1 change: 1 addition & 0 deletions .gitignore
@@ -61,6 +61,7 @@ project/plugins/project/build.properties
 project/plugins/src_managed/
 project/plugins/target/
 python/lib/pyspark.zip
+python/.eggs/
 python/deps
 python/test_coverage/coverage_data
 python/test_coverage/htmlcov
50 changes: 0 additions & 50 deletions .travis.yml

This file was deleted.

2 changes: 1 addition & 1 deletion LICENSE
@@ -243,7 +243,7 @@ MIT License
 core/src/main/resources/org/apache/spark/ui/static/dagre-d3.min.js
 core/src/main/resources/org/apache/spark/ui/static/*dataTables*
 core/src/main/resources/org/apache/spark/ui/static/graphlib-dot.min.js
-ore/src/main/resources/org/apache/spark/ui/static/jquery*
+core/src/main/resources/org/apache/spark/ui/static/jquery*
 core/src/main/resources/org/apache/spark/ui/static/sorttable.js
 docs/js/vendor/anchor.min.js
 docs/js/vendor/jquery*
2 changes: 1 addition & 1 deletion LICENSE-binary
@@ -305,7 +305,6 @@ com.google.code.gson:gson
 com.google.inject:guice
 com.google.inject.extensions:guice-servlet
 com.twitter:parquet-hadoop-bundle
-commons-beanutils:commons-beanutils-core
 commons-cli:commons-cli
 commons-dbcp:commons-dbcp
 commons-io:commons-io
@@ -468,6 +467,7 @@ Common Development and Distribution License (CDDL) 1.1
 ------------------------------------------------------
 
 javax.annotation:javax.annotation-api https://jcp.org/en/jsr/detail?id=250
+javax.el:javax.el-api https://javaee.github.io/uel-ri/
 javax.servlet:javax.servlet-api https://javaee.github.io/servlet-spec/
 javax.transaction:jta http://www.oracle.com/technetwork/java/index.html
 javax.ws.rs:javax.ws.rs-api https://github.com/jax-rs
10 changes: 5 additions & 5 deletions R/pkg/DESCRIPTION
@@ -1,8 +1,8 @@
 Package: SparkR
 Type: Package
-Version: 2.4.0
-Title: R Frontend for Apache Spark
-Description: Provides an R Frontend for Apache Spark.
+Version: 2.4.5
+Title: R Front End for 'Apache Spark'
+Description: Provides an R Front end for 'Apache Spark' <https://spark.apache.org>.
 Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),
                     email = "shivaram@cs.berkeley.edu"),
              person("Xiangrui", "Meng", role = "aut",
@@ -11,8 +11,8 @@ Authors@R: c(person("Shivaram", "Venkataraman", role = c("aut", "cre"),
                     email = "felixcheung@apache.org"),
              person(family = "The Apache Software Foundation", role = c("aut", "cph")))
 License: Apache License (== 2.0)
-URL: http://www.apache.org/ http://spark.apache.org/
-BugReports: http://spark.apache.org/contributing.html
+URL: https://www.apache.org/ https://spark.apache.org/
+BugReports: https://spark.apache.org/contributing.html
 SystemRequirements: Java (== 8)
 Depends:
     R (>= 3.0),
1 change: 0 additions & 1 deletion R/pkg/NAMESPACE
@@ -313,7 +313,6 @@ exportMethods("%<=>%",
               "lower",
               "lpad",
               "ltrim",
-              "map_entries",
               "map_from_arrays",
               "map_keys",
               "map_values",
1 change: 0 additions & 1 deletion R/pkg/R/DataFrame.R
@@ -503,7 +503,6 @@ setMethod("createOrReplaceTempView",
 #' @param x A SparkDataFrame
 #' @param tableName A character vector containing the name of the table
 #'
-#' @family SparkDataFrame functions
 #' @seealso \link{createOrReplaceTempView}
 #' @rdname registerTempTable-deprecated
 #' @name registerTempTable
3 changes: 2 additions & 1 deletion R/pkg/R/SQLContext.R
@@ -655,7 +655,8 @@ loadDF <- function(x = NULL, ...) {
 #'
 #' @param url JDBC database url of the form \code{jdbc:subprotocol:subname}
 #' @param tableName the name of the table in the external database
-#' @param partitionColumn the name of a column of integral type that will be used for partitioning
+#' @param partitionColumn the name of a column of numeric, date, or timestamp type
+#'                        that will be used for partitioning.
 #' @param lowerBound the minimum value of \code{partitionColumn} used to decide partition stride
 #' @param upperBound the maximum value of \code{partitionColumn} used to decide partition stride
 #' @param numPartitions the number of partitions, This, along with \code{lowerBound} (inclusive),
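Editor's note: the revised partitionColumn docs describe how SparkR splits a JDBC read — Spark derives numPartitions WHERE-clause strides between lowerBound and upperBound, one query per partition. A minimal sketch of how the parameters combine (the URL, table, and column names are hypothetical, not from this diff):

    library(SparkR)
    sparkR.session()

    # One query per partition; stride = (upperBound - lowerBound) / numPartitions.
    # The bounds only shape the strides; rows outside them still land in the
    # first or last partition rather than being filtered out.
    df <- read.jdbc("jdbc:postgresql://localhost/sales", "orders",
                    partitionColumn = "order_id",
                    lowerBound = 1, upperBound = 1000000,
                    numPartitions = 10)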
1 change: 0 additions & 1 deletion R/pkg/R/catalog.R
@@ -69,7 +69,6 @@ createExternalTable <- function(x, ...) {
 #' @param ... additional named parameters as options for the data source.
 #' @return A SparkDataFrame.
 #' @rdname createTable
-#' @seealso \link{createExternalTable}
 #' @examples
 #'\dontrun{
 #' sparkR.session()
50 changes: 35 additions & 15 deletions R/pkg/R/context.R
@@ -167,18 +167,30 @@ parallelize <- function(sc, coll, numSlices = 1) {
   # 2-tuples of raws
   serializedSlices <- lapply(slices, serialize, connection = NULL)
 
-  # The PRC backend cannot handle arguments larger than 2GB (INT_MAX)
+  # The RPC backend cannot handle arguments larger than 2GB (INT_MAX)
   # If serialized data is safely less than that threshold we send it over the PRC channel.
   # Otherwise, we write it to a file and send the file name
   if (objectSize < sizeLimit) {
     jrdd <- callJStatic("org.apache.spark.api.r.RRDD", "createRDDFromArray", sc, serializedSlices)
   } else {
-    fileName <- writeToTempFile(serializedSlices)
-    jrdd <- tryCatch(callJStatic(
-      "org.apache.spark.api.r.RRDD", "createRDDFromFile", sc, fileName, as.integer(numSlices)),
-      finally = {
-        file.remove(fileName)
-      })
+    if (callJStatic("org.apache.spark.api.r.RUtils", "getEncryptionEnabled", sc)) {
+      # the length of slices here is the parallelism to use in the jvm's sc.parallelize()
+      parallelism <- as.integer(numSlices)
+      jserver <- newJObject("org.apache.spark.api.r.RParallelizeServer", sc, parallelism)
+      authSecret <- callJMethod(jserver, "secret")
+      port <- callJMethod(jserver, "port")
+      conn <- socketConnection(port = port, blocking = TRUE, open = "wb", timeout = 1500)
+      doServerAuth(conn, authSecret)
+      writeToConnection(serializedSlices, conn)
+      jrdd <- callJMethod(jserver, "getResult")
+    } else {
+      fileName <- writeToTempFile(serializedSlices)
+      jrdd <- tryCatch(callJStatic(
+        "org.apache.spark.api.r.RRDD", "createRDDFromFile", sc, fileName, as.integer(numSlices)),
+        finally = {
+          file.remove(fileName)
+        })
+    }
   }
 
   RDD(jrdd, "byte")
@@ -194,14 +206,21 @@ getMaxAllocationLimit <- function(sc) {
   ))
 }
 
+writeToConnection <- function(serializedSlices, conn) {
+  tryCatch({
+    for (slice in serializedSlices) {
+      writeBin(as.integer(length(slice)), conn, endian = "big")
+      writeBin(slice, conn, endian = "big")
+    }
+  }, finally = {
+    close(conn)
+  })
+}
+
 writeToTempFile <- function(serializedSlices) {
   fileName <- tempfile()
   conn <- file(fileName, "wb")
-  for (slice in serializedSlices) {
-    writeBin(as.integer(length(slice)), conn, endian = "big")
-    writeBin(slice, conn, endian = "big")
-  }
-  close(conn)
+  writeToConnection(serializedSlices, conn)
   fileName
 }

@@ -278,7 +297,7 @@ broadcastRDD <- function(sc, object) {
 #' Set the checkpoint directory
 #'
 #' Set the directory under which RDDs are going to be checkpointed. The
-#' directory must be a HDFS path if running on a cluster.
+#' directory must be an HDFS path if running on a cluster.
 #'
 #' @param sc Spark Context to use
 #' @param dirName Directory path
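Editor's note: the same "an HDFS path" wording fix recurs in the DataFrame-level setCheckpointDir docs in a later hunk. As a usage sketch (the HDFS path is hypothetical; a local path works when not on a cluster):

    sparkR.session()
    # On a cluster the directory must be an HDFS (or other Hadoop-compatible) path.
    setCheckpointDir("hdfs:///tmp/spark-checkpoints")
    df <- checkpoint(createDataFrame(mtcars))   # truncates lineage to the saved files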
@@ -302,7 +321,8 @@ setCheckpointDirSC <- function(sc, dirName) {
 #'
 #' A directory can be given if the recursive option is set to true.
 #' Currently directories are only supported for Hadoop-supported filesystems.
-#' Refer Hadoop-supported filesystems at \url{https://wiki.apache.org/hadoop/HCFS}.
+#' Refer Hadoop-supported filesystems at
+#' \url{https://cwiki.apache.org/confluence/display/HADOOP2/HCFS}.
 #'
 #' Note: A path can be added only once. Subsequent additions of the same path are ignored.
 #'
@@ -422,7 +442,7 @@ setLogLevel <- function(level) {
 #' Set checkpoint directory
 #'
 #' Set the directory under which SparkDataFrame are going to be checkpointed. The directory must be
-#' a HDFS path if running on a cluster.
+#' an HDFS path if running on a cluster.
 #'
 #' @rdname setCheckpointDir
 #' @param directory Directory path to checkpoint to
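Editor's note: both new branches of parallelize frame data the same way — writeToConnection() emits each serialized slice as a 4-byte big-endian length followed by the raw payload, whether the sink is the temp file or the authenticated socket. A hedged sketch of a matching reader (readSlices is illustrative only, not part of this diff):

    # Read back length-prefixed frames as produced by writeToConnection():
    # a big-endian integer length, then that many raw bytes, until EOF.
    readSlices <- function(conn) {
      slices <- list()
      repeat {
        len <- readBin(conn, what = "integer", n = 1L, endian = "big")
        if (length(len) == 0) break                      # EOF: no more frames
        slices[[length(slices) + 1L]] <- readBin(conn, what = "raw", n = len)
      }
      slices
    }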
43 changes: 22 additions & 21 deletions R/pkg/R/functions.R
@@ -219,7 +219,7 @@ NULL
 #' head(select(tmp, sort_array(tmp$v1)))
 #' head(select(tmp, sort_array(tmp$v1, asc = FALSE)))
 #' tmp3 <- mutate(df, v3 = create_map(df$model, df$cyl))
-#' head(select(tmp3, map_entries(tmp3$v3), map_keys(tmp3$v3), map_values(tmp3$v3)))
+#' head(select(tmp3, map_keys(tmp3$v3), map_values(tmp3$v3)))
 #' head(select(tmp3, element_at(tmp3$v3, "Valiant")))
 #' tmp4 <- mutate(df, v4 = create_array(df$mpg, df$cyl), v5 = create_array(df$cyl, df$hp))
 #' head(select(tmp4, concat(tmp4$v4, tmp4$v5), arrays_overlap(tmp4$v4, tmp4$v5)))
@@ -2203,9 +2203,16 @@ setMethod("from_json", signature(x = "Column", schema = "characterOrstructType")
 })
 
 #' @details
-#' \code{from_utc_timestamp}: Given a timestamp like '2017-07-14 02:40:00.0', interprets it as a
-#' time in UTC, and renders that time as a timestamp in the given time zone. For example, 'GMT+1'
-#' would yield '2017-07-14 03:40:00.0'.
+#' \code{from_utc_timestamp}: This is a common function for databases supporting TIMESTAMP WITHOUT
+#' TIMEZONE. This function takes a timestamp which is timezone-agnostic, and interprets it as a
+#' timestamp in UTC, and renders that timestamp as a timestamp in the given time zone.
+#' However, timestamp in Spark represents number of microseconds from the Unix epoch, which is not
+#' timezone-agnostic. So in Spark this function just shift the timestamp value from UTC timezone to
+#' the given timezone.
+#' This function may return confusing result if the input is a string with timezone, e.g.
+#' (\code{2018-03-13T06:18:23+00:00}). The reason is that, Spark firstly cast the string to
+#' timestamp according to the timezone in the string, and finally display the result by converting
+#' the timestamp to string according to the session local timezone.
 #'
 #' @rdname column_datetime_diff_functions
 #'
@@ -2261,9 +2268,16 @@ setMethod("next_day", signature(y = "Column", x = "character"),
 })
 
 #' @details
-#' \code{to_utc_timestamp}: Given a timestamp like '2017-07-14 02:40:00.0', interprets it as a
-#' time in the given time zone, and renders that time as a timestamp in UTC. For example, 'GMT+1'
-#' would yield '2017-07-14 01:40:00.0'.
+#' \code{to_utc_timestamp}: This is a common function for databases supporting TIMESTAMP WITHOUT
+#' TIMEZONE. This function takes a timestamp which is timezone-agnostic, and interprets it as a
+#' timestamp in the given timezone, and renders that timestamp as a timestamp in UTC.
+#' However, timestamp in Spark represents number of microseconds from the Unix epoch, which is not
+#' timezone-agnostic. So in Spark this function just shift the timestamp value from the given
+#' timezone to UTC timezone.
+#' This function may return confusing result if the input is a string with timezone, e.g.
+#' (\code{2018-03-13T06:18:23+00:00}). The reason is that, Spark firstly cast the string to
+#' timestamp according to the timezone in the string, and finally display the result by converting
+#' the timestamp to string according to the session local timezone.
 #'
 #' @rdname column_datetime_diff_functions
 #' @aliases to_utc_timestamp to_utc_timestamp,Column,character-method
@@ -3238,19 +3252,6 @@ setMethod("flatten",
           column(jc)
         })
 
-#' @details
-#' \code{map_entries}: Returns an unordered array of all entries in the given map.
-#'
-#' @rdname column_collection_functions
-#' @aliases map_entries map_entries,Column-method
-#' @note map_entries since 2.4.0
-setMethod("map_entries",
-          signature(x = "Column"),
-          function(x) {
-            jc <- callJStatic("org.apache.spark.sql.functions", "map_entries", x@jc)
-            column(jc)
-          })
-
 #' @details
 #' \code{map_from_arrays}: Creates a new map column. The array in the first column is used for
 #' keys. The array in the second column is used for values. All elements in the array for key
@@ -3336,7 +3337,7 @@ setMethod("size",
 
 #' @details
 #' \code{slice}: Returns an array containing all the elements in x from the index start
-#' (or starting from the end if start is negative) with the specified length.
+#' (array indices start at 1, or from the end if start is negative) with the specified length.
 #'
 #' @rdname column_collection_functions
 #' @param start an index indicating the first element occurring in the result.
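Editor's note: the rewritten from_utc_timestamp/to_utc_timestamp docs and the 1-based slice indexing are easier to see in a session. A sketch assuming a running SparkR session with the session time zone set to UTC:

    df <- createDataFrame(data.frame(t = "2018-03-13 06:18:23"))

    # Shift the timezone-agnostic value from UTC into Pacific time (UTC-7 under DST):
    head(select(df, from_utc_timestamp(df$t, "America/Los_Angeles")))
    # 2018-03-12 23:18:23

    # Interpret the value as Pacific time and render it in UTC:
    head(select(df, to_utc_timestamp(df$t, "America/Los_Angeles")))
    # 2018-03-13 13:18:23

    # slice() indices start at 1, per the doc fix above:
    df2 <- select(df, alias(create_array(lit(10), lit(20), lit(30)), "arr"))
    head(select(df2, slice(df2$arr, 2, 2)))   # elements 2..3: 20, 30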
4 changes: 0 additions & 4 deletions R/pkg/R/generics.R
@@ -1076,10 +1076,6 @@ setGeneric("lpad", function(x, len, pad) { standardGeneric("lpad") })
 #' @name NULL
 setGeneric("ltrim", function(x, trimString) { standardGeneric("ltrim") })
 
-#' @rdname column_collection_functions
-#' @name NULL
-setGeneric("map_entries", function(x) { standardGeneric("map_entries") })
-
 #' @rdname column_collection_functions
 #' @name NULL
 setGeneric("map_from_arrays", function(x, y) { standardGeneric("map_from_arrays") })