Merged Apache branch-1.6 #174
## What changes were proposed in this pull request?

RegexExtract and RegexReplace currently crash on non-nullable input due to the use of a hard-coded local variable name (e.g. compilation fails with `java.lang.Exception: failed to compile: org.codehaus.commons.compiler.CompileException: File 'generated.java', Line 85, Column 26: Redefinition of local variable "m"`). This changes those variables to use fresh names, and does the same in a few other places.

## How was this patch tested?

Unit tests.

rxin

Author: Eric Liang <ekl@databricks.com>

Closes apache#14168 from ericl/sc-3906.

(cherry picked from commit 1c58fa9)
Signed-off-by: Reynold Xin <rxin@databricks.com>
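A minimal sketch of the fresh-name technique (this is not Spark's actual `CodegenContext`; the object below is illustrative only):

```scala
// Generate unique local variable names so that two instances of the same
// expression, compiled into one generated method, never collide.
object FreshNames {
  private var counter = 0
  def freshName(prefix: String): String = {
    counter += 1
    s"$prefix$counter" // "m1", "m2", ... instead of a hard-coded "m"
  }
}

// Two RegexExtract instances now declare distinct locals (e.g. m1 and m2),
// so Janino no longer reports "Redefinition of local variable".
val first = FreshNames.freshName("m")  // "m1"
val second = FreshNames.freshName("m") // "m2"
```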
…rtitionBy

This patch fixes a variable namespace collision bug in pmod and partitionBy.

Regression test for one possible occurrence. A more general fix in `ExpressionEvalHelper.checkEvaluation` will be in a subsequent PR.

Author: Sameer Agarwal <sameer@databricks.com>

Closes apache#14144 from sameeragarwal/codegen-bug.

(cherry picked from commit 9cc74f9)
Signed-off-by: Reynold Xin <rxin@databricks.com>
(cherry picked from commit 6892614)
Signed-off-by: Reynold Xin <rxin@databricks.com>
…n code generation

In code generation, it is incorrect for expressions to reuse variable names across different instances of itself. As an example, SPARK-16488 reports a bug in which the pmod expression reuses the variable name "r".

This patch updates the ExpressionEvalHelper test harness to always project two instances of the same expression, which will help us catch variable-reuse problems in expression unit tests. This patch also fixes the bug in the crc32 expression.

This is a test harness change, but I also created a new test suite for testing the test harness.

Author: Reynold Xin <rxin@databricks.com>

Closes apache#14146 from rxin/SPARK-16489.

(cherry picked from commit c377e49)
Signed-off-by: Reynold Xin <rxin@databricks.com>
…signed to numSkippedTasks

## What changes were proposed in this pull request?

I fixed a misassigned variable: numCompletedTasks was assigned to numSkippedTasks in the convertJobData method.

## How was this patch tested?

dev/run-tests

Author: Alex Bozarth <ajbozart@us.ibm.com>

Closes apache#14141 from ajbozarth/spark16375.

(cherry picked from commit f156136)
Signed-off-by: Sean Owen <sowen@cloudera.com>
…g OoM for long runs

## What changes were proposed in this pull request?

Unpersist broadcasted vars in Word2Vec.fit for more timely / reliable resource cleanup.

## How was this patch tested?

Jenkins tests.

Author: Sean Owen <sowen@cloudera.com>

Closes apache#14153 from srowen/SPARK-16440.

(cherry picked from commit 51ade51)
Signed-off-by: Sean Owen <sowen@cloudera.com>
…ons in file listing

## What changes were proposed in this pull request?

Spark silently drops exceptions during file listing. This is very bad behavior because it can mask legitimate errors and the resulting plan will silently have 0 rows. This patch changes it to not silently drop the errors.

After making partition discovery not silently drop exceptions, HiveMetastoreCatalog can trigger partition discovery on empty tables, which causes FileNotFoundExceptions (these exceptions were previously dropped by partition discovery silently). To address this issue, this PR introduces two **hacks** that try to avoid triggering partition discovery on empty tables in HiveMetastoreCatalog.

## How was this patch tested?

Manually tested.

**Note: This is a backport of apache#13987**

Author: Yin Huai <yhuai@databricks.com>

Closes apache#14139 from yhuai/SPARK-16313-branch-1.6.
## What changes were proposed in this pull request?

Forgotten broadcasted variables were unpersisted in a previous PR (apache#14153). This PR turns those `unpersist()` calls into `destroy()` so that memory is freed even on the driver.

## How was this patch tested?

Unit tests in Word2VecSuite were run locally.

This contribution is done on behalf of Criteo, according to the terms of the Apache License 2.0.

Author: Anthony Truchet <a.truchet@criteo.com>

Closes apache#14268 from AnthonyTruchet/SPARK-16440.

(cherry picked from commit 0dc79ff)
Signed-off-by: Sean Owen <sowen@cloudera.com>
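A hedged sketch of the difference (`unpersist()` and `destroy()` are real `Broadcast` methods; the context and vocabulary map below are made up for illustration):

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("w2v-sketch").setMaster("local[*]"))

// Stand-in for the broadcasted vocabulary used inside Word2Vec.fit.
val bcVocab = sc.broadcast(Map("spark" -> 0, "scala" -> 1))

// ... executors read bcVocab.value during training ...

// unpersist() only removes executor-side copies; destroy() also releases
// the driver-side copy, which is what matters for long-running drivers.
bcVocab.destroy()
```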
## What changes were proposed in this pull request?

Fix some mistakes in the `LinearRegression` formula.

## How was this patch tested?

Documentation change, no tests.

Author: Yanbo Liang <ybliang8@gmail.com>

Closes apache#14369 from yanboliang/LiR-formula.

(cherry picked from commit 3c3371b)
Signed-off-by: Sean Owen <sowen@cloudera.com>
f12f11e introduced this bug: a `foreach` was mistakenly used where a `map` was needed.

Test added.

Author: Wesley Tang <tangmingjun@mininglamp.com>

Closes apache#14324 from breakdawn/master.

(cherry picked from commit d1d5069)
Signed-off-by: Sean Owen <sowen@cloudera.com>
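The shape of the bug in plain Scala: `foreach` discards its results while `map` keeps them, so substituting one for the other silently loses data:

```scala
val xs = Seq(1, 2, 3)

val doubled = xs.map(_ * 2) // Seq(2, 4, 6): results are kept
xs.foreach(_ * 2)           // returns Unit: the computed values are lost
```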
The version of Derby is upgraded based on important security info at VersionEye. Test scope is added so we don't include it in our final package anyway. NB: I think this should be backported to all previous releases as it is a security problem: https://www.versioneye.com/java/org.apache.derby:derby/10.11.1.1

The CVE number is CVE-2015-1832. I also suggest we add a SECURITY tag for JIRAs.

Existing tests with the change, making sure that we see no new failures. I checked that Derby 10.12.x, and not Derby 10.11.x, is downloaded to our ~/.m2 folder. I then used dev/make-distribution.sh and checked the dist/jars folder for Spark 2.0: no Derby jar is present. I don't know if this would also remove it from the assembly jar in our 1.x branches.

Author: Adam Roberts <aroberts@uk.ibm.com>

Closes apache#14379 from a-roberts/patch-4.

(cherry picked from commit 04a2c07)
Signed-off-by: Sean Owen <sowen@cloudera.com>
…than 200…" This reverts commit 15abbf9.
… 10.12.1.1 security fix

## What changes were proposed in this pull request?

See apache#14379; I failed to note in back-porting to 1.6 that an additional Hadoop 1 deps file would need to be updated. This makes that change.

## How was this patch tested?

Jenkins tests.

Author: Sean Owen <sowen@cloudera.com>

Closes apache#14403 from srowen/SPARK-16751.2.
## What changes were proposed in this pull request?

Cherry-pick from d1d5069 and fix the test case.

## How was this patch tested?

Test updated.

Author: Wesley Tang <tangmingjun@mininglamp.com>

Closes apache#14404 from breakdawn/branch-1.6.
Casting ConcurrentHashMap to ConcurrentMap allows running code compiled with Java 8 on Java 7.

Compilation. Existing automatic tests.

Author: Maciej Brynski <maciej.brynski@adpilot.pl>

Closes apache#14459 from maver1ck/spark-15541-master.

(cherry picked from commit 511dede)
Signed-off-by: Sean Owen <sowen@cloudera.com>
## What changes were proposed in this pull request?

Casting ConcurrentHashMap to ConcurrentMap allows running code compiled with Java 8 on Java 7.

## How was this patch tested?

Compilation. Existing automatic tests.

Author: Maciej Brynski <maciej.brynski@adpilot.pl>

Closes apache#14390 from maver1ck/spark-15541.
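The fix hinges on Java's covariant return types: compiled against Java 8, `ConcurrentHashMap.keySet()` resolves to the Java-8-only `KeySetView` class, which does not exist on Java 7. Typing the reference as `ConcurrentMap` pins the call to the interface method. A minimal sketch:

```scala
import java.util.concurrent.{ConcurrentHashMap, ConcurrentMap}

// keySet() on the interface returns java.util.Set, which exists on Java 7;
// keySet() on the class returns KeySetView, which is Java 8 only.
val map: ConcurrentMap[String, Int] = new ConcurrentHashMap[String, Int]()
val keys = map.keySet() // bytecode now runs on both Java 7 and Java 8
```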
avgMetrics was summed, not averaged, across folds.

Author: =^_^= <maxmoroz@gmail.com>

Closes apache#14456 from pkch/pkch-patch-1.

(cherry picked from commit 639df04)
Signed-off-by: Sean Owen <sowen@cloudera.com>
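The fix in spirit, as a standalone sketch (the fold metrics below are illustrative):

```scala
val numFolds = 3
val foldMetrics = Seq(Array(0.8, 0.9), Array(0.7, 0.85), Array(0.75, 0.95))

// Sum the per-fold metric arrays element-wise, then divide by the number
// of folds; the division is the step that was missing.
val avgMetrics = foldMetrics
  .reduce((a, b) => a.zip(b).map { case (x, y) => x + y })
  .map(_ / numFolds)
```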
## What changes were proposed in this pull request?

Mask spark.ssl.keyPassword, spark.ssl.keyStorePassword, and spark.ssl.trustStorePassword in the Web UI environment page (changes their values to ***** on the env page).

## How was this patch tested?

I built Spark, ran the Spark shell, and checked that these values have been masked with *****. Also ran tests:

```
./dev/run-tests
[info] ScalaTest
[info] Run completed in 1 hour, 9 minutes, 5 seconds.
[info] Total number of tests run: 2166
[info] Suites: completed 65, aborted 0
[info] Tests: succeeded 2166, failed 0, canceled 0, ignored 590, pending 0
[info] All tests passed.
```

Author: Artur Sukhenko <artur.sukhenko@gmail.com>

Closes apache#14409 from Devian-ua/maskpass.

(cherry picked from commit 3861273)
Signed-off-by: Sean Owen <sowen@cloudera.com>
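A sketch of the masking (the key list is from the description above; the helper is an assumption, not Spark's exact code):

```scala
val sensitiveKeys = Set(
  "spark.ssl.keyPassword",
  "spark.ssl.keyStorePassword",
  "spark.ssl.trustStorePassword")

// Replace sensitive values with "*****" before rendering the env page.
def redact(kv: (String, String)): (String, String) =
  if (sensitiveKeys.contains(kv._1)) (kv._1, "*****") else kv
```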
## What changes were proposed in this pull request?

SpillReader throws an NPE when the spill file has no data. See the following logs:

```
16/07/31 20:54:04 INFO collection.ExternalSorter: spill memory to file:/data4/yarnenv/local/usercache/tesla/appcache/application_1465785263942_56138/blockmgr-db5f46c3-d7a4-4f93-8b77-565e469696fb/09/temp_shuffle_ec3ece08-4569-4197-893a-4a5dfcbbf9fa, fileSize:0.0 B
16/07/31 20:54:04 WARN memory.TaskMemoryManager: leak 164.3 MB memory from org.apache.spark.util.collection.ExternalSorter@3db4b52d
16/07/31 20:54:04 ERROR executor.Executor: Managed memory leak detected; size = 190458101 bytes, TID = 23585
16/07/31 20:54:04 ERROR executor.Executor: Exception in task 1013.0 in stage 18.0 (TID 23585)
java.lang.NullPointerException
	at org.apache.spark.util.collection.ExternalSorter$SpillReader.cleanup(ExternalSorter.scala:624)
	at org.apache.spark.util.collection.ExternalSorter$SpillReader.nextBatchStream(ExternalSorter.scala:539)
	at org.apache.spark.util.collection.ExternalSorter$SpillReader.<init>(ExternalSorter.scala:507)
	at org.apache.spark.util.collection.ExternalSorter$SpillableIterator.spill(ExternalSorter.scala:816)
	at org.apache.spark.util.collection.ExternalSorter.forceSpill(ExternalSorter.scala:251)
	at org.apache.spark.util.collection.Spillable.spill(Spillable.scala:109)
	at org.apache.spark.memory.TaskMemoryManager.acquireExecutionMemory(TaskMemoryManager.java:154)
	at org.apache.spark.memory.TaskMemoryManager.allocatePage(TaskMemoryManager.java:249)
	at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:112)
	at org.apache.spark.shuffle.sort.ShuffleExternalSorter.acquireNewPageIfNecessary(ShuffleExternalSorter.java:346)
	at org.apache.spark.shuffle.sort.ShuffleExternalSorter.insertRecord(ShuffleExternalSorter.java:367)
	at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.insertRecordIntoSorter(UnsafeShuffleWriter.java:237)
	at org.apache.spark.shuffle.sort.UnsafeShuffleWriter.write(UnsafeShuffleWriter.java:164)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
	at org.apache.spark.scheduler.Task.run(Task.scala:89)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
	at java.lang.Thread.run(Thread.java:744)
16/07/31 20:54:30 INFO executor.Executor: Executor is trying to kill task 1090.1 in stage 18.0 (TID 23793)
16/07/31 20:54:30 INFO executor.CoarseGrainedExecutorBackend: Driver commanded a shutdown
```

## How was this patch tested?

Manual test.

Author: sharkd <sharkd.tu@gmail.com>
Author: sharkdtu <sharkdtu@tencent.com>

Closes apache#14479 from sharkdtu/master.

(cherry picked from commit 583d91a)
Signed-off-by: Reynold Xin <rxin@databricks.com>
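A hedged sketch of the defensive fix (class and field names are illustrative, not the actual `ExternalSorter` internals): a zero-byte spill file can leave the batch stream unopened, so cleanup must tolerate null:

```scala
import java.io.InputStream

class SpillReaderSketch(private var fileStream: InputStream) {
  def cleanup(): Unit = {
    if (fileStream != null) { // an empty spill file leaves this null
      fileStream.close()
      fileStream = null
    }
  }
}
```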
…vents, not only failures

This patch fixes a bug in Spark's standalone Master which could cause applications to hang if tasks cause executors to exit with zero exit codes. As an example of the bug, run

```
sc.parallelize(1 to 1, 1).foreachPartition { _ => System.exit(0) }
```

on a standalone cluster which has a single Spark application. This will cause all executors to die, but those executors won't be replaced unless another Spark application or worker joins or leaves the cluster (or an executor exits with a non-zero exit code).

This behavior is caused by a bug in how the Master handles the `ExecutorStateChanged` event: the current implementation calls `schedule()` only if the executor exited with a non-zero exit code, so a task which causes a JVM to unexpectedly exit "cleanly" will skip the `schedule()` call. This patch addresses this by modifying the `ExecutorStateChanged` handler to always call `schedule()`. This should be safe because it should always be safe to call `schedule()`; adding extra `schedule()` calls can only affect performance and should not introduce correctness bugs.

I added a regression test in `DistributedSuite`.

Author: Josh Rosen <joshrosen@databricks.com>

Closes apache#14510 from JoshRosen/SPARK-16925.

(cherry picked from commit 4f5f9b6)
Signed-off-by: Josh Rosen <joshrosen@databricks.com>
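A toy of the control-flow change (not the Master's actual code):

```scala
object MasterSketch {
  def schedule(): Unit = () // stand-in for Master.schedule()

  def onExecutorStateChanged(exitStatus: Option[Int]): Unit = {
    // before: if (exitStatus.exists(_ != 0)) schedule()
    schedule() // after: always reschedule; extra calls are safe
  }
}
```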
## What changes were proposed in this pull request?

regexp_extract returns null when it shouldn't: when the regex matches but the requested optional group did not participate in the match. This change makes it return an empty string instead, as apparently designed.

## How was this patch tested?

Additional unit test.

Author: Sean Owen <sowen@cloudera.com>

Closes apache#14504 from srowen/SPARK-16409.

(cherry picked from commit 8d87252)
Signed-off-by: Sean Owen <sowen@cloudera.com>
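The underlying `java.util.regex` behavior and the fix in spirit: a group that did not participate in a successful match returns null, which is now mapped to an empty string:

```scala
import java.util.regex.Pattern

val m = Pattern.compile("(a+)(b)?(c)").matcher("aaaac")
assert(m.find())                          // the overall match succeeds
val raw = m.group(2)                      // null: optional group (b)? absent
val result = if (raw == null) "" else raw // the fix: "" instead of null
```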
…ringFunctionsSuite

## What changes were proposed in this pull request?

This PR aims to fix a build error on branch 1.6 at apache@8d87252, but I think we had better have this consistently in the master branch, too, because there exists another ongoing PR (apache#14525) about this.

https://amplab.cs.berkeley.edu/jenkins/job/spark-branch-1.6-compile-maven-with-yarn-2.3/286/console

```scala
[error] /home/jenkins/workspace/spark-branch-1.6-compile-maven-with-yarn-2.3/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala:82: value toDF is not a member of Seq[String]
[error]     val df = Seq("aaaac").toDF("s")
[error]                           ^
```

## How was this patch tested?

After passing Jenkins, run a compilation test on branch 1.6.

```
build/mvn -DskipTests -Pyarn -Phadoop-2.3 -Pkinesis-asl -Phive -Phive-thriftserver install
```

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes apache#14526 from dongjoon-hyun/SPARK-16939.

(cherry picked from commit a16983c)
Signed-off-by: Sean Owen <sowen@cloudera.com>
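The compile fix in context, assuming a live `SparkContext` named `sc` (on 1.6, `toDF` on a local `Seq` needs the `SQLContext` implicits in scope):

```scala
// Assumes an existing SparkContext named `sc` (e.g. from spark-shell 1.6).
val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._

val df = Seq("aaaac").toDF("s") // compiles once the implicits are imported
```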
## What changes were proposed in this pull request?

This patch introduces a new configuration, `spark.deploy.maxExecutorRetries`, to let users configure an obscure behavior in the standalone master where the master will kill Spark applications which have experienced too many back-to-back executor failures. The current setting is a hardcoded constant (10); this patch replaces that with a new cluster-wide configuration.

**Background:** This application-killing was added in 6b5980d (from September 2012) and I believe that it was designed to prevent a faulty application whose executors could never launch from DOS'ing the Spark cluster via an infinite series of executor launch attempts. In a subsequent patch (apache#1360), this feature was refined to prevent applications which have running executors from being killed by this code path.

**Motivation for making this configurable:** Previously, if a Spark Standalone application experienced more than `ApplicationState.MAX_NUM_RETRY` executor failures and was left with no executors running, then the Spark master would kill that application. This behavior is problematic in environments where the Spark executors run on unstable infrastructure and can all simultaneously die. For instance, if your Spark driver runs on an on-demand EC2 instance while all workers run on ephemeral spot instances, then it's possible for all executors to die at the same time while the driver stays alive. In this case, it may be desirable to keep the Spark application alive so that it can recover once new workers and executors are available. In order to accommodate this use-case, this patch modifies the Master to never kill faulty applications if `spark.deploy.maxExecutorRetries` is negative.

I'd like to merge this patch into master, branch-2.0, and branch-1.6.

## How was this patch tested?

I tested this manually using `spark-shell` and `local-cluster` mode. This is a tricky feature to unit test, and historically this code has not changed very often, so I'd prefer to skip the additional effort of adding a testing framework and would rather rely on manual tests and review for now.

Author: Josh Rosen <joshrosen@databricks.com>

Closes apache#14544 from JoshRosen/add-setting-for-max-executor-failures.

(cherry picked from commit b89b3a5)
Signed-off-by: Josh Rosen <joshrosen@databricks.com>
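Usage sketch: any negative value disables the application-killing behavior entirely, while the default stays at 10:

```scala
import org.apache.spark.SparkConf

// Tell the standalone Master never to kill this application for repeated
// back-to-back executor failures.
val conf = new SparkConf().set("spark.deploy.maxExecutorRetries", "-1")
```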
This reverts commit 92ee6fb.
…iftserver for scala 2.11

## What changes were proposed in this pull request?

hive-thriftserver works with Scala 2.11 (https://issues.apache.org/jira/browse/SPARK-8013). So, let's publish Scala 2.11 artifacts with the `-Phive-thriftserver` flag. I am also fixing the doc.

Author: Yin Huai <yhuai@databricks.com>

Closes apache#14586 from yhuai/SPARK-16453-branch-1.6.
…essage that would occur, it seems, only in Hadoop 1.x (and therefore in Spark 1.x)
…lySize

Replaces the custom choose function with o.a.commons.math3.CombinatoricsUtils.binomialCoefficient.

Spark unit tests.

Author: zero323 <zero323@users.noreply.github.com>

Closes apache#14614 from zero323/SPARK-17027.

(cherry picked from commit 0ebf7c1)
Signed-off-by: Sean Owen <sowen@cloudera.com>
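The replacement in spirit (`CombinatoricsUtils.binomialCoefficient` is the real commons-math3 API; the call site is illustrative):

```scala
import org.apache.commons.math3.util.CombinatoricsUtils

// choose(10, 3) via the library instead of a hand-rolled function.
val polySize: Long = CombinatoricsUtils.binomialCoefficient(10, 3) // 120
```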
…on.getPolySize" This reverts commit b54a586.
…ore stable

## What changes were proposed in this pull request?

This PR backports apache#14289 to branch 1.6.

https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/62593/testReport/junit/org.apache.spark.sql.sources/CreateTableAsSelectSuite/create_a_table__drop_it_and_create_another_one_with_the_same_name/ shows that `create a table, drop it and create another one with the same name` failed, but other runs were good. It seems to be a flaky test. This PR tries to make this test more stable.

Author: Yin Huai <yhuai@databricks.com>

Closes apache#14668 from yhuai/SPARK-16656-branch1.6.
We use reflection to convert `TreeNode` to a JSON string, and currently don't support arbitrary objects. `UserDefinedGenerator` takes a function object, so we should skip the JSON format test for it, or the tests can be flaky, e.g. `DataFrameSuite.simple explode`. This test always fails with Scala 2.10 (branch 1.6 builds with Scala 2.10 by default), but passes with Scala 2.11 (the master branch builds with Scala 2.11 by default).

N/A

Author: Wenchen Fan <wenchen@databricks.com>

Closes apache#14679 from cloud-fan/json.

(cherry picked from commit 928ca1c)
Signed-off-by: Yin Huai <yhuai@databricks.com>
…dBatch'

https://issues.apache.org/jira/browse/SPARK-17038

## What changes were proposed in this pull request?

StreamingSource's lastReceivedBatch_submissionTime, lastReceivedBatch_processingTimeStart, and lastReceivedBatch_processingTimeEnd all use data from lastCompletedBatch instead of lastReceivedBatch. In particular, this makes it impossible to match lastReceivedBatch_records with a batch ID/submission time. This is apparent when looking at StreamingSource.scala, lines 89-94.

## How was this patch tested?

Manually running unit tests on a local laptop.

Author: Xin Ren <iamshrek@126.com>

Closes apache#14681 from keypointt/SPARK-17038.

(cherry picked from commit e6bef7d)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
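A toy model of the bug (class and field names are simplified from the description, not the actual `StreamingSource`): the received-batch gauges must read from `lastReceivedBatch`:

```scala
case class BatchInfo(submissionTime: Long)
case class ListenerState(lastReceivedBatch: Option[BatchInfo],
                         lastCompletedBatch: Option[BatchInfo])

// Previously this read lastCompletedBatch, so the gauge could never be
// matched against lastReceivedBatch_records for the same batch.
def lastReceivedSubmissionTime(s: ListenerState): Long =
  s.lastReceivedBatch.map(_.submissionTime).getOrElse(-1L)
```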
… non-blocking

## What changes were proposed in this pull request?

StandaloneSchedulerBackend.executorRemoved is a blocking call right now. It may cause some deadlock since it's called inside StandaloneAppClient.ClientEndpoint. This PR just changed CoarseGrainedSchedulerBackend.removeExecutor to be non-blocking. It's safe since the only two usages (StandaloneSchedulerBackend and YarnSchedulerEndpoint) don't need the return value.

## How was this patch tested?

Jenkins unit tests.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes apache#14882 from zsxwing/SPARK-17316.
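A self-contained sketch of the blocking-versus-non-blocking contrast using plain futures (a stand-in for the RPC call, not Spark's `RpcEndpointRef`):

```scala
import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global

def removeExecutorReply(): Future[Boolean] = Future { true } // fake RPC

// before (blocking; can deadlock inside an endpoint's message loop):
//   Await.result(removeExecutorReply(), 10.seconds)
// after (non-blocking): handle the reply asynchronously.
removeExecutorReply().foreach(ack => println(s"removeExecutor ack: $ack"))
```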
… with testthat version 1.0.1

Fix warnings and a failure in SparkR test cases with testthat version 1.0.1.

SparkR unit test cases.

Author: Sun Rui <sunrui2016@gmail.com>

Closes apache#12867 from sun-rui/SPARK-15091.

(cherry picked from commit 8b6491f)
Signed-off-by: Shivaram Venkataraman <shivaram@cs.berkeley.edu>
## What changes were proposed in this pull request?

`DataSourceStrategy` does not consider `SQLConf` in `Context` and always matches column names case-sensitively. For instance, `HiveContext` uses a case-insensitive configuration, but it's ignored in `DataSourceStrategy`. This issue was originally registered as SPARK-11301 against 1.6.0 and seemed to be fixed at that time, but Apache Spark 1.6.2 still always handles **partitioned column names** in a case-sensitive way. This is incorrect, as the following shows.

```scala
scala> sql("CREATE TABLE t(a int) PARTITIONED BY (b string) STORED AS PARQUET")
scala> sql("INSERT INTO TABLE t PARTITION(b='P') SELECT * FROM (SELECT 1) t")
scala> sql("INSERT INTO TABLE t PARTITION(b='Q') SELECT * FROM (SELECT 2) t")
scala> sql("SELECT * FROM T WHERE B='P'").show
+---+---+
|  a|  b|
+---+---+
|  1|  P|
|  2|  Q|
+---+---+
```

The result is the same with `set spark.sql.caseSensitive=false`. Here is the result in [Databricks CE](https://databricks-prod-cloudfront.cloud.databricks.com/public/4027ec902e239c93eaaa8714f173bcfc/6660119172909095/3421754458488607/5162191866050912/latest.html).

This PR reads the configuration and handles the column name comparison accordingly.

## How was this patch tested?

Pass the Jenkins test with a modified test.

Author: Dongjoon Hyun <dongjoon@apache.org>

Closes apache#14970 from dongjoon-hyun/SPARK-11301.
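The comparison in spirit: choose the name-equality function from the conf and apply it to partition column names (a minimal sketch, not `DataSourceStrategy`'s exact code):

```scala
// caseSensitive would come from SQLConf in the real code.
def resolver(caseSensitive: Boolean): (String, String) => Boolean =
  if (caseSensitive) {
    (a, b) => a == b
  } else {
    (a, b) => a.equalsIgnoreCase(b)
  }

val eq = resolver(caseSensitive = false)
eq("B", "b") // true: partition columns now match under the default conf
```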
…for TreeNode

This is a backport of PR apache#14915 to branch 1.6.

## What changes were proposed in this pull request?

The class `org.apache.spark.sql.types.Metadata` is widely used in mllib to store some ML attributes. `Metadata` is commonly stored in `Alias` expressions.

```
case class Alias(child: Expression, name: String)(
    val exprId: ExprId = NamedExpression.newExprId,
    val qualifier: Option[String] = None,
    val explicitMetadata: Option[Metadata] = None,
    override val isGenerated: java.lang.Boolean = false)
```

The `Metadata` can take a big memory footprint since the number of attributes is big (on the scale of millions). When `toJSON` is called on an `Alias` expression, the `Metadata` will also be converted to a big JSON string. If a plan contains many such `Alias` expressions, it may trigger an out-of-memory error when `toJSON` is called, since converting all `Metadata` references to JSON will take huge memory.

With this PR, we will skip scanning Metadata when doing JSON conversion. For a reproducer of the OOM, and analysis, please look at jira https://issues.apache.org/jira/browse/SPARK-17356.

## How was this patch tested?

Existing tests.

Author: Sean Zhong <seanzhong@databricks.com>

Closes apache#14973 from clockfly/json_oom_1.6.
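A hedged sketch of the guard (the shape is illustrative; `Metadata` is the real class):

```scala
import org.apache.spark.sql.types.Metadata

// Skip Metadata values during the reflective TreeNode-to-JSON conversion,
// since a single Metadata instance can hold millions of ML attributes.
def jsonSafe(value: Any): Boolean = value match {
  case _: Metadata => false // skipped: would blow up the JSON string
  case _           => true
}
```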
Upgrades the Snappy version to 1.1.2.6 from 1.1.2.4. The release notes (https://github.com/xerial/snappy-java/blob/master/Milestone.md) mention "Fix a bug in SnappyInputStream when reading compressed data that happened to have the same first byte with the stream magic header (alteryx#142)".

Existing unit tests using the latest IBM Java 8 on Intel, Power and Z architectures (little- and big-endian).

Author: Adam Roberts <aroberts@uk.ibm.com>

Closes apache#14958 from a-roberts/master.

(cherry picked from commit 6c08dbf)
Signed-off-by: Sean Owen <sowen@cloudera.com>
## What changes were proposed in this pull request?

Fix the 'ask' type parameter in 'removeExecutor' to eliminate a lot of error logs: `Cannot cast java.lang.Boolean to scala.runtime.Nothing$`.

## How was this patch tested?

Jenkins.

Author: Shixiong Zhu <shixiong@databricks.com>

Closes apache#14983 from zsxwing/SPARK-17316-3.

(cherry picked from commit 175b434)
Signed-off-by: Shixiong Zhu <shixiong@databricks.com>
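A self-contained demonstration of the type-inference trap with a toy generic ask-style API (not Spark's `RpcEndpointRef`):

```scala
import scala.concurrent.Future

// If the caller omits the type argument, T infers to Nothing.
def ask[T](msg: Any): Future[T] = Future.successful(true.asInstanceOf[T])

// ask("removeExecutor") would infer ask[Nothing] and throw
// "Cannot cast java.lang.Boolean to scala.runtime.Nothing$", the exact
// error spamming the logs. Pinning the type parameter fixes it:
val ok = ask[Boolean]("removeExecutor")
```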
…deps

## What changes were proposed in this pull request?

Also update the Hadoop 1 deps file to reflect Snappy 1.1.2.6.

## How was this patch tested?

N/A

Author: Sean Owen <sowen@cloudera.com>

Closes apache#14992 from srowen/SPARK-17378.2.
… retrieve HiveConf

## What changes were proposed in this pull request?

Right now, we rely on Hive's `SessionState.get()` to retrieve the HiveConf used by ClientWrapper. However, this conf is actually the HiveConf set with the `state`. There is a small chance that we are trying to use the Hive client in a new thread while the global client has not been created yet. In this case, `SessionState.get()` will return `null`, which causes an NPE when we call `SessionState.get().getConf`. Since the conf that we want is actually the conf we set to `state`, I am changing the code to just call `state.getConf` (this is also what Spark 2.0 does).

## How was this patch tested?

I have not figured out a good way to reproduce this.

Author: Yin Huai <yhuai@databricks.com>

Closes apache#14816 from yhuai/SPARK-17245.
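A hedged sketch of the change (`SessionState` and `HiveConf` are real Hive classes; the wrapper is illustrative):

```scala
import org.apache.hadoop.hive.conf.HiveConf
import org.apache.hadoop.hive.ql.session.SessionState

class ClientWrapperSketch(state: SessionState) {
  // before: SessionState.get().getConf
  //   SessionState.get() reads a thread-local that may not be set yet on a
  //   new thread, so it can return null and NPE.
  // after: read the conf from the state we created ourselves.
  def conf: HiveConf = state.getConf
}
```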