[SPARK-25942][SQL] Aggregate expressions shouldn't be resolved on AppendColumns

## What changes were proposed in this pull request?

`Dataset.groupByKey` brings in new attributes from the key serializer. If the key type is the same as the original Dataset's object type, the two serializers produce the same output, so the attribute names conflict. In most cases this is harmless, as long as we never refer to the conflicting attributes:

```scala
val ds: Dataset[(ClassData, Long)] = Seq(ClassData("one", 1), ClassData("two", 2)).toDS()
  .map(c => ClassData(c.a, c.b + 1))
  .groupByKey(p => p).count()
```

But if we do refer to a conflicting attribute, the `Analyzer` complains about ambiguous references:

```scala
val ds = Seq(1, 2, 3).toDS()
val agg = ds.groupByKey(_ >= 2).agg(sum("value").as[Long], sum($"value" + 1).as[Long])
```

Two fixes were discussed in apache#22944 (comment):

1. Implicitly add an alias to the key attribute. This works for primitive types, but not for product types: we can't implicitly alias key attributes because methods like `mapGroups` may need to access them by name.
2. Detect conflicts among key attributes and warn users to add an alias manually. This might work, but requires adding hacks to `Analyzer` or `AttributeSeq.resolve`.

This patch applies a simpler fix: aggregate expressions are resolved against `AppendColumns`'s children instead of `AppendColumns` itself. Since `AppendColumns`'s output consists of its children's output plus the serializer output, and aggregate expressions should never reference the serializer output, resolving against the children avoids the ambiguity.

## How was this patch tested?

Added test.

Closes apache#22944 from viirya/dataset_agg.

Authored-by: Liang-Chi Hsieh <viirya@gmail.com>
Signed-off-by: Wenchen Fan <wenchen@databricks.com>
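To illustrate the resolution idea, here is a minimal sketch in plain Scala (no Spark). The `Attribute` case class and `resolve` function are hypothetical stand-ins, not Spark's actual `Analyzer` classes; they only model why name lookup over `AppendColumns`'s full output is ambiguous while lookup over its child's output alone is not:

```scala
// Hypothetical model of attribute resolution; names here are illustrative,
// not Spark internals.
case class Attribute(name: String, id: Int)

// Resolve a column name against a plan's output attributes.
def resolve(name: String, output: Seq[Attribute]): Either[String, Attribute] =
  output.filter(_.name == name) match {
    case Seq(single) => Right(single)
    case Seq()       => Left(s"cannot resolve '$name'")
    case many        => Left(s"ambiguous reference '$name': ${many.size} candidates")
  }

// Child plan output (the original Dataset) and the key serializer output
// happen to use the same name, as described above.
val childOutput      = Seq(Attribute("value", 1))
val serializerOutput = Seq(Attribute("value", 2))

// AppendColumns exposes both, so resolving "value" against it is ambiguous...
val appendColumnsOutput = childOutput ++ serializerOutput
println(resolve("value", appendColumnsOutput)) // ambiguous: 2 candidates

// ...while resolving against the child's output alone is unambiguous,
// which is the essence of the fix.
println(resolve("value", childOutput))
```

The design point is that the serializer output is an implementation detail of `AppendColumns` and should simply never be in scope for aggregate expressions.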