
ES|QL: Improve aggregation over constants handling #112392

Open · wants to merge 26 commits into base: main

Conversation

@astefan (Contributor) commented Aug 30, 2024

This change consists of:

  • Add separate rule for dealing with nulls in aggregations
  • Duplicate SubstituteSurrogate in "Operator Optimization" batch
  • Many more tests
  • Add test for median_absolute_deviation function
  • Add mv handling to top function
  • Allow the PropagateEvalFoldables rule to also deal with aggregate functions
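For context, a minimal sketch of the kind of query this PR targets (a hedged example; before this change, such queries could fail with "Does not support yet aggregations over constants"):

```esql
FROM employees
| STATS s = SUM(1), a = AVG(null), m = MEDIAN_ABSOLUTE_DEVIATION(2 + 3)
```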

Addresses part of #100634. Missing bits:

Fixes #110257
Fixes #104430
Fixes #100170

Needs more tests for the new rule and the existing ones in LogicalPlanOptimizerTests.

null |null |null
;

########### failing :-( with InvalidArgumentException: Does not support yet aggregations over constants
Contributor Author:

This will be removed once we have mv_ function for st_centroid_agg.

@@ -197,11 +202,18 @@ public AggregatorFunctionSupplier supplier(List<Integer> inputChannels) {
public Expression surrogate() {
Contributor Author:

The surrogate method, as it stands now, is more a "surrogate-expression-for-foldable-scenario" kind of method. This implies that the behavior that existed below before this change is not possible anymore.

Member:

Right - we should probably look into introducing a different interface altogether: surrogate was initially used for expressions that knew they'd be transformed.
But it evolved into a mechanism for "folding", though not to a value but to another expression (which itself might be foldable or not).
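To make this concrete, a hypothetical sketch of the surrogate pattern being described (class and constructor names are assumptions; the sum/count and mv_avg rewrites are the ones visible in this PR's optimizer tests):

```java
// Hypothetical sketch: AVG "folds" into another expression, not into a value.
@Override
public Expression surrogate() {
    var s = source();
    var f = field();
    return f.foldable()
        ? new MvAvg(s, f)                             // constant field: scalar surrogate
        : new Div(s, new Sum(s, f), new Count(s, f)); // otherwise: sum/count
}
```

This sketch is not standalone-runnable; it references the plugin's internal Expression types.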

Contributor Author:

I'll leave this one for a follow up I think.

* All aggregate functions that are also nullable (COUNT_DISTINCT and COUNT are exceptions) will get a NULL
* field replacement by the FoldNull rule; COUNT_DISTINCT will benefit from PropagateEvalFoldables.
*/
public final class ReplaceAggregatesWithNull extends OptimizerRules.OptimizerRule<Aggregate> {
Contributor Author:

This rule is a simplified variant of SubstituteSurrogates.

boolean changed = false;
boolean hasSurrogates = false;
Contributor Author:

I've done this to short-circuit the rule's execution earlier.

@@ -5091,7 +5091,7 @@ public void testNullFoldingDoesNotApplyOnAggregate() throws Exception {
CountDistinct countd = new CountDistinct(EMPTY, getFieldAttribute("a"), getFieldAttribute("a"));
assertEquals(countd, rule.rule(countd));
countd = new CountDistinct(EMPTY, NULL, NULL);
assertEquals(new Literal(EMPTY, null, LONG), rule.rule(countd));
assertEquals(countd, rule.rule(countd));
Contributor Author:

This is the consequence of CountDistinct not being nullable anymore.

@costin (Member) left a comment:

LGTM - great tests and comments!

@@ -151,6 +152,11 @@ public DataType dataType() {
return DataType.LONG;
}

@Override
public Nullability nullable() {
return Nullability.FALSE;
Member:

👍



@Override
public Expression surrogate() {
return field().foldable() ? field() : null;
Contributor:

Values not only merges values, but also removes duplicates (If no test was triggered because of this, we should add some!)

ROW x = [1, 1, 2] | STATS a = VALUES(x)

-> [1, 2]

Contributor Author:

Good catch, BUT I think we have a problem with the documentation: it doesn't mention this aspect. There were other misses in our function docs (which are fixed in this PR); I think we need to review our documentation on functions and double-check its correctness and completeness. I will create an issue.

Contributor Author:

@ivancea thank you for pro-actively checking this PR 🙏, that was very helpful.
I've created two issues:

@alex-spies (Contributor) left a comment:

This is great @astefan! I think this change is sound; I added mostly minor remarks.

My only major remark: I think we need LogicalPlanOptimizerTests cases that prove that the foldable propagation actually takes place. The csv tests are great, but they only show that the result is correct, not that the propagation happens.

But you already mentioned more optimizer tests as one of the tasks to un-draft :)

Contributor:

Out of scope: stats.csv-spec has a bunch of ...OfConst tests that overlap a lot with the tests here, except that they normally start with from employees. Because these test stats more than row, maybe we should move test cases like row ... | stats ... from here to stats.csv-spec in a follow-up PR.

} else if (p instanceof Aggregate agg) {
List<NamedExpression> newAggs = new ArrayList<>(agg.aggregates().size());
agg.aggregates().forEach(e -> {
if (Alias.unwrap(e) instanceof AggregateFunction) {
Contributor:

This looks like it cannot propagate into the groups, as in

... | eval x = [1,2,3] | stats sum(field) by x

right? Maybe it's worth adding a comment.

That's another thing we could optimize though if needed, as I think STATS ... BY const is the same as STATS ... | eval x = mv_values(const) | mv_expand x. Not sure that's worth maintaining an optimization rule for, though.
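To make the claimed equivalence concrete (a hedged sketch; the mv_values call mirrors the comment's own formulation and is not verified here):

```esql
FROM employees | STATS s = SUM(salary) BY x = [1, 2, 3]

// claimed equivalent:
FROM employees | STATS s = SUM(salary) | EVAL x = MV_VALUES([1, 2, 3]) | MV_EXPAND x
```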

Contributor Author:

Yeah, it's not covered; unintentionally, I just didn't think about this use case.
Will leave it for a follow-up, though. There are many things going on in this PR.

Comment on lines 91 to 102
} else {
// All aggs actually have been optimized away
// \_Aggregate[[],[AVG([NULL][NULL]) AS s]]
// Replace by a local relation with one row, followed by an eval, e.g.
// \_Eval[[MVAVG([NULL][NULL]) AS s]]
// \_LocalRelation[[{e}#21],[ConstantNullBlock[positions=1]]]
plan = new LocalRelation(
source,
List.of(new EmptyAttribute(source)),
LocalSupplier.of(new Block[] { BlockUtils.constantBlock(PlannerUtils.NON_BREAKING_BLOCK_FACTORY, null, 1) })
);
}
Contributor:

The code in lines 86-106 also happens in SubstituteSurrogates, and kinda-sorta also in ReplaceStatsAggExpressionWithEval. I opened #110345 but maybe, instead of reducing the number of opt. rules, we should just refactor the code path that moves expressions out of aggregates and into evals. We could start here and make sure the code is the same as in SubstituteSurrogates.

@astefan astefan marked this pull request as ready for review October 3, 2024 09:53
@elasticsearchmachine elasticsearchmachine added the needs:triage Requires assignment of a team area label label Oct 3, 2024
@astefan astefan added >bug auto-backport-and-merge and removed needs:triage Requires assignment of a team area label labels Oct 3, 2024
@ivancea (Contributor) left a comment:

LGTM!

FROM airports
| eval z = TO_GEOPOINT(null)
| STATS centroidNull = ST_CENTROID_AGG(null),
centroidExpNull = ST_CENTROID_AGG(TO_GEOPOINT(null::string)),
Contributor:

This one is converted to null, which I suppose is not expected from a user perspective. Is this something to be fixed later? Is it related to #108215?
Maybe it should have a comment or something here.

Contributor:

++ The whole result here looks inconsistent.

I expected null since this is consistent with e.g. SUM and AVG over 0 rows/rows with only null values.

Currently, running the same aggregation on an empty index returns POINT (NaN NaN), which itself is inconsistent with the fact that we shouldn't have NaN values in our results - but maybe this is consistent with geospatial standards?

In any case, the 3 results should be the same. But it's fine to fix this in a follow-up issue. Let's figure out what the result should be and throw this inconsistency into an issue (new or existing).

@craigtaverner , is POINT(NaN NaN) really the result we need to return?

@alex-spies (Contributor) commented Oct 15, 2024:

Per #106025, returning null is correct! Thanks @craigtaverner for pointing me to this issue!

Contributor:

There's a median_absolute_deviation.csv-spec

p50 = 40+10
| stats percentile(value200, p90),
percentile(value200, p100),
percentile(value200, p3_null),
Contributor:

Shouldn't percentile(x, null) be invalid? Similarly for count_distinct(x, null).

Contributor Author:

That's a good question. Here's the paradox: the percentile value (the second argument) must be a number between 0 and 100. If it's 1000, we issue a warning and treat the result of the agg as null; by the principle that we issue warnings and return nulls when things "don't work", I think we should treat percentile as null if the second parameter is null.

Yes, it's not a value between 0 and 100, and it's not a missing value; it's an "unknown" value, a null, which is a bit different from either a missing value or a number between 0 and 100. In main this query fails with an ugly exception.

There are other problems or, better said, inconsistencies: weighted_avg specifically forbids null at data type validation time (weighted_avg(x, null)), but it doesn't do so when the value received is the result of an evaluation (eval n = null + 2 | stats weighted_avg(x, n)). And while weighted_avg does check for null and forbids it explicitly, percentile doesn't even do that.
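The weighted_avg inconsistency can be sketched as two hedged queries (the exact error/warning text is not shown here):

```esql
ROW x = 1.0 | STATS w = WEIGHTED_AVG(x, null)
// rejected at data type validation time

ROW x = 1.0 | EVAL n = null + 2 | STATS w = WEIGHTED_AVG(x, n)
// the null weight arrives via evaluation and slips past that validation
```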

Contributor Author:

I don't have an answer for you to this question right now. There should be a generic approach to dealing with aggregations' arguments that accept only constants: top, percentile, weighted_avg.

Member:

The behavior is inconsistent and needs to be fixed, though not in this PR.
If the arguments have to be literals, then throwing an exception (invalid query) would be the way to go.
If not, then folding + warning is the way to go.

For the former, the function implementation would have to be improved to rely on the Validatable interface. Another way would be marking the field as foldable/not-null and doing this in the planner, as that would work across all functions.

Contributor:

We don't have to hash this out now, but maybe it'd be safer to start with a validation exception now - we can still go back and return null later, while the other way around could be considered a breaking change, albeit in a very edge case scenario.

Contributor:

How about some tests where the limit and/or the "asc"/"desc" is propagated? And, similarly, tests for weighted_avg where the weights are propagated?

Contributor Author:

I've tried that, but the situation is problematic because of the "surrogate" expressions abuse. Right now the validation of these constant arguments in percentile and weighted_avg is done in resolveType, which is not completely correct. It should be done in validate, after any constant values are propagated, folded and replaced.

If we do this in validate, then in some cases the validation doesn't happen at all because weighted_avg disappears, being replaced by one of the formulas in surrogate. For example, with ROW values=[1,2,3,4,5] | stats weighted_avg(values, 0), if we do the validation properly in validate, that method won't even be called, because by the time it runs there is no WeightedAvg instance anymore (it's getting replaced with MvAvg(s, field)). Unless we do something about aggregation foldability and surrogate expressions we are in a deadlock: we fix something but break something else.

;

percentileOfNullsOnRealIndex
from employees | eval x = null::integer, y = null, z = 123 - null | stats percIntNull = percentile(y, 90), percNull = percentile(y, 90), percentile(null, 90), percentile(null+2, 90), percentile(z, 90) by languages | sort languages desc;
Contributor:

I think the test fix got lost :) It's only reformatted in the current version, but the definitions of percIntNull and percNull are still the same.

@alex-spies (Contributor) left a comment:

I have a small round of comments - plan to finalize the re-review tomorrow :)

@alex-spies alex-spies self-requested a review October 14, 2024 15:08
@alex-spies (Contributor) left a comment:

I think this is a very good PR that we should go forward with, after resolving whatever conflicts may arise from the STATS ... WHERE ... support that was merged today.

The only thing I think this is really missing is optimizer tests that demonstrate that foldable propagation (of non-null expressions) actually takes place. And we should double-check whether we really want percentile(x, null) and count_distinct(x, null) to just be null instead of an invalid query, as that's a decision we won't be able to take back without it becoming a breaking change.

Other than that, my remarks are mostly minor.


from employees | eval x = 82+null | stats count_distinct(salary, x*100), count_distinct(salary, null), count_distinct(salary, null + 1);

count_distinct(salary, x*100):long|count_distinct(salary, null):long|count_distinct(salary, null + 1):long
100 |100 |100
Contributor:

Are we sure we want null precision to be interpreted as default precision? That could hide mistakes in the query.

Contributor Author:

I am not so sure :-), now that you mention this aspect. Maybe we want null here, but at the same time it DOES make sense for count_distinct and count to be not nullable; counting something should always be something concrete.

If null is not the default precision and we cannot return null as the return value of count_distinct, should we error out then? (like weighted_avg and percentile)

@bpintea (Contributor) commented Oct 16, 2024:

counting something should always be something concrete.

COUNT(null) and COUNT_DISTINCT(null, ...) do return 0 now.
Maybe not really comparable, but OTOH MV_COUNT(null) returns null.

However, to cast a vote, COUNT_DISTINCT(..., null) feels like it should return null for the same reason why 1 + null (or some random SUBSTRING(..., 1, null)) should: the operation is applied on a missing / unknown value.
Edit: ... i.e. different from a COUNT_DISTINCT(null, 5), where it's known there's nothing to count and a concrete 0 already makes sense. Not a strong argument, though.
But then also COUNT_DISTINCT(null, null) could return 0.

Member:

The count functions are special in that they are not nullable - they always return a value. If anything, MV_COUNT should be aligned with them.
It's up to the function to decide if null is treated as "default value" or invalid. Since there's no COUNT_DISTINCT(field), I'd opt for null meaning default.

Contributor:

I'd be in favor of COUNT_DISTINCT(field, null) resulting in an invalid query exception via the Validatable interface - but I'm fine with default precision, as long as we document this clearly :)

if (newAggs.isEmpty() == false) {
plan = new Aggregate(source, aggregate.child(), aggregate.aggregateType(), aggregate.groupings(), newAggs);
if (remainingAggregates.isEmpty() == false) {// build the new Aggregate with the rest (non-null) aggregates
plan = new Aggregate(source, aggregate.child(), aggregate.aggregateType(), aggregate.groupings(), remainingAggregates);
} else {
// All aggs actually have been optimized away
Contributor:

For the LocalRelation substitution here to be correct, we mustn't have a BY clause - otherwise, the number of rows still depends on the number of groups.

Normally, if there's a BY clause, this means there'll also be a corresponding aggregate, in which case remainingAggregates will not be empty.

But! Other optimization rules can (and do!) optimize away the grouping from the aggregates if it's not used downstream.

This rule seems unaffected because I tried

FROM test | stats x = avg(null) by b | drop b

FROM test | eval i = null | stats x = avg(i) by b | drop b

and it produced correct results.

But I think we should

  • throw IAE if we end up here and the aggregate has a grouping, and
  • add a test for good measure, e.g. the queries I wrote above.
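The suggested guard could look roughly like this (a hypothetical sketch against the rule's code shown in the diff above; exact names and the error type are assumptions):

```java
} else {
    // All aggs have been optimized away. A single-row LocalRelation is only
    // correct when there is no BY clause: groupings multiply the output rows.
    if (aggregate.groupings().isEmpty() == false) {
        throw new IllegalArgumentException(
            "cannot replace a grouped aggregate with a single-row local relation"
        );
    }
    plan = new LocalRelation(...); // as before
}
```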

Contributor Author:

I am not following and I need more details. remainingAggregates being empty means exactly that: the BY part is empty, it was folded to null by this rule. If BY is reduced to null, this means only one row in the results.

Contributor:

Actually, I found a reproducer on your branch:

ROW field = null::integer, otherfield = [1,2] | STATS min(field) by otherfield | DROP otherfield

  min(field)   
---------------
null 

This should instead be

  min(field)   
---------------
null 
null

(as it is on main) because there are 2 values for otherfield and thus 2 groups, resp. 2 output rows from the STATS.

It's insufficient to look at the aggregates. We need to check aggregate.groupings() instead, as dropping the groupings after the stats lets us optimize them away from the aggregates.

The removal of the groupings from the aggregates happens in CombineProjections, like in the example above:

[2024-10-17T11:25:35,813][INFO ][o.e.x.e.o.LogicalPlanOptimizer] [runTask-0] Rule logical.CombineProjections applied
Limit[1000[INTEGER]]                                                                                     = Limit[1000[INTEGER]]
\_EsqlProject[[min(field){r}#7]]                                                                         ! \_Aggregate[STANDARD,[otherfield{r}#4],[MIN(field{r}#2,true[BOOLEAN]) AS min(field)]]
  \_Aggregate[STANDARD,[otherfield{r}#4],[MIN(field{r}#2,true[BOOLEAN]) AS min(field), otherfield{r}#4]] !   \_Row[[TOINTEGER(null[NULL]) AS field, [1, 2][INTEGER] AS otherfield]]
    \_Row[[TOINTEGER(null[NULL]) AS field, [1, 2][INTEGER] AS otherfield]]                               ! 

Contributor Author:

I don't think CombineProjections is to blame; it is doing the right thing. This can be seen with queries that have their projections correctly pruned away.

The "workaround" of calling SubstituteSurrogates again in the "Operator Optimization" batch is to blame here, and this shows the vicious cycle of fixing something and breaking something else that I mentioned before. Foldability and surrogate expressions represent a faulty principle, and when things stop making sense and the code "struggles" to do the right thing (by using workarounds like calling SubstituteSurrogates twice), it is clear that something more fundamental is not right.

At this point there are two options available:

  • create an issue for this and merge the PR for the benefit of the tests it adds. This, though, introduces a bug in an edge-case situation (the one that we know of; there may be others that we don't)
  • try to make SubstituteSurrogates not do its thing for aggregations that have a grouping which is not kept as a projection anymore. This is clearly a workaround that tries to ignore what CombineProjections did. I am trying this second idea now.

Comment on lines 5570 to 5571
from test
| stats x = percentile(languages, languages) by emp_no
Contributor:

suggestion: we could also test

from test
| eval l = languages
| stats x = percentile(l, l) by emp_no

And the same with rename instead of eval.

I just checked manually and that seems to work fine :)

from test
| eval x = null + 1, y = salary / 1000
| limit 5
| stats s = avg(y) by x
Contributor:

I think we could add optimizer tests with varying (propagatable/literal) nulls:

  • multiple agg functions, only some of which receive null (after propagation or literally) the others receive non-foldables
  • multiple agg functions, some of which receive null, the others receive foldable consts (no agg functions with non-foldables) - with and without a BY clause
  • multiple agg functions, all of which receive null (after propagation or literally) and there is no BY clause

That's because we hit a different code path than before - e.g. it's not SubstituteSurrogates that takes care of a STATS that's been optimized away, but ReplaceAggregatesWithNull. And it's also good to see that SubstituteSurrogates and ReplaceAggregatesWithNull work well together. And that ReplaceAggregatesWithNull correctly leaves agg functions in place that don't receive null (although that's already kinda tested by using a non-foldable BY clause)

Comment on lines +5763 to +5764
* \_Eval[[$$COUNT$$$SUM$s$0$0{r$}#24 * 10[INTEGER] AS $$SUM$s$0, $$SUM$s$0{r$}#22 / $$COUNT$s$1{r$}#23 AS s]]
* \_Aggregate[STANDARD,[y{r}#6],[COUNT([*][KEYWORD]) AS $$COUNT$$$SUM$s$0$0, COUNT(10[INTEGER]) AS $$COUNT$s$1, y{r}#6]]
Contributor:

We have a problem here: we attempt the surrogate substitution before propagating foldables - this replaces the avg by sum/count instead of simply mv_avg(9+1). If we propagate first, the plan should be identical to that in testAvg_Of_Foldable_NonNull below.

I think this corroborates the observation that substitutions like avg -> sum/count should be separated from the const substitutions avg(const) -> mv_avg(const).

Let's add a comment that we want/could improve this - because the test doesn't document the "should be" state, but the current "is" state.
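A hedged sketch of that "is" vs "should be" contrast (the rewrites are the ones named in the test plan comments above):

```esql
FROM test | EVAL x = 9 + 1 | STATS s = AVG(x)
// "is":        surrogate substitution runs first, yielding SUM(x) / COUNT(x)
// "should be": propagate x = 10 first, yielding simply MV_AVG(10)
```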

from test
| eval x = 9 + 1, y = salary / 1000
| limit 5
| stats s = avg(x) by y
@alex-spies (Contributor) commented Oct 15, 2024:

I think we should add a test where foldable propagation actually succeeds. E.g. with

stats s = sum(x) by y

it should work - and another case each for propagation into the second argument of percentile and count_distinct is also important.

Contributor:

weighted_avg and top are special in that they can take 2(+) arguments which, in case of weighted_avg, can even be non-foldable. I think we should add tests that show foldable propagation and propagation of nulls into weighted_avg and top as well.

@alex-spies alex-spies requested review from alex-spies and removed request for alex-spies November 8, 2024 14:13
@alex-spies (Contributor) left a comment:

Heya, just wanted to pick this up again and summarize what I think we need to do:

  • Per the discussion with Costin, percentile(field, null) is still not well defined (null vs invalid query) - okay to hash this out in a follow-up but maybe invalidating for now is safer w.r.t. bwc.
  • ST_CENTROID_AGG(null) should return null instead of Point(NaN NaN).
  • Some additional test cases won't hurt.
  • Fixing this edge case where all agg functions are optimized away

Consider this unblocked from my side as my main reason for requesting changes was the discussion about percentile(field, null) and similar cases. We could solve some problems in follow-up PRs as well, as I think the general approach here works :)


Labels
:Analytics/ES|QL AKA ESQL auto-backport Automatically create backport pull requests when merged >bug Team:Analytics Meta label for analytical engine team (ESQL/Aggs/Geo) v8.19.0 v9.1.0
8 participants