
Add a maximum search request size. #26423


Closed

Conversation

jpountz
Contributor

@jpountz jpountz commented Aug 29, 2017

This commit adds the `http.search.max_content_length` setting as a safeguard against overly large search requests. It applies to the search, msearch, and template search APIs.
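
For reference, a minimal sketch of how the proposed setting might look in `elasticsearch.yml`; the key name comes from the PR description, and the 1mb value is the default mentioned in the discussion below:

```yaml
# elasticsearch.yml (sketch): cap search request bodies independently of
# the global HTTP body limit. Key name from this PR; the 1mb default is
# taken from the discussion below.
http.search.max_content_length: 1mb
```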
@jpountz jpountz added :Search/Search Search-related issues that do not fall into other categories >feature v6.1.0 v7.0.0 labels Aug 29, 2017
@jasontedor
Member

What is the user's expected outcome if http.search.max_content_length is more than http.max_content_length and a search request that is larger than the latter but smaller than the former is sent? Right now we will fail, and to me this violates the principle of least astonishment. It seems this setting only helps when http.search.max_content_length is less than http.max_content_length. While this is the case by default (1 MB versus 100 MB), I still wonder whether this setting is the right path forward, given the potential for confusion. Maybe this setting is targeting large terms queries? If so, should we focus on those only?
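
To make the surprising case concrete, here is a hypothetical configuration where the proposed search limit exceeds the global HTTP limit; a 150mb search body would satisfy the former but still be rejected by the latter, which is applied first:

```yaml
# Hypothetical elasticsearch.yml illustrating the astonishment case:
http.max_content_length: 100mb         # global HTTP body limit (default 100mb)
http.search.max_content_length: 200mb  # proposed setting, raised above the global limit
# A 150mb search request passes the search-specific check, yet still
# fails the global HTTP size check, so raising the new setting alone
# does not have the effect a user might expect.
```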

@jpountz
Contributor Author

jpountz commented Sep 25, 2017

> Maybe this setting is targeting large terms queries? If so, should we focus on those only?

I like that such a setting would catch more abuse cases than focusing solely on the terms query: for instance, large percolated documents, match queries on very long text, overly complex geo-shapes, long lists of stored fields (not bounded by the number of mapped fields, since we simply ignore unmapped fields), very large include lists on terms aggregations, and so on, all of which are bad practice if they end up making the request that large.

Would you be ok with fixing the potential confusion you mentioned with documentation?

@lcawl lcawl added v6.2.0 and removed v6.1.0 labels Dec 12, 2017
@colings86 colings86 added v6.3.0 and removed v6.2.0 labels Jan 22, 2018
@talevy
Contributor

talevy commented Mar 19, 2018

@jpountz do you plan on coming back to this?

@polyfractal
Contributor

We chatted about this in FixitFriday and were uneasy with how broad, yet not especially targeted, this soft limit is. E.g., large search requests are often bad, but you can definitely have a large request that is perfectly reasonable (long field names + many simple aggregations + long agg names, etc.).

Jason's point about the confusing overlap with the request body size limit is a concern too.

We decided it would be better to tackle bad queries through more specific mechanisms (circuit breakers, better validation of queries after parsing, etc.).

Decided to close for now.
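
For context, one concrete example of the "more specific mechanisms" mentioned above is Elasticsearch's existing request circuit breaker, which caps the memory a single request may consume rather than its raw byte size. A minimal sketch; the 40% value is an arbitrary illustration (the default is 60% of the JVM heap):

```yaml
# elasticsearch.yml (sketch): the request circuit breaker limits the memory
# a single request may use, a more targeted safeguard than a byte-size cap
# on the request body.
indices.breaker.request.limit: 40%
```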

Labels: >feature, resiliency, :Search/Search (Search-related issues that do not fall into other categories), v6.4.1, v7.0.0-beta1