
Add deprecation info check for monitoring exporter password #73742

Closed

Conversation

danhermann
Contributor

The AUTH_PASSWORD setting for monitoring exporters was deprecated in #50919, but a check for it was never added to the deprecation info API. This PR needs to be merged directly to 7.x, as the setting has already been removed from the master branch.
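For reference, the core of such a deprecation info check is scanning the cluster settings for any configured `xpack.monitoring.exporters.<id>.auth.password` keys and reporting each hit as a deprecation issue pointing at the keystore-based `auth.secure_password` replacement. A minimal, self-contained sketch of that scan (plain Java maps stand in for the real Settings/DeprecationIssue plumbing, and the class and method names are illustrative):

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Illustrative only: find flat setting keys matching the deprecated
// xpack.monitoring.exporters.<id>.auth.password setting so they can be surfaced
// by a deprecation info check. The real check works against Elasticsearch's
// Settings and DeprecationIssue classes instead of plain maps.
public class ExporterPasswordCheck {

    private static final Pattern AUTH_PASSWORD =
        Pattern.compile("^xpack\\.monitoring\\.exporters\\.[^.]+\\.auth\\.password$");

    static List<String> deprecatedPasswordSettings(Map<String, String> clusterSettings) {
        return clusterSettings.keySet().stream()
            .filter(key -> AUTH_PASSWORD.matcher(key).matches())
            .sorted()
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        Map<String, String> settings = Map.of(
            "xpack.monitoring.exporters.cloud.auth.username", "monitoring-user",
            "xpack.monitoring.exporters.cloud.auth.password", "changeme");
        System.out.println(deprecatedPasswordSettings(settings)); // one deprecated key reported
    }
}
```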

@danhermann
Contributor Author

@elasticmachine update branch

@danhermann danhermann marked this pull request as ready for review June 4, 2021 13:23
@elasticmachine elasticmachine added the Team:Data Management label (Meta label for data/management team) Jun 4, 2021
@elasticmachine
Collaborator

Pinging @elastic/es-core-features (Team:Core/Features)

@dakrone dakrone self-requested a review June 10, 2021 15:45
Member

@dakrone dakrone left a comment

LGTM, thanks for adding this Dan!

@danhermann
Contributor Author

@elasticmachine update branch

elasticmachine and others added 14 commits June 11, 2021 08:41
This test regularly takes almost `10s` because the 5s default check interval
would cause a long wait until the idle check even runs. This in turn would
cause the busy assert to back off to a long wait, causing this test to generally
hold up the build worker it runs on => shorter check interval.
…#74043)

S3 list, update etc. are consistent now => no need to have these tests around any longer.
) (elastic#74044)

This test broke when the geoip processor created its index before we took the snapshot,
as that caused an unexpected number of shards in the restore. Rather than excluding
global state from the snapshot (the internal index of the plugin is snapshotted as part
of global state in 7.12+, so the index filtering we use doesn't apply to it), I opted to fix this
by making the restore selective instead to keep coverage of the global state.

closes elastic#71763
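For illustration, a "selective restore" along these lines names the indices to restore while still restoring global state, rather than filtering global state out of the snapshot. A sketch using the 7.x high-level REST client (repository, snapshot, and index names are placeholders, not the test's actual values):

```java
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotRequest;
import org.elasticsearch.action.admin.cluster.snapshots.restore.RestoreSnapshotResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

import java.io.IOException;

// Sketch: restore only the indices the test cares about while keeping global state
// in the restore, so coverage of global state is preserved.
public class SelectiveRestoreExample {

    static RestoreSnapshotResponse restoreSelectively(RestHighLevelClient client) throws IOException {
        RestoreSnapshotRequest request = new RestoreSnapshotRequest("test-repo", "test-snapshot");
        request.indices("test-index");        // placeholder index name
        request.includeGlobalState(true);     // still restore global state
        request.waitForCompletion(true);
        return client.snapshot().restore(request, RequestOptions.DEFAULT);
    }
}
```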
…c#74045)

With work to make repo APIs more async incoming in elastic#73570
we need a non-blocking way to run this check. This adds that async
check and removes the need to manually pass executors around as well.
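The usual shape of such a non-blocking check in this codebase is a method that takes an `ActionListener` and runs the work on a supplied executor instead of blocking the caller; a simplified sketch (the `Repository` interface and method names here are placeholders, not the actual API touched by this change):

```java
import org.elasticsearch.action.ActionListener;

import java.util.concurrent.Executor;

// Sketch: turn a blocking check into an async one by completing a listener on an
// executor rather than returning a value on the calling thread.
public class AsyncCheckExample {

    interface Repository {
        boolean isReadOnlyBlocking(); // stand-in for the previously blocking check
    }

    static void checkAsync(Repository repository, Executor executor, ActionListener<Boolean> listener) {
        executor.execute(() -> {
            try {
                listener.onResponse(repository.isReadOnlyBlocking());
            } catch (Exception e) {
                listener.onFailure(e);
            }
        });
    }
}
```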
…lastic#74046)

This method is taking about 4% of CPU time with internal cluster tests
for me. 80% of that was coming from the slow immutability assertion;
the rest was due to the slow way we were building up the new map.
The CPU time slowness likely translates into outright test slowness,
because this was mainly hit through adding transport handlers when starting
nodes (which happens on the main test thread).

Fixed both to save a few % of test runtime.
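The cheap way to build the new map is a single copy plus the added entry, wrapped once; a generic sketch of the idea (not the actual transport handler registration code):

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Sketch: produce an updated unmodifiable map with one copy instead of repeated
// incremental copies or per-call deep immutability assertions.
public final class CopyOnWriteMapExample {

    static <K, V> Map<K, V> withAddedEntry(Map<K, V> source, K key, V value) {
        Map<K, V> copy = new HashMap<>(source);
        copy.put(key, value);
        return Collections.unmodifiableMap(copy);
    }
}
```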
Flattening the logic for parsing `SnapshotInfo` to go field by field like we do for `RepositoryData`,
which is both easier to read and faster (mostly when moving to batching multiple of these blobs into one
and doing on-the-fly filtering in an upcoming PR, where the approach allows for more tricks).
Also simplified/deduplicated parsing of the (mostly/often) empty lists in the deserialization code
and used the new utility in a few more spots as well to avoid allocating empty lists.
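Field-by-field parsing here means walking the `XContentParser` token stream and dispatching on the current field name rather than going through a declarative parser; roughly like the following sketch (the `name`/`state` fields and result class are placeholders, not `SnapshotInfo`'s real fields):

```java
import org.elasticsearch.common.xcontent.XContentParser;

import java.io.IOException;

// Sketch of field-by-field parsing: advance the token stream and switch on the field
// name, skipping anything unknown. Assumes the parser is positioned on START_OBJECT.
public class FieldByFieldParsingExample {

    static final class Snapshot {
        final String name;
        final String state;

        Snapshot(String name, String state) {
            this.name = name;
            this.state = state;
        }
    }

    static Snapshot parse(XContentParser parser) throws IOException {
        String name = null;
        String state = null;
        while (parser.nextToken() != XContentParser.Token.END_OBJECT) {
            String fieldName = parser.currentName();
            parser.nextToken(); // move to the field's value
            switch (fieldName) {
                case "name":
                    name = parser.text();
                    break;
                case "state":
                    state = parser.text();
                    break;
                default:
                    parser.skipChildren(); // no-op for scalars, skips objects/arrays
            }
        }
        return new Snapshot(name, state);
    }
}
```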
Drying up a few spots of code duplication with these tests. Partly to
reduce the size of PR elastic#73952 that makes use of the smoke test infrastructure.
Make the restore path a little easier to follow by splitting it up into
the cluster state update and the steps that happen before the CS update.
Also, document more pieces of it and remove some confusing redundant code.
…3149) (elastic#74050)

We were reading the full file contents up front here because of the complexity
of verifying the footer otherwise. This commit moves the logic for reading metadata
blobs (which can become quite sizable in some cases) to a streaming approach by
manually doing the footer verification, as Lucene's utility methods don't allow for
verification on top of a stream.
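Conceptually, streaming verification means updating a checksum as the blob is consumed and comparing it against the trailing footer at the end, instead of buffering the whole blob first. A generic sketch with a plain CRC32 and an 8-byte checksum footer (Lucene's actual codec footer has more structure, so this is only the streaming idea):

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.ByteBuffer;
import java.util.zip.CRC32;

// Sketch: hand the blob's body to a consumer while updating a running checksum,
// then compare against the expected checksum stored in the last 8 bytes.
public class StreamingFooterCheckExample {

    interface BodyConsumer {
        void accept(byte[] buffer, int length) throws IOException;
    }

    static void readAndVerify(InputStream in, long blobLength, BodyConsumer consumer) throws IOException {
        long remaining = blobLength - Long.BYTES; // everything except the trailing checksum
        CRC32 crc = new CRC32();
        byte[] buffer = new byte[8192];
        while (remaining > 0) {
            int read = in.read(buffer, 0, (int) Math.min(buffer.length, remaining));
            if (read == -1) {
                throw new IOException("unexpected end of stream");
            }
            crc.update(buffer, 0, read);
            consumer.accept(buffer, read);
            remaining -= read;
        }
        byte[] footer = in.readNBytes(Long.BYTES);
        if (footer.length != Long.BYTES) {
            throw new IOException("truncated footer");
        }
        long expected = ByteBuffer.wrap(footer).getLong();
        if (expected != crc.getValue()) {
            throw new IOException("checksum mismatch: expected " + expected + ", got " + crc.getValue());
        }
    }
}
```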
If a `NodeDisconnectedException` happens when sending a ban for a task
then today we log a message at `INFO` or `WARN` indicating that the ban
failed, but we don't indicate why. The message also uses a default
`toString()` for an inner class which is unhelpful.

Ban failures during disconnections are benign and somewhat expected, and
task cancellation respects disconnections anyway (elastic#65443). There's not
much the user can do about these messages either, and they can be
confusing and draw attention away from the real problem.

With this commit we log the failure messages at `DEBUG` on
disconnections, and include the exception details. We also include the
exception message for other kinds of failures, and we fix up a few cases
where a useless default `toString()` implementation was used in log
messages.

Slightly relates elastic#72968 in that these messages tend to obscure a
connectivity issue.
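The pattern here is simply choosing the log level based on the failure type and keeping the exception details; a simplified sketch, not the exact task cancellation code:

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;
import org.elasticsearch.transport.NodeDisconnectedException;

// Sketch: demote the benign, expected case (node disconnected while sending a ban) to
// DEBUG with the full exception, and include the exception message for other failures.
public class BanFailureLoggingExample {

    private static final Logger logger = LogManager.getLogger(BanFailureLoggingExample.class);

    static void onBanFailure(String nodeId, long taskId, Exception e) {
        if (e instanceof NodeDisconnectedException) {
            logger.debug("failed to send ban for task [" + taskId + "] to node [" + nodeId + "]", e);
        } else {
            logger.warn("failed to send ban for task [" + taskId + "] to node [" + nodeId + "]: " + e.getMessage());
        }
    }
}
```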
Today we handle get-aliases requests on a transport thread, but if there
are ludicrous numbers of aliases then this can take substantial time.
There's no need to avoid a context switch for this API, it's not
performance-critical, so this commit moves the work onto a management
thread instead.
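Moving work off the transport thread generally means handing the response-building step to a pooled executor and replying through the listener; a simplified sketch using the management pool name (the surrounding transport action plumbing is omitted):

```java
import org.elasticsearch.action.ActionListener;
import org.elasticsearch.threadpool.ThreadPool;

import java.util.function.Supplier;

// Sketch: build a potentially expensive response (e.g. huge numbers of aliases) on the
// management pool instead of the transport thread, then complete the listener.
public class ForkToManagementThreadExample {

    static <T> void respondOnManagementThread(ThreadPool threadPool,
                                              Supplier<T> buildResponse,
                                              ActionListener<T> listener) {
        threadPool.executor(ThreadPool.Names.MANAGEMENT).execute(() -> {
            try {
                listener.onResponse(buildResponse.get());
            } catch (Exception e) {
                listener.onFailure(e);
            }
        });
    }
}
```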
Removes `o.e.c.network.InetAddressHelper`, exposing the underlying
methods of `NetworkUtils` as public instead.
Backport elastic#73987 to 7.x branch.

Use `is_write_index` instead of `is_write_data_stream` to indicate whether a data stream alias
is a write data stream alias. Although the latter is a more accurate name, the former is what is
used to mark a data stream alias as the write alias in the indices aliases API. The design
of data stream aliases is that they look and behave like any other alias, and
using `is_write_data_stream` would go against this design.

Also, index or indices are accepted overloaded terms that can mean a regular index,
a data stream, or an alias in Elasticsearch APIs. By using `is_write_index`, consumers
of the get aliases API don't need to make changes.

Relates to elastic#66163
original-brownbear and others added 26 commits June 14, 2021 17:02
This test needs to run the restore request using the low-level REST client. My bad for missing this during a back-port;
this was breaking some 6.x versions because they wouldn't understand default parameters in the request that the client adds.
was causing errors downstream. Reverting to unblock while the bug
is fixed.

This reverts commit 6068fc4.
Missing comma in example

Co-authored-by: Ming Liang <42666128+zethsqx@users.noreply.github.com>
…#74079)

Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>

Co-authored-by: Tim Condon <0xTim@users.noreply.github.com>
Co-authored-by: Dan Hermann <danhermann@users.noreply.github.com>
…ic#74032) (elastic#74104)

Upgrades commons-math3 library in analytics plugin and updates all
notice and license files to match commons-math3 3.6.1 distribution.
) (elastic#74105)

The API key authCache is set to expire after write (by default 24 hours).
ExpireAfterWrite is generally preferred over expireAfterAccess because it
guarantees stale entries get evicted eventually in edge cases, e.g. when the
cache misses a notification from the cluster.

However, things are a bit different for the authCache. There is an additional
roundtrip to the security index for fetching the API key document. If the
document does not exist (removed due to expiration) or is invalidated, the
authentication fails earlier on without even consulting the authCache. This
means stale entries won't cause any security issues while they exist.
Therefore, this PR changes the authCache to expire after access, which helps
prevent a potential cyclic surge of expensive hash computations, especially
when a large number of API keys are in use.

To further help cache efficiency, this PR also actively invalidates the
authCache if the document is either not found or invalidated, so it does not
have to wait for 24 hours for that to happen. Note that these are all edge cases
and we don't expect them to happen often (if at all).
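For illustration, the two behaviours described above are the expiry timer the entries are tied to and the active invalidation; a sketch with a Caffeine-style cache (illustrative only, the security module uses its own cache implementation and key/value types):

```java
import com.github.benmanes.caffeine.cache.Cache;
import com.github.benmanes.caffeine.cache.Caffeine;

import java.util.concurrent.TimeUnit;

// Sketch: entries expire 24h after the last access rather than the last write, so hot
// API keys don't all need re-hashing at the same time, and entries backed by a missing
// or invalidated API key document are evicted immediately instead of waiting out the TTL.
public class ApiKeyAuthCacheExample {

    private final Cache<String, char[]> authCache = Caffeine.newBuilder()
        .expireAfterAccess(24, TimeUnit.HOURS)
        .maximumSize(10_000)
        .build();

    void onApiKeyDocMissingOrInvalidated(String apiKeyId) {
        authCache.invalidate(apiKeyId); // active invalidation for the edge cases above
    }
}
```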
)

Adds a new API that allows a user to reset
an anomaly detection job.

To use the API do:

```
POST _ml/anomaly_detectors/<job_id>/_reset
```

The API removes all data associated with the job.
In particular, it deletes model state, results, and stats.

However, job notifications and user annotations are not removed.

Also, the API can be called asynchronously by setting the parameter
`wait_for_completion` to `false` (it defaults to `true`). When run
that way, the API returns the task id for further monitoring.

In order to prevent the job from opening while it is resetting,
a new job field called `blocked` has been added. It is an object
that contains a `reason` and the `task_id`. `reason` can take
a value from ["delete", "reset", "revert"], as all these
operations should block the job from opening. The `task_id` is also
included in order to allow tracking the task if necessary.

Finally, this commit also sets the `blocked` field when
the revert snapshot API is called, as a job should not be opened
while it is being reverted to a different model snapshot.

Backport of elastic#73908
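A hedged usage example for the asynchronous form, going through the low-level Java REST client (the job name, host, and the exact shape of the response body are placeholders):

```java
import org.apache.http.HttpHost;
import org.apache.http.util.EntityUtils;
import org.elasticsearch.client.Request;
import org.elasticsearch.client.Response;
import org.elasticsearch.client.RestClient;

import java.io.IOException;

// Sketch: reset an anomaly detection job without waiting for completion; the response
// then carries a task id that can be tracked via the tasks API.
public class ResetJobExample {

    public static void main(String[] args) throws IOException {
        try (RestClient client = RestClient.builder(new HttpHost("localhost", 9200, "http")).build()) {
            Request request = new Request("POST", "/_ml/anomaly_detectors/my_job/_reset");
            request.addParameter("wait_for_completion", "false");
            Response response = client.performRequest(request);
            System.out.println(EntityUtils.toString(response.getEntity()));
        }
    }
}
```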
Backporting elastic#74065 to 7.x branch.

Removed unused code and made fields immutable.
…tic#73319)

A deprecation warning is now issued if any realm is configured with a name
prefixed with an underscore. This applies to all realms, regardless of
whether they are enabled or not.

Relates: elastic#73250
…ic#74123)

A restore operation is complete when all attempts to recover primary shards have finished, even if unsuccessful.

Closes elastic#70854
Adds a previously missing `decRef()` whose absence was preventing buffer recycling
during recovery. Relates elastic#65921, which introduced the extra retention.
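The ref-counting contract at play: whoever takes the extra reference must release it, typically with try/finally; a generic sketch of the pattern (not the recovery code itself):

```java
import org.elasticsearch.common.util.concurrent.RefCounted;

// Sketch of the incRef/decRef contract: an extra reference taken to keep a buffer alive
// must be paired with a decRef, otherwise the underlying bytes can never be recycled.
public class RefCountingExample {

    static void useTemporarily(RefCounted buffer, Runnable work) {
        buffer.incRef();        // extra retention while the buffer is in use elsewhere
        try {
            work.run();
        } finally {
            buffer.decRef();    // the release that was missing in the bug fixed here
        }
    }
}
```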
…es (elastic#73978) (elastic#74129)

Co-authored-by: James Rodewig <40268737+jrodewig@users.noreply.github.com>

Co-authored-by: Jennie Soria <predogma@users.noreply.github.com>
* Put all service accounts information on one page

* De-emphasize connection with built-in accounts + edits

* Iterate on the docs: tweaks, corrections, and more details.

* fix test

* Edits and minor text changes

Co-authored-by: Yang Wang <yang.wang@elastic.co>
Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>

Mute MlDistributedFailureIT testClusterWithTwoMlNodes_RunsDatafeed_GivenOriginalNodeGoesDown

Co-authored-by: Elastic Machine <elasticmachine@users.noreply.github.com>
…74031) (elastic#74109)

Implement a simple change optimization for histograms using min and max aggregations. The
optimization is not applied if the range cutoff would be too small compared to the overall
range from previous checkpoints: at least 20% of the range must be cut compared to former checkpoints.

fixes elastic#63801
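The 20% rule is plain arithmetic on the changed [min, max] interval versus the range covered by former checkpoints; a small sketch of that decision (the field names and the 0.2 threshold mirror the description above, not the actual transform code):

```java
// Sketch of the cutoff check: only narrow the query range when the changed [min, max]
// interval cuts at least 20% off the range covered by previous checkpoints.
public class ChangeRangeCutoffExample {

    static boolean applyOptimization(double previousMin, double previousMax,
                                     double changedMin, double changedMax) {
        double previousRange = previousMax - previousMin;
        double changedRange = changedMax - changedMin;
        if (previousRange <= 0) {
            return false; // nothing meaningful to compare against
        }
        double cutFraction = 1.0 - (changedRange / previousRange);
        return cutFraction >= 0.2; // at least 20% must be cut off
    }

    public static void main(String[] args) {
        // Previous checkpoints covered [0, 1000]; new changes only span [850, 1000],
        // which cuts 85% of the range, so the optimization applies.
        System.out.println(applyOptimization(0, 1000, 850, 1000)); // true
    }
}
```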
…stic#74137)

* Fix mapping error to indicate values field (elastic#74132)

The error message erroneously mentions the count field being out of order. It should instead mention the values field.
Date-based aggregations accept a timezone, which gets applied to both the bucketing logic and the formatter. This is usually what you want, but in the case of date formats where a timezone doesn't make any sense, it can create problems. In particular, our formatting logic and our parsing logic were doing different things for the epoch_second and epoch_millis formats with time zones. This led to a problem in composite aggregations where we'd return an after key for the last bucket that would parse to a time before the last bucket, so instead of correctly returning an empty response to indicate the end of the aggregation, we'd keep returning the same last page of data.
@danhermann
Contributor Author

Superseded by #74156

@danhermann danhermann closed this Jun 15, 2021
@danhermann danhermann deleted the 7x_deprecation_info_for_auth_password branch June 15, 2021 22:58