diff --git a/FAQ.md b/FAQ.md
deleted file mode 100644
index d5f038d42a90..000000000000
--- a/FAQ.md
+++ /dev/null
@@ -1,23 +0,0 @@
-# Frequently asked questions
-
-**Q:** I'm getting `bin/node/bin/node: not found` but I can see the node binary in the package?
-**A:** OpenSearch Dashboards 4 packages are architecture specific. Ensure you are using the correct package for your architecture.
-
-**Q:** Where do I go for support?
-**A:** Please join us at [discuss.elastic.co](https://discuss.elastic.co) with questions. Your problem might be a bug, but it might just be a misunderstanding, or a feature we could improve. We're also available on Freenode in #opensearch-dashboards
-
-**Q:** Ok, we talked about it and it's definitely a bug
-**A:** Doh, ok, let's get that fixed. File an issue on [github.com/opensearch-project/opensearch-dashboards](https://github.com/opensearch-project/opensearch-dashboards). I'd recommend reading the beginning of the CONTRIBUTING.md, just so you know how we'll handle the issue.
-
-### OpenSearch Dashboards 3 Migration
-**Q:** Where is feature X that I loved from OpenSearch Dashboards 3?
-**A:** It might be coming! We’ve published our immediate roadmap as tickets. Check out the beta milestones on GitHub to see if the feature you’re missing is coming soon.
-
-**Q:** Is the dashboard schema compatible?
-**A:** Unfortunately, it is not compatible. In order to create the new features we wanted, it simply was not possible to keep the same schema. Aggregations work fundamentally differently from facets, the new dashboard isn’t tied to rows and columns, and the relationships between searches, visualizations and the dashboard are complex enough that we simply had to design something more flexible.
-
-**Q:** How do I execute a multi-query?
-**A:** The ‘filters’ aggregation will allow you to input multiple queries and compare them visually. You can even use Elasticsearch JSON in there!
-
-**Q:** What happened to templated/scripted dashboards?
-**A:** Check out the URL. The state of each app is stored there, including any filters, queries or columns. This should be a lot easier than constructing scripted dashboards. The encoding of the URL is RISON.
diff --git a/licenses/ELASTIC-LICENSE.txt b/licenses/ELASTIC-LICENSE.txt
deleted file mode 100644
index 7376ffc3ff10..000000000000
--- a/licenses/ELASTIC-LICENSE.txt
+++ /dev/null
@@ -1,223 +0,0 @@
-ELASTIC LICENSE AGREEMENT
-
-PLEASE READ CAREFULLY THIS ELASTIC LICENSE AGREEMENT (THIS "AGREEMENT"), WHICH
-CONSTITUTES A LEGALLY BINDING AGREEMENT AND GOVERNS ALL OF YOUR USE OF ALL OF
-THE ELASTIC SOFTWARE WITH WHICH THIS AGREEMENT IS INCLUDED ("ELASTIC SOFTWARE")
-THAT IS PROVIDED IN OBJECT CODE FORMAT, AND, IN ACCORDANCE WITH SECTION 2 BELOW,
-CERTAIN OF THE ELASTIC SOFTWARE THAT IS PROVIDED IN SOURCE CODE FORMAT. BY
-INSTALLING OR USING ANY OF THE ELASTIC SOFTWARE GOVERNED BY THIS AGREEMENT, YOU
-ARE ASSENTING TO THE TERMS AND CONDITIONS OF THIS AGREEMENT. IF YOU DO NOT AGREE
-WITH SUCH TERMS AND CONDITIONS, YOU MAY NOT INSTALL OR USE THE ELASTIC SOFTWARE
-GOVERNED BY THIS AGREEMENT. IF YOU ARE INSTALLING OR USING THE SOFTWARE ON
-BEHALF OF A LEGAL ENTITY, YOU REPRESENT AND WARRANT THAT YOU HAVE THE ACTUAL
-AUTHORITY TO AGREE TO THE TERMS AND CONDITIONS OF THIS AGREEMENT ON BEHALF OF
-SUCH ENTITY.
-
-Posted Date: April 20, 2018
-
-This Agreement is entered into by and between Elasticsearch BV ("Elastic") and
-You, or the legal entity on behalf of whom You are acting (as applicable,
-"You").
-
-1. OBJECT CODE END USER LICENSES, RESTRICTIONS AND THIRD PARTY OPEN SOURCE
-SOFTWARE
-
- 1.1 Object Code End User License. Subject to the terms and conditions of
- Section 1.2 of this Agreement, Elastic hereby grants to You, AT NO CHARGE and
- for so long as you are not in breach of any provision of this Agreement, a
- License to the Basic Features and Functions of the Elastic Software.
-
- 1.2 Reservation of Rights; Restrictions. As between Elastic and You, Elastic
- and its licensors own all right, title and interest in and to the Elastic
- Software, and except as expressly set forth in Sections 1.1, and 2.1 of this
- Agreement, no other license to the Elastic Software is granted to You under
- this Agreement, by implication, estoppel or otherwise. You agree not to: (i)
- reverse engineer or decompile, decrypt, disassemble or otherwise reduce any
- Elastic Software provided to You in Object Code, or any portion thereof, to
- Source Code, except and only to the extent any such restriction is prohibited
- by applicable law, (ii) except as expressly permitted in this Agreement,
- prepare derivative works from, modify, copy or use the Elastic Software Object
- Code or the Commercial Software Source Code in any manner; (iii) except as
- expressly permitted in Section 1.1 above, transfer, sell, rent, lease,
- distribute, sublicense, loan or otherwise transfer, Elastic Software Object
- Code, in whole or in part, to any third party; (iv) use Elastic Software
- Object Code for providing time-sharing services, any software-as-a-service,
- service bureau services or as part of an application services provider or
- other service offering (collectively, "SaaS Offering") where obtaining access
- to the Elastic Software or the features and functions of the Elastic Software
- is a primary reason or substantial motivation for users of the SaaS Offering
- to access and/or use the SaaS Offering ("Prohibited SaaS Offering"); (v)
- circumvent the limitations on use of Elastic Software provided to You in
- Object Code format that are imposed or preserved by any License Key, or (vi)
- alter or remove any Marks and Notices in the Elastic Software. If You have any
- question as to whether a specific SaaS Offering constitutes a Prohibited SaaS
- Offering, or are interested in obtaining Elastic's permission to engage in
- commercial or non-commercial distribution of the Elastic Software, please
- contact elastic_license@elastic.co.
-
- 1.3 Third Party Open Source Software. The Commercial Software may contain or
- be provided with third party open source libraries, components, utilities and
- other open source software (collectively, "Open Source Software"), which Open
- Source Software may have applicable license terms as identified on a website
- designated by Elastic. Notwithstanding anything to the contrary herein, use of
- the Open Source Software shall be subject to the license terms and conditions
- applicable to such Open Source Software, to the extent required by the
- applicable licensor (which terms shall not restrict the license rights granted
- to You hereunder, but may contain additional rights). To the extent any
- condition of this Agreement conflicts with any license to the Open Source
- Software, the Open Source Software license will govern with respect to such
- Open Source Software only. Elastic may also separately provide you with
- certain open source software that is licensed by Elastic. Your use of such
- Elastic open source software will not be governed by this Agreement, but by
- the applicable open source license terms.
-
-2. COMMERCIAL SOFTWARE SOURCE CODE
-
- 2.1 Limited License. Subject to the terms and conditions of Section 2.2 of
- this Agreement, Elastic hereby grants to You, AT NO CHARGE and for so long as
- you are not in breach of any provision of this Agreement, a limited,
- non-exclusive, non-transferable, fully paid up royalty free right and license
- to the Commercial Software in Source Code format, without the right to grant
- or authorize sublicenses, to prepare Derivative Works of the Commercial
- Software, provided You (i) do not hack the licensing mechanism, or otherwise
- circumvent the intended limitations on the use of Elastic Software to enable
- features other than Basic Features and Functions or those features You are
- entitled to as part of a Subscription, and (ii) use the resulting object code
- only for reasonable testing purposes.
-
- 2.2 Restrictions. Nothing in Section 2.1 grants You the right to (i) use the
- Commercial Software Source Code other than in accordance with Section 2.1
- above, (ii) use a Derivative Work of the Commercial Software outside of a
- Non-production Environment, in any production capacity, on a temporary or
- permanent basis, or (iii) transfer, sell, rent, lease, distribute, sublicense,
- loan or otherwise make available the Commercial Software Source Code, in whole
- or in part, to any third party. Notwithstanding the foregoing, You may
- maintain a copy of the repository in which the Source Code of the Commercial
- Software resides and that copy may be publicly accessible, provided that you
- include this Agreement with Your copy of the repository.
-
-3. TERMINATION
-
- 3.1 Termination. This Agreement will automatically terminate, whether or not
- You receive notice of such Termination from Elastic, if You breach any of its
- provisions.
-
- 3.2 Post Termination. Upon any termination of this Agreement, for any reason,
- You shall promptly cease the use of the Elastic Software in Object Code format
- and cease use of the Commercial Software in Source Code format. For the
- avoidance of doubt, termination of this Agreement will not affect Your right
- to use Elastic Software, in either Object Code or Source Code formats, made
- available under the Apache License Version 2.0.
-
- 3.3 Survival. Sections 1.2, 2.2. 3.3, 4 and 5 shall survive any termination or
- expiration of this Agreement.
-
-4. DISCLAIMER OF WARRANTIES AND LIMITATION OF LIABILITY
-
- 4.1 Disclaimer of Warranties. TO THE MAXIMUM EXTENT PERMITTED UNDER APPLICABLE
- LAW, THE ELASTIC SOFTWARE IS PROVIDED "AS IS" WITHOUT WARRANTY OF ANY KIND,
- AND ELASTIC AND ITS LICENSORS MAKE NO WARRANTIES WHETHER EXPRESSED, IMPLIED OR
- STATUTORY REGARDING OR RELATING TO THE ELASTIC SOFTWARE. TO THE MAXIMUM EXTENT
- PERMITTED UNDER APPLICABLE LAW, ELASTIC AND ITS LICENSORS SPECIFICALLY
- DISCLAIM ALL IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
- PURPOSE AND NON-INFRINGEMENT WITH RESPECT TO THE ELASTIC SOFTWARE, AND WITH
- RESPECT TO THE USE OF THE FOREGOING. FURTHER, ELASTIC DOES NOT WARRANT RESULTS
- OF USE OR THAT THE ELASTIC SOFTWARE WILL BE ERROR FREE OR THAT THE USE OF THE
- ELASTIC SOFTWARE WILL BE UNINTERRUPTED.
-
- 4.2 Limitation of Liability. IN NO EVENT SHALL ELASTIC OR ITS LICENSORS BE
- LIABLE TO YOU OR ANY THIRD PARTY FOR ANY DIRECT OR INDIRECT DAMAGES,
- INCLUDING, WITHOUT LIMITATION, FOR ANY LOSS OF PROFITS, LOSS OF USE, BUSINESS
- INTERRUPTION, LOSS OF DATA, COST OF SUBSTITUTE GOODS OR SERVICES, OR FOR ANY
- SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES OF ANY KIND, IN CONNECTION WITH
- OR ARISING OUT OF THE USE OR INABILITY TO USE THE ELASTIC SOFTWARE, OR THE
- PERFORMANCE OF OR FAILURE TO PERFORM THIS AGREEMENT, WHETHER ALLEGED AS A
- BREACH OF CONTRACT OR TORTIOUS CONDUCT, INCLUDING NEGLIGENCE, EVEN IF ELASTIC
- HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
-
-5. MISCELLANEOUS
-
- This Agreement completely and exclusively states the entire agreement of the
- parties regarding the subject matter herein, and it supersedes, and its terms
- govern, all prior proposals, agreements, or other communications between the
- parties, oral or written, regarding such subject matter. This Agreement may be
- modified by Elastic from time to time, and any such modifications will be
- effective upon the "Posted Date" set forth at the top of the modified
- Agreement. If any provision hereof is held unenforceable, this Agreement will
- continue without said provision and be interpreted to reflect the original
- intent of the parties. This Agreement and any non-contractual obligation
- arising out of or in connection with it, is governed exclusively by Dutch law.
- This Agreement shall not be governed by the 1980 UN Convention on Contracts
- for the International Sale of Goods. All disputes arising out of or in
- connection with this Agreement, including its existence and validity, shall be
- resolved by the courts with jurisdiction in Amsterdam, The Netherlands, except
- where mandatory law provides for the courts at another location in The
- Netherlands to have jurisdiction. The parties hereby irrevocably waive any and
- all claims and defenses either might otherwise have in any such action or
- proceeding in any of such courts based upon any alleged lack of personal
- jurisdiction, improper venue, forum non conveniens or any similar claim or
- defense. A breach or threatened breach, by You of Section 2 may cause
- irreparable harm for which damages at law may not provide adequate relief, and
- therefore Elastic shall be entitled to seek injunctive relief without being
- required to post a bond. You may not assign this Agreement (including by
- operation of law in connection with a merger or acquisition), in whole or in
- part to any third party without the prior written consent of Elastic, which
- may be withheld or granted by Elastic in its sole and absolute discretion.
- Any assignment in violation of the preceding sentence is void. Notices to
- Elastic may also be sent to legal@elastic.co.
-
-6. DEFINITIONS
-
- The following terms have the meanings ascribed:
-
- 6.1 "Affiliate" means, with respect to a party, any entity that controls, is
- controlled by, or which is under common control with, such party, where
- "control" means ownership of at least fifty percent (50%) of the outstanding
- voting shares of the entity, or the contractual right to establish policy for,
- and manage the operations of, the entity.
-
- 6.2 "Basic Features and Functions" means those features and functions of the
- Elastic Software that are eligible for use under a Basic license, as set forth
- at https://www.elastic.co/subscriptions, as may be modified by Elastic from
- time to time.
-
- 6.3 "Commercial Software" means the Elastic Software Source Code in any file
- containing a header stating the contents are subject to the Elastic License or
- which is contained in the repository folder labeled "x-pack", unless a LICENSE
- file present in the directory subtree declares a different license.
-
- 6.4 "Derivative Work of the Commercial Software" means, for purposes of this
- Agreement, any modification(s) or enhancement(s) to the Commercial Software,
- which represent, as a whole, an original work of authorship.
-
- 6.5 "License" means a limited, non-exclusive, non-transferable, fully paid up,
- royalty free, right and license, without the right to grant or authorize
- sublicenses, solely for Your internal business operations to (i) install and
- use the applicable Features and Functions of the Elastic Software in Object
- Code, and (ii) permit Contractors and Your Affiliates to use the Elastic
- software as set forth in (i) above, provided that such use by Contractors must
- be solely for Your benefit and/or the benefit of Your Affiliates, and You
- shall be responsible for all acts and omissions of such Contractors and
- Affiliates in connection with their use of the Elastic software that are
- contrary to the terms and conditions of this Agreement.
-
- 6.6 "License Key" means a sequence of bytes, including but not limited to a
- JSON blob, that is used to enable certain features and functions of the
- Elastic Software.
-
- 6.7 "Marks and Notices" means all Elastic trademarks, trade names, logos and
- notices present on the Documentation as originally provided by Elastic.
-
- 6.8 "Non-production Environment" means an environment for development, testing
- or quality assurance, where software is not used for production purposes.
-
- 6.9 "Object Code" means any form resulting from mechanical transformation or
- translation of Source Code form, including but not limited to compiled object
- code, generated documentation, and conversions to other media types.
-
- 6.10 "Source Code" means the preferred form of computer software for making
- modifications, including but not limited to software source code,
- documentation source, and configuration files.
-
- 6.11 "Subscription" means the right to receive Support Services and a License
- to the Commercial Software.
diff --git a/rfcs/images/pulse_diagram.png b/rfcs/images/pulse_diagram.png
deleted file mode 100644
index a104fad0fe13..000000000000
Binary files a/rfcs/images/pulse_diagram.png and /dev/null differ
diff --git a/rfcs/text/0002_encrypted_attributes.md b/rfcs/text/0002_encrypted_attributes.md
deleted file mode 100644
index af86f726f188..000000000000
--- a/rfcs/text/0002_encrypted_attributes.md
+++ /dev/null
@@ -1,252 +0,0 @@
-- Start Date: 2019-03-22
-- RFC PR: [#33740](https://github.com/elastic/kibana/pull/33740)
-- OpenSearch Dashboards Issue: (leave this empty)
-
-# Summary
-
-In order to support the action service we need a way to encrypt/decrypt
-attributes on saved objects that works with security and spaces filtering as
-well as with audit logging. It must sufficiently hide the private key used and
-prevent encrypted attributes from being exposed through regular means.
-
-# Basic example
-
-Register saved object type with the `encrypted_saved_objects` plugin:
-
-```typescript
-server.plugins.encrypted_saved_objects.registerType({
- type: 'server-action',
- attributesToEncrypt: new Set(['credentials', 'apiKey']),
-});
-```
-
-Use the same API to create saved objects with encrypted attributes as for any other saved object type:
-
-```typescript
-const savedObject = await server.savedObjects
- .getScopedSavedObjectsClient(request)
- .create('server-action', {
- name: 'my-server-action',
- data: { location: 'BBOX (100.0, ..., 0.0)', email: '...' },
- credentials: { username: 'some-user', password: 'some-password' },
- apiKey: 'dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvb'
- });
-
-// savedObject = {
-// id: 'dd9750b9-ef0a-444c-8405-4dfcc2e9d670',
-// type: 'server-action',
-// name: 'my-server-action',
-// data: { location: 'BBOX (100.0, ..., 0.0)', email: '...' },
-// };
-
-```
-
-Use dedicated method to retrieve saved object with decrypted attributes on behalf of OpenSearch Dashboards internal user:
-
-```typescript
-const savedObject = await server.plugins.encrypted_saved_objects.getDecryptedAsInternalUser(
- 'server-action',
- 'dd9750b9-ef0a-444c-8405-4dfcc2e9d670'
-);
-
-// savedObject = {
-// id: 'dd9750b9-ef0a-444c-8405-4dfcc2e9d670',
-// type: 'server-action',
-// name: 'my-server-action',
-// data: { location: 'BBOX (100.0, ..., 0.0)', email: '...' },
-// credentials: { username: 'some-user', password: 'some-password' },
-// apiKey: 'dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvb',
-// };
-```
-
-# Motivation
-
-The main motivation is the storage and usage of third-party credentials for use with
-the action service to send notifications. It also enables other types of integrations,
-such as calling webhooks using tokens.
-
-# Detailed design
-
-In order for this to be in basic it needs to be done as a wrapper around the
-saved object client. This can be added from the `x-pack` plugin.
-
-## General
-
-To be able to manage saved objects with encrypted attributes from any plugin one should
-do the following:
-
-1. Define `encrypted_saved_objects` plugin as a dependency.
-2. Add the attributes to be encrypted to the `mappings.json` file for the respective saved object type. These attributes should
-always have the `binary` type since they'll contain encrypted content as a `Base64` encoded string and should never be
-searchable or analyzed. This makes the definition of attributes that require encryption explicit and auditable, and significantly
-simplifies the implementation:
-```json
-{
- "server-action": {
- "properties": {
- "name": { "type": "keyword" },
- "data": {
- "properties": {
- "location": { "type": "geo_shape" },
- "email": { "type": "text" }
- }
- },
- "credentials": { "type": "binary" },
- "apiKey": { "type": "binary" }
- }
- }
-}
-```
-3. Register saved object type and attributes that should be encrypted with `encrypted_saved_objects` plugin:
-```typescript
-server.plugins.encrypted_saved_objects.registerType({
- type: 'server-action',
- attributesToEncrypt: new Set(['credentials', 'apiKey']),
- attributesToExcludeFromAAD: new Set(['data']),
-});
-```
-
-Notice the optional `attributesToExcludeFromAAD` property: it allows one to exclude some of the saved object attributes
-from the Additional authenticated data (AAD); read more about that below in the `Encryption and decryption` section.
-
-Since `encrypted_saved_objects` adds its own wrapper (`EncryptedSavedObjectsClientWrapper`) into the `SavedObjectsClient`
-wrapper chain, consumers will be able to create, update, delete and retrieve saved objects using the standard Saved Objects API.
-The two main responsibilities of the wrapper are:
-
-* It encrypts attributes that are supposed to be encrypted during `create`, `bulkCreate` and `update` operations
-* It strips encrypted attributes from **any** saved object returned from the Saved Objects API
-
-As noted above, the wrapper strips encrypted attributes from saved objects returned by the API methods, which means
-there is no way at all to retrieve encrypted attributes using the standard Saved Objects API unless the `encrypted_saved_objects`
-plugin is disabled. This can potentially lead to a situation where a consumer retrieves a saved object, updates its non-encrypted
-properties and passes that same object to the `update` Saved Objects API method without redefining the encrypted attributes. In
-this case only the specified attributes will be updated and the encrypted attributes will stay untouched. If these updated
-attributes are included in the AAD (which is true by default for all attributes unless they are specifically excluded via
-`attributesToExcludeFromAAD`), then it will no longer be possible to decrypt the encrypted attributes. At this stage we consider
-this a developer mistake and don't prevent it from happening in any way apart from logging this type of event. A partial
-update of only attributes that are not part of the AAD will not cause this issue.
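-
-To make the pitfall above concrete, here is a minimal sketch that reuses the example type registered earlier; the
-`update` calls follow the standard Saved Objects client shape:
-
-```typescript
-const client = server.savedObjects.getScopedSavedObjectsClient(request);
-
-// Risky: `name` is part of the AAD (only `data` was excluded via `attributesToExcludeFromAAD`),
-// so updating it without re-supplying the encrypted attributes leaves `credentials` and `apiKey`
-// undecryptable afterwards.
-await client.update('server-action', 'dd9750b9-ef0a-444c-8405-4dfcc2e9d670', {
-  name: 'my-renamed-server-action',
-});
-
-// Safe: re-supply the encrypted attributes together with the changed AAD attribute so that they
-// are re-encrypted against the new AAD.
-await client.update('server-action', 'dd9750b9-ef0a-444c-8405-4dfcc2e9d670', {
-  name: 'my-renamed-server-action',
-  credentials: { username: 'some-user', password: 'some-password' },
-  apiKey: 'dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvb',
-});
-```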
-
-The saved object ID is an essential part of the AAD used during the encryption process and hence should be as hard to guess as possible.
-To fulfil this requirement the wrapper generates random IDs (UUIDv4) for the saved objects that contain encrypted
-attributes; consumers are therefore not allowed to specify an ID when calling the `create` or `bulkCreate` method, and an
-error will be thrown if they try to do so.
-
-To reduce the risk of unintentional decryption and the consequent leaking of sensitive information, there is only one way
-to retrieve a saved object and decrypt its encrypted attributes, and it is exposed only through the `encrypted_saved_objects` plugin:
-
-```typescript
-const savedObject = await server.plugins.encrypted_saved_objects.getDecryptedAsInternalUser(
- 'server-action',
- 'dd9750b9-ef0a-444c-8405-4dfcc2e9d670'
-);
-
-// savedObject = {
-// id: 'dd9750b9-ef0a-444c-8405-4dfcc2e9d670',
-// type: 'server-action',
-// name: 'my-server-action',
-// data: { location: 'BBOX (100.0, ..., 0.0)', email: '...' },
-// credentials: { username: 'some-user', password: 'some-password' },
-// apiKey: 'dGhpcyBpcyBub3QgYSByZWFsIHRva2VuIGJ1dCBpdCBpcyBvb',
-// };
-```
-
-As can be seen from the method name, the request to retrieve a saved object and decrypt its attributes is performed on
-behalf of the internal OpenSearch Dashboards user and hence isn't supposed to be called within a user request context.
-
-**Note:** the fact that a saved object with encrypted attributes is created using the standard Saved Objects API within a
-particular user and space context, but retrieved outside of any context, makes it unclear how consumers are supposed to
-provide that context and retrieve the saved object from a particular space. The current plan for the `getDecryptedAsInternalUser`
-method is to accept a third `BaseOptions` argument that allows consumers to specify the `namespace`, which they can retrieve
-from the request using the public `spaces` plugin API.
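-
-A minimal sketch of what that could look like; the third argument is the planned `BaseOptions` described above, not an
-existing API, and the namespace value is illustrative:
-
-```typescript
-const savedObject = await server.plugins.encrypted_saved_objects.getDecryptedAsInternalUser(
-  'server-action',
-  'dd9750b9-ef0a-444c-8405-4dfcc2e9d670',
-  // namespace retrieved from the request via the public `spaces` plugin API
-  { namespace: 'marketing' }
-);
-```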
-
-## Encryption and decryption
-
-Saved object attributes are encrypted using the [@elastic/node-crypto](https://github.com/elastic/node-crypto) library. Please
-take a look at the source code of this library to see exactly how encryption is performed and which algorithm and encryption
-parameters are used; in short, it's AES-256-GCM with a random initialization vector and salt.
-
-As with the encryption key for the OpenSearch Dashboards session cookie, the master encryption key used by the `encrypted_saved_objects` plugin can be
-defined as a configuration value (`xpack.encryptedSavedObjects.encryptionKey`) via `opensearch_dashboards.yml`, but it's **highly
-recommended** to define this key in the [OpenSearch Dashboards Keystore](https://www.elastic.co/guide/en/kibana/current/secure-settings.html)
-instead. The master key should be cryptographically random and at least 32 bytes long.
-
-To prevent certain attack vectors where the raw content of the encrypted attributes of one saved object is copied to another
-saved object, which would unintentionally allow it to decrypt content that was not supposed to be decrypted, we rely on Additional
-authenticated data (AAD) during encryption and decryption (a simplified sketch is shown after the lists below). AAD consists of the following components:
-
-* Saved object ID
-* Saved object type
-* Saved object attributes
-
-AAD does not include the encrypted attributes themselves, nor the attributes listed in the optional `attributesToExcludeFromAAD`
-parameter provided during saved object type registration with the `encrypted_saved_objects` plugin. There are a number of
-reasons why one might want to exclude certain attributes from AAD:
-
-* if an attribute contains a large amount of data that can significantly slow down encryption and decryption, especially during
-bulk operations (e.g. a large geo shape or an arbitrary HTML document)
-* if an attribute contains data that is supposed to be updated separately from the encrypted attributes or the attributes included
-in the AAD (e.g. some user-defined content associated with the email action or alert)
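-
-The exact algorithm, key handling and serialization live in `@elastic/node-crypto`; the snippet below is only a
-simplified illustration, using Node's built-in `crypto` module, of how AAD binds the ciphertext to the owning saved
-object:
-
-```typescript
-import { createCipheriv, randomBytes } from 'crypto';
-
-// `key` is the 32-byte master encryption key, `plaintext` is the serialized attribute value
-// (e.g. JSON.stringify(credentials)) and `aad` is the serialized saved object ID, type and
-// attributes, minus the encrypted ones and those excluded via `attributesToExcludeFromAAD`.
-function encryptAttribute(key: Buffer, plaintext: string, aad: string): string {
-  const iv = randomBytes(12);
-  const cipher = createCipheriv('aes-256-gcm', key, iv);
-  cipher.setAAD(Buffer.from(aad, 'utf8'));
-  const ciphertext = Buffer.concat([cipher.update(plaintext, 'utf8'), cipher.final()]);
-  // If the AAD changes (e.g. the ciphertext is copied to another saved object), the
-  // authentication tag check fails on decryption and the attribute cannot be recovered.
-  return Buffer.concat([iv, cipher.getAuthTag(), ciphertext]).toString('base64');
-}
-```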
-
-## Audit
-
-Encrypted attributes will most likely contain sensitive information and any attempt to access these should be properly
-logged to allow any further audit procedures. The following events will be logged with OpenSearch Dashboards audit log functionality:
-
-* Successful attempt to encrypt attributes (incl. saved object ID, type and attributes names)
-* Failed attempt to encrypt attribute (incl. saved object ID, type and attribute name)
-* Successful attempt to decrypt attributes (incl. saved object ID, type and attributes names)
-* Failed attempt to decrypt attribute (incl. saved object ID, type and attribute name)
-
-In addition to audit log events we'll issue ordinary log events for any attempts to save, update or decrypt saved objects
-with missing attributes that were supposed to be encrypted/decrypted based on the registration parameters.
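-
-A sketch of the fields one of the audit events above could carry; the names below are illustrative, not a defined schema:
-
-```typescript
-type EncryptedSavedObjectsAuditAction =
-  | 'encrypt_success'
-  | 'encrypt_failure'
-  | 'decrypt_success'
-  | 'decrypt_failure';
-
-interface EncryptedSavedObjectsAuditEvent {
-  action: EncryptedSavedObjectsAuditAction;
-  savedObjectId: string;
-  savedObjectType: string;
-  // Attribute names only; attribute values are never logged.
-  attributeNames: string[];
-}
-```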
-
-# Benefits
-
-* None of the registered types will expose their encrypted details. The saved
-objects with their unencrypted attributes could still be obtained and searched
-on. The wrapper will follow all the security and spaces filtering of saved
-objects so that only users with appropriate permissions will be able to obtain
-the scrubbed objects or _save_ objects with encrypted attributes.
-
-* No explicit access to a method that takes in an encrypted string exists. If the
-type was not registered, no decryption is possible. There is no need to handle the saved object
-with the encrypted attributes, which reduces the risk of accidentally returning it from a
-handler.
-
-# Drawbacks
-
-* It isn't possible to decrypt existing encrypted attributes once the encryption key changes
-* Possible performance impact on Saved Objects API operations that require encryption/decryption
-* Will require non-trivial tests to cover the functionality along with spaces and security
-* The attributes that are encrypted have to be defined, and if they change they need to be migrated
-
-# Out of scope
-
-* Encryption key rotation mechanism, either regular or emergency
-* Mechanism that would detect and warn when OpenSearch Dashboards does not use keystore to store encryption key
-
-# Alternatives
-
-Only allow this to be used within the Actions service itself, where the details
-of the saved object are handled directly and the saved objects are
-`hidden`, but still use the security and spaces wrappers.
-
-# Adoption strategy
-
-Integration should be straightforward: depend on the plugin, register the desired saved object type
-with it, and define the encrypted attributes in `mappings.json`.
-
-# How we teach this
-
-Use `encrypted_saved_objects` as the name of the `thing`, so that it's seen as a separate
-extension on top of the saved object service.
-
-Provide a README.md in the plugin directory with the usage examples.
-
-# Unresolved questions
-
-* Is it acceptable to have this plugin in Basic?
-* Are there any other use-cases that are not served with that interface?
-* How would this work with the Saved Objects Export/Import API?
-* How would this work with migrations? If the attribute names needed to be
-  changed, would a decryption context need to be created for the migration?
diff --git a/rfcs/text/0004_application_service_mounting.md b/rfcs/text/0004_application_service_mounting.md
deleted file mode 100644
index c2b6d4c2fe6e..000000000000
--- a/rfcs/text/0004_application_service_mounting.md
+++ /dev/null
@@ -1,334 +0,0 @@
-- Start Date: 2019-05-10
-- RFC PR: (leave this empty)
-- OpenSearch Dashboards Issue: (leave this empty)
-
-# Summary
-
-A front-end service to manage registration and root-level routing for
-first-class applications.
-
-# Basic example
-
-
-```tsx
-// my_plugin/public/application.js
-
-import React from 'react';
-import ReactDOM from 'react-dom';
-
-import { MyApp } from './components';
-
-export function renderApp(context, { element }) {
- ReactDOM.render(
-    <MyApp />,
- element
- );
-
- return () => {
- ReactDOM.unmountComponentAtNode(element);
- };
-}
-```
-
-```tsx
-// my_plugin/public/plugin.js
-
-class MyPlugin {
- setup({ application }) {
- application.register({
- id: 'my-app',
- title: 'My Application',
- async mount(context, params) {
- const { renderApp } = await import('./application');
- return renderApp(context, params);
- }
- });
- }
-}
-```
-
-# Motivation
-
-By having centralized management of applications we can have a true single page
-application. It also gives us a single place to enforce authorization and/or
-licensing constraints on application access.
-
-By making the mounting interface of the ApplicationService generic, we can
-support many different rendering technologies simultaneously to avoid framework
-lock-in.
-
-# Detailed design
-
-## Interface
-
-```ts
-/** A context type that implements the Handler Context pattern from RFC-0003 */
-export interface AppMountContext {
- /** These services serve as an example, but are subject to change. */
- core: {
- http: {
-      fetch(...): Promise<unknown>;
- };
- i18n: {
- translate(
- id: string,
- defaultMessage: string,
-        values?: Record<string, unknown>
- ): string;
- };
- notifications: {
- toasts: {
- add(...): void;
- };
- };
- overlays: {
- showFlyout(render: (domElement) => () => void): Flyout;
- showModal(render: (domElement) => () => void): Modal;
- };
- uiSettings: { ... };
- };
- /** Other plugins can inject context by registering additional context providers */
- [contextName: string]: unknown;
-}
-
-export interface AppMountParams {
- /** The base path the application is mounted on. Used to configure routers. */
- appBasePath: string;
- /** The element the application should render into */
- element: HTMLElement;
-}
-
-export type Unmount = () => Promise<void> | void;
-
-export interface AppSpec {
- /**
- * A unique identifier for this application. Used to build the route for this
- * application in the browser.
- */
- id: string;
-
- /**
- * The title of the application.
- */
- title: string;
-
- /**
- * A mount function called when the user navigates to this app's route.
- * @param context the `AppMountContext` generated for this app
- * @param params the `AppMountParams`
- * @returns An unmounting function that will be called to unmount the application.
- */
-  mount(context: MountContext, params: AppMountParams): Unmount | Promise<Unmount>;
-
- /**
- * A EUI iconType that will be used for the app's icon. This icon
- * takes precendence over the `icon` property.
- */
- euiIconType?: string;
-
- /**
- * A URL to an image file used as an icon. Used as a fallback
- * if `euiIconType` is not provided.
- */
- icon?: string;
-
- /**
- * Custom capabilities defined by the app.
- */
-  capabilities?: Partial<Capabilities>;
-}
-
-export interface ApplicationSetup {
- /**
- * Registers an application with the system.
- */
- register(app: AppSpec): void;
-  registerMountContext<T extends keyof MountContext>(
-    contextName: T,
-    provider: (context: Partial<MountContext>) => MountContext[T] | Promise<MountContext[T]>
-  ): void;
-}
-
-export interface ApplicationStart {
- /**
- * The UI capabilities for the current user.
- */
-  capabilities: Capabilities;
-}
-```
-
-## Mounting
-
-When an app is registered via `register`, it must provide a `mount` function
-that will be invoked whenever the window's location has changed from another app
-to this app.
-
-This function is called with an `AppMountContext` and an
-`AppMountParams` object, which contains an `HTMLElement` for the application to
-render itself to. The mount function must also return a function that can be
-called by the ApplicationService to unmount the application at the given DOM
-Element. The mount function may return a Promise of an unmount function in order
-to import UI code dynamically.
-
-The ApplicationService's `register` method will only be available during the
-*setup* lifecycle event. This allows the system to know when all applications
-have been registered.
-
-The `mount` function will also get access to the `AppMountContext` that
-has many of the same core services available during the `start` lifecycle.
-Plugins can also register additional context attributes via the
-`registerMountContext` function.
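-
-As a sketch of the latter, a plugin could expose an additional context attribute roughly like this; the `myPlugin`
-context name and its shape are made up for illustration:
-
-```typescript
-// my_other_plugin/public/plugin.ts
-export class MyOtherPlugin {
-  setup({ application }) {
-    // Every application's `mount` function would then receive `context.myPlugin`.
-    application.registerMountContext('myPlugin', async () => ({
-      getGreeting: () => 'Hello from my_other_plugin',
-    }));
-  }
-}
-```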
-
-## Routing
-
-The ApplicationService will serve as the global frontend router for OpenSearch Dashboards,
-enabling OpenSearch Dashboards to be a 100% single page application. However, the router will
-only manage top-level routes. Applications themselves will need to implement
-their own routing as subroutes of the top-level route.
-
-An example:
-- "MyApp" is registered with `id: 'my-app'`
-- User navigates from myopensearchDashboards.com/app/home to myopensearchDashboards.com/app/my-app
-- ApplicationService sees the root app has changed and mounts the new
- application:
-    - Calls the `Unmount` function returned by "Home"'s `mount`
- - Calls the `mount` function registered by "MyApp"
-- MyApp's internal router takes over rest of routing. Redirects to initial
- "overview" page: myopensearchDashboards.com/app/my-app/overview
-
-When setting up a router, your application should only handle the part of the
-URL following the `params.appBasePath` provided when your application is mounted.
-
-### Legacy Applications
-
-In order to introduce this service now, the ApplicationService will need to be
-able to handle "routing" to legacy applications. We will not be able to run
-multiple legacy applications on the same page load due to shared stateful
-modules in `ui/public`.
-
-Instead, the ApplicationService should do a full-page refresh when rendering
-legacy applications. Internally, this will be managed by registering legacy apps
-with the ApplicationService separately and handling those top-level routes by
-starting a full-page refresh rather than a mounting cycle.
-
-## Complete Example
-
-Here is a complete example that demonstrates rendering a React application with
-a full-featured router and code-splitting. Note that using React or any other
-3rd party tools featured here is not required to build an OpenSearch Dashboards Application.
-
-```tsx
-// my_plugin/public/application.tsx
-
-import React from 'react';
-import ReactDOM from 'react-dom';
-import { BrowserRouter, Route } from 'react-router-dom';
-import loadable from '@loadable/component';
-
-// Apps can choose to load components statically in the same bundle or
-// dynamically when routes are rendered.
-import { HomePage } from './pages';
-const LazyDashboard = loadable(() => import('./pages/dashboard'));
-
-const MyApp = ({ basename }) => (
-  // Setup router's basename from the basename provided from MountContext
-  <BrowserRouter basename={basename}>
-
-    {/* myopensearchDashboards.com/app/my-app/ */}
-    <Route path="/" exact component={HomePage} />
-
-    {/* myopensearchDashboards.com/app/my-app/dashboard/42 */}
-    <Route
-      path="/dashboard/:id"
-      render={({ match }) => <LazyDashboard dashboardId={match.params.id} />}
-    />
-
-  </BrowserRouter>,
-);
-
-export function renderApp(context, params) {
- ReactDOM.render(
- // `params.appBasePath` would be `/app/my-app` in this example.
- // This exact string is not guaranteed to be stable, always reference the
- // provided value at `params.appBasePath`.
-    <MyApp basename={params.appBasePath} />,
- params.element
- );
-
- return () => ReactDOM.unmountComponentAtNode(params.element);
-}
-```
-
-```tsx
-// my_plugin/public/plugin.tsx
-
-export class MyPlugin {
- setup({ application }) {
- application.register({
- id: 'my-app',
- async mount(context, params) {
- const { renderApp } = await import('./application');
- return renderApp(context, params);
- }
- });
- }
-}
-```
-
-## Core Entry Point
-
-Once we can support application routing for new and legacy applications, we
-should create a new entry point bundle that only includes Core and any necessary
-uiExports (hacks for example). This should be served by the backend whenever a
-`/app/` request is received for an app that the legacy platform does not
-have a bundle for.
-
-# Drawbacks
-
-- Implementing this will be significant work and requires migrating legacy code
- from `ui/chrome`
-- Making OpenSearch Dashboards a single page application may lead to problems if applications
- do not clean themselves up properly when unmounted
-- Application `mount` functions will have access to *setup* via the closure. We
- may want to lock down these APIs from being used after *setup* to encourage
- usage of the `MountContext` instead.
-- In order to support new applications being registered in the legacy platform,
- we will need to create a new `uiExport` that is imported during the new
- platform's *setup* lifecycle event. This is necessary because app registration
- must happen prior to starting the legacy platform. This is only an issue for
- plugins that are migrating using a shim in the legacy platform.
-
-# Alternatives
-
-- We could provide a full featured react-router instance that plugins could
- plug directly into. The downside is this locks us more into React and makes
- code splitting a bit more challenging.
-
-# Adoption strategy
-
-Adoption of the application service will have to happen as part of the migration
-of each plugin. We should be able to support legacy plugins registering new
-platform-style applications before they actually move all of their code
-over to the new platform.
-
-# How we teach this
-
-Introducing this service makes applications a first-class feature of the OpenSearch Dashboards
-platform. Right now, plugins manage their own routes and can export "navlinks"
-that get rendered in the navigation UI, however there is not a self-contained
-concept like an application to encapsulate these related responsibilities. It
-will need to be emphasized that plugins can register zero, one, or multiple
-applications.
-
-Most new and existing OpenSearch Dashboards developers will need to understand how the
-ApplicationService works and how multiple apps run in a single page application.
-This should be accomplished through thorough documentation in the
-ApplicationService's API implementation as well as in general plugin development
-tutorials and documentation.
-
-# Unresolved questions
-
-- Are there any major caveats to having multiple routers on the page? If so, how
-can these be prevented or worked around?
-- How should global URL state be shared across applications, such as timepicker
-state?
diff --git a/rfcs/text/0008_pulse.md b/rfcs/text/0008_pulse.md
deleted file mode 100644
index f6498219606b..000000000000
--- a/rfcs/text/0008_pulse.md
+++ /dev/null
@@ -1,316 +0,0 @@
-- Start Date: 2020-02-07
-- RFC PR: [#57108](https://github.com/elastic/kibana/pull/57108)
-- Kibana Issue: (leave this empty)
-
-# Table of contents
-
-- [Summary](#summary)
-- [Motivation](#motivation)
-- [Detailed design](#detailed-design)
- - [Concepts](#concepts)
- - [Architecture](#architecture)
- 1. [Remote Pulse Service](#1-remote-pulse-service)
- - [Deployment](#deployment)
- - [Endpoints](#endpoints)
- - [Authenticate](#authenticate)
- - [Opt-In|Out](#opt-inout)
- - [Inject telemetry](#inject-telemetry)
- - [Retrieve instructions](#retrieve-instructions)
- - [Data model](#data-model)
- - [Access Control](#access-control)
- 2. [Local Pulse Service](#2-local-pulse-service)
- - [Data storage](#data-storage)
- - [Sending telemetry](#sending-telemetry)
- - [Instruction polling](#instruction-polling)
-- [Drawbacks](#drawbacks)
-- [Alternatives](#alternatives)
-- [Adoption strategy](#adoption-strategy)
-- [How we teach this](#how-we-teach-this)
-- [Unresolved questions](#unresolved-questions)
-
-# Summary
-
-Evolve our telemetry to collect more diverse data, enhance our products with that data and engage with users by enabling:
-
-1. _Two-way_ communication link between us and our products.
-2. Flexibility to collect diverse data and different granularity based on the type of data.
-3. Enhanced features in our products, allowing remote-driven _small tweaks_ to existing builds.
-4. All this while still maintaining transparency about what we send and making sure we don't track any of the user's data.
-
-# Basic example
-
-There is a POC implemented in the branch [`pulse_poc`](https://github.com/elastic/kibana/tree/pulse_poc) in this repo.
-
-It covers the following scenarios:
-
-- Track the behaviour of our users in the UI, reporting UI events throughout our platform.
-- Report to Elastic when an unexpected error occurs and keep track of it. When it's fixed, it lets the user know, encouraging them to update their deployment to the latest release (PR [#56724](https://github.com/elastic/kibana/pull/56724)).
-- Keep track of the notifications and news in the newsfeed to know when they are read/kept unseen. This might help us improve the way we communicate updates to the user (PR [#53596](https://github.com/elastic/kibana/pull/53596)).
-- Provide a cost estimate for running that cluster in Elastic Cloud, so the user is well-informed about our up-to-date offering and can decide accordingly (PR [#56324](https://github.com/elastic/kibana/pull/56324)).
-- Customised "upgrade guide" from your current version to the latest (PR [#56556](https://github.com/elastic/kibana/pull/56556)).
-
-![image](../images/pulse_diagram.png)
-_Basic example of the architecture_
-
-# Motivation
-
-Based on our current telemetry, we have many _lessons learned_ we want to tackle:
-
-- It only supports one type of data:
- - It makes simple tasks like reporting aggregations of usage based on a number of days [an overengineered solution](https://github.com/elastic/kibana/issues/46599#issuecomment-545024137)
-  - When reporting arrays (e.g. `ui_metrics`), the data cannot be consumed, making it useless.
-- _One index to rule them all_:
-The current unique document structure comes at a price:
-  - People consuming that information find it hard to understand each element in the document ([[DISCUSS] Data dictionary for product usage data](https://github.com/elastic/telemetry/issues/211))
-  - Maintaining the mappings is a tedious and risky process. It involves increasing the setting for the limit of fields in a mapping and reindexing documents (now millions of them).
-  - We cannot fully control the data we insert in the documents: if we set `mappings.dynamic: 'strict'`, we'll reject all documents containing more information than what is actually mapped, losing all the other content we do want to receive.
-- Opt-out ratio:
-We want to reduce the number of `opt-out`s by providing some valuable feedback to our users so that they want to turn telemetry ON because they do benefit from it.
-
-# Detailed design
-
-This design is going to be tackled by introducing some common concepts to be used by the two main components in this architecture:
-
-1. Remote Pulse Service (RPS)
-2. Local Pulse Service (LPS)
-
-After that, it explains how we envision the architecture and design of each of those components.
-
-## Concepts
-
-There are some new concepts we'd like to introduce with this new way of reporting telemetry:
-
-- **Deployment Hash ID**
-This is the _anonymised_ random ID assigned for a deployment. It is used to link multiple pieces of information for further analysis like cross-referencing different bits of information from different sources.
-- **Channels**
-Each channel is a stream of data that shares common information. Typically each channel will have a well-defined source of information, different from the rest, and will result in a different structure from the rest of the channels. However, all channels will maintain a minimum common schema for cross-references (like the **Deployment Hash ID** and **timestamp**).
-- **Instructions**
-These are the messages generated in the form of feedback to the different channels.
-Typically, channels will follow a bi-directional communication process _(Local <-> Remote)_ but there might be channels that do not generate any kind of instruction _(Local -> Remote)_ and, similarly, some other channels that do not provide any telemetry at all but allow Pulse to send updates to our products _(Local <- Remote)_.
-
-## Phased implementation
-
-At the moment of writing this document, anyone can push _fake_ telemetry data to our Telemetry cluster. They only need to know the public encryption key, the endpoint and the format of the data, all of which are easily retrievable. We take that into consideration when analysing the data we have at the moment and it is a risk we are OK with for now.
-
-But, given that we aim to provide feedback to the users and clusters in the form of instructions, the **Security and Integrity of the information** is critical. We need to come up with a solution that ensures the instructions are created based on data that was uniquely created (signed?) by the source. If we cannot ensure that, we should not allow that piece of information to be used in the generation of the instructions for that cluster and we should mark it so we know it could be maliciously injected when using it in our analysis.
-
-But also, we want to be able to ship the benefits of Pulse on every release. That's why we are thinking of a phased release, starting with limited functionality and evolving to the final complete vision of this product. This RFC suggests the following phased implementation:
-
-1. **Be able to ingest granular data**
-With the introduction of the **channels**, we can start receiving granular data that will help us all in our analysis. At this point, the same _security_ features as the current telemetry are considered: the payload is encrypted by the Kibana server so no intermediary can read or tamper with the data.
-The same risks as the current telemetry still apply at this point: anyone can _impersonate_ another cluster and send data on its behalf, making the collected information useless.
-Because this information cannot be used to generate any instruction, we may not care about the **Deployment Hash ID** at this stage. This means no authentication is required to push data.
-The work at this point in time will focus on creating the initial infrastructure, receiving early data and starting the migration of the current telemetry into the new channel-based model. Finally, we will start exploring the new visualisations we can provide with this new model of data.
-
-2. **Secured ingest channel**
-In this phase, our efforts will focus on securing the communications and integrity of the data. This includes:
- - **Generation of the Deployment Hash ID**:
- Discussions on whether it should be self-generated and accepted/rejected by the Remote Pulse Service (RPS) or it should be generated and assigned by the RPS because it is the only one that can ensure uniqueness.
- - **Locally store the Deployment Hash ID as an encrypted saved object**:
-  This comes with a caveat: OSS versions will not be able to receive instructions. We will need to maintain a fallback mechanism to the phase 1 logic (this may even be a desired scenario: the encrypted saved objects could become unrecoverable due to an error in the deployment, and we should still be able to apply that fallback).
- - **Authenticity of the information (Local -> Remote)**:
-  We need to _sign_ the data in some way so the RPS can confirm that the information reported for a _Deployment Hash ID_ comes from the right source.
- - **Authenticity of the information (Remote -> Local)**:
-  We need the Local Pulse Service (LPS) to be able to confirm that the responses from the RPS have not been altered by any intermediary. This could be done via encryption using a key provided by the LPS, which should be provided to the RPS inside an encrypted payload in the same fashion we currently encrypt the telemetry.
- - **Integrity of the data in the channels**:
- We need to ensure an external plugin cannot push data to channels to avoid malicious corruption of the data. We could achieve this by either making this plugin only available to Kibana-shipped plugins or storing the `pluginID` that is pushing the data to have better control of the source of the data (then an ingest pipeline can reject any source of data that should not be accepted).
-
- All the suggestions in this phase can be further discussed at that point (I will create another RFC to discuss those terms after this RFC is approved and merged).
-
-3. **Instruction handling**
-In this final phase we'll implement the instruction generation and handling while we add more **channels**.
-We can discuss at this point if we want to be able to provide _harmless_ instructions for those deployments that are not _secured_ (e.g. Cloud cost estimations, user-profile-based marketing updates, ...).
-
-## Architecture
-
-As mentioned earlier, at the beginning of this chapter, there are two main components in this architecture:
-
-1. Remote Pulse Service
-2. Local Pulse Service
-
-### 1. Remote Pulse Service
-
-This is the service that will receive and store the telemetry from all the _opted-in_ deployments. It will also generate the messages we want to report back to each deployment (aka: instructions).
-
-#### Deployment
-
-- The service will be hosted by Elastic.
-- Most likely maintained by the Infra team.
-- GCP is contemplated at this moment, but we need to confirm how it would affect us regarding FedRAMP approvals (and similar).
-- Exposes an API (check [Endpoints](#endpoints) to know more) to inject the data and retrieve the _instructions_.
-- The data will be stored in an OpenSearch cluster.
-
-#### Endpoints
-
-For the following endpoints, **every payload** detailed below **will be sent encrypted** with a mechanism similar to the current telemetry encryption.
-
-##### Authenticate
-
-This Endpoint will be used to retrieve a randomised `deploymentID` and a `token` for the cluster to use in all the subsequent requests. Ideally, it will provide some sort of identifier (like `cluster_uuid` or `license.uuid`) so we can revoke its access to any of the endpoints if explicitly requested ([Blocking telemetry input](https://github.com/elastic/telemetry/pull/221) and [Delete previous telemetry data](https://github.com/elastic/telemetry/issues/209)).
-
-I'd appreciate some insights here to come up with a strong handshake mechanism to avoid stealing identities.
-
-In order to _dereference_ the data, we can store these mappings in a Vault or Secrets provider instead of an index in our OpenSearch.
-
-_NB: Not for phase 1_
-
-##### Opt-In|Out
-
-Similar to the current telemetry, we want to keep track of when the user opts in or out of telemetry. The implementation can be very similar to the current one. But we recently learned we need to add the origin to know which application has telemetry disabled (Kibana, Beats, Enterprise Search, ...). This makes me wonder if we will ever want to provide a granular option for the user to cherry-pick which channels are sent and which ones should be disabled.
-
-##### Inject telemetry
-
-In order to minimise the number of requests, this `POST` should accept bulks of data in the payload (minding any payload size limits). It will require authentication based on the `deploymentID` and `token` explained in the [previous endpoint](#authenticate) (_NB: Not for phase 1_).
-
-The received payload will be pushed to a streaming technology (AWS Firehose, Google Pub/Sub, ...). This way we can maintain a buffer in case the ingestion of data spikes or we need to stop our OpenSearch cluster for maintenance purposes.
-
-A subscriber to that stream will receive that info, split the payload into smaller documents per channel and index them into their separate indices.
-
-This indexing should also trigger some additional processes like the **generation of instructions** and _special views_ (only if needed, check the point [Access control](#access-control) for more details).
-
-_NB: We might want to consider some sort of piggy-backing to include the instructions in the response. But for the purpose of this RFC, scalability and separation of concerns, I'd rather keep it for future possible improvements._
-
-##### Retrieve instructions
-
-_NB: Only after phase 3_
-
-This `GET` endpoint should return the list of instructions generated for that deployment. To control the likely ever-growing list of instructions for each deployment, it will accept a `since` query parameter where the requester can specify a timestamp so that only values generated after that point are returned.
-
-This endpoint will read the `instructions-*` indices, filtering `updated-at` by the `since` query parameter (if provided), and it will return the results grouped by channel.
-
-Additionally, we can consider accepting an additional query parameter to retrieve only specific channels, for use cases like distributed components (endpoint, apm, beats, ...) polling instructions themselves.
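-
-A hypothetical poll from the Local Pulse Service could then look roughly like this; the base URL, header and response
-shape are assumptions for illustration only:
-
-```typescript
-declare const PULSE_SERVICE_URL: string; // base URL of the Remote Pulse Service
-declare const token: string;             // token obtained from the Authenticate endpoint
-declare const lastPoll: string;          // timestamp of the previous successful poll
-
-const response = await fetch(
-  `${PULSE_SERVICE_URL}/instructions?since=${encodeURIComponent(lastPoll)}&channels=errors,newsfeed`,
-  { headers: { Authorization: `Bearer ${token}` } }
-);
-
-// Instructions are expected to come back grouped by channel,
-// e.g. { "errors": [...], "newsfeed": [...] }
-const instructionsByChannel = await response.json();
-```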
-
-#### Data model
-
-The storage of the documents will be based on monthly-rolling indices split by channel. This means we'll have indices like `pulse-raw-{CHANNEL_NAME}-YYYY.MM` and `pulse-instructions-{CHANNEL_NAME}-YYYY.MM` (final names TBD).
-
-The first group will be used to index all the incoming documents from the telemetry, while the second one will contain the instructions to be sent to the deployments.
-
-The mapping for those indices will be **`strict`** to avoid anyone storing unwanted/not-allowed info. The indexer defined in [the _Inject telemetry_ endpoint](#inject-telemetry) will need to handle the errors derived from the strict mapping accordingly.
-We'll set up a process to add new mappings and their descriptions before every new release.
-
-#### Access control
-
-- The access to the _raw_ data indices will be very limited, granted only to those who need to troubleshoot the service and maintain the mappings (currently the Pulse/Telemetry team).
-- Special views (as in aggregations/visualisations/snapshots of the data stored in special indices via separate indexers/aggregators/ES transforms, or via _BigQuery_ or similar) will be defined for different roles in the company to help them make informed decisions based on the data.
-This way we'll be able to control "who can see what" on a very granular basis. It will also provide us with more flexibility to change the structure of the _raw_ data if needed.
-
-### 2. Local Pulse Service
-
-This refers to the plugin running in Kibana in each of our customers' deployments. It will be a core service in NP, available for all plugins to get the existing channels, to send pieces of data, and subscribe to instructions.
-
-The channel handlers are only defined inside the pulse context and are used to normalise the data for each channel before sending it to the remote service. The CODEOWNERS file should ensure the Pulse team is notified every time there's an intended change in this context.
-
-#### Data storage
-
-For the purpose of transparency, we want the user to be able to retrieve the telemetry we send at any point, so we should store the information we send for each channel in their own local _dot_ internal indices (similar to a copy of the `pulse-raw-*` and `pulse-instructions-*` indices in our remote service). We may want to also sync back from the remote service any updates we do to the documents: enrichment of the document, anonymisation, categorisation when it makes sense in that specific channel, ...
-
-In the same effort, we could even provide some _dashboards_ in Kibana for specific roles in the cluster to understand more about their deployment.
-
-Only those specific roles (admin?) should have access to these local indices, unless they grant permissions to other users they want to share this information with.
-
-The users should be able to control how long they want to keep that information via ILM. A default ILM policy will be set up during startup if it doesn't exist.
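-
-A rough sketch of that startup check follows. The policy name, the retention period, and the `getIlmPolicy`/`putIlmPolicy` wrappers around Elasticsearch's `GET/PUT _ilm/policy/<name>` APIs are assumptions for illustration only:
-
-```typescript
-async function ensureDefaultPulseIlmPolicy(
-  getIlmPolicy: (name: string) => Promise<object | undefined>,
-  putIlmPolicy: (name: string, policy: object) => Promise<void>
-) {
-  const name = 'pulse-local-default'; // hypothetical policy name
-  if (await getIlmPolicy(name)) {
-    return; // keep any policy the administrator has already customised
-  }
-  await putIlmPolicy(name, {
-    policy: {
-      phases: {
-        hot: { actions: {} },
-        delete: { min_age: '90d', actions: { delete: {} } }, // placeholder retention, user-adjustable
-      },
-    },
-  });
-}
-```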
-
-#### Sending telemetry
-
-The telemetry will preferably be sent from the server, only falling back to the browser if we detect that the server is behind a firewall and cannot reach the service, or if the user explicitly sets that behaviour in the config.
-
-Periodically, the process (either in the server or the browser) will retrieve the telemetry to be sent by the channels, compile it into one bulk payload and send it, encrypted, to the [ingest endpoint](#inject-telemetry) explained earlier.
-
-How often the data is sent depends on the channel specifications. We will have four levels of periodicity:
-
-- `URGENT`: The data is sent as soon as possible.
-- `HIGH`: Sent every hour.
-- `NORMAL`: Sent every 24 hours.
-- `LOW`: Sent every 3 days.
-
-Some throttling policy should be applied to prevent abuse of the `URGENT` level.
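-
-As a minimal sketch (the helper names and the scheduling mechanism are illustrative, not a design decision), these levels could map to send intervals like this:
-
-```typescript
-type Periodicity = 'URGENT' | 'HIGH' | 'NORMAL' | 'LOW';
-
-const SEND_INTERVALS_MS: Record<Periodicity, number> = {
-  URGENT: 0,                    // flushed as soon as possible (still subject to throttling)
-  HIGH: 60 * 60 * 1000,         // every hour
-  NORMAL: 24 * 60 * 60 * 1000,  // every 24 hours
-  LOW: 3 * 24 * 60 * 60 * 1000, // every 3 days
-};
-
-function schedulePulseSender(
-  periodicity: Periodicity,
-  collectAndSend: () => Promise<void> // compiles the channels' payloads into one bulk request
-) {
-  const interval = SEND_INTERVALS_MS[periodicity];
-  if (interval === 0) {
-    // URGENT channels trigger a send right away; a real implementation would throttle here.
-    return collectAndSend();
-  }
-  return setInterval(() => {
-    collectAndSend().catch(() => {
-      // swallow the error here and retry on the next tick
-    });
-  }, interval);
-}
-```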
-
-#### Instruction polling
-
-Similarly to the sending of the telemetry, the instruction polling should happen only on one end (either the server or the browser). It will store the responses in the local index for each channel and the plugins reacting to those instructions will be able to consume that information based on their own needs (either load only the new ones or all the historic data at once).
-
-Depending on the plugins' subscriptions to the channels, the polling will happen with different periodicity, similar to the levels described in the section above.
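-
-A rough sketch of one polling iteration is shown below; `fetchInstructions` and `storeLocally` are assumed helpers standing in for the remote `GET` endpoint and the local index writes:
-
-```typescript
-import { Subject } from 'rxjs';
-
-// Stream the plugins can subscribe to (per-channel filtering omitted for brevity)
-const instructions$ = new Subject<{ channel: string; instruction: unknown }>();
-
-async function pollInstructionsOnce(
-  fetchInstructions: (since?: string) => Promise<Record<string, unknown[]>>,
-  storeLocally: (channel: string, docs: unknown[]) => Promise<void>,
-  since?: string
-) {
-  const byChannel = await fetchInstructions(since);
-  for (const [channel, docs] of Object.entries(byChannel)) {
-    await storeLocally(channel, docs); // keep the local copy for transparency
-    docs.forEach((instruction) => instructions$.next({ channel, instruction }));
-  }
-}
-```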
-
-#### Exposing channels to the plugins
-
-The plugins will be able to send messages and/or consume instructions for any channel by using the methods provided as part of the `coreContext` in the `setup` and `start` lifecycle methods in a fashion like (types to be properly defined when implementing it):
-
-```typescript
-const coreContext: CoreSetup | CoreStart = {
-  ...existingCoreContext,
-  pulse: {
-    // Report a new document to the given channel (the channel handler normalises and stores it)
-    sendToChannel: async <C extends keyof Channels>(channelName: C, payload: Channels[C]) => void,
-    // Subscribe to the stream of instructions received for the given channel
-    instructionsFromChannel$: (channelName: keyof Channels) => Observable,
-  },
-}
-```
-
-Plugins will simply need to call `core.pulse.sendToChannel('errors', myUnexpectedErrorIWantToReport)` whenever they want to report any new data to that channel. This will call the channel's handler to store the data.
-
-Similarly, they'll be able to subscribe to channels like:
-
-```typescript
-core.pulse.instructionsFromChannel$('ui_behaviour_tracking')
- .pipe(filterInstructionsForMyPlugin) // Initially, we won't filter the instructions based on the plugin ID (might not be necessary in all cases)
- .subscribe(changeTheOrderOfTheComponents);
-```
-
-Internally, those methods should append the `pluginId` so we know who is sending/receiving the info.
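-
-For illustration only (the wrapper name is hypothetical), the core could tag outgoing documents like this:
-
-```typescript
-const sendToChannelForPlugin =
-  (pluginId: string, rawSend: (channel: string, payload: object) => Promise<void>) =>
-  (channel: string, payload: object) =>
-    rawSend(channel, { ...payload, pluginId }); // the handler then knows which plugin reported the data
-```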
-
-##### The _legacy_ collection
-
-The current telemetry collection via the `UsageCollector` service will be maintained until all the current telemetry is fully migrated into its own channels. In the meantime, the current existing telemetry will be sent to Pulse as the `legacy` channel. This way we can maintain the same architecture for the old and new telemetry to come. At this stage, there is no need for any plugin to update its logic unless it wants to send more granular data using other (even plugin-specific) channels.
-
-The mapping for this `legacy` channel will be kept `dynamic: false` instead of `strict` to ensure compatibility.
-
-# Drawbacks
-
-- Pushing data into telemetry nowadays is as simple as implementing your own `usageCollector`. For consuming, though, the telemetry team needs to update the mappings, but as soon as they do so, the previous data is available. Now we'll be stricter about the mapping, rejecting any data that does not comply. Changing the structure of the reported data will result in data loss in that channel.
-- Hard dependency on the Pulse team's availability to update the metrics and on the Infra team to deploy the instruction handlers.
-- Testing architecture: any dockerised way to test the local dev environment?
-- We'll increase the local usage of indices, making it more expensive for users to maintain the cluster. We need to be careful with this! Although it might not change much compared to the current implementation if any plugin decides to maintain its own index/saved objects to do aggregations afterwards. Similarly, more granularity per channel may involve more network usage.
-- It is indeed a breaking change, but it can be migrated over time as new features make use of the instructions.
-- We need to update other products already reporting telemetry from outside Kibana (like Beats, Enterprise Search, Logstash, ...) to use the new way of pushing telemetry.
-
-# Alternatives
-
-> What other designs have been considered?
-
-We currently have the newsfeed to communicate with the user. Kibana actually pulls from a public API to retrieve the list of entries to be shown in the notification bar. But this is limited to notifications to the user, while the new _instructions_ can provide capabilities like self-update/self-configuration of components like endpoints, elasticsearch, ...
-
-> What is the impact of not doing this?
-
-Users might not see any benefit from providing telemetry and will opt out. The quality of the telemetry will likely not be as good (or it will require a higher effort on the plugin end to provide it, like in [the latest lens effort](https://github.com/elastic/kibana/issues/46599#issuecomment-545024137)).
-
-# Adoption strategy
-
-Initially, we'll focus on the remote service and move the current telemetry to report as a `"legacy"` channel to the new Pulse service.
-
-Then, we'll focus on the client side, providing new APIs to report the data while aiming for minimal changes on the public end. For instance, the current usage collectors already report an ID, so we can work on mapping those IDs to channels (only grouping them when it makes sense). Nevertheless, it will require the devs to engage with the Pulse team for the mappings and definitions to be properly set up and updated, and for any views to be added.
-
-Finally, the instruction handling APIs are completely new and will require development on both the _remote_ and _local_ ends for instruction generation and handling.
-
-# How we teach this
-
-> What names and terminology work best for these concepts and why? How is this
-idea best presented? As a continuation of existing Kibana patterns?
-
-We have three points of view to show here:
-
-- From the users' perspective, we need to show them the value of having telemetry activated.
-- From the devs, how to generate data and consume instructions.
-- From the PMs, how to consume the views + definitions of the fields.
-
-> Would the acceptance of this proposal mean the Kibana documentation must be
-re-organized or altered? Does it change how Kibana is taught to new developers
-at any level?
-
-This telemetry is supposed to be internal only. Only internal developers will be able to add to it, so the documentation will only be for internal purposes. As mentioned in the _Adoption strategy_, the idea is that devs who want to report new data to telemetry will need to engage with the Pulse team.
-
-> How should this feature be taught to existing Kibana developers?
-
-# Unresolved questions
-
-- We still need to define a proper handshake in the authentication mechanism to reduce the chance of a man-in-the-middle attack or DDoS. We already have some ideas thanks to @jportner and @kobelb, but this will be resolved during the _Phase 2_ design.
-- Opt-in/out per channel?
diff --git a/rfcs/text/0012_encryption_key_rotation.md b/rfcs/text/0012_encryption_key_rotation.md
deleted file mode 100644
index 66bfa6b147eb..000000000000
--- a/rfcs/text/0012_encryption_key_rotation.md
+++ /dev/null
@@ -1,119 +0,0 @@
-- Start Date: 2020-07-22
-- RFC PR: [#72828](https://github.com/elastic/kibana/pull/72828)
-- Kibana Issue: (leave this empty)
-
-# Summary
-
-This RFC proposes a way of rotating the encryption key (`xpack.encryptedSavedObjects.encryptionKey`) that would allow administrators to seamlessly change the existing encryption key without any data loss or manual intervention.
-
-# Basic example
-
-When administrators decide to rotate the encryption key, they will have to generate a new one and move the old key(s) to the `keyRotation` section in `opensearch_dashboards.yml`:
-
-```yaml
-xpack.encryptedSavedObjects:
- encryptionKey: "NEW-encryption-key"
- keyRotation:
- decryptionOnlyKeys: ["OLD-encryption-key-1", "OLD-encryption-key-2"]
-```
-
-Before the old decryption-only keys are disposed of, administrators may want to call a dedicated and _protected_ API endpoint that will go through all registered Saved Objects with encrypted attributes and try to re-encrypt them with the primary encryption key:
-
-```http request
-POST https://localhost:5601/api/encrypted_saved_objects/rotate_key?conflicts=abort
-Content-Type: application/json
-Osd-Xsrf: true
-```
-
-# Motivation
-
-Today, when the encryption key changes, we can no longer decrypt Saved Object attributes that were previously encrypted with the `EncryptedSavedObjects` plugin. We handle this case in a few different ways, depending on the context:
-
-* If consumers explicitly request decryption via `getDecryptedAsInternalUser()`, we abort the operation and throw an exception.
-* If consumers fetch Saved Objects with encrypted attributes that should be automatically decrypted (the ones with the `dangerouslyExposeValue: true` marker) via standard Saved Objects APIs, we don't abort the operation, but rather strip all encrypted attributes from the response and record the decryption error in the `error` Saved Object field.
-* If OpenSearch Dashboards tries to migrate encrypted Saved Objects at startup, we abort the operation and throw an exception.
-
-In all of these cases we throw or record an error with a specific type to allow consumers to gracefully handle this scenario and either drop Saved Objects with unrecoverable encrypted attributes or facilitate the process of re-entering and re-encrypting the new values.
-
-This approach works reasonably well in some scenarios, but it may become very troublesome if we have to deal with lots of Saved Objects. Moreover, we'd like to recommend that our users periodically rotate encryption keys even if they aren't compromised. Hence, we need to provide a way of seamlessly migrating the existing encrypted Saved Objects to a new encryption key.
-
-There are two main scenarios we'd like to cover in this RFC:
-
-## Encryption key is not available
-
-Administrators may lose the existing encryption key or explicitly decide not to use it if it was compromised and users can no longer trust encrypted content that may have been tampered with. In this scenario, the encrypted portion of the existing Saved Objects is considered lost, and the only way to recover from this state is the manual intervention described previously. That means `EncryptedSavedObjects` plugin consumers __should__ continue supporting this scenario even after we implement the proper encryption key rotation mechanism described in this RFC.
-
-## Encryption key is available, but needs to be rotated
-
-In this scenario a new encryption key (the primary encryption key) will be generated, and we will use it to encrypt new or updated Saved Objects. We will still need to know the old encryption key to decrypt existing attributes, but we will no longer use this key to encrypt any of the new or existing Saved Objects. It should also be possible to have multiple old decryption-only keys.
-
-The old decryption-only keys should eventually be disposed of, and users should have a way to make sure all existing Saved Objects are re-encrypted with the new primary encryption key.
-
-__NOTE:__ users can get into a state where different Saved Objects are encrypted with different encryption keys even if they didn't intend to rotate the encryption key. We anticipate that this can happen during an initial Elastic Stack HA setup, when, by mistake or intentionally, different OpenSearch Dashboards instances were using different encryption keys. The key rotation mechanism can help fix this issue without data loss.
-
-# Detailed design
-
-The core idea is that when the encryption key needs to be rotated, a new key is generated and becomes the primary one, and the old one moves to the `keyRotation` section:
-
-```yaml
-xpack.encryptedSavedObjects:
- encryptionKey: "NEW-encryption-key"
- keyRotation:
- decryptionOnlyKeys: ["OLD-encryption-key"]
-```
-
-As the name implies, the keys from `decryptionOnlyKeys` are only used to decrypt content that we cannot decrypt with the primary encryption key. It's allowed to have multiple decryption-only keys at the same time. When a user creates a new Saved Object or updates an existing one, its content is always encrypted with the primary encryption key. The config schema won't allow having the same key in both `encryptionKey` and `decryptionOnlyKeys`.
-
-Having multiple decryption keys at the same time brings one problem though: we need to figure out which key to use to decrypt a specific Saved Object. If our encryption keys had a unique ID that we stored together with the encrypted data (we cannot use an encryption key hash for that, for obvious reasons), we could know for sure which key to use, but we don't have such functionality right now and it may not be the easiest one to manage through `yml` configuration anyway.
-
-Instead, this RFC proposes to try available existing decryption keys one by one to decrypt Saved Object and always start from the primary one. This way we won't incur any penalty while decrypting Saved Objects that are already encrypted with the primary encryption key, but there will still be some cost when we have to perform multiple decryption attempts. See the [`Drawbacks`](#drawbacks) section for the details.
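-
-A minimal sketch of this "try the primary key first, then fall back" approach is shown below; the injected `decrypt` helper is an assumption standing in for whatever crypto primitive the plugin uses, not its actual API:
-
-```typescript
-async function decryptWithAnyKey(
-  decrypt: (key: string, encrypted: string) => Promise<string>, // assumed crypto helper
-  primaryKey: string,
-  decryptionOnlyKeys: string[],
-  encryptedValue: string
-): Promise<{ value: string; usedPrimaryKey: boolean }> {
-  const keys = [primaryKey, ...decryptionOnlyKeys];
-  let lastError: unknown;
-  for (const key of keys) {
-    try {
-      const value = await decrypt(key, encryptedValue);
-      return { value, usedPrimaryKey: key === primaryKey };
-    } catch (err) {
-      lastError = err; // wrong key (or tampered content); try the next one
-    }
-  }
-  throw lastError; // none of the configured keys could decrypt the value
-}
-```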
-
-Technically, just having `decryptionOnlyKeys` would be enough to cover the majority of the use cases, but the old decryption-only keys should eventually be disposed of. At that point administrators would like to make sure _all_ Saved Objects are encrypted with the new primary encryption key. Another reason to re-encrypt all existing Saved Objects with the new key at once is to preventively reduce the performance impact of the multiple decryption attempts.
-
-We'd like to make this process as simple as possible while meeting the following requirements:
-
-* It should not be required to restart OpenSearch Dashboards to perform this type of migration, since Saved Objects encrypted with another encryption key can theoretically appear at any point in time.
-* It should be possible to integrate this operation into other operational flows our users may have and into any user-friendly key management UIs we may introduce in the future.
-* Any possible failures that may happen during this operation shouldn't make OpenSearch Dashboards nonfunctional.
-* Ordinary users should not be able to trigger this migration since it may consume a considerable amount of computing resources.
-
-We think that the best option we have right now is a dedicated API endpoint that would trigger this migration:
-
-```http request
-POST https://localhost:5601/api/encrypted_saved_objects/rotate_key?conflicts=abort
-Content-Type: application/json
-Osd-Xsrf: true
-```
-
-This will be a protected endpoint, and only users with sufficient privileges will be able to use it.
-
-Under the hood we'll scroll over all Saved Objects that are registered with the `EncryptedSavedObjects` plugin and re-encrypt attributes only for those that can only be decrypted with one of the old decryption-only keys. Saved Objects that can be decrypted with the primary encryption key will be ignored. We'll also ignore the ones that cannot be decrypted with any of the available decryption keys at all, and presumably return their IDs in the response.
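-
-To make that loop concrete, here is a hypothetical sketch of the batch re-encryption (the paging and persistence helpers are assumptions, not the plugin's actual internals); it reuses the `decryptWithAnyKey` idea from the previous section:
-
-```typescript
-interface EncryptedObjectRef { id: string; type: string; encrypted: string }
-
-async function rotateKey(deps: {
-  findEncryptedObjectsPage: (page: number) => Promise<EncryptedObjectRef[]>;
-  decryptWithAnyKey: (encrypted: string) => Promise<{ value: string; usedPrimaryKey: boolean }>;
-  encryptWithPrimaryKey: (value: string) => Promise<string>;
-  save: (obj: EncryptedObjectRef) => Promise<void>;
-}) {
-  const failed: string[] = [];
-  let successful = 0;
-  for (let page = 0; ; page++) {
-    const objects = await deps.findEncryptedObjectsPage(page);
-    if (objects.length === 0) break;
-    for (const obj of objects) {
-      try {
-        const { value, usedPrimaryKey } = await deps.decryptWithAnyKey(obj.encrypted);
-        if (usedPrimaryKey) continue; // already encrypted with the primary key, nothing to do
-        await deps.save({ ...obj, encrypted: await deps.encryptWithPrimaryKey(value) });
-        successful++;
-      } catch (err) {
-        failed.push(obj.id); // cannot be decrypted with any available key; report it back
-      }
-    }
-  }
-  return { successful, failed }; // the failed IDs would be returned in the API response
-}
-```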
-
-As with any other encryption or decryption operation, we'll record the relevant bits in the audit logs.
-
-# Benefits
-
-* The concept of decryption-only keys is easy to grasp and allows OpenSearch Dashboards to function even if it has a mix of Saved Objects encrypted with different encryption keys.
-* Support of the key rotation out of the box decreases the chances of the data loss and makes `EncryptedSavedObjects` story more secure and approachable overall.
-
-# Drawbacks
-
-* Multiple decryption attempts affect performance. See [the performance test results](https://github.com/elastic/kibana/pull/72420#issue-453400211) for more details, but making two decryption attempts is basically twice as slow as making a single attempt. Although it's only relevant for the encrypted Saved Objects migration performed at startup and for batch operations that trigger automatic decryption (only for the Saved Objects registered with the `dangerouslyExposeValue: true` marker, which nobody is using in Kibana right now), we may have more use cases in the future.
-* Historically we have supported Kibana features with either configuration or a dedicated UI, but in this case we want to introduce an API endpoint that _should be_ used directly. We may have a key management UI in the future though.
-
-# Alternatives
-
-We cannot think of any better alternative to `decryptionOnlyKeys` at the moment, but instead of an API endpoint for the batch re-encryption we could potentially use another `opensearch_dashboards.yml` config option, for example `keyRotation.mode: onWrite | onStart | both`. That feels a bit hacky, though, and cannot really be integrated with anything else.
-
-# Adoption strategy
-
-Adoption strategy is pretty straightforward since the feature is an enhancement and doesn't bring any BWC concerns.
-
-# How we teach this
-
-Key rotation is a well-known paradigm. We'll update `README.md` of the `EncryptedSavedObjects` plugin and create a dedicated section in the public Kibana documentation.
-
-# Unresolved questions
-
-* Is it reasonable to have this feature in Basic?
-* Are there any other use-cases that are not covered by the proposal?
diff --git a/vars/agentInfo.groovy b/vars/agentInfo.groovy
deleted file mode 100644
index 166a86c16926..000000000000
--- a/vars/agentInfo.groovy
+++ /dev/null
@@ -1,40 +0,0 @@
-def print() {
- catchError(catchInterruptions: false, buildResult: null) {
- def startTime = sh(script: "date -d '-3 minutes' -Iseconds | sed s/+/%2B/", returnStdout: true).trim()
- def endTime = sh(script: "date -d '+1 hour 30 minutes' -Iseconds | sed s/+/%2B/", returnStdout: true).trim()
-
- def resourcesUrl =
- (
- "https://infra-stats.elastic.co/app/kibana#/visualize/edit/8bd92360-1b92-11ea-b719-aba04518cc34" +
- "?_g=(time:(from:'${startTime}',to:'${endTime}'))" +
- "&_a=(query:'host.name:${env.NODE_NAME}')"
- )
- .replaceAll("'", '%27') // Need to escape ' because of the shell echo below, but can't really replace "'" with "\'" because of groovy sandbox
- .replaceAll(/\)$/, '%29') // This is just here because the URL parsing in the Jenkins console doesn't work right
-
- def logsStartTime = sh(script: "date -d '-3 minutes' +%s", returnStdout: true).trim()
- def logsUrl =
- (
- "https://infra-stats.elastic.co/app/infra#/logs" +
- "?_g=()&flyoutOptions=(flyoutId:!n,flyoutVisibility:hidden,surroundingLogsId:!n)" +
- "&logFilter=(expression:'host.name:${env.NODE_NAME}',kind:kuery)" +
- "&logPosition=(position:(time:${logsStartTime}000),streamLive:!f)"
- )
- .replaceAll("'", '%27')
- .replaceAll('\\)', '%29')
-
- sh script: """
- set +x
- echo 'Resource Graph:'
- echo '${resourcesUrl}'
- echo ''
- echo 'Agent Logs:'
- echo '${logsUrl}'
- echo ''
- echo 'SSH Command:'
- echo "ssh -F ssh_config \$(hostname --ip-address)"
- """, label: "Worker/Agent/Node debug links"
- }
-}
-
-return this
diff --git a/vars/buildState.groovy b/vars/buildState.groovy
deleted file mode 100644
index 365705661350..000000000000
--- a/vars/buildState.groovy
+++ /dev/null
@@ -1,30 +0,0 @@
-import groovy.transform.Field
-
-public static @Field JENKINS_BUILD_STATE = [:]
-
-def add(key, value) {
- if (!buildState.JENKINS_BUILD_STATE.containsKey(key)) {
- buildState.JENKINS_BUILD_STATE[key] = value
- return true
- }
-
- return false
-}
-
-def set(key, value) {
- buildState.JENKINS_BUILD_STATE[key] = value
-}
-
-def get(key) {
- return buildState.JENKINS_BUILD_STATE[key]
-}
-
-def has(key) {
- return buildState.JENKINS_BUILD_STATE.containsKey(key)
-}
-
-def get() {
- return buildState.JENKINS_BUILD_STATE
-}
-
-return this
diff --git a/vars/catchErrors.groovy b/vars/catchErrors.groovy
deleted file mode 100644
index 2a1b55d83260..000000000000
--- a/vars/catchErrors.groovy
+++ /dev/null
@@ -1,15 +0,0 @@
-// Basically, this is a shortcut for catchError(catchInterruptions: false) {}
-// By default, catchError will swallow aborts/timeouts, which we almost never want
-// Also, by wrapping it in an additional try/catch, we cut down on spam in Pipeline Steps
-def call(Map params = [:], Closure closure) {
- try {
- closure()
- } catch (ex) {
- params.catchInterruptions = false
- catchError(params) {
- throw ex
- }
- }
-}
-
-return this
diff --git a/vars/esSnapshots.groovy b/vars/esSnapshots.groovy
deleted file mode 100644
index 884fbcdb17ae..000000000000
--- a/vars/esSnapshots.groovy
+++ /dev/null
@@ -1,50 +0,0 @@
-def promote(snapshotVersion, snapshotId) {
- def snapshotDestination = "${snapshotVersion}/archives/${snapshotId}"
- def MANIFEST_URL = "https://storage.googleapis.com/kibana-ci-es-snapshots-daily/${snapshotDestination}/manifest.json"
-
- dir('verified-manifest') {
- def verifiedSnapshotFilename = 'manifest-latest-verified.json'
-
- sh """
- curl -O '${MANIFEST_URL}'
- mv manifest.json ${verifiedSnapshotFilename}
- """
-
- googleStorageUpload(
- credentialsId: 'kibana-ci-gcs-plugin',
- bucket: "gs://kibana-ci-es-snapshots-daily/${snapshotVersion}",
- pattern: verifiedSnapshotFilename,
- sharedPublicly: false,
- showInline: false,
- )
- }
-
- // This would probably be more efficient if we could just copy using gsutil and specifying buckets for src and dest
- // But we don't currently have access to the GCS credentials in a way that can be consumed easily from here...
- dir('transfer-to-permanent') {
- googleStorageDownload(
- credentialsId: 'kibana-ci-gcs-plugin',
- bucketUri: "gs://kibana-ci-es-snapshots-daily/${snapshotDestination}/*",
- localDirectory: '.',
- pathPrefix: snapshotDestination,
- )
-
- def manifestJson = readFile file: 'manifest.json'
- writeFile(
- file: 'manifest.json',
- text: manifestJson.replace("kibana-ci-es-snapshots-daily/${snapshotDestination}", "kibana-ci-es-snapshots-permanent/${snapshotVersion}")
- )
-
- // Ideally we would have some delete logic here before uploading,
- // But we don't currently have access to the GCS credentials in a way that can be consumed easily from here...
- googleStorageUpload(
- credentialsId: 'kibana-ci-gcs-plugin',
- bucket: "gs://kibana-ci-es-snapshots-permanent/${snapshotVersion}",
- pattern: '*.*',
- sharedPublicly: false,
- showInline: false,
- )
- }
-}
-
-return this
diff --git a/vars/getCheckoutInfo.groovy b/vars/getCheckoutInfo.groovy
deleted file mode 100644
index f9d797f8127c..000000000000
--- a/vars/getCheckoutInfo.groovy
+++ /dev/null
@@ -1,50 +0,0 @@
-def call(branchOverride) {
- def repoInfo = [
- branch: branchOverride ?: env.ghprbSourceBranch,
- targetBranch: env.ghprbTargetBranch,
- targetsTrackedBranch: true
- ]
-
- if (repoInfo.branch == null) {
- if (!(params.branch_specifier instanceof String)) {
- throw new Exception(
- "Unable to determine branch automatically, either pass a branch name to getCheckoutInfo() or use the branch_specifier param."
- )
- }
-
- // strip prefix from the branch specifier to make it consistent with ghprbSourceBranch
- repoInfo.branch = params.branch_specifier.replaceFirst(/^(refs\/heads\/|origin\/)/, "")
- }
-
- repoInfo.commit = sh(
- script: "git rev-parse HEAD",
- label: "determining checked out sha",
- returnStdout: true
- ).trim()
-
- if (repoInfo.targetBranch) {
-    // Try to fetch from GitHub up to 8 times, waiting 15 secs between attempts
- retryWithDelay(8, 15) {
- sh(
- script: "git fetch origin ${repoInfo.targetBranch}",
- label: "fetch latest from '${repoInfo.targetBranch}' at origin"
- )
- }
-
- repoInfo.mergeBase = sh(
- script: "git merge-base HEAD FETCH_HEAD",
- label: "determining merge point with '${repoInfo.targetBranch}' at origin",
- returnStdout: true
- ).trim()
-
- def pkgJson = readFile("package.json")
- def releaseBranch = toJSON(pkgJson).branch
- repoInfo.targetsTrackedBranch = releaseBranch == repoInfo.targetBranch
- }
-
- print "repoInfo: ${repoInfo}"
-
- return repoInfo
-}
-
-return this
diff --git a/vars/githubCommitStatus.groovy b/vars/githubCommitStatus.groovy
deleted file mode 100644
index 248d226169a6..000000000000
--- a/vars/githubCommitStatus.groovy
+++ /dev/null
@@ -1,55 +0,0 @@
-def defaultCommit() {
- if (buildState.has('checkoutInfo')) {
- return buildState.get('checkoutInfo').commit
- }
-}
-
-def onStart(commit = defaultCommit(), context = 'kibana-ci') {
- catchError {
- if (githubPr.isPr() || !commit) {
- return
- }
-
- create(commit, 'pending', 'Build started.', context)
- }
-}
-
-def onFinish(commit = defaultCommit(), context = 'kibana-ci') {
- catchError {
- if (githubPr.isPr() || !commit) {
- return
- }
-
- def status = buildUtils.getBuildStatus()
-
- if (status == 'SUCCESS' || status == 'UNSTABLE') {
- create(commit, 'success', 'Build completed successfully.', context)
- } else if(status == 'ABORTED') {
- create(commit, 'error', 'Build aborted or timed out.', context)
- } else {
- create(commit, 'error', 'Build failed.', context)
- }
- }
-}
-
-def trackBuild(commit, context, Closure closure) {
- onStart(commit, context)
- catchError {
- closure()
- }
- onFinish(commit, context)
-}
-
-// state: error|failure|pending|success
-def create(sha, state, description, context) {
- withGithubCredentials {
- return githubApi.post("repos/elastic/kibana/statuses/${sha}", [
- state: state,
- description: description,
- context: context,
- target_url: env.BUILD_URL
- ])
- }
-}
-
-return this
diff --git a/vars/githubPr.groovy b/vars/githubPr.groovy
deleted file mode 100644
index 546a6785ac2f..000000000000
--- a/vars/githubPr.groovy
+++ /dev/null
@@ -1,308 +0,0 @@
-/**
- Wraps the main/important part of a job, executes it, and then publishes a comment to GitHub with the status.
-
- It will check for the existence of GHPRB env variables before doing any actual PR work,
- so it can be used to wrap code that is executed in both PR and non-PR contexts.
-
- Inside the comment, it will hide a JSON blob containing build data (status, etc).
-
- Then, the next time it posts a comment, it will:
- 1. Read the previous comment and parse the json
- 2. Create a new comment, add a summary of up to 5 previous builds to it, and append this build's data to the hidden JSON
- 3. Delete the old comment
-
- So, there is only ever one build status comment on a PR at any given time, the most recent one.
-*/
-def withDefaultPrComments(closure) {
- catchErrors {
-    // kibanaPipeline.notifyOnError() needs to know if comments are enabled, so let's track it with a global
- // isPr() just ensures this functionality is skipped for non-PR builds
- buildState.set('PR_COMMENTS_ENABLED', isPr())
- catchErrors {
- closure()
- }
- sendComment(true)
- }
-}
-
-def sendComment(isFinal = false) {
- if (!buildState.get('PR_COMMENTS_ENABLED')) {
- return
- }
-
- def status = buildUtils.getBuildStatus()
- if (status == "ABORTED") {
- return
- }
-
- def lastComment = getLatestBuildComment()
- def info = getLatestBuildInfo(lastComment) ?: [:]
- info.builds = (info.builds ?: []).takeRight(5) // Rotate out old builds
-
- // If two builds are running at the same time, the first one should not post a comment after the second one
- if (info.number && info.number.toInteger() > env.BUILD_NUMBER.toInteger()) {
- return
- }
-
- def shouldUpdateComment = !!info.builds.find { it.number == env.BUILD_NUMBER }
-
- def message = getNextCommentMessage(info, isFinal)
-
- if (shouldUpdateComment) {
- updateComment(lastComment.id, message)
- } else {
- createComment(message)
-
- if (lastComment && lastComment.user.login == 'kibanamachine') {
- deleteComment(lastComment.id)
- }
- }
-}
-
-// Checks whether or not this currently executing build was triggered via a PR in the elastic/kibana repo
-def isPr() {
- return !!(env.ghprbPullId && env.ghprbPullLink && env.ghprbPullLink =~ /\/elastic\/kibana\//)
-}
-
-def getLatestBuildComment() {
- return getComments()
- .reverse()
-    .find { (it.user.login == 'elasticmachine' || it.user.login == 'kibanamachine') && it.body =~ /<!--PIPELINE/ }
-}
-
-// Parse the hidden JSON blob of build data out of a previous status comment
-def getBuildInfoFromComment(commentText) {
-  def matches = commentText =~ /(?ms)<!--PIPELINE(.*?)PIPELINE-->/
-  if (!matches || !matches[0]) {
-    return null
-  }
-
-  return toJSON(matches[0][1].trim())
-}
-
-def getLatestBuildInfo() {
- return getLatestBuildInfo(getLatestBuildComment())
-}
-
-def getLatestBuildInfo(comment) {
- return comment ? getBuildInfoFromComment(comment.body) : null
-}
-
-def getHistoryText(builds) {
- if (!builds || builds.size() < 1) {
- return ""
- }
-
- def list = builds
- .reverse()
- .collect { build ->
- if (build.status == "SUCCESS") {
- return "* :green_heart: [Build #${build.number}](${build.url}) succeeded ${build.commit}"
- } else if(build.status == "UNSTABLE") {
- return "* :yellow_heart: [Build #${build.number}](${build.url}) was flaky ${build.commit}"
- } else {
- return "* :broken_heart: [Build #${build.number}](${build.url}) failed ${build.commit}"
- }
- }
- .join("\n")
-
- return "### History\n${list}"
-}
-
-def getTestFailuresMessage() {
- def failures = testUtils.getFailures()
- if (!failures) {
- return ""
- }
-
- def messages = []
- messages << "---\n\n### [Test Failures](${env.BUILD_URL}testReport)"
-
- failures.take(3).each { failure ->
- messages << """
-${failure.fullDisplayName}
-
-[Link to Jenkins](${failure.url})
-"""
-
- if (failure.stdOut) {
- messages << "\n#### Standard Out\n```\n${failure.stdOut}\n```"
- }
-
- if (failure.stdErr) {
- messages << "\n#### Standard Error\n```\n${failure.stdErr}\n```"
- }
-
- if (failure.stacktrace) {
- messages << "\n#### Stack Trace\n```\n${failure.stacktrace}\n```"
- }
-
- messages << " \n\n---"
- }
-
- if (failures.size() > 3) {
- messages << "and ${failures.size() - 3} more failures, only showing the first 3."
- }
-
- return messages.join("\n")
-}
-
-def getBuildStatusIncludingMetrics() {
- def status = buildUtils.getBuildStatus()
-
- if (status == 'SUCCESS' && shouldCheckCiMetricSuccess() && !ciStats.getMetricsSuccess()) {
- return 'FAILURE'
- }
-
- return status
-}
-
-def getNextCommentMessage(previousCommentInfo = [:], isFinal = false) {
- def info = previousCommentInfo ?: [:]
- info.builds = previousCommentInfo.builds ?: []
-
- // When we update an in-progress comment, we need to remove the old version from the history
- info.builds = info.builds.findAll { it.number != env.BUILD_NUMBER }
-
- def messages = []
-
- def status = isFinal
- ? getBuildStatusIncludingMetrics()
- : buildUtils.getBuildStatus()
-
- if (!isFinal) {
- def failuresPart = status != 'SUCCESS' ? ', with failures' : ''
- messages << """
- ## :hourglass_flowing_sand: Build in-progress${failuresPart}
- * [continuous-integration/kibana-ci/pull-request](${env.BUILD_URL})
- * Commit: ${getCommitHash()}
- * This comment will update when the build is complete
- """
- } else if (status == 'SUCCESS') {
- messages << """
- ## :green_heart: Build Succeeded
- * [continuous-integration/kibana-ci/pull-request](${env.BUILD_URL})
- * Commit: ${getCommitHash()}
- """
- } else if(status == 'UNSTABLE') {
- def message = """
- ## :yellow_heart: Build succeeded, but was flaky
- * [continuous-integration/kibana-ci/pull-request](${env.BUILD_URL})
- * Commit: ${getCommitHash()}
- """.stripIndent()
-
- def failures = retryable.getFlakyFailures()
- if (failures && failures.size() > 0) {
- def list = failures.collect { " * ${it.label}" }.join("\n")
- message += "* Flaky suites:\n${list}"
- }
-
- messages << message
- } else {
- messages << """
- ## :broken_heart: Build Failed
- * [continuous-integration/kibana-ci/pull-request](${env.BUILD_URL})
- * Commit: ${getCommitHash()}
- * [Pipeline Steps](${env.BUILD_URL}flowGraphTable) (look for red circles / failed steps)
- * [Interpreting CI Failures](https://www.elastic.co/guide/en/kibana/current/interpreting-ci-failures.html)
- """
- }
-
- if (status != 'SUCCESS' && status != 'UNSTABLE') {
- try {
- def steps = getFailedSteps()
- if (steps?.size() > 0) {
- def list = steps.collect { "* [${it.displayName}](${it.logs})" }.join("\n")
- messages << "### Failed CI Steps\n${list}"
- }
- } catch (ex) {
- buildUtils.printStacktrace(ex)
- print "Error retrieving failed pipeline steps for PR comment, will skip this section"
- }
- }
-
- messages << getTestFailuresMessage()
-
- if (isFinal) {
- messages << ciStats.getMetricsReport()
- }
-
- if (info.builds && info.builds.size() > 0) {
- messages << getHistoryText(info.builds)
- }
-
- messages << "To update your PR or re-run it, just comment with:\n`@elasticmachine merge upstream`"
-
- info.builds << [
- status: status,
- url: env.BUILD_URL,
- number: env.BUILD_NUMBER,
- commit: getCommitHash()
- ]
-
-  // Append the hidden JSON blob (wrapped in an HTML comment) so the next build can parse this comment
-  messages << """
-    <!--PIPELINE
-    ${toJSON(info).toString()}
-    PIPELINE-->
-  """
-
- return messages
- .findAll { !!it } // No blank strings
- .collect { it.stripIndent().trim() } // This just allows us to indent various strings above, but leaves them un-indented in the comment
- .join("\n\n")
-}
-
-def createComment(message) {
- if (!isPr()) {
- error "Trying to post a GitHub PR comment on a non-PR or non-elastic PR build"
- }
-
- withGithubCredentials {
- return githubApi.post("repos/elastic/kibana/issues/${env.ghprbPullId}/comments", [ body: message ])
- }
-}
-
-def getComments() {
- withGithubCredentials {
- return githubIssues.getComments(env.ghprbPullId)
- }
-}
-
-def updateComment(commentId, message) {
- if (!isPr()) {
- error "Trying to post a GitHub PR comment on a non-PR or non-elastic PR build"
- }
-
- withGithubCredentials {
- def path = "repos/elastic/kibana/issues/comments/${commentId}"
- def json = toJSON([ body: message ]).toString()
-
- def resp = githubApi([ path: path ], [ method: "POST", data: json, headers: [ "X-HTTP-Method-Override": "PATCH" ] ])
- return toJSON(resp)
- }
-}
-
-def deleteComment(commentId) {
- withGithubCredentials {
- def path = "repos/elastic/kibana/issues/comments/${commentId}"
- return githubApi([ path: path ], [ method: "DELETE" ])
- }
-}
-
-def getCommitHash() {
- return env.ghprbActualCommit
-}
-
-def getFailedSteps() {
- return jenkinsApi.getFailedSteps()?.findAll { step ->
- step.displayName != 'Check out from version control'
- }
-}
-
-def shouldCheckCiMetricSuccess() {
-  // disable ciMetrics success check when a PR is targeting a non-tracked branch
- if (buildState.has('checkoutInfo') && !buildState.get('checkoutInfo').targetsTrackedBranch) {
- return false
- }
-
- return true
-}
diff --git a/vars/jenkinsApi.groovy b/vars/jenkinsApi.groovy
deleted file mode 100644
index 57818593ffeb..000000000000
--- a/vars/jenkinsApi.groovy
+++ /dev/null
@@ -1,21 +0,0 @@
-def getSteps() {
- def url = "${env.BUILD_URL}api/json?tree=actions[nodes[iconColor,running,displayName,id,parents]]"
- def responseRaw = httpRequest([ method: "GET", url: url ])
- def response = toJSON(responseRaw)
-
- def graphAction = response?.actions?.find { it._class == "org.jenkinsci.plugins.workflow.job.views.FlowGraphAction" }
-
- return graphAction?.nodes
-}
-
-def getFailedSteps() {
- def steps = getSteps()
- def failedSteps = steps?.findAll { (it.iconColor == "red" || it.iconColor == "red_anime") && it._class == "org.jenkinsci.plugins.workflow.cps.nodes.StepAtomNode" }
- failedSteps.each { step ->
- step.logs = "${env.BUILD_URL}execution/node/${step.id}/log".toString()
- }
-
- return failedSteps
-}
-
-return this
diff --git a/vars/kibanaPipeline.groovy b/vars/kibanaPipeline.groovy
deleted file mode 100644
index fae649b93383..000000000000
--- a/vars/kibanaPipeline.groovy
+++ /dev/null
@@ -1,444 +0,0 @@
-def withPostBuildReporting(Map params, Closure closure) {
- try {
- closure()
- } finally {
- def parallelWorkspaces = []
- try {
- parallelWorkspaces = getParallelWorkspaces()
- } catch(ex) {
- print ex
- }
-
- if (params.runErrorReporter) {
- catchErrors {
- runErrorReporter([pwd()] + parallelWorkspaces)
- }
- }
-
- catchErrors {
- publishJunit()
- }
-
- catchErrors {
- def parallelWorkspace = "${env.WORKSPACE}/parallel"
- if (fileExists(parallelWorkspace)) {
- dir(parallelWorkspace) {
- def workspaceTasks = [:]
-
- parallelWorkspaces.each { workspaceDir ->
- workspaceTasks[workspaceDir] = {
- dir(workspaceDir) {
- catchErrors {
- runbld.junit()
- }
- }
- }
- }
-
- if (workspaceTasks) {
- parallel(workspaceTasks)
- }
- }
- }
- }
- }
-}
-
-def getParallelWorkspaces() {
- def workspaces = []
- def parallelWorkspace = "${env.WORKSPACE}/parallel"
- if (fileExists(parallelWorkspace)) {
- dir(parallelWorkspace) {
- // findFiles only returns files if you use glob, so look for a file that should be in every valid workspace
- workspaces = findFiles(glob: '*/opensearch-dashboards/package.json')
- .collect {
- // get the paths to the OpenSearch Dashboards directories for the parallel workspaces
- return parallelWorkspace + '/' + it.path.tokenize('/').dropRight(1).join('/')
- }
- }
- }
-
- return workspaces
-}
-
-def notifyOnError(Closure closure) {
- try {
- closure()
- } catch (ex) {
- // If this is the first failed step, it's likely that the error hasn't propagated up far enough to mark the build as a failure
- currentBuild.result = 'FAILURE'
- catchErrors {
- githubPr.sendComment(false)
- }
- catchErrors {
- // an empty map is a valid config, but is falsey, so let's use .has()
- if (buildState.has('SLACK_NOTIFICATION_CONFIG')) {
- slackNotifications.sendFailedBuild(buildState.get('SLACK_NOTIFICATION_CONFIG'))
- }
- }
- throw ex
- }
-}
-
-def withFunctionalTestEnv(List additionalEnvs = [], Closure closure) {
- // This can go away once everything that uses the deprecated workers.parallelProcesses() is moved to task queue
- def parallelId = env.TASK_QUEUE_PROCESS_ID ?: env.CI_PARALLEL_PROCESS_NUMBER
-
- def opensearchDashboardsPort = "61${parallelId}1"
- def opensearchPort = "61${parallelId}2"
- def opensearchTransportPort = "61${parallelId}3"
- def ingestManagementPackageRegistryPort = "61${parallelId}4"
- def alertingProxyPort = "61${parallelId}5"
-
- withEnv([
- "CI_GROUP=${parallelId}",
- "REMOVE_OPENSEARCH_DASHBOARDS_INSTALL_DIR=1",
- "CI_PARALLEL_PROCESS_NUMBER=${parallelId}",
- "TEST_OPENSEARCH_DASHBOARDS_HOST=localhost",
- "TEST_OPENSEARCH_DASHBOARDS_PORT=${opensearchDashboardsPort}",
- "TEST_OPENSEARCH_DASHBOARDS_URL=http://elastic:changeme@localhost:${opensearchDashboardsPort}",
- "TEST_ES_URL=http://elastic:changeme@localhost:${opensearchPort}",
- "TEST_ES_TRANSPORT_PORT=${opensearchTransportPort}",
- "OSD_NP_PLUGINS_BUILT=true",
- "INGEST_MANAGEMENT_PACKAGE_REGISTRY_PORT=${ingestManagementPackageRegistryPort}",
- "ALERTING_PROXY_PORT=${alertingProxyPort}"
- ] + additionalEnvs) {
- closure()
- }
-}
-
-def functionalTestProcess(String name, Closure closure) {
- return {
- notifyOnError {
- withFunctionalTestEnv(["JOB=${name}"], closure)
- }
- }
-}
-
-def functionalTestProcess(String name, String script) {
- return functionalTestProcess(name) {
- retryable(name) {
- runbld(script, "Execute ${name}")
- }
- }
-}
-
-def ossCiGroupProcess(ciGroup) {
- return functionalTestProcess("ciGroup" + ciGroup) {
- withEnv([
- "CI_GROUP=${ciGroup}",
- "JOB=opensearch-dashboards-ciGroup${ciGroup}",
- ]) {
- retryable("opensearch-dashboards-ciGroup${ciGroup}") {
- runbld("./test/scripts/jenkins_ci_group.sh", "Execute opensearch-dashboards-ciGroup${ciGroup}")
- }
- }
- }
-}
-
-def xpackCiGroupProcess(ciGroup) {
- return functionalTestProcess("xpack-ciGroup" + ciGroup) {
- withEnv([
- "CI_GROUP=${ciGroup}",
- "JOB=xpack-opensearch-dashboards-ciGroup${ciGroup}",
- ]) {
- retryable("xpack-opensearch-dashboards-ciGroup${ciGroup}") {
- runbld("./test/scripts/jenkins_xpack_ci_group.sh", "Execute xpack-opensearch-dashboards-ciGroup${ciGroup}")
- }
- }
- }
-}
-
-def uploadGcsArtifact(uploadPrefix, pattern) {
- googleStorageUpload(
- credentialsId: 'opensearch-dashboards-ci-gcs-plugin',
- bucket: "gs://${uploadPrefix}",
- pattern: pattern,
- sharedPublicly: true,
- showInline: true,
- )
-}
-
-def withGcsArtifactUpload(workerName, closure) {
- def uploadPrefix = "opensearch-dashboards-ci-artifacts/jobs/${env.JOB_NAME}/${BUILD_NUMBER}/${workerName}"
- def ARTIFACT_PATTERNS = [
- 'target/opensearch-dashboards-*',
- 'target/test-metrics/*',
- 'target/opensearch-dashboards-security-solution/**/*.png',
- 'target/junit/**/*',
- 'target/test-suites-ci-plan.json',
- 'test/**/screenshots/session/*.png',
- 'test/**/screenshots/failure/*.png',
- 'test/**/screenshots/diff/*.png',
- 'test/functional/failure_debug/html/*.html',
- 'x-pack/test/**/screenshots/session/*.png',
- 'x-pack/test/**/screenshots/failure/*.png',
- 'x-pack/test/**/screenshots/diff/*.png',
- 'x-pack/test/functional/failure_debug/html/*.html',
- 'x-pack/test/functional/apps/reporting/reports/session/*.pdf',
- ]
-
- withEnv([
- "GCS_UPLOAD_PREFIX=${uploadPrefix}"
- ], {
- try {
- closure()
- } finally {
- catchErrors {
- ARTIFACT_PATTERNS.each { pattern ->
- uploadGcsArtifact(uploadPrefix, pattern)
- }
-
- dir(env.WORKSPACE) {
- ARTIFACT_PATTERNS.each { pattern ->
- uploadGcsArtifact(uploadPrefix, "parallel/*/opensearch-dashboards/${pattern}")
- }
- }
- }
- }
- })
-}
-
-def publishJunit() {
- junit(testResults: 'target/junit/**/*.xml', allowEmptyResults: true, keepLongStdio: true)
-
- dir(env.WORKSPACE) {
- junit(testResults: 'parallel/*/opensearch-dashboards/target/junit/**/*.xml', allowEmptyResults: true, keepLongStdio: true)
- }
-}
-
-def sendMail(Map params = [:]) {
- // If the build doesn't have a result set by this point, there haven't been any errors and it can be marked as a success
- // The e-mail plugin for the infra e-mail depends upon this being set
- currentBuild.result = currentBuild.result ?: 'SUCCESS'
-
- def buildStatus = buildUtils.getBuildStatus()
- if (buildStatus != 'SUCCESS' && buildStatus != 'ABORTED') {
- node('flyweight') {
- sendInfraMail()
- sendOpenSearchDashboardsMail(params)
- }
- }
-}
-
-def sendInfraMail() {
- catchErrors {
- step([
- $class: 'Mailer',
- notifyEveryUnstableBuild: true,
- recipients: 'infra-root+build@elastic.co',
- sendToIndividuals: false
- ])
- }
-}
-
-def sendOpenSearchDashboardsMail(Map params = [:]) {
- def config = [to: 'build-opensearch-dashboards@elastic.co'] + params
-
- catchErrors {
- def buildStatus = buildUtils.getBuildStatus()
- if(params.NOTIFY_ON_FAILURE && buildStatus != 'SUCCESS' && buildStatus != 'ABORTED') {
- emailext(
- config.to,
- subject: "${env.JOB_NAME} - Build # ${env.BUILD_NUMBER} - ${buildStatus}",
- body: '${SCRIPT,template="groovy-html.template"}',
- mimeType: 'text/html',
- )
- }
- }
-}
-
-def bash(script, label) {
- sh(
- script: "#!/bin/bash\n${script}",
- label: label
- )
-}
-
-def doSetup() {
- notifyOnError {
- retryWithDelay(2, 15) {
- try {
- runbld("./test/scripts/jenkins_setup.sh", "Setup Build Environment and Dependencies")
- } catch (ex) {
- try {
- // Setup expects this directory to be missing, so we need to remove it before we do a retry
- bash("rm -rf ../elasticsearch", "Remove elasticsearch sibling directory, if it exists")
- } finally {
- throw ex
- }
- }
- }
- }
-}
-
-def buildOss(maxWorkers = '') {
- notifyOnError {
- withEnv(["OSD_OPTIMIZER_MAX_WORKERS=${maxWorkers}"]) {
- runbld("./test/scripts/jenkins_build_opensearch_dashboards.sh", "Build OSS/Default OpenSearch Dashboards")
- }
- }
-}
-
-def buildXpack(maxWorkers = '') {
- notifyOnError {
- withEnv(["OSD_OPTIMIZER_MAX_WORKERS=${maxWorkers}"]) {
- runbld("./test/scripts/jenkins_xpack_build_opensearch_dashboards.sh", "Build X-Pack OpenSearch Dashboards")
- }
- }
-}
-
-def runErrorReporter() {
- return runErrorReporter([pwd()])
-}
-
-def runErrorReporter(workspaces) {
- def status = buildUtils.getBuildStatus()
- def dryRun = status != "ABORTED" ? "" : "--no-github-update"
-
- def globs = workspaces.collect { "'${it}/target/junit/**/*.xml'" }.join(" ")
-
- bash(
- """
- source src/dev/ci_setup/setup_env.sh
- node scripts/report_failed_tests ${dryRun} ${globs}
- """,
- "Report failed tests, if necessary"
- )
-}
-
-def call(Map params = [:], Closure closure) {
- def config = [timeoutMinutes: 135, checkPrChanges: false, setCommitStatus: false] + params
-
- stage("OpenSearch Dashboards Pipeline") {
- timeout(time: config.timeoutMinutes, unit: 'MINUTES') {
- timestamps {
- ansiColor('xterm') {
- if (config.setCommitStatus) {
- buildState.set('shouldSetCommitStatus', true)
- }
- if (config.checkPrChanges && githubPr.isPr()) {
- pipelineLibraryTests()
-
- print "Checking PR for changes to determine if CI needs to be run..."
-
- if (prChanges.areChangesSkippable()) {
- print "No changes requiring CI found in PR, skipping."
- return
- }
- }
- try {
- closure()
- } finally {
- if (config.setCommitStatus) {
- githubCommitStatus.onFinish()
- }
- }
- }
- }
- }
- }
-}
-
-// Creates a task queue using withTaskQueue, and copies the bootstrapped OpenSearch Dashboards repo into each process's workspace
-// Note that node_modules are mostly symlinked to save time/space. See test/scripts/jenkins_setup_parallel_workspace.sh
-def withCiTaskQueue(Map options = [:], Closure closure) {
- def setupClosure = {
- // This can't use runbld, because it expects the source to be there, which isn't yet
- bash("${env.WORKSPACE}/opensearch-dashboards/test/scripts/jenkins_setup_parallel_workspace.sh", "Set up duplicate workspace for parallel process")
- }
-
- def config = [parallel: 24, setup: setupClosure] + options
-
- withTaskQueue(config) {
- closure.call()
- }
-}
-
-def scriptTask(description, script) {
- return {
- withFunctionalTestEnv {
- notifyOnError {
- runbld(script, description)
- }
- }
- }
-}
-
-def scriptTaskDocker(description, script) {
- return {
- withDocker(scriptTask(description, script))
- }
-}
-
-def buildDocker() {
- sh(
- script: "./.ci/build_docker.sh",
- label: 'Build CI Docker image'
- )
-}
-
-def withDocker(Closure closure) {
- docker
- .image('opensearch-dashboards-ci')
- .inside(
- "-v /etc/runbld:/etc/runbld:ro -v '${env.JENKINS_HOME}:${env.JENKINS_HOME}' -v '/dev/shm/workspace:/dev/shm/workspace' --shm-size 2GB --cpus 4",
- closure
- )
-}
-
-def buildOssPlugins() {
- runbld('./test/scripts/jenkins_build_plugins.sh', 'Build OSS Plugins')
-}
-
-def buildXpackPlugins() {
- runbld('./test/scripts/jenkins_xpack_build_plugins.sh', 'Build X-Pack Plugins')
-}
-
-def withTasks(Map params = [worker: [:]], Closure closure) {
- catchErrors {
- def config = [name: 'ci-worker', size: 'xxl', ramDisk: true] + (params.worker ?: [:])
-
- workers.ci(config) {
- withCiTaskQueue(parallel: 24) {
- parallel([
- docker: {
- retry(2) {
- buildDocker()
- }
- },
-
-          // There are integration tests etc. that require the plugins to be built first, so let's go ahead and build them before setting up the parallel workspaces
- ossPlugins: { buildOssPlugins() },
- xpackPlugins: { buildXpackPlugins() },
- ])
-
- catchErrors {
- closure()
- }
- }
- }
- }
-}
-
-def allCiTasks() {
- withTasks {
- tasks.check()
- tasks.lint()
- tasks.test()
- tasks.functionalOss()
- tasks.functionalXpack()
- }
-}
-
-def pipelineLibraryTests() {
- whenChanged(['vars/', '.ci/pipeline-library/']) {
- workers.base(size: 'flyweight', bootstrapped: false, ramDisk: false) {
- dir('.ci/pipeline-library') {
- sh './gradlew test'
- }
- }
- }
-}
-
-return this
diff --git a/vars/prChanges.groovy b/vars/prChanges.groovy
deleted file mode 100644
index d082672c065a..000000000000
--- a/vars/prChanges.groovy
+++ /dev/null
@@ -1,80 +0,0 @@
-import groovy.transform.Field
-
-public static @Field PR_CHANGES_CACHE = []
-
-// if all the changed files in a PR match one of these regular
-// expressions then CI will be skipped for that PR
-def getSkippablePaths() {
- return [
- /^docs\//,
- /^rfcs\//,
- /^.ci\/.+\.yml$/,
- /^.ci\/es-snapshots\//,
- /^.ci\/pipeline-library\//,
- /^.ci\/Jenkinsfile_[^\/]+$/,
- /^\.github\//,
- /\.md$/,
- ]
-}
-
-// exclusion regular expressions that will invalidate paths that
-// match one of the skippable path regular expressions
-def getNotSkippablePaths() {
- return [
- // this file is auto-generated and changes to it need to be validated with CI
- /^docs\/developer\/plugin-list.asciidoc$/,
-    // don't skip CI on PRs with changes to plugin readme files; (?i) is for case-insensitive matching
- /(?i)\/plugins\/[^\/]+\/readme\.(md|asciidoc)$/,
- ]
-}
-
-def areChangesSkippable() {
- if (!githubPr.isPr()) {
- return false
- }
-
- try {
- def skippablePaths = getSkippablePaths()
- def notSkippablePaths = getNotSkippablePaths()
- def files = getChangedFiles()
-
- // 3000 is the max files GH API will return
- if (files.size() >= 3000) {
- return false
- }
-
- files = files.findAll { file ->
- def skippable = skippablePaths.find { regex -> file =~ regex} && !notSkippablePaths.find { regex -> file =~ regex }
- return !skippable
- }
-
- return files.size() < 1
- } catch (ex) {
- buildUtils.printStacktrace(ex)
- print "Error while checking to see if CI is skippable based on changes. Will run CI."
- return false
- }
-}
-
-def getChanges() {
- if (!PR_CHANGES_CACHE && env.ghprbPullId) {
- withGithubCredentials {
- def changes = githubPrs.getChanges(env.ghprbPullId)
- if (changes) {
- PR_CHANGES_CACHE.addAll(changes)
- }
- }
- }
-
- return PR_CHANGES_CACHE
-}
-
-def getChangedFiles() {
- def changes = getChanges()
- def changedFiles = changes.collect { it.filename }
- def renamedFiles = changes.collect { it.previousFilename }.findAll { it }
-
- return changedFiles + renamedFiles
-}
-
-return this
diff --git a/vars/retryWithDelay.groovy b/vars/retryWithDelay.groovy
deleted file mode 100644
index 83fd94c6f2b1..000000000000
--- a/vars/retryWithDelay.groovy
+++ /dev/null
@@ -1,18 +0,0 @@
-def call(retryTimes, delaySecs, closure) {
- retry(retryTimes) {
- try {
- closure()
- } catch (org.jenkinsci.plugins.workflow.steps.FlowInterruptedException ex) {
- throw ex // Immediately re-throw build abort exceptions, don't sleep first
- } catch (Exception ex) {
- sleep delaySecs
- throw ex
- }
- }
-}
-
-def call(retryTimes, Closure closure) {
- call(retryTimes, 15, closure)
-}
-
-return this
diff --git a/vars/retryable.groovy b/vars/retryable.groovy
deleted file mode 100644
index ed84a00ece49..000000000000
--- a/vars/retryable.groovy
+++ /dev/null
@@ -1,75 +0,0 @@
-import groovy.transform.Field
-
-public static @Field GLOBAL_RETRIES_ENABLED = false
-public static @Field MAX_GLOBAL_RETRIES = 1
-public static @Field CURRENT_GLOBAL_RETRIES = 0
-public static @Field FLAKY_FAILURES = []
-
-def setMax(max) {
- retryable.MAX_GLOBAL_RETRIES = max
-}
-
-def enable() {
- retryable.GLOBAL_RETRIES_ENABLED = true
-}
-
-def enable(max) {
- enable()
- setMax(max)
-}
-
-def haveReachedMaxRetries() {
- return retryable.CURRENT_GLOBAL_RETRIES >= retryable.MAX_GLOBAL_RETRIES
-}
-
-def getFlakyFailures() {
- return retryable.FLAKY_FAILURES
-}
-
-def printFlakyFailures() {
- catchErrors {
- def failures = getFlakyFailures()
-
- if (failures && failures.size() > 0) {
- print "This build had the following flaky failures:"
- failures.each {
- print "\n${it.label}"
- buildUtils.printStacktrace(it.exception)
- }
- }
- }
-}
-
-def call(label, Closure closure) {
- if (!retryable.GLOBAL_RETRIES_ENABLED) {
- closure()
- return
- }
-
- try {
- closure()
- } catch (ex) {
- if (haveReachedMaxRetries()) {
- print "Couldn't retry '${label}', have already reached the max number of retries for this build."
- throw ex
- }
-
- retryable.CURRENT_GLOBAL_RETRIES++
- buildUtils.printStacktrace(ex)
- unstable "${label} failed but is retryable, trying a second time..."
-
- def JOB = env.JOB ? "${env.JOB}-retry" : ""
- withEnv([
- "JOB=${JOB}",
- ]) {
- closure()
- }
-
- retryable.FLAKY_FAILURES << [
- label: label,
- exception: ex,
- ]
-
- unstable "${label} failed on the first attempt, but succeeded on the second. Marking it as flaky."
- }
-}
diff --git a/vars/runbld.groovy b/vars/runbld.groovy
deleted file mode 100644
index e52bc244c65c..000000000000
--- a/vars/runbld.groovy
+++ /dev/null
@@ -1,17 +0,0 @@
-def call(script, label, enableJunitProcessing = false) {
- def extraConfig = enableJunitProcessing ? "" : "--config ${env.WORKSPACE}/kibana/.ci/runbld_no_junit.yml"
-
- sh(
- script: "/usr/local/bin/runbld -d '${pwd()}' ${extraConfig} ${script}",
- label: label ?: script
- )
-}
-
-def junit() {
- sh(
- script: "/usr/local/bin/runbld -d '${pwd()}' ${env.WORKSPACE}/kibana/test/scripts/jenkins_runbld_junit.sh",
- label: "Process JUnit reports with runbld"
- )
-}
-
-return this
diff --git a/vars/slackNotifications.groovy b/vars/slackNotifications.groovy
deleted file mode 100644
index 02aad14d8ba3..000000000000
--- a/vars/slackNotifications.groovy
+++ /dev/null
@@ -1,228 +0,0 @@
-def getFailedBuildBlocks() {
- def messages = [
- getFailedSteps(),
- getTestFailures(),
- ]
-
- return messages
- .findAll { !!it } // No blank strings
- .collect { markdownBlock(it) }
-}
-
-def dividerBlock() {
- return [ type: "divider" ]
-}
-
-// If a message is longer than the limit, split it up by '\n' into parts, and return as many parts as will fit within the limit
-def shortenMessage(message, sizeLimit = 3000) {
- if (message.size() <= sizeLimit) {
- return message
- }
-
- def truncatedMessage = "[...truncated...]"
-
- def parts = message.split("\n")
- message = ""
-
- for(def part in parts) {
- if ((message.size() + part.size() + truncatedMessage.size() + 1) > sizeLimit) {
- break;
- }
- message += part+"\n"
- }
-
- message += truncatedMessage
-
- return message.size() <= sizeLimit ? message : truncatedMessage
-}
-
-def markdownBlock(message) {
- return [
- type: "section",
- text: [
- type: "mrkdwn",
- text: shortenMessage(message, 3000), // 3000 is max text length for `section`s only
- ],
- ]
-}
-
-def contextBlock(message) {
- return [
- type: "context",
- elements: [
- [
- type: 'mrkdwn',
- text: message, // Not sure what the size limit is here, I tried 10000s of characters and it still worked
- ]
- ]
- ]
-}
-
-def getFailedSteps() {
- try {
- def steps = jenkinsApi.getFailedSteps()?.findAll { step ->
- step.displayName != 'Check out from version control'
- }
-
- if (steps?.size() > 0) {
- def list = steps.collect { "• <${it.logs}|${it.displayName}>" }.join("\n")
- return "*Failed Steps*\n${list}"
- }
- } catch (ex) {
- buildUtils.printStacktrace(ex)
- print "Error retrieving failed pipeline steps for PR comment, will skip this section"
- }
-
- return ""
-}
-
-def getTestFailures() {
- def failures = testUtils.getFailures()
- if (!failures) {
- return ""
- }
-
- def messages = []
- messages << "*Test Failures*"
-
- def list = failures.take(10).collect {
- def name = it
- .fullDisplayName
- .split(/\./, 2)[-1]
- // Only the following three characters need to be escaped for link text, per Slack's docs
-      .replaceAll('&', '&amp;')
-      .replaceAll('<', '&lt;')
-      .replaceAll('>', '&gt;')
-
- return "• <${it.url}|${name}>"
- }.join("\n")
-
- def moreText = failures.size() > 10 ? "\n• ...and ${failures.size()-10} more" : ""
- return "*Test Failures*\n${list}${moreText}"
-}
-
-def getDefaultDisplayName() {
- return "${env.JOB_NAME} ${env.BUILD_DISPLAY_NAME}"
-}
-
-def getDefaultContext(config = [:]) {
- def progressMessage = ""
- if (config && !config.isFinal) {
- progressMessage = "In-progress"
- } else {
- def duration = currentBuild.durationString.replace(' and counting', '')
- progressMessage = "${buildUtils.getBuildStatus().toLowerCase().capitalize()} after ${duration}"
- }
-
- return contextBlock([
- progressMessage,
- "",
- ].join(' · '))
-}
-
-def getStatusIcon(config = [:]) {
- if (config && !config.isFinal) {
- return ':hourglass_flowing_sand:'
- }
-
- def status = buildUtils.getBuildStatus()
- if (status == 'UNSTABLE') {
- return ':yellow_heart:'
- }
-
- return ':broken_heart:'
-}
-
-def getBackupMessage(config) {
- return "${getStatusIcon(config)} ${config.title}\n\nFirst attempt at sending this notification failed. Please check the build."
-}
-
-def sendFailedBuild(Map params = [:]) {
- def config = [
- channel: '#kibana-operations-alerts',
- title: "*<${env.BUILD_URL}|${getDefaultDisplayName()}>*",
- message: getDefaultDisplayName(),
- color: 'danger',
- icon: ':jenkins:',
- username: 'Kibana Operations',
- isFinal: false,
- ] + params
-
- config.context = config.context ?: getDefaultContext(config)
-
- def title = "${getStatusIcon(config)} ${config.title}"
- def message = "${getStatusIcon(config)} ${config.message}"
-
- def blocks = [markdownBlock(title)]
- getFailedBuildBlocks().each { blocks << it }
- blocks << dividerBlock()
- blocks << config.context
-
- def channel = config.channel
- def timestamp = null
-
- def previousResp = buildState.get('SLACK_NOTIFICATION_RESPONSE')
- if (previousResp) {
- // When using `timestamp` to update a previous message, you have to use the channel ID from the previous response
- channel = previousResp.channelId
- timestamp = previousResp.ts
- }
-
- def resp = slackSend(
- channel: channel,
- timestamp: timestamp,
- username: config.username,
- iconEmoji: config.icon,
- color: config.color,
- message: message,
- blocks: blocks
- )
-
- if (!resp) {
- resp = slackSend(
- channel: config.channel,
- username: config.username,
- iconEmoji: config.icon,
- color: config.color,
- message: message,
- blocks: [markdownBlock(getBackupMessage(config))]
- )
- }
-
- if (resp) {
- buildState.set('SLACK_NOTIFICATION_RESPONSE', resp)
- }
-}
-
-def onFailure(Map options = [:]) {
- catchError {
- def status = buildUtils.getBuildStatus()
- if (status != "SUCCESS") {
- catchErrors {
- options.isFinal = true
- sendFailedBuild(options)
- }
- }
- }
-}
-
-def onFailure(Map options = [:], Closure closure) {
- if (options.disabled) {
- catchError {
- closure()
- }
-
- return
- }
-
- buildState.set('SLACK_NOTIFICATION_CONFIG', options)
-
- // try/finally will NOT work here, because the build status will not have been changed to ERROR when the finally{} block executes
- catchError {
- closure()
- }
-
- onFailure(options)
-}
-
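A minimal usage sketch, assuming this file is loaded as the `slackNotifications` shared-library var in a Jenkinsfile: wrapping pipeline work in `onFailure` records the notification config, runs the closure, and sends (or updates) a Slack message only if the build ends up in a non-SUCCESS state. The channel, title, and script path below are placeholders.

```groovy
// Jenkinsfile sketch (channel, title, and script path are placeholders)
slackNotifications.onFailure(
  channel: '#my-team-alerts',
  title: "*<${env.BUILD_URL}|my-pipeline ${env.BUILD_DISPLAY_NAME}>*"
) {
  // Any error thrown in here marks the build as failed and triggers the notification
  sh './test/scripts/run_everything.sh'
}
```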
-return this
diff --git a/vars/task.groovy b/vars/task.groovy
deleted file mode 100644
index 0c07b519b6fe..000000000000
--- a/vars/task.groovy
+++ /dev/null
@@ -1,5 +0,0 @@
-def call(Closure closure) {
- withTaskQueue.addTask(closure)
-}
-
-return this
diff --git a/vars/tasks.groovy b/vars/tasks.groovy
deleted file mode 100644
index dcce665652d3..000000000000
--- a/vars/tasks.groovy
+++ /dev/null
@@ -1,131 +0,0 @@
-def call(List closures) {
- withTaskQueue.addTasks(closures)
-}
-
-def check() {
- tasks([
- kibanaPipeline.scriptTask('Check Telemetry Schema', 'test/scripts/checks/telemetry.sh'),
- kibanaPipeline.scriptTask('Check TypeScript Projects', 'test/scripts/checks/ts_projects.sh'),
- kibanaPipeline.scriptTask('Check Doc API Changes', 'test/scripts/checks/doc_api_changes.sh'),
- kibanaPipeline.scriptTask('Check Types', 'test/scripts/checks/type_check.sh'),
- kibanaPipeline.scriptTask('Check Bundle Limits', 'test/scripts/checks/bundle_limits.sh'),
- kibanaPipeline.scriptTask('Check i18n', 'test/scripts/checks/i18n.sh'),
- kibanaPipeline.scriptTask('Check File Casing', 'test/scripts/checks/file_casing.sh'),
- kibanaPipeline.scriptTask('Check Lockfile Symlinks', 'test/scripts/checks/lock_file_symlinks.sh'),
- kibanaPipeline.scriptTask('Check Licenses', 'test/scripts/checks/licenses.sh'),
- kibanaPipeline.scriptTask('Verify NOTICE', 'test/scripts/checks/verify_notice.sh'),
- kibanaPipeline.scriptTask('Test Projects', 'test/scripts/checks/test_projects.sh'),
- kibanaPipeline.scriptTask('Test Hardening', 'test/scripts/checks/test_hardening.sh'),
- ])
-}
-
-def lint() {
- tasks([
- kibanaPipeline.scriptTask('Lint: eslint', 'test/scripts/lint/eslint.sh'),
- kibanaPipeline.scriptTask('Lint: sasslint', 'test/scripts/lint/sasslint.sh'),
- ])
-}
-
-def test() {
- tasks([
- // These 2 tasks require isolation because of hard-coded, conflicting ports, so they run in Docker here
- kibanaPipeline.scriptTaskDocker('Jest Integration Tests', 'test/scripts/test/jest_integration.sh'),
- kibanaPipeline.scriptTaskDocker('Mocha Tests', 'test/scripts/test/mocha.sh'),
-
- kibanaPipeline.scriptTask('Jest Unit Tests', 'test/scripts/test/jest_unit.sh'),
- kibanaPipeline.scriptTask('API Integration Tests', 'test/scripts/test/api_integration.sh'),
- kibanaPipeline.scriptTask('X-Pack SIEM cyclic dependency', 'test/scripts/test/xpack_siem_cyclic_dependency.sh'),
- kibanaPipeline.scriptTask('X-Pack List cyclic dependency', 'test/scripts/test/xpack_list_cyclic_dependency.sh'),
- kibanaPipeline.scriptTask('X-Pack Jest Unit Tests', 'test/scripts/test/xpack_jest_unit.sh'),
- ])
-}
-
-def functionalOss(Map params = [:]) {
- def config = params ?: [
- serverIntegration: true,
- ciGroups: true,
- firefox: true,
- accessibility: true,
- pluginFunctional: true,
- visualRegression: false,
- ]
-
- task {
- kibanaPipeline.buildOss(6)
-
- if (config.ciGroups) {
- def ciGroups = 1..12
- tasks(ciGroups.collect { kibanaPipeline.ossCiGroupProcess(it) })
- }
-
- if (config.firefox) {
- task(kibanaPipeline.functionalTestProcess('oss-firefox', './test/scripts/jenkins_firefox_smoke.sh'))
- }
-
- if (config.accessibility) {
- task(kibanaPipeline.functionalTestProcess('oss-accessibility', './test/scripts/jenkins_accessibility.sh'))
- }
-
- if (config.pluginFunctional) {
- task(kibanaPipeline.functionalTestProcess('oss-pluginFunctional', './test/scripts/jenkins_plugin_functional.sh'))
- }
-
- if (config.visualRegression) {
- task(kibanaPipeline.functionalTestProcess('oss-visualRegression', './test/scripts/jenkins_visual_regression.sh'))
- }
-
- if (config.serverIntegration) {
- task(kibanaPipeline.scriptTaskDocker('serverIntegration', './test/scripts/server_integration.sh'))
- }
- }
-}
-
-def functionalXpack(Map params = [:]) {
- def config = params ?: [
- ciGroups: true,
- firefox: true,
- accessibility: true,
- pluginFunctional: true,
- savedObjectsFieldMetrics: true,
- pageLoadMetrics: false,
- visualRegression: false,
- ]
-
- task {
- kibanaPipeline.buildXpack(10)
-
- if (config.ciGroups) {
- def ciGroups = 1..10
- tasks(ciGroups.collect { kibanaPipeline.xpackCiGroupProcess(it) })
- }
-
- if (config.firefox) {
- task(kibanaPipeline.functionalTestProcess('xpack-firefox', './test/scripts/jenkins_xpack_firefox_smoke.sh'))
- }
-
- if (config.accessibility) {
- task(kibanaPipeline.functionalTestProcess('xpack-accessibility', './test/scripts/jenkins_xpack_accessibility.sh'))
- }
-
- if (config.visualRegression) {
- task(kibanaPipeline.functionalTestProcess('xpack-visualRegression', './test/scripts/jenkins_xpack_visual_regression.sh'))
- }
-
- if (config.savedObjectsFieldMetrics) {
- task(kibanaPipeline.functionalTestProcess('xpack-savedObjectsFieldMetrics', './test/scripts/jenkins_xpack_saved_objects_field_metrics.sh'))
- }
-
- whenChanged([
- 'x-pack/plugins/security_solution/',
- 'x-pack/test/security_solution_cypress/',
- 'x-pack/plugins/triggers_actions_ui/public/application/sections/action_connector_form/',
- 'x-pack/plugins/triggers_actions_ui/public/application/context/actions_connectors_context.tsx',
- ]) {
- if (githubPr.isPr()) {
- task(kibanaPipeline.functionalTestProcess('xpack-securitySolutionCypress', './test/scripts/jenkins_security_solution_cypress.sh'))
- }
- }
- }
-}
-
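One caveat worth calling out for `functionalOss()` and `functionalXpack()` above: `def config = params ?: [...]` means the default map applies only when no parameters are passed at all; a partial map replaces the defaults rather than merging with them, so any flag you omit is treated as falsy. A minimal sketch of the two calling styles (you would normally pick one), assuming the surrounding pipeline has already set up a task queue, for example via `withTaskQueue` or the `kibanaPipeline.withCiTaskQueue` helper that `task`/`tasks` rely on:

```groovy
// No arguments: all of the defaults above apply (ciGroups, firefox, accessibility, ...)
tasks.functionalOss()

// With arguments the defaults are replaced, not merged, so list every suite you care about
tasks.functionalOss(
  ciGroups: true,
  firefox: false,
  accessibility: false,
  pluginFunctional: false,
  visualRegression: false,
  serverIntegration: false
)
```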
-return this
diff --git a/vars/whenChanged.groovy b/vars/whenChanged.groovy
deleted file mode 100644
index c58ec83f2b05..000000000000
--- a/vars/whenChanged.groovy
+++ /dev/null
@@ -1,57 +0,0 @@
-/*
- whenChanged('some/path') { yourCode() } can be used to execute pipeline code in PRs only when changes are detected on paths that you specify.
- The specified code blocks will also always be executed during the non-PR jobs for tracked branches.
-
- You can pass in path prefixes or regexes, either singly or as lists.
- Path specifications are NOT globs; they are matched only as prefixes.
- Specifying multiple patterns treats them as ORs.
-
- Example Usages:
- whenChanged('a/path/prefix/') { someCode() }
- whenChanged(startsWith: 'a/path/prefix/') { someCode() } // Same as above
- whenChanged(['prefix1/', 'prefix2/']) { someCode() }
- whenChanged(regex: /\.test\.js$/) { someCode() }
- whenChanged(regex: [/abc/, /xyz/]) { someCode() }
-*/
-
-def call(String startsWithString, Closure closure) {
- return whenChanged([ startsWith: startsWithString ], closure)
-}
-
-def call(List startsWithStrings, Closure closure) {
- return whenChanged([ startsWith: startsWithStrings ], closure)
-}
-
-def call(Map params, Closure closure) {
- if (!githubPr.isPr()) {
- return closure()
- }
-
- def files = prChanges.getChangedFiles()
- def hasMatch = false
-
- if (params.regex) {
- params.regex = [] + params.regex
- print "Checking PR for changes that match: ${params.regex.join(', ')}"
- hasMatch = !!files.find { file ->
- params.regex.find { regex -> file =~ regex }
- }
- }
-
- if (!hasMatch && params.startsWith) {
- params.startsWith = [] + params.startsWith
- print "Checking PR for changes that start with: ${params.startsWith.join(', ')}"
- hasMatch = !!files.find { file ->
- params.startsWith.find { str -> file.startsWith(str) }
- }
- }
-
- if (hasMatch) {
- print "Changes found, executing pipeline."
- closure()
- } else {
- print "No changes found, skipping."
- }
-}
-
-return this
diff --git a/vars/withGithubCredentials.groovy b/vars/withGithubCredentials.groovy
deleted file mode 100644
index 224e49af1bd6..000000000000
--- a/vars/withGithubCredentials.groovy
+++ /dev/null
@@ -1,9 +0,0 @@
-def call(closure) {
- withCredentials([
- string(credentialsId: '2a9602aa-ab9f-4e52-baf3-b71ca88469c7', variable: 'GITHUB_TOKEN'),
- ]) {
- closure()
- }
-}
-
-return this
diff --git a/vars/withTaskQueue.groovy b/vars/withTaskQueue.groovy
deleted file mode 100644
index 8132d6264744..000000000000
--- a/vars/withTaskQueue.groovy
+++ /dev/null
@@ -1,154 +0,0 @@
-import groovy.transform.Field
-
-public static @Field TASK_QUEUES = [:]
-public static @Field TASK_QUEUES_COUNTER = 0
-
-/**
- withTaskQueue creates a queue of "tasks" (just plain closures to execute), and executes them with your desired level of concurrency.
- This way, you can define, for example, 40 things that need to execute, then only allow 10 of them to execute at once.
-
- Each "process" will execute in a separate, unique, empty directory.
- If you want each process to have a bootstrapped kibana repo, check out kibanaPipeline.withCiTaskQueue
-
- Using the queue currently requires an agent/worker.
-
- Usage:
-
- withTaskQueue(parallel: 10) {
- task { print "This is a task" }
-
- // This is the same as calling task() multiple times
- tasks([ { print "Another task" }, { print "And another task" } ])
-
- // Tasks can queue up subsequent tasks
- task {
- buildThing()
- task { print "I depend on buildThing()" }
- }
- }
-
- You can also define a setup task that each process should execute one time before executing tasks:
- withTaskQueue(parallel: 10, setup: { sh "my-setup-script.sh" }) {
- ...
- }
-
-*/
-def call(Map options = [:], Closure closure) {
- def config = [ parallel: 10 ] + options
- def counter = ++TASK_QUEUES_COUNTER
-
- // We're basically abusing withEnv() to create a "scope" for all steps inside a withTaskQueue block
- // This way, we can have multiple task queue instances in the same pipeline
- withEnv(["TASK_QUEUE_ID=${counter}"]) {
- withTaskQueue.TASK_QUEUES[env.TASK_QUEUE_ID] = [
- tasks: [],
- tmpFile: sh(script: 'mktemp', returnStdout: true).trim()
- ]
-
- closure.call()
-
- def processesExecuting = 0
- def processes = [:]
- def iterationId = 0
-
- for(def i = 1; i <= config.parallel; i++) {
- def j = i
- processes["task-queue-process-${j}"] = {
- catchErrors {
- withEnv([
- "TASK_QUEUE_PROCESS_ID=${j}",
- "TASK_QUEUE_ITERATION_ID=${++iterationId}"
- ]) {
- dir("${WORKSPACE}/parallel/${j}/kibana") {
- if (config.setup) {
- config.setup.call(j)
- }
-
- def isDone = false
- while(!isDone) { // TODO some kind of timeout?
- catchErrors {
- if (!getTasks().isEmpty()) {
- processesExecuting++
- catchErrors {
- def task
- try {
- task = getTasks().pop()
- } catch (java.util.NoSuchElementException ex) {
- return
- }
-
- task.call()
- }
- processesExecuting--
- // If a task finishes, and no new tasks were queued up, and nothing else is executing
- // Then all of the processes should wake up and exit
- if (processesExecuting < 1 && getTasks().isEmpty()) {
- taskNotify()
- }
- return
- }
-
- if (processesExecuting > 0) {
- taskSleep()
- return
- }
-
- // Queue is empty, no processes are executing
- isDone = true
- }
- }
- }
- }
- }
- }
- }
- parallel(processes)
- }
-}
-
-// If we sleep in a loop using Groovy code, Pipeline Steps is flooded with Sleep steps
-// So, instead, we just watch a file and `touch` it whenever something happens that could modify the queue
-// There's a 20 minute timeout just in case something goes wrong,
-// in which case this method will get called again if the process is actually supposed to be waiting.
-def taskSleep() {
- sh(script: """#!/bin/bash
- TIMESTAMP=\$(date '+%s' -d "0 seconds ago")
- for (( i=1; i<=240; i++ ))
- do
- if [ "\$(stat -c %Y '${getTmpFile()}')" -ge "\$TIMESTAMP" ]
- then
- break
- else
- sleep 5
- if [[ \$i == 240 ]]; then
- echo "Waited for new tasks for 20 minutes, exiting in case something went wrong"
- fi
- fi
- done
- """, label: "Waiting for new tasks...")
-}
-
-// Used to let the task queue processes know that either a new task has been queued up, or work is complete
-def taskNotify() {
- sh "touch '${getTmpFile()}'"
-}
-
-def getTasks() {
- return withTaskQueue.TASK_QUEUES[env.TASK_QUEUE_ID].tasks
-}
-
-def getTmpFile() {
- return withTaskQueue.TASK_QUEUES[env.TASK_QUEUE_ID].tmpFile
-}
-
-def addTask(Closure closure) {
- getTasks() << closure
- taskNotify()
-}
-
-def addTasks(List closures) {
- closures.reverse().each {
- getTasks() << it
- }
- taskNotify()
-}
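A note on `addTasks()` above: worker processes take work with `getTasks().pop()`, and in the Groovy 2.4.x runtime that Jenkins Pipeline has historically shipped, `pop()` removes the last element of a list, so reversing before appending preserves the caller's submission order. Below is a standalone sketch of that ordering in plain Groovy, using `removeAt` to mirror the 2.4-style `pop()` so it behaves the same on newer Groovy versions.

```groovy
// Why addTasks() reverses its input before appending (illustration only)
def queue = []
def addTasks = { List closures -> closures.reverse().each { queue << it } }

addTasks(['build', 'test', 'report'])

def executed = []
while (queue) {
  executed << queue.removeAt(queue.size() - 1)  // same element Groovy 2.4's pop() would take
}

assert executed == ['build', 'test', 'report']
println executed  // [build, test, report]
```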
diff --git a/vars/workers.groovy b/vars/workers.groovy
deleted file mode 100644
index e857bcc4dd1c..000000000000
--- a/vars/workers.groovy
+++ /dev/null
@@ -1,195 +0,0 @@
-// "Workers" in this file will spin up an instance, do some setup etc depending on the configuration, and then execute some work that you define
-// e.g. workers.base(name: 'my-worker') { sh "echo 'ready to execute some kibana scripts'" }
-
-def label(size) {
- switch(size) {
- case 'flyweight':
- return 'flyweight'
- case 's':
- return 'docker && linux && immutable'
- case 's-highmem':
- return 'docker && tests-s'
- case 'l':
- return 'docker && tests-l'
- case 'xl':
- return 'docker && tests-xl'
- case 'xl-highmem':
- return 'docker && tests-xl-highmem'
- case 'xxl':
- return 'docker && tests-xxl && gobld/machineType:custom-64-270336'
- }
-
- error "unknown size '${size}'"
-}
-
-/*
- The base worker that all of the others use. Will clone the scm (assumed to be kibana), and run kibana bootstrap processes by default.
-
- Parameters:
- size - size of worker label to use, e.g. 's' or 'xl'
- ramDisk - Should the workspace be mounted in memory? Default: true
- bootstrapped - If true, download kibana dependencies, run osd bootstrap, etc. Default: true
- name - Name of the worker for display purposes, filenames, etc.
- scm - Jenkins scm configuration for checking out code. Use `null` to disable checkout. Default: inherited from job
-*/
-def base(Map params, Closure closure) {
- def config = [size: '', ramDisk: true, bootstrapped: true, name: 'unnamed-worker', scm: scm] + params
- if (!config.size) {
- error "You must specify an agent size, such as 'xl' or 's', when using workers.base()"
- }
-
- node(label(config.size)) {
- agentInfo.print()
-
- if (config.ramDisk) {
- // Move to a temporary workspace, so that we can symlink the real workspace into /dev/shm
- def originalWorkspace = env.WORKSPACE
- ws('/tmp/workspace') {
- sh(
- script: """
- mkdir -p /dev/shm/workspace
- mkdir -p '${originalWorkspace}' # create all of the directories leading up to the workspace, if they don't exist
- rm --preserve-root -rf '${originalWorkspace}' # then remove just the workspace, just in case there's stuff in it
- ln -s /dev/shm/workspace '${originalWorkspace}'
- """,
- label: "Move workspace to RAM - /dev/shm/workspace"
- )
- }
- }
-
- sh(
- script: "mkdir -p ${env.WORKSPACE}/tmp",
- label: "Create custom temp directory"
- )
-
- def checkoutInfo = [:]
-
- if (config.scm) {
- // Try to clone from Github up to 8 times, waiting 15 secs between attempts
- retryWithDelay(8, 15) {
- checkout scm
- }
-
- dir("kibana") {
- checkoutInfo = getCheckoutInfo()
-
- // use `checkoutInfo` as a flag to indicate that we've already reported the pending commit status
- if (buildState.get('shouldSetCommitStatus') && !buildState.has('checkoutInfo')) {
- buildState.set('checkoutInfo', checkoutInfo)
- githubCommitStatus.onStart()
- }
- }
-
- ciStats.reportGitInfo(
- checkoutInfo.branch,
- checkoutInfo.commit,
- checkoutInfo.targetBranch,
- checkoutInfo.mergeBase
- )
- }
-
- withEnv([
- "CI=true",
- "HOME=${env.JENKINS_HOME}",
- "PR_SOURCE_BRANCH=${env.ghprbSourceBranch ?: ''}",
- "PR_TARGET_BRANCH=${env.ghprbTargetBranch ?: ''}",
- "PR_AUTHOR=${env.ghprbPullAuthorLogin ?: ''}",
- "TEST_BROWSER_HEADLESS=1",
- "GIT_BRANCH=${checkoutInfo.branch}",
- "TMPDIR=${env.WORKSPACE}/tmp", // For Chrome and anything else that respects it
- ]) {
- withCredentials([
- string(credentialsId: 'vault-addr', variable: 'VAULT_ADDR'),
- string(credentialsId: 'vault-role-id', variable: 'VAULT_ROLE_ID'),
- string(credentialsId: 'vault-secret-id', variable: 'VAULT_SECRET_ID'),
- ]) {
- // scm is configured to check out to the ./kibana directory
- dir('kibana') {
- if (config.bootstrapped) {
- kibanaPipeline.doSetup()
- }
-
- closure()
- }
- }
- }
- }
-}
-
-// Worker for ci processes. Extends the base worker and adds GCS artifact upload, error reporting, junit processing
-def ci(Map params, Closure closure) {
- def config = [ramDisk: true, bootstrapped: true, runErrorReporter: true] + params
-
- return base(config) {
- kibanaPipeline.withGcsArtifactUpload(config.name) {
- kibanaPipeline.withPostBuildReporting(config) {
- closure()
- }
- }
- }
-}
-
-// Worker for running the current intake jobs. Just runs a single script after bootstrap.
-def intake(jobName, String script) {
- return {
- ci(name: jobName, size: 's-highmem', ramDisk: true) {
- withEnv(["JOB=${jobName}"]) {
- kibanaPipeline.notifyOnError {
- runbld(script, "Execute ${jobName}")
- }
- }
- }
- }
-}
-
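Since `intake()` above returns a closure rather than running immediately, callers typically collect these closures into a map and hand them to Jenkins' `parallel` step. A minimal sketch, with placeholder job names and script paths:

```groovy
// Jenkinsfile sketch: intake() returns a closure, so it can be used as a parallel branch
parallel([
  'example-intake': workers.intake('example-intake', './test/scripts/jenkins_unit.sh'),
  'example-checks': workers.intake('example-checks', './test/scripts/checks.sh')
])
```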
-// Worker for running functional tests. Runs a setup process (e.g. the kibana build) then executes a map of closures in parallel (e.g. one for each ciGroup)
-def functional(name, Closure setup, Map processes) {
- return {
- parallelProcesses(name: name, setup: setup, processes: processes, delayBetweenProcesses: 20, size: 'xl')
- }
-}
-
-/*
- Creates a ci worker that can run a setup process, followed by a group of processes in parallel.
-
- Parameters:
- name: Name of the worker for display purposes, filenames, etc.
- setup: Closure to execute after the agent is bootstrapped, before starting the parallel work
- processes: Map of closures that will execute in parallel after setup. Each process is given a unique number via the CI_PARALLEL_PROCESS_NUMBER environment variable.
- delayBetweenProcesses: Number of seconds to wait between starting the parallel processes. Useful to spread the load of heavy init processes, e.g. Elasticsearch starting up. Default: 0
- size: size of worker label to use, e.g. 's' or 'xl'
-*/
-def parallelProcesses(Map params) {
- def config = [name: 'parallel-worker', setup: {}, processes: [:], delayBetweenProcesses: 0, size: 'xl'] + params
-
- ci(size: config.size, name: config.name) {
- config.setup()
-
- def nextProcessNumber = 1
- def process = { processName, processClosure ->
- def processNumber = nextProcessNumber
- nextProcessNumber++
-
- return {
- if (config.delayBetweenProcesses && config.delayBetweenProcesses > 0) {
- // This delay helps smooth out CPU load caused by OpenSearch/Kibana instances starting up at the same time
- def delay = (processNumber-1)*config.delayBetweenProcesses
- sleep(delay)
- }
-
- withEnv(["CI_PARALLEL_PROCESS_NUMBER=${processNumber}"]) {
- processClosure()
- }
- }
- }
-
- def processes = [:]
- config.processes.each { processName, processClosure ->
- processes[processName] = process(processName, processClosure)
- }
-
- parallel(processes)
- }
-}
-
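A minimal usage sketch for `workers.parallelProcesses()` above, assuming this file is loaded as the `workers` shared-library var; the worker name, process names, and script paths are placeholders. The setup closure runs once after bootstrap, each process closure then runs in parallel with its own `CI_PARALLEL_PROCESS_NUMBER`, and `delayBetweenProcesses` staggers their start times.

```groovy
// Jenkinsfile sketch (names and script paths are placeholders)
workers.parallelProcesses(
  name: 'example-functional',
  size: 'xl',
  setup: { sh './test/scripts/build_everything.sh' },       // runs once, before the parallel work
  processes: [
    'smoke-chrome':  { sh './test/scripts/run_smoke_chrome.sh' },
    'smoke-firefox': { sh './test/scripts/run_smoke_firefox.sh' }
  ],
  delayBetweenProcesses: 20                                  // seconds between process starts
)
```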
-return this