This repository has been archived by the owner on Jan 29, 2024. It is now read-only.
Merge branch 'main' into search-index-to-algolia
Showing 12 changed files with 213 additions and 11 deletions.
@@ -86,6 +86,7 @@ exceptions:
 - MySQL
 - New Relic
 - NodeJS
+- OAuth
 - Okta
 - OneLogin
 - OpenSearch
@@ -0,0 +1,44 @@

Grafana® OAuth configuration and security considerations
============================================================

Grafana® version 9.5.5 introduced significant changes to the OAuth email lookup behavior to enhance security. However, some users may need to revert to the previous behavior as seen in Grafana 9.5.3. This section describes how to revert to the 9.5.3 behavior using the ``oauth_allow_insecure_email_lookup`` configuration option, its implications, and the associated security threats.

Security considerations
------------------------

Before reverting to the behavior of Grafana version 9.5.3, consider the security risks involved.

Authentication bypass vulnerability
`````````````````````````````````````

Enabling the ``oauth_allow_insecure_email_lookup`` configuration option makes the system susceptible to a critical authentication bypass vulnerability via Azure AD OAuth. This vulnerability is officially identified as CVE-2023-3128 and could grant attackers access to sensitive information or allow unauthorized actions. For more information, refer to the following links:

* `Grafana Labs Security Advisory: CVE-2023-3128 <https://grafana.com/security/security-advisories/cve-2023-3128/>`_
* `Alternative link for CVE-2023-3128 <https://cve.report/CVE-2023-3128>`_

Configuring OAuth email lookup
------------------------------------

To revert to the OAuth email lookup behavior of Grafana version 9.5.3, use the ``oauth_allow_insecure_email_lookup`` configuration option.

Enable configuration
```````````````````````

To enable this configuration, include the following lines in your Grafana configuration file:

.. code::

   [auth]
   oauth_allow_insecure_email_lookup = true

This restores the behavior of Grafana version 9.5.3. However, be aware of the potential security risks if you choose to do so.
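Grafana's configuration file is INI-formatted, so you can sanity-check that the option was picked up before restarting the service. A minimal sketch using Python's standard ``configparser`` (the inline sample stands in for your actual ``grafana.ini``; the file path and content here are illustrative, not part of the original doc):

```python
import configparser

# Illustrative stand-in for the relevant fragment of grafana.ini
sample = """
[auth]
oauth_allow_insecure_email_lookup = true
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

# getboolean() accepts true/false, on/off, and 1/0 spellings
insecure_lookup = cfg.getboolean(
    "auth", "oauth_allow_insecure_email_lookup", fallback=False
)
print(insecure_lookup)  # True when the option is enabled
```

In practice you would pass your real config path to ``cfg.read(...)`` instead of ``read_string``.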

Upgrade to Grafana 9.5.5
-----------------------------

In Grafana 9.5.5, the insecure email lookup behavior has been removed to mitigate the security threat. We recommend upgrading to this version to ensure the security of your system.

Additional resources
---------------------

For more information on configuring authentication in Grafana, refer to the `official Grafana documentation <https://grafana.com/docs/grafana/v9.5/setup-grafana/configure-security/configure-authentication/>`_.
@@ -1,18 +1,22 @@
-Use SASL Authentication with Apache Kafka®
-==========================================
+Use SASL authentication with Aiven for Apache Kafka®
+======================================================
 
-Aiven offers a choice of :doc:`authentication methods for Apache Kafka® <../concepts/auth-types>`, including `SASL <https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer>`_ (Simple Authentication and Security Layer).
+Aiven offers a selection of :doc:`authentication methods for Apache Kafka® <../concepts/auth-types>`, including `SASL <https://en.wikipedia.org/wiki/Simple_Authentication_and_Security_Layer>`_ (Simple Authentication and Security Layer).
 
-1. Scroll down the **Service overview** page to the **Advanced configuration** section and select **Change**.
-
-2. Turn on the setting labelled ``kafka_authentication_methods.sasl``, and click **Save advanced configuration**.
+1. Log in to `Aiven Console <https://console.aiven.io/>`_ and choose your project.
+2. From the list of services, choose the Aiven for Apache Kafka service for which you wish to enable SASL.
+3. On the **Overview** page of the selected service, scroll down to the **Advanced configuration** section.
+4. Select **Change**.
+5. Enable the ``kafka_authentication_methods.sasl`` setting, and then select **Save advanced configuration**.
 
 .. image:: /images/products/kafka/enable-sasl.png
    :alt: Enable SASL authentication for Apache Kafka
    :width: 100%
 
-The connection information at the top of the **Service overview** page will now offer the ability to connect via SASL or via Client Certificate.
+The **Connection information** at the top of the **Overview** page will now offer the ability to connect via SASL or via Client Certificate.
 
 .. image:: /images/products/kafka/sasl-connect.png
    :alt: Choose between SASL and certificate connection details
 
-These connections are on a different port, but the host, CA and user credentials stay the same.
+.. note::
+   Although these connections use a different port, the host, CA, and user credentials remain consistent.
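From a client's point of view, the SASL connection details shown in the console map directly onto keyword arguments accepted by common Kafka clients such as kafka-python. A minimal sketch; every value below is a placeholder, not a real service endpoint or credential:

```python
# Hypothetical values copied from the service's Connection information panel;
# substitute your own. Note the SASL port differs from the certificate port,
# while the host, CA file, and user credentials are the same.
sasl_settings = {
    "bootstrap_servers": "my-kafka-demo.aivencloud.com:13563",
    "security_protocol": "SASL_SSL",        # SASL over TLS
    "sasl_mechanism": "SCRAM-SHA-256",
    "sasl_plain_username": "avnadmin",
    "sasl_plain_password": "my-secret-password",
    "ssl_cafile": "ca.pem",                  # same CA as for certificate auth
}

# These kwargs can be passed straight through, e.g.:
#   producer = kafka.KafkaProducer(**sasl_settings)
print(sorted(sasl_settings))
```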
docs/products/kafka/howto/optimizing-resource-usage-for-kafka-startup-plan.rst (50 additions, 0 deletions)
@@ -0,0 +1,50 @@

Optimizing resource usage for Kafka® Startup-2 Plan
===================================================

The Kafka Startup-2 plan is optimized for lightweight operations, making it ideal for applications that handle fewer messages per second and don't demand high throughput. Sometimes, however, you might receive an alert showing high resource usage. This alert is generally triggered when free memory on the Kafka broker drops too low and CPU idle time falls below 15%. Understanding the causes behind these alerts and how to mitigate them ensures optimized Kafka usage and consistent application performance.

What triggers high resource usage?
----------------------------------

A few things can trigger high resource usage on the Kafka Startup-2 plan:

- **High Kafka traffic:**
  Heavy traffic due to too many producer/consumer requests can cause an overload, increasing CPU and memory usage on the Kafka broker. When a Kafka cluster is overloaded, it may struggle to correctly assign leadership for a partition, potentially causing service disruptions.

- **Excessive Kafka partitions:**
  More partitions than the brokers can manage effectively leads to increased memory usage and IO load.

- **Too many client connections:**
  When there are too many client connections, free memory can drop significantly. When your service's memory is low, it starts to use swap space, adding to the IO load. Regular use of swap space indicates your system may not have enough resources for its workload.
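To make the partition point concrete, a back-of-the-envelope helper can estimate how many partition replicas each broker carries. This is a hypothetical sketch; the function and any threshold you compare against are illustrative, not Aiven-documented limits:

```python
def partition_replicas_per_broker(total_partitions: int,
                                  replication_factor: int,
                                  broker_count: int) -> float:
    """Average number of partition replicas each broker must manage.

    Every partition is stored replication_factor times across the cluster,
    so the per-broker replica count grows with both values.
    """
    return total_partitions * replication_factor / broker_count

# Example: 300 partitions, replication factor 3, on a 3-broker plan
load = partition_replicas_per_broker(300, 3, 3)
print(load)  # 300.0 replicas per broker
```

Each replica costs broker memory and IO, which is why reducing partitions is the first strategy listed below.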

Additional causes of high resource usage
----------------------------------------

- **Datadog integration:**
  While Datadog provides valuable monitoring, its agent is IO-heavy. On a Startup-2 plan, which is not designed for high IO operations, Datadog can significantly increase the IO load. Moreover, the Datadog load increases with the number of topic partitions in your Kafka service.

- **Karapace integration:**
  Karapace provides a REST API for Kafka, but it can consume a substantial amount of memory and contribute to a high load when the REST API is in high demand.

Strategies to minimize resource usage
-------------------------------------

- **Reduce the topic partition count:**
  Decreasing the number of topic partitions reduces the load on the Kafka service.

- **Disable the Datadog integration:**
  If Datadog sends too many metrics, it can affect the reliability of the service and hinder the backup of topic configurations. In that case, it is recommended to turn off the Datadog integration.

- **Enable quotas:**
  Quotas manage the resources consumed by clients, preventing any single client from using too much of the broker's resources.

- **Limit the number of integrations:**
  For smaller plans like Startup-2, consider limiting the number of integrations to manage resource consumption effectively.

- **Upgrade your plan:**
  If your application demands more resources, upgrading to a larger Kafka plan can ensure stable operation.

Integration advisory for Kafka Startup-2 plan
-----------------------------------------------

The Kafka Startup-2 plan runs on relatively small machines. Enabling integrations like Datadog or Karapace may consume more resources than this plan can handle, affecting your cluster's performance. If you notice issues with your cluster or need more resources for your integrations, consider upgrading to a higher plan.
docs/products/opensearch/howto/resolve-shards-too-large.rst (85 additions, 0 deletions)
@@ -0,0 +1,85 @@

Manage large shards in OpenSearch®
=====================================

Ensuring an optimal shard size is a critical consideration when operating OpenSearch®. As a best practice, the size of individual shards should not exceed 50GB.

While OpenSearch does not explicitly enforce this shard size limit, exceeding it may leave OpenSearch unable to relocate or recover index shards, potentially leading to data loss.

Aiven proactively monitors shard sizes for all OpenSearch services. If a service's shard exceeds the recommended size, prompt notifications are sent using the user alert ``user_alert_resource_usage_es_shard_too_large``. Below are recommended solutions for addressing this alert.

Solutions to address large shards
-----------------------------------

When dealing with excessively large shards, consider one of the following solutions:

1. Delete records from the index
`````````````````````````````````

If your application permits, permanently delete records, such as old logs or unnecessary records, from your index. For example, to delete records older than five days, use the following query::

   POST /my-index/_delete_by_query
   {
     "query": {
       "range": {
         "@timestamp": {
           "lte": "now-5d"
         }
       }
     }
   }
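If you issue this request from an application rather than the dev console, the request body can be built programmatically. A minimal sketch; the helper name and its defaults are illustrative, not part of any client library:

```python
def delete_older_than_body(days: int, timestamp_field: str = "@timestamp") -> dict:
    """Build a _delete_by_query body matching records older than `days` days.

    Uses OpenSearch date math ("now-5d") in a range query on the timestamp field.
    """
    return {"query": {"range": {timestamp_field: {"lte": f"now-{days}d"}}}}

body = delete_older_than_body(5)
print(body)
# POST this body to /my-index/_delete_by_query with your OpenSearch client
```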

2. Re-index into several small indices
```````````````````````````````````````

You can split your index into several smaller indices based on certain criteria. For example, to create an index for each ``event_type``, you can use the following script::

   POST _reindex
   {
     "source": {
       "index": "logs-all-events"
     },
     "dest": {
       "index": "logs-2-"
     },
     "script": {
       "lang": "painless",
       "source": "ctx._index = 'logs-2-' + (ctx._source.event_type)"
     }
   }

3. Re-index into a new index with increased shard count
`````````````````````````````````````````````````````````

Another strategy is to re-index the data into a fresh index with more shards. The number of primary shards is fixed when an index is created, so set it in the new index's settings at creation time. To create a new index with 2 shards::

   PUT /my_new_index
   {
     "settings": {
       "index": {
         "number_of_shards": 2
       }
     }
   }

Once the new index is set up, proceed to re-index your data::

   POST _reindex
   {
     "source": {
       "index": "my_old_index"
     },
     "dest": {
       "index": "my_new_index"
     }
   }
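When choosing the new shard count, you can work backwards from the 50GB guideline above. A rough sketch; the 30GB per-shard target is an illustrative safety margin, not an official Aiven or OpenSearch figure:

```python
import math

MAX_SHARD_GB = 50  # best-practice ceiling discussed above

def shard_count_for(index_size_gb: float, target_shard_gb: float = 30.0) -> int:
    """Pick a primary shard count keeping each shard near the target size."""
    count = max(1, math.ceil(index_size_gb / target_shard_gb))
    # Sanity check: the resulting shards must stay under the 50GB ceiling
    assert index_size_gb / count <= MAX_SHARD_GB
    return count

print(shard_count_for(120))  # 4 shards of roughly 30GB each
```

Leaving headroom below 50GB matters because indices keep growing after the re-index completes.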