
On The Fly Encryption Feature Proposal #3469

Open
aysee opened this issue May 27, 2022 · 13 comments · May be fixed by #12902
Labels
discuss (Issues intended to help drive brainstorming and decision making), feature (New feature or request), RFC (Issues requesting major changes), security (Anything security related)

Comments

@aysee

aysee commented May 27, 2022

Feature Proposal

This document is a proposal for an On The Fly encryption feature that allows OpenSearch to encrypt search indices at the Directory level, using a different encryption key per index.

Why we need it

Enterprise customers require additional controls over the data they store in multi-tenanted cloud services. Data encryption with a customer-provided key is one of the features these customers are asking for. It allows customers to manage their own master key and then give a cloud service access to encrypt or decrypt the customer's data with derived data keys. A customer can revoke the master key in the event of a security incident, rendering their data non-decryptable.

This feature enables better data isolation in a multi-tenanted service, allows for a better audit trail, and adds security.

OpenSearch does not yet provide a fine-grained multi-tenanted encryption solution. Encryption is either enabled for the whole cluster or for a data node, or fully disabled. When we use one search index per tenant, there is no way to configure encryption per index. Having a separate OpenSearch cluster per tenant is too expensive.

Proposal


The proposal is to implement a new Lucene Directory that will encrypt or decrypt shard data on the fly. We can use the existing settings.store.type configuration to enable encryption when we create an index. For example:

{
  "settings": {
    "store": {
      "type": "cryptofs"
    }
  }
}

In this case cryptofs becomes a new Store Type. OpenSearch will use CryptoDirectory for this specific store type.

Potentially, we could implement CryptoDirectory as a simple FilterDirectory to leverage the existing IndexInput and IndexOutput classes; however, this approach won't allow us to leverage buffered reads and writes. Lucene issues frequent single-byte read and write calls, so it's better to read from and write into an encrypted buffer instead of decrypting and encrypting single bytes every time.

We propose to override Lucene's IndexInput and IndexOutput with new encrypting implementations to leverage the existing IO buffer optimization. CryptoDirectory will extend FSDirectory and will instantiate the overridden versions of these inputs.

Also, the IndexInput and IndexOutput classes provide access to the underlying IO streams, which allows us to leverage existing optimized stream-encryption libraries.
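To illustrate why buffered encryption matters, the sketch below encrypts a whole buffer in one cipher call (the way a buffered IndexOutput flushes its internal buffer) instead of issuing a cipher operation per byte. It uses only javax.crypto; the class name, zeroed demo key, and IV are illustrative, not the actual implementation:

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.util.Arrays;

public class CryptoBufferDemo {
    // Zeroed demo key and IV; in the proposal the data key comes from the KMS.
    static final SecretKeySpec KEY = new SecretKeySpec(new byte[32], "AES");
    static final IvParameterSpec IV = new IvParameterSpec(new byte[16]);

    // Encrypt (or decrypt; CTR is symmetric) a whole buffer in one cipher call.
    static byte[] transform(int mode, byte[] data) throws Exception {
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(mode, KEY, IV);
        return c.doFinal(data);
    }

    public static void main(String[] args) throws Exception {
        byte[] plain = "segment bytes".getBytes();
        byte[] cipherText = transform(Cipher.ENCRYPT_MODE, plain);
        byte[] roundTrip = transform(Cipher.DECRYPT_MODE, cipherText);
        System.out.println(Arrays.equals(plain, roundTrip)); // prints "true"
    }
}
```

A real encrypting IndexOutput would hold the Cipher across flushes and track the stream position, but the one-call-per-buffer shape is the point here.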

Encryption

The concrete encryption algorithm can be made configurable, but it is critical to use no-padding algorithms to preserve Lucene's random IO access.

The concrete crypto provider will also be configurable. Crypto providers like Amazon Corretto, SunJCE, or Bouncy Castle come with their own tradeoffs. Consumers of this On The Fly encryption feature should be able to make a decision based on their specific performance, FIPS compliance, or runtime environment requirements.

{
  "settings": {
    ...
    "encryption": {
      "algorithm": "AES/GCM/NoPadding",
      "provider": "SunJCE",
      ...
    }
  }
}
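To show why a no-padding stream mode preserves random access, consider CTR (the mode the later PR #8791 chose): the keystream for block i is derived from IV+i, so a reader can decrypt any block-aligned slice by adjusting the counter, without touching preceding bytes. A minimal sketch with illustrative names and a zeroed demo key and IV (real keys would come from a KMS):

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;
import java.util.Arrays;

public class CtrRandomAccessDemo {
    static final byte[] KEY = new byte[32]; // demo key; real keys come from the KMS
    static final byte[] IV  = new byte[16]; // per-file IV (all zero for the demo)

    // CTR keystream block i uses counter IV+i, so seeking is just counter math.
    static Cipher cipherAt(int mode, long byteOffset) throws Exception {
        BigInteger ctr = new BigInteger(1, IV).add(BigInteger.valueOf(byteOffset / 16));
        byte[] ivAtOffset = new byte[16];
        byte[] raw = ctr.toByteArray();
        // Right-align the counter bytes into the 16-byte IV.
        System.arraycopy(raw, Math.max(0, raw.length - 16), ivAtOffset,
                Math.max(0, 16 - raw.length), Math.min(16, raw.length));
        Cipher c = Cipher.getInstance("AES/CTR/NoPadding");
        c.init(mode, new SecretKeySpec(KEY, "AES"), new IvParameterSpec(ivAtOffset));
        return c;
    }

    public static void main(String[] args) throws Exception {
        byte[] plain = new byte[64];
        for (int i = 0; i < plain.length; i++) plain[i] = (byte) i;
        byte[] cipherText = cipherAt(Cipher.ENCRYPT_MODE, 0).doFinal(plain);

        // Decrypt only the third 16-byte block without touching the rest of the "file".
        byte[] slice = cipherAt(Cipher.DECRYPT_MODE, 32)
                .doFinal(Arrays.copyOfRange(cipherText, 32, 48));
        System.out.println(Arrays.equals(slice, Arrays.copyOfRange(plain, 32, 48))); // prints "true"
    }
}
```

A padded or fully authenticated mode like GCM cannot be entered mid-stream this way, which is why the algorithm choice directly affects Lucene's seek-heavy read path.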

Key management

Each index shard will require one or more data keys to encrypt data. We can start with only one data key per shard to simplify key management, but this solution can evolve; for example, OpenSearch could generate new data keys according to time-based or usage-based criteria.

All shard data keys will be derived from one master key defined at the index level. When OpenSearch creates a new index, CryptoDirectoryFactory will reach out to a Key Management Service (KMS) to generate a data key pair. The encrypted version of the data key can be persisted in a key file inside the shard data folder itself. Any encryption or decryption operation will require the plain text version of the key, so CryptoDirectory will need to call the KMS to decrypt the encrypted data key. It will cache this plain text key in a short-lived cache for performance reasons.

Here is how we can configure a KMS when we create an index:

{
    "settings": {
        "store": {
            "type": "cryptofs"
        },
        "encryption": {
            "kms_type": "aws_kms",
            "master_key": "arn:aws:kms:us-west-2:111122223333:key/943842d0-f961-4322-aff5-e9581e7271b7"
        }
    }
}

This configuration can support multiple KMS vendors if required.
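The data-key handling above is standard envelope encryption. A minimal sketch using only the JDK's AESWrap cipher; the master key is generated locally here purely for illustration (in the proposal it lives in the KMS and never leaves it):

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.util.Arrays;

public class EnvelopeDemo {
    public static void main(String[] args) throws Exception {
        // Illustrative stand-in for the KMS-held master key.
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(256);
        SecretKey masterKey = kg.generateKey();

        // 1. Generate a fresh per-shard data key.
        SecretKey dataKey = kg.generateKey();

        // 2. Wrap it with the master key; only the wrapped bytes go into
        //    the key file in the shard data folder.
        Cipher wrap = Cipher.getInstance("AESWrap");
        wrap.init(Cipher.WRAP_MODE, masterKey);
        byte[] wrappedKey = wrap.wrap(dataKey);

        // 3. On read, unwrap (in production this is the KMS "decrypt data key" call).
        Cipher unwrap = Cipher.getInstance("AESWrap");
        unwrap.init(Cipher.UNWRAP_MODE, masterKey);
        SecretKey restored = (SecretKey) unwrap.unwrap(wrappedKey, "AES", Cipher.SECRET_KEY);

        System.out.println(Arrays.equals(dataKey.getEncoded(), restored.getEncoded())); // prints "true"
    }
}
```

Revoking the master key in the KMS invalidates step 3 for every shard at once, which is exactly the crypto-shredding property the proposal relies on.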

Key revocation and restoration

When a customer revokes access to a master key, OpenSearch can no longer decrypt the encrypted data keys. It will still be able to decrypt encrypted data with a cached plain text version of a key until the key cache expires, but after that any requests will start failing. OpenSearch will require a special error code to convey this error to consumers.

Any background operations like merge or refresh will also start failing; they will require special handling to avoid data corruption.

Key restoration will require no specific logic. Once the customer restores key access, OpenSearch can immediately use it to decrypt data keys.
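The revocation behaviour above hinges on the short-lived key cache: after revocation, decryption keeps working only until the TTL runs out. A tiny sketch of such a cache (the class name, TTL, and shard-id keying are illustrative; a production cache would also zero key bytes on eviction, requires Java 16+ for records):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Minimal expiring cache for plaintext data keys (illustrative sketch). */
public class DataKeyCache {
    private record Entry(byte[] key, long expiresAtMillis) {}
    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;

    public DataKeyCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    /** Returns the cached key, or null once the TTL has passed (forcing a KMS round trip). */
    public byte[] get(String shardId) {
        Entry e = cache.get(shardId);
        if (e == null) return null;
        if (System.currentTimeMillis() >= e.expiresAtMillis) {
            cache.remove(shardId);
            return null; // after revocation, the next KMS decrypt call fails
        }
        return e.key;
    }

    public void put(String shardId, byte[] plaintextKey) {
        cache.put(shardId, new Entry(plaintextKey, System.currentTimeMillis() + ttlMillis));
    }

    public static void main(String[] args) throws Exception {
        DataKeyCache cache = new DataKeyCache(50); // 50 ms TTL for the demo
        cache.put("shard-0", new byte[]{1, 2, 3});
        System.out.println(cache.get("shard-0") != null); // true while fresh
        Thread.sleep(80);
        System.out.println(cache.get("shard-0") == null); // true once expired
    }
}
```

The TTL is the bound on how long revoked data stays readable, so it is a direct security/performance trade-off.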

Key rotation and re-encryption

This proposal does not cover managed key rotation and re-encryption. OpenSearch re-indexing satisfies both of these requirements during the initial implementation phase.

Audit trail

Customers will be interested in monitoring how OpenSearch uses their encryption keys. Any KMS requests will be logged automatically on the customer's KMS side. However, when OpenSearch uses these data keys to encrypt or decrypt data, no logs will be produced.

Performance

Encryption comes with a performance cost. The actual performance degradation will depend on the request type and the encryption algorithm. For example, according to our initial performance benchmarking, the overhead on ingestion and simple queries is less than on complex queries with functions and aggregates.

Concrete acceptable performance degradation numbers are still TBD.

Shipment options

We would like this feature to be available in the managed AWS OpenSearch service. We can either ship it as a community plugin or implement it inside OpenSearch itself.

@aysee aysee added enhancement Enhancement or improvement to existing feature or request untriaged labels May 27, 2022
@tlfeng tlfeng added feature New feature or request security Anything security related discuss Issues intended to help drive brainstorming and decision making RFC Issues requesting major changes and removed enhancement Enhancement or improvement to existing feature or request untriaged labels May 27, 2022
@elfisher

Thanks for putting this proposal together @aysee! I'm a big fan of expanding the encryption options within OpenSearch to make it more flexible. There are a couple of things that come to mind we might want to think more about from an experience perspective (@setiah would want your thoughts on this too).

  1. On the key management side it looks like there would be a need to support both remote key management stores, like KMS, and a local key store, since some users are not running this in the cloud.
  2. I think we could add some management aspects of this to the user experience of the existing security UI. It might be a nice element to highlight when enabled on specific indexes/patterns. I'd also be interested in having the configuration calls logged in the OpenSearch audit logs.
  3. Since data from a single index might be split across multiple hosts, how are you thinking about key/config propagation?

@aysee
Author

aysee commented Jun 13, 2022

Thank you @elfisher for your feedback and questions!

Key management

A simple answer is that we can make an abstraction layer for key store communication: define an SPI, and make it pluggable and configurable. If we define basic operations like data encryption key generation and data key decryption, we can support multiple key stores. However, the devil is in the details, and multiple key store support may be harder to achieve in reality. Anyway, it's a very valid point, but I propose to take baby steps here and start with something we know, keeping extensibility and backward compatibility in mind. It would also be nice if you could provide more details about these local key stores.

Security UI

There are multiple use cases here. Marking indices as encrypted is one use case. Having a dedicated UI that displays encrypted indices with the corresponding key configuration is a different use case. Do you think this functionality should be part of the Feature Request, or should it be built later on top of it?

Interesting point regarding OpenSearch audit logs. What events do you have in mind? Re-indexing an encrypted index into a plain-text index would be suspicious for sure. Anything else, like encrypted index creation?

Key/config propagation

We propose that all the master key details will be stored in the index settings. The index itself will know the master key ID, where it resides, and the key management service or key store type. When OpenSearch creates an index shard, it can use this configuration to derive a data key from the master key. Each shard will have its own data key; there is no need to share the same data key across shards, and it would be hard to do when shards are on different nodes.
Key configuration should be a static index configuration; allowing key changes may lead to crypto shredding when done improperly. Key rotation and re-encryption can be covered with re-indexing initially: create a new index with a new key config and then re-index.

@dblock
Member

dblock commented Jun 14, 2022

How will this feature interact with snapshots? Specifically, should one be able to take a snapshot without decrypting?

@willyborankin
Contributor

willyborankin commented Jun 15, 2022

The idea is good, but (IMHO):

  • When Lucene starts merging segments for a shard, it will break encryption, which leads to data loss
  • When you change/roll a master key, it will break your encryption partially; in-memory data will most probably be OK, on-disk data will not, which leads to partial data loss
  • If you send this data to a snapshot, switching the key will leave your snapshot useless, which leads to data loss
  • How will this solution work with big shards of size > 64GB?
  • How will it work for the remote segments? Rotation of the master key will lead to partial data loss

It affects the size of index data on disk and in memory as well, since encrypted data is worse than non-encrypted data

For snapshotting encryption we already introduced a plugin here: https://github.com/aiven/encrypted-repository-opensearch and it was added here: opensearch-project/project-website#812 as a community plugin

@aysee
Author

aysee commented Jun 17, 2022

@dblock @willyborankin thank you for your questions and feedback. I'm replying to both of you because there is a certain overlap in Snapshot related functionality that both of you brought up.

Snapshots
Snapshots are currently out of the scope of this proposal. When OpenSearch creates a snapshot, it will decrypt an index and store the index data in plain text. Decryption happens automatically because OpenSearch creates a Directory based on the index store.type. Snapshots may or may not have different encryption requirements. Snapshot encryption might be solved using different tools and technologies, e.g. storing a tenant's index snapshots in S3 buckets encrypted with SSE-KMS, using the plugin referenced by @willyborankin, or by any other means. Some use cases also require no snapshots at all. I'd prefer not to bloat this proposal with snapshot encryption.

Shard merge
OpenSearch uses the same Directory approach to merge shards. In this case, it reads segment data, decrypts it, and encrypts it again when it writes a merged shard. We have not observed any merge-related issues during our POC. @willyborankin please let me know if you have any specific use cases in mind; we can double-check them.

Change/roll a master key
Key rotation and data re-encryption are outside of this proposal's scope. Key management becomes complex very quickly. We can achieve both key rotation and re-encryption by re-indexing an index into a new index that uses a new key and then swapping these indices. Potentially, we can evolve this solution in the future to support key rotation, but not re-encryption. Key rotation would simply mean data key re-encryption with a new master key. But data re-encryption would be risky and error prone; it's still safer to re-index.
I also propose to distinguish between data loss and crypto shredding. If a customer revokes a master key on purpose, OpenSearch cannot decrypt data anymore; it's crypto-shredded on purpose. If a customer rotates a master key, the old master key might still be used for decryption purposes for some time. During that time we can schedule reindexing. If reindexing fails within that time, then it's data loss.

> How will this solution work with big shards of size > 64GB?

An Initialization Vector (IV) must not be used to encrypt more than 64 GB of data; encrypting more data with the same IV makes the key vulnerable. We propose to have a separate IV per segment, not per shard. This means we might have problems with segment files larger than 64 GB. There are multiple ways to fix it:

  • "Chunk" big files internally - generate a new IV every 64 GB and store it inside the segment file, at the beginning of each "chunk". This will require careful IV-aware positioning when we read from that file.
  • Limit segment size for encrypted indices.

Besides that, our proposal should have no issues with such big shards.
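Under the chunking option, translating a logical (unencrypted-view) offset into a physical file position is simple arithmetic over the IV headers. A sketch with hypothetical layout assumptions (one IV stored immediately before each chunk's data; names and constants are illustrative):

```java
public class IvChunking {
    // Hypothetical on-disk layout: [IV0][chunk0 bytes][IV1][chunk1 bytes]...
    static final long CHUNK_BYTES = 64L * 1024 * 1024 * 1024; // new IV every 64 GB
    static final int IV_BYTES = 16;

    /** Physical file position for a logical (unencrypted-view) offset. */
    static long physicalOffset(long logicalOffset) {
        long chunk = logicalOffset / CHUNK_BYTES;
        // Each preceding chunk contributed one IV header, and this chunk's
        // own IV sits before its data.
        return logicalOffset + (chunk + 1) * IV_BYTES;
    }

    public static void main(String[] args) {
        System.out.println(physicalOffset(0)); // prints "16": the first IV precedes byte 0
        // The first byte of chunk 1 sits two IV headers past its logical position:
        System.out.println(physicalOffset(CHUNK_BYTES) - CHUNK_BYTES); // prints "32"
    }
}
```

The read path would do the inverse mapping plus a counter reset at each chunk boundary, which is the "careful IV-aware positioning" mentioned above.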

> How will it work for the remote segments? Rotation of the master key will lead to partial data loss

We don't cover master key rotation yet, so this should not be an issue.

> It affects the size of index data on disk and in memory as well, since encrypted data is worse than non-encrypted data

Encryption adds almost no overhead to the persisted data. The overhead will be: the data key or keys, an IV per file, and custom Lucene headers and footers per file.
Yes, we will have to pay a CPU and memory price for this kind of index encryption. We will need to account for that when we do sizing estimates.

@willyborankin
Contributor

> @dblock @willyborankin thank you for your questions and feedback. I'm replying to both of you because there is a certain overlap in Snapshot related functionality that both of you brought up.
>
> Snapshots: Snapshots are currently out of the scope of this proposal. When OpenSearch creates a snapshot, it will decrypt an index and store the index data in plain text. Decryption happens automatically because OpenSearch creates a Directory based on the index store.type. Snapshots may or may not have different encryption requirements. Snapshot encryption might be solved using different tools and technologies, e.g. storing a tenant's index snapshots in S3 buckets encrypted with SSE-KMS, using the plugin referenced by @willyborankin, or by any other means. Some use cases also require no snapshots at all. I'd prefer not to bloat this proposal with snapshot encryption.

I agree they need to be independent; some customers could use encrypted file systems and store encrypted snapshots in clouds, using their own keys or built-in functionality provided by the clouds.

> Shard merge: OpenSearch uses the same Directory approach to merge shards. In this case, it reads segment data, decrypts it, and encrypts it again when it writes a merged shard. We have not observed any merge-related issues during our POC. @willyborankin please let me know if you have any specific use cases in mind; we can double-check them.

Thank you for your explanation, now it is clear.

> Change/roll a master key: Key rotation and data re-encryption are outside of this proposal's scope. Key management becomes complex very quickly. We can achieve both key rotation and re-encryption by re-indexing an index into a new index that uses a new key and then swapping these indices. Potentially, we can evolve this solution in the future to support key rotation, but not re-encryption. Key rotation would simply mean data key re-encryption with a new master key. But data re-encryption would be risky and error prone; it's still safer to re-index. I also propose to distinguish between data loss and crypto shredding. If a customer revokes a master key on purpose, OpenSearch cannot decrypt data anymore; it's crypto-shredded on purpose. If a customer rotates a master key, the old master key might still be used for decryption purposes for some time. During that time we can schedule reindexing. If reindexing fails within that time, then it's data loss.

Got it.

> How will this solution work with big shards of size > 64GB?
>
> An Initialization Vector (IV) must not be used to encrypt more than 64 GB of data; encrypting more data with the same IV makes the key vulnerable. We propose to have a separate IV per segment, not per shard. This means we might have problems with segment files larger than 64 GB. There are multiple ways to fix it:
>
> • "Chunk" big files internally - generate a new IV every 64 GB and store it inside the segment file, at the beginning of each "chunk". This will require careful IV-aware positioning when we read from that file.

I especially asked this question due to the problem I thought existed for the merging procedure. But I'm for switching the IV every 64 GB instead of limiting it to 64 GB. Such a problem exists for the encrypted-repository plugin, which I'm going to fix soon.

> • Limit segment size for encrypted indices.
>
> Besides that, our proposal should have no issues with such big shards.
>
> How will it work for the remote segments? Rotation of the master key will lead to partial data loss
>
> We don't cover master key rotation yet, so this should not be an issue.
>
> It affects the size of index data on disk and in memory as well, since encrypted data is worse than non-encrypted data
>
> Encryption adds almost no overhead to the persisted data. The overhead will be: the data key or keys, an IV per file, and custom Lucene headers and footers per file. Yes, we will have to pay a CPU and memory price for this kind of index encryption. We will need to account for that when we do sizing estimates.

Got it.
Thank you for your explanation.
Thank you for your explanation.

@asonje

asonje commented Aug 15, 2022

This is an interesting proposal @aysee. What is the current status? I'd be happy to contribute towards the implementation if needed.

@asonje

asonje commented Mar 2, 2023

I have been working on an implementation of this feature based on the proposed design and recommendations so far. It is almost ready and I expect to create a PR in a few weeks.

@wbeckler

wbeckler commented Jul 7, 2023

@asonje How's this looking? Do you want to link a draft PR for others to take a look and help on?

@asonje

asonje commented Jul 7, 2023

Yes @wbeckler , I am working on some internal validation and will be ready with a PR soon.

@asonje

asonje commented Jul 21, 2023

PR #8791 largely follows the design outlined here. A new cryptofs store type (index.store.type) is introduced, which instantiates a CryptoDirectory (a hybrid directory) that encrypts files as they are written and decrypts files as they are read.

The encryption algorithm chosen is AES/CTR/NoPadding with 256-bit keys. AES CTR supports random IO and provides the necessary level of data confidentiality; unlike GCM, however, it does not guarantee data integrity. The crypto provider can be configured via the index.store.crypto.provider setting, with the default being SunJCE. The user is responsible for installing and configuring any non-default crypto provider.

The index owner provides credentials to a key management store, which provides a master key / data key pair. Each shard has a unique data key which is encrypted (by the master key) and stored on disk. A LocalKeystoreManager was implemented which makes use of a java.security.KeyStore as a local store. The user provides the path to this store along with an alias and a password (index.store.kms.alias, index.store.kms.password, index.store.kms.path).

Multiple key management store vendors can and should be supported including an OpenSearch cluster-wide KMS service.

@varun-lodaya
Contributor

Looks like the draft PR was closed due to inactivity. Are we still tracking this change for any future release?

@peternied
Member

Considerable time has passed since this issue was first opened, and the state of the architecture within OpenSearch has changed. Several questions were asked and responded to in a single 'big' comment channel. @aysee could you see about updating the description of the issue to capture these updates so it's clear what the intention is for this feature?

I'd recommend augmenting the existing plan around the following areas:

  • There are already several implementations of encryption enhancements offered by many parties such as AWS, Iron Core Labs, and Eliatra - how does this proposal fit with those existing ecosystems?

> OpenSearch does not yet provide a fine-grained multi-tenanted encryption solution. Encryption is either enabled for the whole cluster or for a data node, or fully disabled. When we use one search index per tenant, there is no way to configure encryption per index. Having a separate OpenSearch cluster per tenant is too expensive.

  • This sounds like a much broader effort - are there other related RFCs / proposals? Supporting multi-tenanted isolation in the same cluster has a very large surface area. Having a clear threat model that outlines the role of this encryption feature alongside other features would help us understand how this fits into the picture and whether we've got the right building blocks in place when rolling out this feature.

@asonje asonje linked a pull request Mar 25, 2024 that will close this issue