Description
Background
With respect to Resource Access Control, the Doc-Level Security (DLS) approach was updated in PR #5016. Recently, the plan shifted from implementing abstract APIs in OpenSearch core to modifying the Security plugin so that it can automatically invoke resource-access-control for relevant indices. However, this method has its own drawbacks, notably the risk of thread exhaustion. To guide the final decision, three primary approaches have been considered:
Below is an updated version of the approaches, each with Advantages and Limitations sections.
1. Terms Lookup Query
Description
Leverage TLQ to dynamically fetch resource-sharing information from a separate index, then match requested resource IDs against those entries.
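As an illustration, the sketch below (server-side Java, using the QueryBuilders/TermsLookup helpers) shows how such a lookup could be wired into a DLS-style filter. The `.resource-sharing` index name, the per-user document id, and the `resource_ids` field are assumptions for the example, not an actual schema.

```java
import org.opensearch.index.query.QueryBuilder;
import org.opensearch.index.query.QueryBuilders;
import org.opensearch.indices.TermsLookup;

public class TlqDlsExample {

    // Builds a query that restricts hits to resources shared with the given user.
    // Hypothetical layout: ".resource-sharing" holds one document per user, whose
    // "resource_ids" field lists every resource id that user may access.
    public static QueryBuilder sharedResourcesQuery(String userName) {
        TermsLookup lookup = new TermsLookup(
            ".resource-sharing",   // index holding the sharing document (illustrative name)
            userName,              // document id: one doc per user (the single-document constraint)
            "resource_ids"         // path to the array of accessible resource ids
        );
        // Match documents whose _id appears in the looked-up list.
        return QueryBuilders.termsLookupQuery("_id", lookup);
    }
}
```

The central constraint is visible in the sketch: the lookup points at exactly one document, so all of a user's resource ids must be collected into that single document.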
Advantages
- Native Query: Uses standard OpenSearch query features, so minimal custom logic is required.
- Simple Integration: If resource-sharing data is already in a single document, TLQ is straightforward to set up.
- Built-in Caching: TLQ benefits from the caching and query optimizations provided by OpenSearch.
Limitations
- Single Document Constraint: TLQ requires all resource IDs for a user (or resource) to live in a single document, which is rarely the case when sharing data is spread across multiple documents.
- Scalability Issues: Merging large sets of resource IDs into one document can become unwieldy, leading to performance or storage problems.
- Narrow Applicability: If the resource-sharing model is more complex, TLQ quickly becomes impractical as a generic solution.
2. In-Memory Map
Description
Load resource-sharing configuration into an in-memory map—similar to how the Security plugin loads its main security configuration. This map would be updated in near real-time whenever resource-sharing information changes (e.g., new resources or updated permissions).
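A minimal sketch of such a cache is shown below, assuming a plain `ConcurrentHashMap` keyed by resource id. The class and method names are illustrative, and cluster-wide propagation of updates (the hard part) is deliberately out of scope.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory view of resource-sharing data, analogous to how the
// Security plugin caches its main security configuration.
public class ResourceSharingCache {

    // resourceId -> set of users (or roles) the resource is shared with
    private final Map<String, Set<String>> sharedWith = new ConcurrentHashMap<>();

    // Called whenever sharing info changes (e.g., a grant or revoke is persisted).
    public void updateSharing(String resourceId, Set<String> users) {
        sharedWith.put(resourceId, Set.copyOf(users));
    }

    public void removeResource(String resourceId) {
        sharedWith.remove(resourceId);
    }

    // Fast read path used while evaluating a request.
    public boolean isSharedWith(String resourceId, String user) {
        return sharedWith.getOrDefault(resourceId, Set.of()).contains(user);
    }
}
```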
Advantages
- Fast Lookups: In-memory data structures can offer very quick read performance.
- Direct Integration: Follows the same pattern as existing Security config, which is already well understood.
- Low Runtime Query Overhead: No need to perform frequent index lookups if all sharing data is already in memory.
Limitations
- Frequent Updates: Resource-sharing data can change often (user grants/revokes), leading to continuous map updates.
- Scalability & Distribution: Synchronizing frequent changes across a cluster can become a bottleneck and risk DoS if updates spike.
- Operational Complexity: Requires robust mechanisms to keep the in-memory map consistent across all nodes.
3. Plugins Make API Calls
Description
Expose new APIs that other plugins can call whenever they need to check if a user has access to a given resource. These APIs handle the logic for determining resource access, potentially using the DLS approach behind the scenes or another method.
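A hypothetical shape for such an API is sketched below; the interface and method names are illustrative only and are not the actual plugin interface.

```java
import java.util.Set;

// Illustrative access-check API that the Security plugin could expose to other plugins.
public interface ResourceAccessChecker {

    // Returns true if the currently authenticated user may access the given resource.
    boolean hasAccess(String resourceIndex, String resourceId);

    // Filters a set of resource ids down to those the current user may access,
    // so a plugin can scope a list or search operation in a single call.
    Set<String> filterAccessible(String resourceIndex, Set<String> resourceIds);
}
```

In this model, a plugin would call `hasAccess` before returning a single resource and `filterAccessible` when listing or searching, which is exactly where the human-error limitation noted below comes into play.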
Advantages
- Flexibility: Can be applied to resources stored in an index or elsewhere (e.g., external systems).
- Centralized Logic: Minimizes the risk of misconfiguration by consolidating access checks in one place.
- Extensibility: Provides a uniform interface, making it easier to evolve or integrate new resource types in the future.
Limitations
- Implementation Complexity: Requires designing and maintaining well-defined, backward-compatible APIs.
- Human Error: Plugin developers must remember to call these APIs correctly and consistently.
- Performance Overheads: Multiple API calls could introduce latency, especially under high load.
Conclusion
- Terms Lookup Query is too restrictive due to the single-document requirement.
- In-Memory Map could create scalability issues with frequent updates.
- Plugin-facing APIs for resource verification are more flexible and extensible, albeit with higher implementation complexity and reliance on proper usage by plugin developers.
Feedback from plugin developers and end users will help guide the final choice. Each approach has trade-offs in terms of performance, maintainability, and extensibility.