odrl-authorization

The FIWARE ODRL Authorization (ODRL-Authorization) is an integrated suite of components designed to facilitate authorization using Verifiable Credentials.

This repository provides a description of the FIWARE Verifiable Credential Authorization and deployment recipes.

This project is part of FIWARE. For more information check the FIWARE Catalogue entry for Security.

📚 Documentation 🎯 Roadmap
Overview

FIWARE ODRL Authorization enables management of access authorization to services using an Attribute-Based Access Control (ABAC) model expressed with ODRL policies. In this architecture, ODRL is the policy language for expressing permissions, constraints and obligations; those ODRL policies are translated into executable Rego rules that the Open Policy Agent (OPA) evaluates at runtime.
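As an illustrative sketch only (the policy `uid`, target and the `vc:role` operand are hypothetical, not taken from this repository), an ODRL permission restricting read access to a target based on a credential attribute could look like:

```json
{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Policy",
  "uid": "https://example.com/policy/1",
  "permission": [
    {
      "target": "https://example.com/asset/endpoint",
      "action": "odrl:read",
      "constraint": [
        {
          "leftOperand": "vc:role",
          "operator": "odrl:eq",
          "rightOperand": "CONSUMER"
        }
      ]
    }
  ]
}
```

A policy of this shape would be translated into a Rego rule, which OPA then evaluates against the attributes extracted from the presented credentials.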

The goal is to deliver a pluggable, standards-aligned authorization plane that:

  • Accepts access requests (including contexts derived from Verifiable Credentials / VPs) at the gateway (APISIX).

  • Evaluates requests against ABAC policies authored in ODRL (after translation to Rego).

  • Returns enforceable allow/deny decisions that APISIX uses to permit or block traffic.
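At decision time, the translated Rego rules can be queried through OPA's standard Data API. The following sketch assumes a locally reachable OPA and a hypothetical policy path and input shape (neither is taken from this repository):

```shell
# Query OPA's Data API for an authorization decision.
# The package path (policies/allow) and the input document are illustrative.
curl -s -X POST http://localhost:8181/v1/data/policies/allow \
  -H "Content-Type: application/json" \
  -d '{
        "input": {
          "method": "GET",
          "path": "/service/resource",
          "credential": { "role": "CONSUMER" }
        }
      }'
# A response of {"result": true} would correspond to an allow decision.
```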

Release Information

FIWARE ODRL Authorization uses a continuous integration flow, where every merge to the main branch triggers a new release. Versioning follows Semantic Versioning 2.0.0; therefore, only major releases will contain breaking changes. Important releases will be listed below, with additional information linked:

Core components

Deployment

Local Deployment

The FIWARE ODRL Authorization provides a minimal local deployment setup intended for development and testing purposes.

The requirements for the local deployment are:

In order to interact with the system, the following tools are also helpful:

⚠️ In current Linux installations, br_netfilter is disabled by default. This leads to networking issues inside the k3s cluster and will prevent the connector from starting up properly. Make sure it is enabled via modprobe br_netfilter. See Stackoverflow for more.
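The module can be loaded immediately and, on systemd-based distributions, persisted across reboots as follows:

```shell
# load the kernel module now
sudo modprobe br_netfilter
# verify that it is loaded
lsmod | grep br_netfilter
# load it automatically at boot (systemd-based distributions)
echo br_netfilter | sudo tee /etc/modules-load.d/br_netfilter.conf
```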

To start the deployment, just use:

    mvn clean deploy -Plocal

Deployment with Helm

The odrl-authorization is a Helm umbrella chart, containing all the sub-charts of the different components and their dependencies. Its sources can be found here.

The chart is available at the repository https://fiware.github.io/odrl-authorization/. You can install it via:

    # add the repo
    helm repo add odrl-authorization https://fiware.github.io/odrl-authorization/
    # install the chart
    helm install <DeploymentName> odrl-authorization/odrl-authorization -n <Namespace> -f values.yaml

Note that, due to the app-of-apps structure of the deployment and the dependencies between the components, a deployment without any configuration values will not work. Make sure to provide a values.yaml file for the deployment, specifying all necessary parameters. This includes the endpoint parameters, DNS information (Ingress or OpenShift Route parameters), the structure and type of the required VCs, the internal hostnames of the different components, and the configuration of the DID and keys/certificates.
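As a minimal sketch of what such a values.yaml can look like, the fragment below reuses the APISIX keys shown in the deployment-modes section of this README; the endpoint, DNS, VC and DID settings a real deployment additionally requires are omitted here, and every other component is configured under its own top-level key in the same way:

```yaml
# minimal values.yaml sketch -- consult the chart's top-level
# values.yaml for the full set of keys and defaults
apisix:
  ingress-controller:
    enabled: true
  apisix:
    deployment:
      role: traditional
      role_traditional:
        config_provider: yaml
  etcd:
    enabled: true
```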

Configurations for all sub-charts (and sub-dependencies) can be managed through the top-level values.yaml of the chart. It contains the default values for each component and additional parameters shared between the components. The configuration of each application can be changed under the key <APPLICATION_NAME>; please see the individual applications and their sub-charts for the available options.

The chart is published and released on each merge to master.

Testing

To test the Helm chart provided for the FIWARE ODRL Authorization, an integration-test framework based on Cucumber and JUnit 5 is provided: it.

The tests can be executed via:

    mvn clean integration-test -Ptest

They will spin up the Local Deployment and run the test-scenarios against it.

APISIX Deployment Modes

APISIX can operate in four distinct deployment modes. Each mode determines how routes are stored, managed, and persisted, as well as which components are responsible for maintaining the routing configuration.

Comparison Table

| Mode | ETCD | Ingress Controller | Route Source | Persistence | Notes |
| --- | --- | --- | --- | --- | --- |
| 1. With ETCD and with Ingress Controller | ✔️ | ✔️ | APISIX CRDs, Kubernetes Ingress, Admin API | ✔️ Persisted in ETCD | Recommended for Kubernetes-native setups |
| 2. With ETCD and without Ingress Controller | ✔️ | ❌ | Admin API only | ✔️ Persisted in ETCD | Chart-defined routes are not initialized |
| 3. Without ETCD and with Ingress Controller | ❌ | ✔️ | APISIX CRDs, Kubernetes Ingress, Admin API | ❌ In-memory only | Requires at least one route to start |
| 4. Without ETCD and without Ingress Controller | ❌ | ❌ | Static ConfigMap (apisix.yaml) | ✔️ Persisted only in ConfigMap | Under development; installation may fail but upgrades will work |

1. With ETCD and with the Ingress Controller

In this mode, APISIX persists all route definitions in ETCD. Routes may be defined via APISIX CRDs, standard Kubernetes Ingress resources, or the Admin API. Because the configuration is stored in ETCD, all routes—including those created through the Admin API—will remain available after restarts.

    apisix:
      ingress-controller:
        enabled: true
      apisix:
        deployment:
          role: traditional
          role_traditional:
            config_provider: yaml
      etcd:
        enabled: true

2. With ETCD and without the Ingress Controller

In this configuration, ETCD persists the routes, but no Ingress Controller is available to manage them. As a result, routes can only be created or updated using the APISIX Admin API. Chart-defined routes are not initialized automatically.

    apisix:
      ingress-controller:
        enabled: false
      apisix:
        deployment:
          role: traditional
          role_traditional:
            config_provider: yaml
      etcd:
        enabled: true

3. Without ETCD and with the Ingress Controller

When ETCD is disabled, APISIX loads all routes from APISIX CRDs and stores them in memory. The Ingress Controller continuously synchronizes APISIX with these CRDs. Although the Admin API can still modify routes, such changes will not persist across restarts. Kubernetes Ingress objects may also be used to define new routes.

Warning

APISIX requires at least one route to exist for the service to start correctly.

    apisix:
      ingress-controller:
        enabled: true
      apisix:
        deployment:
          role: traditional
          role_traditional:
            config_provider: yaml
      etcd:
        enabled: false

4. Without ETCD and without the Ingress Controller

In this mode, routes are defined statically within the apisix.yaml ConfigMap. APISIX loads these routes at startup, and the configuration remains unchanged unless the ConfigMap or Helm values are manually updated. This mode is suitable for simple or fully static environments.
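As a sketch of what such a static definition can look like (the route URI and upstream node below are placeholders, not taken from this repository), a standalone apisix.yaml carries the routes directly; note that APISIX in standalone mode expects the file to end with a #END marker:

```yaml
routes:
  - uri: /example/*
    upstream:
      type: roundrobin
      nodes:
        "example-service:8080": 1
#END
```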

Warning

This mode is currently under development. Installation may fail, but upgrades will function correctly.

    apisix:
      ingress-controller:
        enabled: false
      apisix:
        deployment:
          mode: standalone
          role: data_plane
      etcd:
        enabled: false

How to contribute

Please check the doc here.

License

odrl-authorization is licensed under Apache v2.0.

For the avoidance of doubt, the owners of this software wish to make a clarifying public statement as follows:

Please note that software derived as a result of modifying the source code of this software in order to fix a bug or incorporate enhancements is considered a derivative work of the product. Software that merely uses or aggregates (i.e. links to) an otherwise unmodified version of existing software is not considered a derivative work, and therefore it does not need to be released under the same license, or even released as open source.