
kube-mgmt/opa feature parity #3645

Open
EdwardCooke opened this issue Oct 14, 2024 · 5 comments
Labels
enhancement New feature or request stale

Comments

@EdwardCooke commented Oct 14, 2024

Describe the solution you'd like
Right now I'm using kube-mgmt/opa to expose an endpoint for use as a Kubernetes API Authorization webhook.
For example, my API authorization webhook calls https://opa-opa-kube-mgmt.opa-auth-system.svc.cluster.local:8181/v0/data/k8sallow/allow.
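
For reference, the API server reaches that endpoint through a standard authorization-webhook kubeconfig along these lines (a minimal sketch; the CA path and the cluster/user/context names are placeholders):

apiVersion: v1
kind: Config
clusters:
  - name: opa
    cluster:
      # CA bundle that signed the OPA server certificate (placeholder path)
      certificate-authority: /etc/kubernetes/pki/opa-ca.crt
      server: https://opa-opa-kube-mgmt.opa-auth-system.svc.cluster.local:8181/v0/data/k8sallow/allow
users:
  - name: kube-apiserver
contexts:
  - name: webhook
    context:
      cluster: opa
      user: kube-apiserver
current-context: webhook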

That policy is set using a config map that kube-mgmt injects into OPA.

Can I do the same using Gatekeeper? If so, how? I looked over the documentation and couldn't find a way to do this.

Anything else you would like to add:
I'm using the latest kube-mgmt with the latest OPA image.

Here are the policies I use kube-mgmt to inject into OPA:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    openpolicyagent.org/policy: rego
  name: policy-k8sallow
  namespace: opa-auth-system
data:
  k8sallow: |
    package k8sallow

    # Authorization-webhook policy: deny requests to namespaces whose
    # "vecc/groups" annotation does not include one of the caller's groups,
    # with overrides for admins, namespace listing, and API discovery paths.

    import rego.v1

    namespace := input.spec.resourceAttributes.namespace
    x := data.kubernetes.namespaces[namespace]

    default override_allow := false
    valid_groups := split(data.kubernetes.namespaces[namespace].metadata.annotations["vecc/groups"], ",")

    override_allow if {
      input.spec.resourceAttributes.resource == "namespaces"
      input.spec.resourceAttributes.verb == "list"
    }
    else if {
      input.spec.nonResourceAttributes.path in {
        "/api",
        "/apis",
        "/openapi/v2"
      }
    }
    else if {
      some group in input.spec.groups
      group == "oidc:Admins"
    }

    groups_match if {
      not override_allow
      some group in valid_groups
      some member_of in input.spec.groups
      group == member_of
    }

    deny contains "Cluster scope not allowed" if {
      not override_allow
      not namespace
    }

    deny contains "Namespace denied, not assigned to namespace" if {
      namespace
      not override_allow
      not groups_match
    }

    allow := {
      "apiVersion": "authorization.k8s.io/v1",
      "kind": "SubjectAccessReview",
      "status": {
        "allowed": allowed,
        "denied": denied,
        "reason": reason
      }
    }

    default allowed := false
    default denied := false
    default reason := ""

    reason := concat(";", deny)

    allowed if {
      override_allow
    }

    denied if {
      reason != ""
    }

  k8sallow_authz: |
    package system.authz

    import rego.v1

    allow if { input.path == [ "v0", "data", "k8sallow", "allow" ] }
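
As an aside, the system.authz package above is only enforced if OPA itself runs with API authorization enabled (--authorization=basic). A rough sketch of the relevant container args (image tag, address, and cert paths are placeholders):

containers:
  - name: opa
    image: openpolicyagent/opa:latest
    args:
      - run
      - --server
      - --addr=0.0.0.0:8181
      - --tls-cert-file=/certs/tls.crt
      - --tls-private-key-file=/certs/tls.key
      # Enforce the system.authz policy on OPA's own REST API
      - --authorization=basic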

Here's another policy that I think would fit into Gatekeeper pretty well:

apiVersion: v1
kind: ConfigMap
metadata:
  labels:
    openpolicyagent.org/policy: rego
  name: policy-k8srestartonly
  namespace: opa-auth-system
data:
  k8srestartonly: |
    package k8srestartonly

    # Admission-webhook policy: only allow updates whose only changes are to
    # server-managed fields or the kubectl.kubernetes.io/restartedAt annotation
    # (i.e. rollout restarts), unless the caller is in the oidc:Admins group.

    import rego.v1

    old_object := input.request.oldObject
    new_object := input.request.object

    default uid := ""
    default allowed := false
    default is_developers := false
    default message := ""

    allow := {
      "apiVersion": "admission.k8s.io/v1",
      "kind": "AdmissionReview",
      "response": {
        "uid": uid,
        "allowed": allowed,
        "status": {
          "message": message
        }
      }
    }

    uid := input.request.uid

    is_usrestricted if {
      some group in input.request.userInfo.groups
      group == "oidc:Admins"
    }

    allowed if {
      is_usrestricted
    }
    else if {
      x := json.remove(old_object, [ "status", "metadata/generation", "metadata/managedFields", "spec/template/metadata/annotations/kubectl.kubernetes.io~1restartedAt" ])
      y := json.remove(new_object, [ "status", "metadata/generation", "metadata/managedFields", "spec/template/metadata/annotations/kubectl.kubernetes.io~1restartedAt" ])
      x == y
    }

    message := "Modifying the object is prohibited" if {
      not allowed
    }

  k8srestartonly_authz: |
    package system.authz

    import rego.v1

    allow if { input.path == [ "v0", "data", "k8srestartonly", "allow" ] }

Environment:

  • Gatekeeper version: None yet
  • Kubernetes version: (use kubectl version):
    Client Version: v1.31.1
    Kustomize Version: v5.4.2
    Server Version: v1.31.1
EdwardCooke added the enhancement label Oct 14, 2024
@maxsmythe (Contributor) commented:

Unfortunately Gatekeeper does not currently provide an authorization webhook.

@ritazh (Member) commented Oct 16, 2024

Not exactly what you asked for, but take a look at https://kubernetes.io/blog/2024/04/26/multi-webhook-and-modular-authorization-made-much-easier/

From that post: "...conditions for invocation with CEL rules to pre-filter requests before they are dispatched to webhooks, helping you prevent unnecessary invocations."
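
For example, a structured authorization config that forwards requests to an OPA webhook and uses a CEL matchCondition to pre-filter could look roughly like this (an illustrative sketch, not a tested config; the authorizer name, kubeconfig path, TTLs, and CEL expression are placeholders):

apiVersion: apiserver.config.k8s.io/v1beta1
kind: AuthorizationConfiguration
authorizers:
  - type: Webhook
    name: opa
    webhook:
      timeout: 3s
      authorizedTTL: 300s
      unauthorizedTTL: 30s
      subjectAccessReviewVersion: v1
      matchConditionSubjectAccessReviewVersion: v1
      failurePolicy: NoOpinion
      connectionInfo:
        type: KubeConfigFile
        # kubeconfig pointing at the OPA service, as in the issue description
        kubeConfigFile: /etc/kubernetes/opa-webhook.kubeconfig
      matchConditions:
        # Skip the webhook entirely for the admin group (CEL pre-filter)
        - expression: "!('oidc:Admins' in request.groups)"
  - type: Node
    name: node
  - type: RBAC
    name: rbac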

@EdwardCooke (Author) commented:

That’s what I use, along with the GitHub issue I opened about the incorrect documentation on that page and all the other pages for using that method.

Right now I have it calling OPA directly; I was hoping for something with Gatekeeper.

@EdwardCooke (Author) commented:

Thanks @maxsmythe, that answers my question. I’ll continue with the setup I have for now and revisit it later.

stale bot commented Dec 20, 2024

This issue has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.

stale bot added the stale label Dec 20, 2024