
Regression issue using ExecWatch: Getting Forbidden despite being authenticated #3473

Closed
Sgitario opened this issue Sep 16, 2021 · 6 comments

Sgitario (Contributor) commented Sep 16, 2021

In previous versions of fabric8, we were doing:

import java.io.ByteArrayOutputStream;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

import io.fabric8.kubernetes.client.dsl.ExecListener;
import io.fabric8.kubernetes.client.dsl.ExecWatch;
import okhttp3.Response;

public String execOnPod(String namespace, String podName, String containerId, String... command)
        throws InterruptedException {

    ByteArrayOutputStream out = new ByteArrayOutputStream();
    CountDownLatch execLatch = new CountDownLatch(1);

    try (ExecWatch execWatch = client.pods().inNamespace(namespace).withName(podName).inContainer(containerId)
            .writingOutput(out)
            .usingListener(new ExecListener() {
                @Override
                public void onOpen(Response response) {
                    // do nothing
                }

                @Override
                public void onFailure(Throwable throwable, Response response) {
                    execLatch.countDown();
                }

                @Override
                public void onClose(int code, String reason) {
                    execLatch.countDown();
                }
            })
            .exec(command)) {
        // TIMEOUT_DEFAULT is a Duration constant defined elsewhere in our code
        execLatch.await(TIMEOUT_DEFAULT.toMinutes(), TimeUnit.MINUTES);
        return out.toString();
    }
}
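As an aside, the latch-plus-timeout pattern in the method above degrades silently if the timeout expression evaluates to zero (for example, anything built from `Duration.ZERO`): `await` then returns immediately, and `out` may be read before the command has produced any output. A stdlib-only sketch of those semantics (class name is hypothetical, no cluster needed):

```java
import java.time.Duration;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(1);

        // Duration.ZERO.toMinutes() is 0, so this await gives up immediately
        // without waiting for any onClose/onFailure callback to count down.
        boolean zeroWait = latch.await(Duration.ZERO.toMinutes(), TimeUnit.MINUTES);
        System.out.println("zero-timeout await completed = " + zeroWait);

        // Once the latch is counted down, a bounded await succeeds at once.
        latch.countDown();
        boolean bounded = latch.await(5, TimeUnit.MINUTES);
        System.out.println("bounded await completed = " + bounded);
    }
}
```

The `await` return value (dropped in the snippet above) is the only signal that the wait timed out rather than completed, so it is worth checking.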

This is now returning:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {
    
  },
  "status": "Failure",
  "message": "pods \"prometheus-user-workload-0\" is forbidden: User \"system:anonymous\" cannot get resource \"pods/exec\" in API group \"\" in the namespace \"openshift-user-workload-monitoring\"",
  "reason": "Forbidden",
  "details": {
    "name": "prometheus-user-workload-0",
    "kind": "pods"
  },
  "code": 403
}

Note that we are authenticated using the token obtained via oc login ...

This is not working with 5.7.3 or 5.7.2, but it worked in the past.
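A triage note: the 403 reporting `User "system:anonymous"` means the request carried no credentials at all, not that the wrong user was rejected. One way to rule out kubeconfig/autoconfiguration problems is to pass the oc login token to the client explicitly. A hedged sketch using the 5.x-era fabric8 API (the master URL and token values are placeholders):

```java
import io.fabric8.kubernetes.client.Config;
import io.fabric8.kubernetes.client.ConfigBuilder;
import io.fabric8.kubernetes.client.DefaultKubernetesClient;
import io.fabric8.kubernetes.client.KubernetesClient;

public class ExplicitTokenClient {
    public static KubernetesClient create(String masterUrl, String token) {
        // Build a Config that carries the bearer token explicitly, so the
        // client cannot silently fall back to anonymous requests.
        Config config = new ConfigBuilder()
                .withMasterUrl(masterUrl)
                .withOauthToken(token)
                .build();
        return new DefaultKubernetesClient(config);
    }
}
```

If exec works with the explicit token but not with autoconfiguration, the regression is in Config/kubeconfig handling rather than in the exec code path itself.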

Sgitario added a commit to Sgitario/quarkus-test-suite that referenced this issue Sep 16, 2021
Sgitario added a commit to Sgitario/quarkus-test-framework that referenced this issue Sep 16, 2021
Sgitario added a commit to Sgitario/quarkus-test-suite that referenced this issue Sep 16, 2021
Sgitario added a commit to quarkus-qe/quarkus-test-framework that referenced this issue Sep 16, 2021
Sgitario added a commit to quarkus-qe/quarkus-test-suite that referenced this issue Sep 16, 2021
Sgitario added a commit to Sgitario/quarkus-test-suite that referenced this issue Sep 17, 2021
Sgitario added a commit to quarkus-qe/quarkus-test-suite that referenced this issue Sep 17, 2021
@manusa manusa added the bug label Sep 22, 2021
manusa (Member) commented Sep 22, 2021

I'm not sure if this is related to recent changes in the Pod Exec feature related code or to Config and cluster authentication (#3445).

I assume that you have tested the same code with the different client versions on the same cluster (same environment | different client).

Sgitario (Author) commented:

> I'm not sure if this is related to recent changes in the Pod Exec feature related code or to Config and cluster authentication (#3445).
>
> I assume that you have tested the same code with the different client versions on the same cluster (same environment | different client).

Yep, this was working with the Fabric8 dependencies used by Quarkus 1.13.

tsaarni (Contributor) commented Sep 22, 2021

I doubt it could be #3445, since it was merged just yesterday and this issue occurs in previous releases. Nor could #3445 fix it, since the error says is forbidden: User "system:anonymous", i.e. the user is not set at all.

manusa (Member) commented Sep 22, 2021

> I doubt it could be #3445 since it was just merged yesterday and this issue is found in previous releases. Or that #3445 could fix it either, since the error says is forbidden: User "system:anonymous" i.e. also the user is not set.

Yep, wrong PR linked, but there were recent changes to the TokenRefreshInterceptor and the Config retrieval. The question was more about whether, with a constant environment, the issue appears only in recent versions of the client.

rohanKanojia (Member) commented:

Hello, I tried reproducing this issue but I'm not able to. For me, exec seems to work fine and I don't get a 403. I prepared a reproducer based on the code you shared: https://github.com/rohankanojia-forks/kubernetes-client-execwatch-test
and checked these scenarios (I logged into all clusters via oc login):

  1. CRC 1.35.0 OpenShift 4.9.5 OpenShiftClient 5.7.3: Run locally via java -jar ✔️
  2. CRC 1.35.0 OpenShift 4.9.5 OpenShiftClient 5.7.3: Run inside Pod by deploying with OpenShift Gradle Plugin ✔️
  3. CRC 1.35.0 OpenShift 4.9.5 OpenShiftClient 5.10.1: Run locally via java -jar ✔️
  4. CRC 1.35.0 OpenShift 4.9.5 OpenShiftClient 5.10.1: Run inside Pod by deploying with OpenShift Gradle Plugin ✔️
  5. Red Hat Developer Sandbox OpenShift 4.9.9 OpenShiftClient 5.7.3: Run locally via java -jar ✔️
  6. Red Hat Developer Sandbox OpenShift 4.9.9 OpenShiftClient 5.7.3: Run inside Pod by deploying with OpenShift Gradle Plugin ✔️
  7. Red Hat Developer Sandbox OpenShift 4.9.9 OpenShiftClient 5.10.1: Run locally via java -jar ✔️
  8. Red Hat Developer Sandbox OpenShift 4.9.9 OpenShiftClient 5.10.1: Run inside Pod by deploying with OpenShift Gradle Plugin ✔️

We also have an integration test that covers the ExecWatch functionality. I also ran the PodIT.exec integration test on my CRC cluster, and it passes.

What could I be missing? Would appreciate it if you could provide some more information on how to reproduce this.

Sgitario (Author) commented:

I've checked the code that was failing and with the latest version, it's working fine. Closing.
