
CLI deciding if token exchange needed should not look at ID token expiry #1873

Merged (3 commits) on Feb 23, 2024
108 changes: 108 additions & 0 deletions hack/prepare-jwtauthenticator-on-kind.sh
@@ -0,0 +1,108 @@
#!/usr/bin/env bash

# Copyright 2024 the Pinniped contributors. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0

#
# This script deploys a JWTAuthenticator to use for manual testing.
# The JWTAuthenticator will be configured to use Dex as the issuer.
#
# This is for manually testing using the Concierge with a JWTAuthenticator
# that points at some issuer other than the Pinniped Supervisor, as described in
# https://pinniped.dev/docs/howto/concierge/configure-concierge-jwt/
#
# This script assumes that you have run the following command first:
# PINNIPED_USE_CONTOUR=1 hack/prepare-for-integration-tests.sh
# Contour is used to provide ingress for Dex, so the web browser
# on your workstation can connect to Dex running inside the kind cluster.
#

set -euo pipefail

# Change working directory to the top of the repo.
ROOT="$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)"
cd "$ROOT"

# Read the env vars output by hack/prepare-for-integration-tests.sh.
source /tmp/integration-test-env

# Install Contour.
kubectl apply -f https://projectcontour.io/quickstart/contour.yaml

# Wait for its pods to be ready.
echo "Waiting for Contour to be ready..."
kubectl wait --for 'jsonpath={.status.phase}=Succeeded' pods -l 'app=contour-certgen' -n projectcontour --timeout 60s
kubectl wait --for 'jsonpath={.status.phase}=Running' pods -l 'app!=contour-certgen' -n projectcontour --timeout 60s

# Capture just the hostname from a string that looks like https://host.name/foo.
dex_host=$(echo "$PINNIPED_TEST_CLI_OIDC_ISSUER" | sed -E 's#^https://([^/]+)/.*#\1#')

# Create an ingress for Dex which uses TLS passthrough to allow Dex to terminate TLS.
cat <<EOF | kubectl apply --namespace "$PINNIPED_TEST_TOOLS_NAMESPACE" -f -
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: dex-proxy
spec:
  virtualhost:
    fqdn: $dex_host
    tls:
      passthrough: true
  tcpproxy:
    services:
      - name: dex
        port: 443
EOF

# Check if the Dex hostname is defined in /etc/hosts.
dex_host_missing=no
if ! grep -q "$dex_host" /etc/hosts; then
  dex_host_missing=yes
fi
if [[ "$dex_host_missing" == "yes" ]]; then
  echo
  log_error "Please run this command to edit /etc/hosts, and then run this script again with the same options."
  echo "sudo bash -c \"echo '127.0.0.1 $dex_host' >> /etc/hosts\""
  log_error "When you are finished with your Kind cluster, you can remove this line from /etc/hosts."
  exit 1
fi

# Create the JWTAuthenticator.
cat <<EOF | kubectl apply -f - 1>&2
kind: JWTAuthenticator
apiVersion: authentication.concierge.pinniped.dev/v1alpha1
metadata:
  name: my-jwt-authenticator
spec:
  issuer: $PINNIPED_TEST_CLI_OIDC_ISSUER
  tls:
    certificateAuthorityData: $PINNIPED_TEST_CLI_OIDC_ISSUER_CA_BUNDLE
  audience: $PINNIPED_TEST_CLI_OIDC_CLIENT_ID
  claims:
    username: $PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_USERNAME_CLAIM
    groups: $PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_GROUPS_CLAIM
EOF

# Clear the local CLI cache to ensure that commands run after this script will need to perform a fresh login.
rm -f "$HOME/.config/pinniped/sessions.yaml"
rm -f "$HOME/.config/pinniped/credentials.yaml"

# Build the CLI.
go build ./cmd/pinniped

# Use the CLI to get a kubeconfig that will use this JWTAuthenticator.
# Note that port 48095 is configured in Dex as part of the allowed redirect URI for this client.
./pinniped get kubeconfig \
  --oidc-client-id "$PINNIPED_TEST_CLI_OIDC_CLIENT_ID" \
  --oidc-scopes "openid,offline_access,$PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_USERNAME_CLAIM,$PINNIPED_TEST_SUPERVISOR_UPSTREAM_OIDC_GROUPS_CLAIM" \
  --oidc-listen-port 48095 \
  >kubeconfig-jwtauthenticator.yaml

echo "When prompted for username and password, use these values:"
echo " OIDC Username: $PINNIPED_TEST_CLI_OIDC_USERNAME"
echo " OIDC Password: $PINNIPED_TEST_CLI_OIDC_PASSWORD"
echo

echo "To log in using OIDC, run:"
echo "PINNIPED_DEBUG=true ./pinniped whoami --kubeconfig ./kubeconfig-jwtauthenticator.yaml"
echo
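After the script completes, a quick sanity check might look like the following sketch (this assumes your current kubectl context points at the kind cluster and that the same environment file has been sourced; the resource names match those created above):

```sh
# Confirm the JWTAuthenticator was created (in versions that report a status,
# its conditions can also be inspected in this output).
kubectl get jwtauthenticator my-jwt-authenticator -o yaml

# Confirm the Contour HTTPProxy that provides ingress for Dex.
kubectl get httpproxy dex-proxy -n "$PINNIPED_TEST_TOOLS_NAMESPACE"
```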
6 changes: 2 additions & 4 deletions pkg/oidcclient/login.go
@@ -354,12 +354,10 @@ func Login(issuer string, clientID string, opts ...Option) (*oidctypes.Token, er
func (h *handlerState) needRFC8693TokenExchange(token *oidctypes.Token) bool {
// Need a new ID token if there is a requested audience value and any of the following are true...
return h.requestedAudience != "" &&
// we don't have an ID token
// we don't have an ID token (maybe it expired or was otherwise removed from the session cache)
(token.IDToken == nil ||
// or, our current ID token has expired or is close to expiring
idTokenExpiredOrCloseToExpiring(token.IDToken) ||
// or, our current ID token has a different audience
(h.requestedAudience != token.IDToken.Claims["aud"]))
h.requestedAudience != token.IDToken.Claims["aud"])
}

func (h *handlerState) tokenValidForNearFuture(token *oidctypes.Token) (bool, string) {
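In other words, the exchange decision above is now driven purely by the requested audience, while expiry is handled by the separate refresh path. A minimal, self-contained sketch of the resulting behavior, using hypothetical simplified types rather than the real Pinniped ones:

```go
package main

import (
	"fmt"
	"time"
)

// idToken is a hypothetical, simplified stand-in for the cached ID token.
type idToken struct {
	expiry time.Time
	claims map[string]interface{}
}

const minIDTokenValidity = 10 * time.Minute // illustrative threshold, not the real constant

// needsTokenExchange mirrors the revised predicate: it is driven only by the
// requested audience, never by expiry.
func needsTokenExchange(requestedAudience string, tok *idToken) bool {
	return requestedAudience != "" &&
		(tok == nil || requestedAudience != tok.claims["aud"])
}

// Expiry is handled separately: a near-expiry token triggers a refresh, which
// preserves the token's audience, instead of an RFC 8693 token exchange.
func needsRefresh(tok *idToken) bool {
	return tok != nil && time.Until(tok.expiry) < minIDTokenValidity
}

func main() {
	tok := &idToken{
		expiry: time.Now().Add(9 * time.Minute), // close to expiring
		claims: map[string]interface{}{"aud": "my-audience"},
	}
	fmt.Println(needsTokenExchange("my-audience", tok)) // false: audience already matches
	fmt.Println(needsRefresh(tok))                      // true: refresh, don't re-exchange
}
```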
54 changes: 43 additions & 11 deletions pkg/oidcclient/login_test.go
@@ -75,6 +75,8 @@ func newClientForServer(server *httptest.Server) *http.Client {
}

func TestLogin(t *testing.T) { //nolint:gocyclo
fakeUniqueTime := time.Now().Add(6 * time.Minute).Add(6 * time.Second)

distantFutureTime := time.Date(2065, 10, 12, 13, 14, 15, 16, time.UTC)

testCodeChallenge := testutil.SHA256("test-pkce")
@@ -1985,7 +1987,7 @@ func TestLogin(t *testing.T) { //nolint:gocyclo
},
},
{
name: "with requested audience, session cache hit with valid access token, ID token already has the requested audience, but ID token is expired",
name: "with requested audience, session cache hit with valid access token, ID token already has the requested audience, but ID token is expired, causes a refresh and uses refreshed ID token",
issuer: successServer.URL,
clientID: "test-client-id",
opt: func(t *testing.T) Option {
@@ -1995,7 +1997,7 @@ func TestLogin(t *testing.T) { //nolint:gocyclo
IDToken: &oidctypes.IDToken{
Token: testToken.IDToken.Token,
Expiry: metav1.NewTime(time.Now().Add(9 * time.Minute)), // less than Now() + minIDTokenValidity
Claims: map[string]interface{}{"aud": "request-this-test-audience"},
Claims: map[string]interface{}{"aud": "test-custom-request-audience"},
},
RefreshToken: testToken.RefreshToken,
}}
@@ -2006,26 +2008,56 @@ func TestLogin(t *testing.T) { //nolint:gocyclo
Scopes: []string{"test-scope"},
RedirectURI: "http://localhost:0/callback",
}}, cache.sawGetKeys)
require.Empty(t, cache.sawPutTokens)
require.Len(t, cache.sawPutTokens, 1)
// want to have cached the refreshed ID token
require.Equal(t, &oidctypes.IDToken{
Token: testToken.IDToken.Token,
Expiry: metav1.NewTime(fakeUniqueTime),
Claims: map[string]interface{}{"aud": "test-custom-request-audience"},
}, cache.sawPutTokens[0].IDToken)
})
require.NoError(t, WithClient(newClientForServer(successServer))(h))
require.NoError(t, WithSessionCache(cache)(h))
require.NoError(t, WithRequestAudience("request-this-test-audience")(h))
require.NoError(t, WithRequestAudience("test-custom-request-audience")(h))

h.validateIDToken = func(ctx context.Context, provider *oidc.Provider, audience string, token string) (*oidc.IDToken, error) {
require.Equal(t, "request-this-test-audience", audience)
require.Equal(t, "test-id-token-with-requested-audience", token)
return &oidc.IDToken{Expiry: testExchangedToken.IDToken.Expiry.Time}, nil
h.getProvider = func(config *oauth2.Config, provider *oidc.Provider, client *http.Client) upstreamprovider.UpstreamOIDCIdentityProviderI {
mock := mockUpstream(t)
mock.EXPECT().
ValidateTokenAndMergeWithUserInfo(gomock.Any(), HasAccessToken(testToken.AccessToken.Token), nonce.Nonce(""), true, false).
Return(&oidctypes.Token{
AccessToken: testToken.AccessToken,
IDToken: &oidctypes.IDToken{
Token: testToken.IDToken.Token,
Expiry: metav1.NewTime(fakeUniqueTime), // less than Now() + minIDTokenValidity but does not matter because this is a freshly refreshed ID token
Claims: map[string]interface{}{"aud": "test-custom-request-audience"},
},
RefreshToken: testToken.RefreshToken,
}, nil)
mock.EXPECT().
PerformRefresh(gomock.Any(), testToken.RefreshToken.Token).
DoAndReturn(func(ctx context.Context, refreshToken string) (*oauth2.Token, error) {
// Call the real production code to perform a refresh.
return upstreamoidc.New(config, provider, client).PerformRefresh(ctx, refreshToken)
})
return mock
}
return nil
}
},
wantLogs: []string{
`"level"=4 "msg"="Pinniped: Found unexpired cached token." "type"="access_token"`,
`"level"=4 "msg"="Pinniped: Performing RFC8693 token exchange" "requestedAudience"="request-this-test-audience"`,
`"level"=4 "msg"="Pinniped: Performing OIDC discovery" "issuer"="` + successServer.URL + `"`,
`"level"=4 "msg"="Pinniped: Refreshing cached tokens."`,
},
// want to have returned the refreshed tokens
wantToken: &oidctypes.Token{
AccessToken: testToken.AccessToken,
IDToken: &oidctypes.IDToken{
Token: testToken.IDToken.Token,
Expiry: metav1.NewTime(fakeUniqueTime),
Claims: map[string]interface{}{"aud": "test-custom-request-audience"},
},
RefreshToken: testToken.RefreshToken,
},
wantToken: &testExchangedToken,
},
{
name: "with requested audience, session cache hit with valid access token, but no ID token",
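The test stubs `h.getProvider` with a gomock mock, using `DoAndReturn` to delegate the refresh to real production code. For readers unfamiliar with that pattern, here is a hand-rolled sketch of the same stubbing idea against a hypothetical trimmed-down interface (the real one is `upstreamprovider.UpstreamOIDCIdentityProviderI`):

```go
package main

import (
	"context"
	"fmt"
)

// provider is a hypothetical stand-in for the interface the login handler
// consults when it needs to refresh tokens.
type provider interface {
	PerformRefresh(ctx context.Context, refreshToken string) (string, error)
}

// fakeProvider records calls and returns canned values, so a test can assert
// that the login flow performed a refresh rather than a token exchange.
type fakeProvider struct {
	refreshCalls []string
}

func (f *fakeProvider) PerformRefresh(_ context.Context, refreshToken string) (string, error) {
	f.refreshCalls = append(f.refreshCalls, refreshToken)
	// A gomock DoAndReturn could instead delegate to real production code here.
	return "refreshed-access-token", nil
}

func main() {
	fake := &fakeProvider{}
	var p provider = fake // injected where the real code would build a provider

	tok, err := p.PerformRefresh(context.Background(), "cached-refresh-token")
	fmt.Println(tok, err, fake.refreshCalls)
	// Output: refreshed-access-token <nil> [cached-refresh-token]
}
```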
30 changes: 22 additions & 8 deletions site/content/docs/howto/concierge/configure-concierge-jwt.md
@@ -44,6 +44,9 @@ spec:
  audience: my-client-id
  claims:
    username: email
    # Note that you may also want to configure a groups claim here,
    # if your OIDC provider supports putting one into ID tokens.
    # See the "Including group membership" section below for more details.
```

If you've saved this into a file `my-jwt-authenticator.yaml`, then install it into your cluster using:
@@ -59,17 +62,28 @@ Generate a kubeconfig file to target the JWTAuthenticator:
```sh
pinniped get kubeconfig \
  --oidc-client-id my-client-id \
  --oidc-scopes openid,email \
  --oidc-scopes openid,offline_access,email \
  --oidc-listen-port 12345 \
  > my-cluster.yaml
  > my-kubeconfig.yaml
```

Note that the value for the `--oidc-client-id` flag must be your OIDC client's ID, which must also be the same
value declared as the `audience` in the JWTAuthenticator.

This creates a kubeconfig YAML file `my-cluster.yaml` that targets your JWTAuthenticator using `pinniped login oidc` as an [ExecCredential plugin](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins).
Also note that you may need different scopes in the `--oidc-scopes` list, depending on your OIDC provider.
Please refer to the documentation for your OIDC provider. A quick way to inspect the claims your provider
actually returned is sketched after this list.
- Most providers will require you to include `openid` in this list at a minimum.
- You may need to add `offline_access` (or a similar scope) to ask your provider to also return a refresh token.
If your provider can return refresh tokens, the Pinniped CLI will use them to automatically refresh expired ID
tokens without any need for user interaction, until the refresh token stops working.
- In the example above, the `email` scope asks the provider to return an `email` claim in the ID token
whose value will be the user's email address. Most providers support this scope.
- You might need a scope to ask the provider to return a groups claim (see the "Including group membership"
section below for more details).
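For example, once logged in you can decode the ID token's payload to confirm which claims your provider returned. This is a minimal sketch assuming the raw JWT is in an `ID_TOKEN` environment variable and that `jq` is installed; neither is set up by Pinniped itself:

```sh
# Decode the middle (payload) segment of the JWT. JWTs use base64url, so
# translate the alphabet and restore padding before decoding.
# Note: on some platforms the decode flag is `base64 -D` instead of `-d`.
payload="$(printf '%s' "$ID_TOKEN" | cut -d. -f2 | tr '_-' '/+')"
while [ $(( ${#payload} % 4 )) -ne 0 ]; do payload="${payload}="; done
printf '%s' "$payload" | base64 -d | jq .
```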

It should look something like below:
The above command creates a kubeconfig YAML file `my-kubeconfig.yaml` that targets your JWTAuthenticator using `pinniped login oidc` as an [ExecCredential plugin](https://kubernetes.io/docs/reference/access-authn-authz/authentication/#client-go-credential-plugins).

The contents of that kubeconfig file should look something like below:

```yaml
apiVersion: v1
@@ -112,7 +126,7 @@ users:
Use the kubeconfig with `kubectl` to access your cluster:

```sh
kubectl --kubeconfig my-cluster.yaml get namespaces
kubectl --kubeconfig my-kubeconfig.yaml get namespaces
```

You should see:
@@ -182,7 +196,7 @@ pinniped get kubeconfig \
  --oidc-client-id my-client-id \
  --oidc-scopes openid,email,groups \
  --oidc-listen-port 12345 \
  > my-cluster.yaml
  > my-kubeconfig.yaml
```

### Use the kubeconfig file
@@ -195,14 +209,14 @@ Use the kubeconfig with `kubectl` to access your cluster, as before:
rm -rf ~/.config/pinniped

# Log in again by issuing a kubectl command.
kubectl --kubeconfig my-cluster.yaml get namespaces
kubectl --kubeconfig my-kubeconfig.yaml get namespaces
```

To see the username and group membership as understood by the Kubernetes cluster, you can use
this command:

```sh
pinniped whoami --kubeconfig my-cluster.yaml
pinniped whoami --kubeconfig my-kubeconfig.yaml
```

If your groups configuration worked, then you should see your list of group names from your OIDC provider