S3 audit sessions backend seems broken on kubernetes deploy #28744

Open

Description

Expected behavior:
Upload recordings of audit sessions to s3 buckets (or compatible API)

Current behavior:
The upload subsystem does not seem to be able to build or send the "Authorization" header properly.

When starting the Teleport auth service:

2023-07-05T22:47:47Z INFO [S3]        Setting up bucket "{TELEPORT_BUCKET}", sessions path "/records" in region "us-east-1". s3sessions/s3handler.go:219
2023-07-05T22:47:47Z ERRO [S3]        "Failed to ensure that bucket \"{TELEPORT_BUCKET}\" exists (RequestError: send request failed\ncaused by: Head \"https://s3.us-east-1.wasabisys.com/{TELEPORT_BUCKET}\": net/http: invalid header field value for \"Authorization\"). S3 session uploads may fail. If you've set up the bucket already and gave Teleport write-only access, feel free to ignore this error." s3sessions/s3handler.go:387

The same Authorization header error keeps appearing whenever an upload is triggered:

2023-07-05T22:50:39Z INFO [S3]        Upload created in 350.517308ms. s3sessions/s3stream.go:42
2023-07-05T22:50:39Z ERRO [AUTH:GRPC] Failed to create audit stream: "RequestError: send request failed\ncaused by: Post \"https://s3.us-east-1.wasabisys.com/{TELEPORT_BUCKET}/recordings/aaaaa-bbbb-ccccc-dddd.tar?uploads=\": net/http: invalid header field value for \"Authorization\"". auth/grpcserver.go:250
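
For context, this is the generic Go net/http behavior rather than anything Teleport-specific: the transport validates header values before sending and refuses any value containing control characters (a trailing newline in a stored credential is the classic case). A minimal standalone sketch, independent of Teleport, that produces the exact same message:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest(http.MethodHead, "https://s3.us-east-1.wasabisys.com/", nil)
	if err != nil {
		panic(err)
	}
	// A signed Authorization header built from a credential ending in "\n"
	// carries that control character along; net/http rejects it before the
	// request ever reaches the network.
	req.Header.Set("Authorization", "AWS4-HMAC-SHA256 Credential=EXAMPLE\n")
	_, err = http.DefaultClient.Do(req)
	fmt.Println(err)
	// Head "https://s3.us-east-1.wasabisys.com/": net/http: invalid header field value for "Authorization"
}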

Current teleport-cluster.yaml values:

chartMode: standalone 
clusterName: {TELEPORT_DOMAIN}
extraArgs:
- --insecure
extraEnv:
- name: AWS_REGION
  value: us-east-1
- name: AWS_ACCESS_KEY_ID
  valueFrom:
    secretKeyRef:
      key: access-key-id
      name: teleport-credentials
      optional: false
- name: AWS_SECRET_ACCESS_KEY
  valueFrom:
    secretKeyRef:
      key: secret-access-key
      name: teleport-credentials
      optional: false
auth:
  teleportConfig:
    teleport:
      storage:
        type: etcd
        peers: ["http://etcd.teleport-cluster.svc.cluster.local:2379"]
        prefix: /teleport
        insecure: true
        region: us-east-1
        audit_sessions_uri: "s3://{TELEPORT_BUCKET}/recordings?endpoint=s3.us-east-1.wasabisys.com&disablesse=true"
proxy:
  teleportConfig:
    teleport:
      storage:
        type: etcd
        peers: ["http://etcd.teleport-cluster.svc.cluster.local:2379"]
        prefix: /teleport
        insecure: true
        region: us-east-1
        audit_sessions_uri: "s3://{TELEPORT_BUCKET}/recordings?endpoint=s3.us-east-1.wasabisys.com&disablesse=true"
highAvailability:
  certManager:
    enabled: true
    issuerKind: ClusterIssuer
    issuerName: letsencrypt-prod
  replicaCount: 1
kubeClusterName: k8s-cluster
log:
  level: DEBUG
persistence:
  enabled: false
proxyListenerMode: multiplex
publicAddr:
  - {TELEPORT_DOMAIN}:443
service:
  type: NodePort
sessionRecording: proxy

At the beginning I thought it was due to the bucket not being on AWS, but I attempted to reproduce this against an AWS S3 account and against MinIO as well; neither works, and both fail with the same error.
I'm certain that the credentials are properly passed over, since removing the environment variables makes Teleport complain about them.
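
That said, "passed over" and "clean" are not quite the same thing: a value with stray leading or trailing whitespace still satisfies the presence check but would corrupt the signed header. A small hypothetical checker (my own sketch, not part of Teleport) that could be compiled and run with the same environment to rule this out:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Inspect the credential env vars for whitespace/control characters
	// that would make an HTTP header value invalid.
	for _, name := range []string{"AWS_ACCESS_KEY_ID", "AWS_SECRET_ACCESS_KEY"} {
		v := os.Getenv(name)
		switch {
		case v == "":
			fmt.Printf("%s is not set\n", name)
		case v != strings.TrimSpace(v):
			fmt.Printf("%s has leading or trailing whitespace (length %d)\n", name, len(v))
		default:
			fmt.Printf("%s looks clean (length %d)\n", name, len(v))
		}
	}
}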

Bug details:

  • Teleport version: 13.1.5; also confirmed broken on 13.1.0
  • Recreation steps: the error triggers right after running tsh ssh into a server

Labels: audit-log (Issues related to Teleport's Audit Log), aws (Used for AWS related issues), bug, helm, regression
