Hello,
I would like to configure access to the Kubernetes API (Amazon EKS v1.34) via Pomerium, following the "Kubernetes `kubectl` Integration | Pomerium" doc and the User Impersonation method of authenticating to the Kubernetes API server.
What did you expect to happen?
When running any kubectl command, I should get access to the EKS cluster through Pomerium.
How’d it happen?
When running any kubectl command, for example
kubectl get pod
It opens a browser, Pomerium confirms "login complete, you may close this page", and then I get an error:
Unable to connect to the server: getting credentials: exec plugin is configured to use API version client.authentication.k8s.io/v1, plugin returned version client.authentication.k8s.io/v1beta1
What’s your environment like?
- Pomerium docker image version: latest
- pomerium-cli version v0.32.0-1769557085+6760b68
- kubectl client Version: v1.34.0
- Amazon EKS v1.34
- Pomerium run in AWS ECS Fargate
Security groups are configured so that Pomerium has access to the EKS API endpoint.
SA and RBAC:
# ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: pomerium
  namespace: default
---
# ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pomerium
rules:
- apiGroups:
  - ''
  resources:
  - users
  - groups
  - serviceaccounts
  verbs:
  - impersonate
- apiGroups:
  - 'authorization.k8s.io'
  resources:
  - selfsubjectaccessreviews
  verbs:
  - create
- apiGroups:
  - 'authentication.k8s.io'
  resources:
  - selfsubjectreviews
  verbs:
  - create
---
# ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pomerium
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: pomerium
subjects:
- kind: ServiceAccount
  name: pomerium
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-admin-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: myuser@mycorp.com
---
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: pomerium-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: pomerium
Get the Pomerium service account token, which will be used in the Pomerium route (quoting the jsonpath expression keeps the shell from interpreting the braces):
kubectl get secret pomerium-token -o jsonpath='{.data.token}' | base64 -d
Pomerium route:
- from: https://my-eks-qa.pomerium-example.net # DNS record exists
  to: tcp+https://111111111111.eks.amazonaws.com # EKS endpoint
  kubernetes_service_account_token: SECRET_SERVICE_ACCOUNT_TOKEN
  tls_skip_verify: true
  allow_spdy: true
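If skipping TLS verification is undesirable, Pomerium's per-route `tls_custom_ca_file` option can pin the EKS cluster CA instead of `tls_skip_verify` — a sketch, assuming the cluster's certificate-authority data has been base64-decoded and written to a file in the Pomerium container (the path is hypothetical):

```yaml
- from: https://my-eks-qa.pomerium-example.net
  to: tcp+https://111111111111.eks.amazonaws.com
  kubernetes_service_account_token: SECRET_SERVICE_ACCOUNT_TOKEN
  tls_custom_ca_file: /etc/pomerium/eks-ca.pem # hypothetical path to the EKS CA
  allow_spdy: true
```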
kube config:
apiVersion: v1
kind: Config
clusters:
- cluster:
    server: https://my-eks-qa.pomerium-example.net
  name: my-eks-qa
contexts:
- context:
    cluster: my-eks-qa
    user: via-pomerium
  name: my-eks-qa
current-context: my-eks-qa
users:
- name: via-pomerium
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1
      command: pomerium-cli
      provideClusterInfo: false
      interactiveMode: IfAvailable
      args:
      - k8s
      - exec-credential
      - https://my-eks-qa.pomerium-example.net
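Since the reported error says the plugin returned `client.authentication.k8s.io/v1beta1` while this exec block declares `v1`, one possible workaround (assuming pomerium-cli offers no flag to force `v1` output) is to declare `v1beta1` here so both sides agree — kubectl v1.34 still accepts `v1beta1` for exec plugins:

```yaml
users:
- name: via-pomerium
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1 # match what pomerium-cli emits
      command: pomerium-cli
      provideClusterInfo: false
      interactiveMode: IfAvailable
      args:
      - k8s
      - exec-credential
      - https://my-eks-qa.pomerium-example.net
```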
What did you see in the logs?
No errors in the Pomerium logs.
Additional context
I tried to get a Pomerium token by running the following command:
pomerium-cli k8s exec-credential https://my-eks-qa.pomerium-example.net
Your browser has been opened to visit:
https://authenticate.pomerium....
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "status": {
    "expirationTimestamp": "2026-02-24T17:14:04+02:00",
    "token": "Pomerium-SECRET_TOKEN........"
  }
}
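The version check that kubectl performs can be reproduced locally by comparing the apiVersion in the plugin's JSON against the one declared in the kubeconfig; here the plugin output is stubbed with the JSON shown above:

```shell
# Stub of the ExecCredential JSON that pomerium-cli printed (single line).
cred='{"kind":"ExecCredential","apiVersion":"client.authentication.k8s.io/v1beta1","status":{"token":"Pomerium-..."}}'

# Pull the apiVersion field out of the JSON with sed (no jq required).
plugin_version=$(printf '%s' "$cred" | sed -n 's/.*"apiVersion":"\([^"]*\)".*/\1/p')

# The apiVersion declared in the kubeconfig exec block.
configured_version="client.authentication.k8s.io/v1"

# kubectl refuses the credential when these two values differ.
if [ "$plugin_version" != "$configured_version" ]; then
  echo "mismatch: kubeconfig wants $configured_version, plugin returned $plugin_version"
fi
```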
So it looks like pomerium-cli is returning apiVersion client.authentication.k8s.io/v1beta1, which does not match the client.authentication.k8s.io/v1 declared in the kubeconfig exec block.
Checked kubectl api-versions in EKS:
kubectl api-versions | grep auth
authentication.k8s.io/v1
authorization.k8s.io/v1
So our EKS cluster does not have client.authentication.k8s.io/v1beta1.
The Pomerium proxy to EKS also works:
pomerium-cli proxy --proxy-domain pomerium-example.net --pomerium-url https://my-eks-qa.pomerium-example.net
{"level":"info","time":"2026-02-25T11:42:44+02:00","message":"Proxy running at 127.0.0.1:3128"}
curl -k --proxy http://127.0.0.1:3128 https://1111111111111.us-east-1.eks.amazonaws.com
{
"kind": "Status",
"apiVersion": "v1",
"metadata": {},
"status": "Failure",
"message": "Unauthorized",
"reason": "Unauthorized",
"code": 401
}
That's a 401 coming from Kubernetes, so everything works at the network level.
I would be very grateful for any ideas on how to fix this.
Thanks!