Using the pomerium-cli proxy command with Azure AKS
I wanted to put in a plug for the new `pomerium-cli proxy` command that was added in 0.17.3. (Related to #1837.) The new `proxy` command allows `kubectl` and `helm` to access a private Azure Kubernetes Service cluster through Pomerium.
Pros / Cons
Pomerium has its own full-fledged solution for authenticating access to Kubernetes clusters, which uses a service account for impersonation, but I didn't want to replace AKS's Azure AD integration. I wanted to layer Pomerium on as extra network-level protection rather than replacing AKS authorization entirely.
Configs
Pomerium config:
```yaml
routes:
  - from: tcp+https://examplecluster-12345678.pomerium.example.com:8000
    # For "to", use the API server address from the Azure portal, adding "tcp://" and ":443"
    to: tcp://examplecluster-12345678.hcp.exampleregion.azmk8s.io:443
    policy:
      - allow:
          or:
            - groups:
                has: "examplegroup"
```
The port `:8000` in the `from` clause is a bit of a hack, explained in the pull request. From the client's perspective it will be port 443.
Launch the proxy:
```shell
pomerium-cli proxy --proxy-domain pomerium.example.com --pomerium-url https://pomerium.example.com
```
Test with curl first. When you run this, `pomerium-cli` should open your browser to authenticate to Pomerium. You should then get a 401 Unauthorized back from the AKS cluster:

```shell
curl -k --proxy http://127.0.0.1:3128 https://examplecluster-12345678.pomerium.example.com
```
```json
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "Unauthorized",
  "reason": "Unauthorized",
  "code": 401
}
```
That’s a good sign, since it’s coming from Kubernetes.
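If you want to script this check, the body is a standard Kubernetes `Status` object, so it can be parsed to distinguish an API-server 401 from, say, a Pomerium HTML error page. A minimal Python sketch (the response body is assumed to have been captured from the curl call above):

```python
import json

def is_apiserver_unauthorized(body: str) -> bool:
    """Return True if the body is a Kubernetes Status object reporting
    a 401, i.e. the request actually reached the API server rather than
    being rejected earlier (e.g. by Pomerium with an HTML error page)."""
    try:
        status = json.loads(body)
    except json.JSONDecodeError:
        return False  # non-JSON bodies did not come from the API server
    return (
        status.get("kind") == "Status"
        and status.get("reason") == "Unauthorized"
        and status.get("code") == 401
    )

body = (
    '{"kind": "Status", "apiVersion": "v1", "metadata": {}, '
    '"status": "Failure", "message": "Unauthorized", '
    '"reason": "Unauthorized", "code": 401}'
)
print(is_apiserver_unauthorized(body))  # → True
```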
Set up your `~/.kube/config` as usual using `az aks get-credentials`. To route traffic through the proxy, edit `~/.kube/config` and find your `cluster:` entry.
```yaml
clusters:
  - cluster:
      certificate-authority-data: ...
      server: https://examplecluster-12345678.hcp.exampleregion.azmk8s.io:443
    name: examplecluster
```
Make the following changes:

- Change `server` to your internal Pomerium `tcp+https` route from the `from` block.
- Add `proxy-url` with your local proxy.
- Add `tls-server-name` with the real server name.
```yaml
clusters:
  - cluster:
      certificate-authority-data: ...
      server: https://examplecluster-12345678.pomerium.example.com:443
      proxy-url: http://127.0.0.1:3128
      tls-server-name: examplecluster-12345678.hcp.exampleregion.azmk8s.io
    name: examplecluster
```
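If you manage several clusters, the three edits can be scripted. Here's a minimal Python sketch that operates on an already-parsed kubeconfig dict (in practice you'd load and dump the file with PyYAML; the function name and hostnames are illustrative, matching the example above):

```python
def route_through_pomerium(kubeconfig: dict, cluster_name: str,
                           pomerium_route: str,
                           proxy_url: str = "http://127.0.0.1:3128") -> dict:
    """Point one cluster entry at the Pomerium TCP route, keeping TLS
    verification pinned to the real API server hostname."""
    for entry in kubeconfig["clusters"]:
        if entry["name"] != cluster_name:
            continue
        cluster = entry["cluster"]
        # Remember the real API server hostname before overwriting "server".
        real_host = cluster["server"].split("//")[1].rsplit(":", 1)[0]
        cluster["server"] = pomerium_route       # the tcp+https route
        cluster["proxy-url"] = proxy_url         # local pomerium-cli proxy
        cluster["tls-server-name"] = real_host   # real API server name
    return kubeconfig

cfg = {"clusters": [{"cluster": {
    "certificate-authority-data": "...",
    "server": "https://examplecluster-12345678.hcp.exampleregion.azmk8s.io:443"},
    "name": "examplecluster"}]}
route_through_pomerium(
    cfg, "examplecluster",
    "https://examplecluster-12345678.pomerium.example.com:443")
```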
With this configuration, `kubectl get nodes`, `kubectl exec`, `kubectl logs -f`, etc. should all work, as should `helm upgrade`. Each request will be a bit slower than accessing the cluster directly or over a VPN, but it works.