External Domain configuration

I am trying to use Pomerium to provide AuthN for internal-facing apps, starting with Prometheus. I will also be integrating it with Argo CD, so there is some potential for upgrading to the SAML use case.

What happened?

I am unable to get the Kubernetes tutorial to work with an external-facing domain. I also tried cert-manager’s Pomerium ingress tutorial, but that didn’t work either.

Specifically, loading hello.localhost.pomerium.io results in a refused connection. However, I am not primarily trying to use hello.localhost.pomerium.io, but rather my own domain, with the subdomains authenticate.dev.sw.io and verify.dev.sw.io pointed to the Load Balancer created by pomerium-proxy.

What did you expect to happen?

I expected to be able to follow the tutorial successfully, substituting my own domain for localhost.pomerium.io. This is an example of a good intention gone wrong: *.localhost.pomerium.io is aliased to localhost, and while this makes the Docker example work, it detracts from the Kubernetes setup by leaving out the configuration details needed for a custom domain.

How’d it happen?

I went through the steps as documented, substituting dev.sw.io (abbreviated here) for localhost.pomerium.io.

What’s your environment like?

  • Pomerium version (retrieve with pomerium --version): Current Helm chart - pomerium-30.1.1
  • Server Operating System/Architecture/Cloud: EKS K8s 1.21 with Google IDP

What’s your config.yaml?

No config.yaml is present since I am using Helm; the Helm values are below.

In pomerium-certificates.yaml

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-cert
  namespace: pomerium
spec:
  secretName: pomerium-tls
  issuerRef:
    name: pomerium-issuer
    kind: Issuer
  usages:
    - server auth
    - client auth
  dnsNames:
    - pomerium-proxy.pomerium.svc.cluster.local
    - pomerium-authorize.pomerium.svc.cluster.local
    - pomerium-databroker.pomerium.svc.cluster.local
    - pomerium-authenticate.pomerium.svc.cluster.local
    - authenticate.dev.sw.io

In values.yaml

authenticate:
  existingTLSSecret: pomerium-tls
  idp:
    provider: "google"
    clientID: ${client_id}
    clientSecret: ${client_secret}

forwardAuth:
  enabled: false

ingressController:
  enabled: true

config:
  # routes under this wildcard domain are handled by pomerium
  rootDomain: dev.sw.io
  existingCASecret: pomerium-tls
  generateTLS: true
  insecure: false
#  routes:
#    - from: https://verify.dev.sw.io
#      to: https://verify:80
#      allowed_domains:
#        - sensibleweather.com
#        - sensibleweather.io

proxy:
  existingTLSSecret: pomerium-tls

databroker:
  existingTLSSecret: pomerium-tls
  storage:
    connectionString: rediss://pomerium-redis-master.pomerium.svc.cluster.local
    type: redis
    clientTLS:
      existingSecretName: pomerium-tls
      existingCASecretKey: ca.crt

authorize:
  existingTLSSecret: pomerium-tls

redis:
  enabled: true
  auth:
    enabled: false
  usePassword: false
  generateTLS: false
  tls:
    certificateSecret: pomerium-redis-tls

What did you see in the logs?

pomerium-proxy

{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"authenticate.dev.sensibleweather.io","path":"/","user-agent":"Mozilla/5.0 (Linux; U; Android 4.4.2; en-us; SCH-I535 Build/KOT49H) AppleWebKit/534.30 (KHTML, like Gecko) Version/4.0 Mobile Safari/534.30","referer":"","forwarded-for":"10.0.29.79","request-id":"47b293b3-5dd3-447d-934d-484cf49ca950","duration":0.175072,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-21T17:15:19Z","message":"http-request"}
{"level":"info","syncer_id":"databroker","syncer_type":"type.googleapis.com/pomerium.config.Config","time":"2022-03-21T17:15:50Z","message":"initial sync"}
{"level":"error","error":"rpc error: code = Unknown desc = cryptutil: decryption failed (mismatched keys?): chacha20poly1305: message authentication failed","time":"2022-03-21T17:15:50Z","message":"error during initial sync"}
{"level":"error","error":"rpc error: code = Unknown desc = cryptutil: decryption failed (mismatched keys?): chacha20poly1305: message authentication failed","time":"2022-03-21T17:15:50Z","message":"sync"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"verify.dev.sensibleweather.io","path":"/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.19.198","request-id":"656fedc6-2279-4a19-8cfa-586e81ef2cb1","duration":0.175444,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-21T17:16:19Z","message":"http-request"}

redis

1:M 21 Mar 2022 16:16:27.362 * Replica pomerium-redis-replicas-2.:6379 asks for synchronization
1:M 21 Mar 2022 16:16:27.362 * Full resync requested by replica pomerium-redis-replicas-2.:6379
1:M 21 Mar 2022 16:16:27.362 * Starting BGSAVE for SYNC with target: disk
1:M 21 Mar 2022 16:16:27.362 * Background saving started by pid 272
272:C 21 Mar 2022 16:16:27.366 * DB saved on disk
272:C 21 Mar 2022 16:16:27.367 * RDB: 0 MB of memory used by copy-on-write
1:M 21 Mar 2022 16:16:27.383 * Background saving terminated with success
1:M 21 Mar 2022 16:16:27.384 * Synchronization with replica pomerium-redis-replicas-2.:6379 succeeded

Additional context

This step is very vague:
“If you are installing Pomerium with a valid domain name and certificates, update your DNS records to point to the external IP address of the pomerium-proxy service.”

Should this be the authenticate.dev.sw.io domain? Or the domain that we are redirecting to?

Also, with the following:

ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"example.com"}}]}}]'

Should this example.com also be the authenticate.dev.sw.io domain? I’m confused by these inconsistent and unexplained indications of how to get this working with my own domain.

Edit: I have the exact same problem with the kuard ingress in the cert-manager tutorial: http://kuard.localhost.pomerium.io also gets a connection refused error.

Any help would be much appreciated.

It looks like you’re using the pomerium TLS certificate for the redis connection:

databroker:
  storage:
    clientTLS:
      existingSecretName: pomerium-tls
      existingCASecretKey: ca.crt

But redis uses a different TLS certificate:

redis:
  tls:
    certificateSecret: pomerium-redis-tls
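
Something along these lines should line the two up (a sketch based on the values you posted; adjust names to match your chart values):

databroker:
  existingTLSSecret: pomerium-tls          # Pomerium's own TLS cert, unchanged
  storage:
    type: redis
    connectionString: rediss://pomerium-redis-master.pomerium.svc.cluster.local
    clientTLS:
      existingSecretName: pomerium-redis-tls   # the redis TLS secret, not pomerium-tls
      existingCASecretKey: ca.crt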

Hi @calebdoxsey, thanks for your reply!

I have updated my TLS config, but I am still getting errors, this time a 500. I attempted to direct verify.dev.sw.io to the kuard service by pointing its DNS record at the pomerium-proxy and adding a route for it, but no luck.

authenticate:
  existingTLSSecret: pomerium-tls
  idp:
    provider: "google"
    clientID: ${client_id}
    clientSecret: ${client_secret}
  ingress:
    annotations:
      cert-manager.io/issuer: letsencrypt-staging
    tls:
      secretName: authenticate.localhost.pomerium.io-tls

proxy:
  existingTLSSecret: pomerium-tls

databroker:
  existingTLSSecret: pomerium-tls
  storage:
    clientTLS:
      existingSecretName: pomerium-redis-tls
      existingCASecretKey: ca.crt

authorize:
  existingTLSSecret: pomerium-tls

redis:
  enabled: true
  generateTLS: false
  tls:
    certificateSecret: pomerium-redis-tls

ingressController:
  enabled: true

config:
  rootDomain: dev.sw.io #Change this to your reserved domain space.
  existingCASecret: pomerium-tls
  generateTLS: false
  routes:
    - from: https://verify.dev.sw.io
      to: http://kuard.pomerium.cluster.svc.local

Pomerium-proxy Logs

{"level":"error","time":"2022-03-17T16:25:32Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"verify.dev.sw.io","error":"no information found to solve challenge for identifier: verify.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"verify.dev.sw.io","path":"/.well-known/acme-challenge/9be1AwlB_vmiSvUDfxVAgRgswqdzlmDy8RH0wXs5hJg","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://verify.dev.sw.io/.well-known/acme-challenge/9be1AwlB_vmiSvUDfxVAgRgswqdzlmDy8RH0wXs5hJg","forwarded-for":"10.0.40.4","request-id":"f4ec245f-9c01-4808-ac2b-85e34ab712e4","duration":0,"size":0,"response-code":0,"response-code-details":"downstream_remote_disconnect","time":"2022-03-17T16:25:33Z","message":"http-request"}

Test ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kuard
  annotations:
    cert-manager.io/issuer: letsencrypt-staging
    ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"localhost.pomerium.io"}}]}}]'
#    ingress.pomerium.io/secure_upstream: "true"
spec:
  ingressClassName: pomerium
  rules:
  - host: verify.dev.sw.io
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kuard
            port:
              number: 80
  tls:
    - hosts:
        - verify.dev.sw.io
      secretName: kuard.localhost.pomerium.io-tls

Any ideas?

Where I’m currently stuck: I have created DNS records for verify.dev.sw.io and authenticate.dev.sw.io pointing to the pomerium-proxy Load Balancer, but I am getting 404s with route_not_found in the pomerium-proxy logs.

{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"authenticate.dev.sw.io","path":"/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.29.79","request-id":"da48e291-cf5e-4889-8335-255635357a17","duration":0.186128,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-21T16:51:39Z","message":"http-request"}

Is there any way to see what routes the ingress controller created for the ingress?

Hi @saranicole. I see a potential cause of the problem:

You’ve defined your route twice, once in the yaml for Pomerium (config.routes), and once with an Ingress. If you’re using an Ingress, remove the route from Pomerium’s config.
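
With the Ingress handling the route, the config section would only carry the domain and TLS settings, roughly like this (a sketch based on the values you posted):

config:
  rootDomain: dev.sw.io
  existingCASecret: pomerium-tls
  generateTLS: false
  # no routes: block -- the pomerium-class Ingress resources define the routes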

Beyond that, here’s something to keep in mind regarding logging that will help us help you </maguire>:

More error and log output is always better. The autocert error provided can be safely ignored since you’re not using autocert to provision. The info line about downstream_remote_disconnect implies that it’s the browser that gave up the connection. That means that there’s probably more relevant output buried in the (admittedly huge amount of) log output.

A good rule of thumb is to start the logging (for k8s, something like kubectl logs -f pomerium-{authenticate|proxy|etc}-#NUMBERS), run the action that doesn’t work, and capture the output created by that action.
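
Concretely, that workflow might look something like this (deployment name and namespace are taken from your configs above; adjust as needed):

# In one terminal, follow the proxy (or authenticate/databroker) logs:
kubectl -n pomerium logs -f deploy/pomerium-proxy

# In another terminal, reproduce the failing request and keep the lines logged at that moment:
curl -kv https://verify.dev.sw.io/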

Regarding seeing the ingress for the authenticate service, you should be able to see that with kubectl get ingress (in the correct namespace):

❯ k get ingress
NAME                    CLASS      HOSTS               ADDRESS   PORTS     AGE
pomerium-authenticate   pomerium   auth.mydomain.tld   x.x.x.x   80, 443   73d

But the error you’re seeing makes me think that the authorize service isn’t stood up in your cluster:

❯ k get deployments pomerium-authorize
NAME                 READY   UP-TO-DATE   AVAILABLE   AGE
pomerium-authorize   1/1     1            1           73d

Finally, I would suggest, as a generality, that Kubernetes not be the first environment you set up Pomerium in, unless you’re already very familiar with not only Kubernetes but also the various aspects of it that Pomerium meddles with, like ingress controllers. Debugging the setup of a tool as flexible as Pomerium can be hard, and debugging Kubernetes deployments can be hard; doing both at once is a real challenge.

P.S. I forgot to mention: I see your Ingress for Kuard, but not the deployment of the service itself. Did you confirm that the service is listening on port 80?

Ref: Pomerium Ingress | cert-manager
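
If you haven’t already, a quick way to check is a port-forward plus a direct curl (service name and port taken from your Ingress; run it in whatever namespace kuard lives in):

# Forward the kuard service locally, bypassing Pomerium entirely:
kubectl port-forward svc/kuard 8080:80

# In another terminal, confirm the backend answers on that port:
curl -v http://localhost:8080/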

Hi Alex,
Thanks for the thoughtful response! I actually have that verify route commented out in the config.routes, so there should be no conflict between that and the ingress.

How is the authenticate service URL supposed to be externally routable AND redirect to my Google IDP URI https://accounts.google.com/o/oauth2/v2/auth? I’ve tried pointing the DNS record for authenticate.dev.sw.io at the pomerium-proxy Load Balancer and I get a too-many-redirects error.

I have tried the Docker Compose example and that worked fine, but I am committed to Pomerium in Kubernetes-land since that is what we are using here.

As for the kuard service, I did a port-forward and confirmed that I could access it on port 80.

I turned on debug logging for the resources, and this is what I get using stern -l app.kubernetes.io/instance=pomerium:


pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C769] new connection from 10.0.47.110:59890 name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C769] TLS error: 268436502:SSL routines:OPENSSL_internal:SSLV3_ALERT_CERTIFICATE_UNKNOWN name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C769] closing socket: 0 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C769] TLS error: 268436502:SSL routines:OPENSSL_internal:SSLV3_ALERT_CERTIFICATE_UNKNOWN name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C769] adding to cleanup list name=conn_handler service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG transport socket match, socket ts-47WSF65HWZI3KYCYKXJIY3QR83Z7OQTT6ECGE3U2TC1E6LKMQK selected for host with address 10.0.4.80:443 name=upstream service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG DNS refresh rate reset for pomerium-authorize.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG flushing stats name=main service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls:onServerName(), requestedServerName: verify.dev.sw.io name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C770] new connection from 10.0.40.4:33085 name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C770] TLS error: 268436502:SSL routines:OPENSSL_internal:SSLV3_ALERT_CERTIFICATE_UNKNOWN name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C770] closing socket: 0 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C770] TLS error: 268436502:SSL routines:OPENSSL_internal:SSLV3_ALERT_CERTIFICATE_UNKNOWN name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C770] adding to cleanup list name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG flushing stats name=main service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG transport socket match, socket ts-47WSF65HWZI3KYCYKXJIY3QR83Z7OQTT6ECGE3U2TC1E6LKMQK selected for host with address 10.0.4.80:443 name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG DNS refresh rate reset for pomerium-authorize.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG flushing stats name=main service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG transport socket match, socket ts-47WSF65HWZI3KYCYKXJIY3QR83Z7OQTT6ECGE3U2TC1E6LKMQK selected for host with address 10.0.4.80:443 name=upstream service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG DNS refresh rate reset for pomerium-authorize.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4] new stream name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4][S4676803168102434870] request headers complete (end_stream=false):\n\':method\', \'POST\'\n\':scheme\', \'https\'\n\':path\', \'/databroker.DataBrokerService/RenewLease\'\n\':authority\', \'pomerium-databroker.pomerium.svc.cluster.local:443\'\n\'content-type\', \'application/grpc\'\n\'user-agent\', \'grpc-go/1.44.1-dev\'\n\'te\', \'trailers\'\n\'grpc-timeout\', \'59999901u\'\n\'jwt\', \'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NDc4OTY1MDR9.6Rzh7JknfBk35531OVnDZAdvfvUW6-f4uh1xhAghvzQ\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4][S4676803168102434870] cluster \'pomerium-control-plane-grpc\' match for URL \'/databroker.DataBrokerService/RenewLease\' name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4][S4676803168102434870] router decoding headers:\n\':method\', \'POST\'\n\':scheme\', \'https\'\n\':path\', \'/databroker.DataBrokerService/RenewLease\'\n\':authority\', \'pomerium-databroker.pomerium.svc.cluster.local:443\'\n\'content-type\', \'application/grpc\'\n\'user-agent\', \'grpc-go/1.44.1-dev\'\n\'te\', \'trailers\'\n\'grpc-timeout\', \'59999901u\'\n\'jwt\', \'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NDc4OTY1MDR9.6Rzh7JknfBk35531OVnDZAdvfvUW6-f4uh1xhAghvzQ\'\n\'x-forwarded-proto\', \'https\'\n\'x-request-id\', \'18c3c385-3219-452b-9802-27c5baacaa35\' name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C5] using existing connection name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C5] creating stream name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4][S4676803168102434870] pool ready name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4][S4676803168102434870] request end stream name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG renew lease duration=30000 id=b0126f37-0c8f-4885-92c7-0989a9925652 name=ingress-controller
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4][S4676803168102434870] upstream headers complete: end_stream=false name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4][S4676803168102434870] encoding headers via codec (end_stream=false):\n\':status\', \'200\'\n\'content-type\', \'application/grpc\'\n\'x-envoy-version\', \'1.19.3+0799f0751367f33f54e488ff1241104bb9592a916e5c2387d80776291f8fed45\'\n\'x-pomerium-version\', \'0.16.4-1645833281+0d3fa003\'\n\'x-envoy-upstream-service-time\', \'1\'\n\'date\', \'Mon, 21 Mar 2022 20:01:44 GMT\'\n\'server\', \'envoy\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C5] response complete name=client service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C5] destroying stream: 0 remaining name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4][S4676803168102434870] encoding trailers via codec:\n\'grpc-status\', \'0\'\n\'grpc-message\', \'\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C4] stream closed: 0 name=http2 service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C5] stream closed: 0 name=http2 service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG transport socket match, socket ts-47WSF65HWZI3KYCYKXJIY3QR83Z7OQTT6ECGE3U2TC1E6LKMQK selected for host with address 10.0.4.80:443 name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG DNS refresh rate reset for pomerium-authorize.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG flushing stats name=main service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG transport socket match, socket ts-47WSF65HWZI3KYCYKXJIY3QR83Z7OQTT6ECGE3U2TC1E6LKMQK selected for host with address 10.0.4.80:443 name=upstream service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG DNS refresh rate reset for pomerium-authorize.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG flushing stats name=main service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG flushing stats name=main service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG transport socket match, socket ts-47WSF65HWZI3KYCYKXJIY3QR83Z7OQTT6ECGE3U2TC1E6LKMQK selected for host with address 10.0.4.80:443 name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG DNS refresh rate reset for pomerium-authorize.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG flushing stats name=main service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG transport socket match, socket ts-47WSF65HWZI3KYCYKXJIY3QR83Z7OQTT6ECGE3U2TC1E6LKMQK selected for host with address 10.0.4.80:443 name=upstream service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG DNS refresh rate reset for pomerium-authorize.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600] new connection from 10.0.10.189:46464 name=conn_handler service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601] new connection from 10.0.10.189:46466 name=conn_handler service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600] updating connection-level initial window size to 268435456 name=http2 service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600] new stream name=http service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600][S17535186329522841194] request headers complete (end_stream=true):\n\':authority\', \'10.0.15.93:443\'\n\':method\', \'GET\'\n\':path\', \'/ping\'\n\':scheme\', \'https\'\n\'accept\', \'*/*\'\n\'user-agent\', \'kube-probe/1.21+\' name=http service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600][S17535186329522841194] request end stream name=http service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600][S17535186329522841194] cluster \'pomerium-control-plane-http\' match for URL \'/ping\' name=router service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600][S17535186329522841194] router decoding headers:\n\':authority\', \'10.0.15.93:443\'\n\':method\', \'GET\'\n\':path\', \'/ping\'\n\':scheme\', \'https\'\n\'accept\', \'*/*\'\n\'user-agent\', \'kube-probe/1.21+\'\n\'x-forwarded-for\', \'10.0.10.189\'\n\'x-forwarded-proto\', \'https\'\n\'x-envoy-internal\', \'true\'\n\'x-request-id\', \'3b4576bc-a87d-497e-8e67-969d984f1431\'\n\'x-envoy-expected-rq-timeout-ms\', \'15000\' name=router service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C8] using existing connection name=pool service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C8] creating stream name=pool service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600][S17535186329522841194] pool ready name=router service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601] updating connection-level initial window size to 268435456 name=http2 service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601] new stream name=http service=envoy

I hit the limit on post size so I will post the rest of the log in a separate reply.

Here is part 2 of the logs:

pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601][S11525685187868624995] request headers complete (end_stream=true):\n\':authority\', \'10.0.15.93:443\'\n\':method\', \'GET\'\n\':path\', \'/ping\'\n\':scheme\', \'https\'\n\'user-agent\', \'kube-probe/1.21+\'\n\'accept\', \'*/*\' name=http service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601][S11525685187868624995] request end stream name=http service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601][S11525685187868624995] cluster \'pomerium-control-plane-http\' match for URL \'/ping\' name=router service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601][S11525685187868624995] router decoding headers:\n\':authority\', \'10.0.15.93:443\'\n\':method\', \'GET\'\n\':path\', \'/ping\'\n\':scheme\', \'https\'\n\'user-agent\', \'kube-probe/1.21+\'\n\'accept\', \'*/*\'\n\'x-forwarded-for\', \'10.0.10.189\'\n\'x-forwarded-proto\', \'https\'\n\'x-envoy-internal\', \'true\'\n\'x-request-id\', \'3a4352c6-f65c-405e-a52a-a709981d8851\'\n\'x-envoy-expected-rq-timeout-ms\', \'15000\' name=router service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C344] using existing connection name=pool service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C344] creating stream name=pool service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601][S11525685187868624995] pool ready name=router service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG http-request X-Forwarded-For=["10.0.10.189"] X-Forwarded-Proto=["https"] duration=0.083749 host=10.0.15.93:443 ip=127.0.0.1 method=GET path=/ping request-id=3b4576bc-a87d-497e-8e67-969d984f1431 size=3 status=200 user_agent=kube-probe/1.21+
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG http-request X-Forwarded-For=["10.0.10.189"] X-Forwarded-Proto=["https"] duration=0.029709 host=10.0.15.93:443 ip=127.0.0.1 method=GET path=/ping request-id=3a4352c6-f65c-405e-a52a-a709981d8851 size=3 status=200 user_agent=kube-probe/1.21+
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600][S17535186329522841194] upstream headers complete: end_stream=false name=router service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601][S11525685187868624995] upstream headers complete: end_stream=false name=router service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601][S11525685187868624995] encoding headers via codec (end_stream=false):\n\':status\', \'200\'\n\'content-type\', \'text/plain\'\n\'date\', \'Mon, 21 Mar 2022 20:01:49 GMT\'\n\'content-length\', \'3\'\n\'x-envoy-upstream-service-time\', \'1\'\n\'strict-transport-security\', \'max-age=31536000; includeSubDomains; preload\'\n\'x-frame-options\', \'SAMEORIGIN\'\n\'x-xss-protection\', \'1; mode=block\'\n\'server\', \'envoy\' name=http service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600][S17535186329522841194] encoding headers via codec (end_stream=false):\n\':status\', \'200\'\n\'content-type\', \'text/plain\'\n\'date\', \'Mon, 21 Mar 2022 20:01:49 GMT\'\n\'content-length\', \'3\'\n\'x-envoy-upstream-service-time\', \'1\'\n\'strict-transport-security\', \'max-age=31536000; includeSubDomains; preload\'\n\'x-frame-options\', \'SAMEORIGIN\'\n\'x-xss-protection\', \'1; mode=block\'\n\'server\', \'envoy\' name=http service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C8] response complete name=client service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C344] response complete name=client service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601] stream closed: 0 name=http2 service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C344] response complete name=pool service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C344] destroying stream: 0 remaining name=pool service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600] stream closed: 0 name=http2 service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C8] response complete name=pool service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C8] destroying stream: 0 remaining name=pool service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600] remote close name=connection service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600] closing socket: 0 name=connection service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600] SSL shutdown: rc=1 name=connection service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C600] adding to cleanup list name=conn_handler service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601] remote close name=connection service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601] closing socket: 0 name=connection service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601] SSL shutdown: rc=1 name=connection service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG [C601] adding to cleanup list name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls:onServerName(), requestedServerName: verify.dev.sw.io name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C771] new connection from 10.0.47.110:63954 name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C771] TLS error: 268436502:SSL routines:OPENSSL_internal:SSLV3_ALERT_CERTIFICATE_UNKNOWN name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C771] closing socket: 0 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C771] TLS error: 268436502:SSL routines:OPENSSL_internal:SSLV3_ALERT_CERTIFICATE_UNKNOWN name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C771] adding to cleanup list name=conn_handler service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls:onServerName(), requestedServerName: verify.dev.sw.io name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772] new connection from 10.0.40.4:15462 name=conn_handler service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM INF initial sync syncer_id=databroker syncer_type=type.googleapis.com/pomerium.config.Config
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3] new stream name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3][S1018385024169704963] request headers complete (end_stream=false):\n\':method\', \'POST\'\n\':scheme\', \'http\'\n\':path\', \'/databroker.DataBrokerService/SyncLatest\'\n\':authority\', \'127.0.0.1:39725\'\n\'content-type\', \'application/grpc\'\n\'user-agent\', \'grpc-go/1.43.0\'\n\'te\', \'trailers\'\n\'x-request-id\', \'LQ28Bh5NKhDGKGGss9ggZg\'\n\'jwt\', \'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NDc4OTY1MDl9.X7zGPwUNzg4dzqMhNaU5jOQM_iZzGpJSJu8I6UyIAtg\'\n\'grpc-trace-bin\', \'AAAbOTEn8PUK9SoDc9ZssnZ5ARN+tydbD5oXAgA\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3][S1018385024169704963] cluster \'pomerium-databroker\' match for URL \'/databroker.DataBrokerService/SyncLatest\' name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3][S1018385024169704963] router decoding headers:\n\':method\', \'POST\'\n\':scheme\', \'http\'\n\':path\', \'/databroker.DataBrokerService/SyncLatest\'\n\':authority\', \'127.0.0.1:39725\'\n\'content-type\', \'application/grpc\'\n\'user-agent\', \'grpc-go/1.43.0\'\n\'te\', \'trailers\'\n\'x-request-id\', \'LQ28Bh5NKhDGKGGss9ggZg\'\n\'jwt\', \'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NDc4OTY1MDl9.X7zGPwUNzg4dzqMhNaU5jOQM_iZzGpJSJu8I6UyIAtg\'\n\'grpc-trace-bin\', \'AAAbOTEn8PUK9SoDc9ZssnZ5ARN+tydbD5oXAgA\'\n\'x-forwarded-proto\', \'http\' name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C8] using existing connection name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C8] creating stream name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3][S1018385024169704963] pool ready name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3][S1018385024169704963] request end stream name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9] new stream name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9][S15425613695117919716] request headers complete (end_stream=false):\n\':method\', \'POST\'\n\':scheme\', \'http\'\n\':path\', \'/databroker.DataBrokerService/SyncLatest\'\n\':authority\', \'127.0.0.1:39725\'\n\'content-type\', \'application/grpc\'\n\'user-agent\', \'grpc-go/1.43.0\'\n\'te\', \'trailers\'\n\'x-request-id\', \'LQ28Bh5NKhDGKGGss9ggZg\'\n\'jwt\', \'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NDc4OTY1MDl9.X7zGPwUNzg4dzqMhNaU5jOQM_iZzGpJSJu8I6UyIAtg\'\n\'grpc-trace-bin\', \'AAAbOTEn8PUK9SoDc9ZssnZ5ARN+tydbD5oXAgA\'\n\'x-forwarded-proto\', \'http\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9][S15425613695117919716] cluster \'pomerium-control-plane-grpc\' match for URL \'/databroker.DataBrokerService/SyncLatest\' name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9][S15425613695117919716] router decoding headers:\n\':method\', \'POST\'\n\':scheme\', \'http\'\n\':path\', \'/databroker.DataBrokerService/SyncLatest\'\n\':authority\', \'127.0.0.1:39725\'\n\'content-type\', \'application/grpc\'\n\'user-agent\', \'grpc-go/1.43.0\'\n\'te\', \'trailers\'\n\'x-request-id\', \'LQ28Bh5NKhDGKGGss9ggZg\'\n\'jwt\', \'eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NDc4OTY1MDl9.X7zGPwUNzg4dzqMhNaU5jOQM_iZzGpJSJu8I6UyIAtg\'\n\'grpc-trace-bin\', \'AAAbOTEn8PUK9SoDc9ZssnZ5ARN+tydbD5oXAgA\'\n\'x-forwarded-proto\', \'http\' name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C7] using existing connection name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C7] creating stream name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9][S15425613695117919716] pool ready name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9][S15425613695117919716] request end stream name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM INF sync latest type=type.googleapis.com/pomerium.config.Config
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9][S15425613695117919716] upstream headers complete: end_stream=false name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9][S15425613695117919716] encoding headers via codec (end_stream=false):\n\':status\', \'200\'\n\'content-type\', \'application/grpc\'\n\'x-envoy-version\', \'1.19.3+0799f0751367f33f54e488ff1241104bb9592a916e5c2387d80776291f8fed45\'\n\'x-pomerium-version\', \'0.16.4-1645833281+0d3fa003\'\n\'x-envoy-upstream-service-time\', \'1\'\n\'date\', \'Mon, 21 Mar 2022 20:01:49 GMT\'\n\'server\', \'envoy\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C7] response complete name=client service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C7] destroying stream: 0 remaining name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9][S15425613695117919716] encoding trailers via codec:\n\'grpc-status\', \'2\'\n\'grpc-message\', \'cryptutil: decryption failed (mismatched keys?): chacha20poly1305: message authentication failed\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C9] stream closed: 0 name=http2 service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C7] stream closed: 0 name=http2 service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3][S1018385024169704963] upstream headers complete: end_stream=false name=router service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3][S1018385024169704963] encoding headers via codec (end_stream=false):\n\':status\', \'200\'\n\'content-type\', \'application/grpc\'\n\'x-envoy-version\', \'1.19.3+0799f0751367f33f54e488ff1241104bb9592a916e5c2387d80776291f8fed45\'\n\'x-pomerium-version\', \'0.16.4-1645833281+0d3fa003\'\n\'x-envoy-upstream-service-time\', \'3\'\n\'date\', \'Mon, 21 Mar 2022 20:01:49 GMT\'\n\'server\', \'envoy\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C8] response complete name=client service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C8] destroying stream: 0 remaining name=pool service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3][S1018385024169704963] encoding trailers via codec:\n\'grpc-status\', \'2\'\n\'grpc-message\', \'cryptutil: decryption failed (mismatched keys?): chacha20poly1305: message authentication failed\' name=http service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C3] stream closed: 0 name=http2 service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG [C8] stream closed: 0 name=http2 service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM ERR error during initial sync error="rpc error: code = Unknown desc = cryptutil: decryption failed (mismatched keys?): chacha20poly1305: message authentication failed"
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM ERR sync error="rpc error: code = Unknown desc = cryptutil: decryption failed (mismatched keys?): chacha20poly1305: message authentication failed"
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772] updating connection-level initial window size to 268435456 name=http2 service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772] new stream name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772][S4499411489346191405] request headers complete (end_stream=true):\n\':method\', \'GET\'\n\':authority\', \'verify.dev.sw.io\'\n\':scheme\', \'https\'\n\':path\', \'/\'\n\'cache-control\', \'max-age=0\'\n\'sec-ch-ua\', \'\" Not A;Brand\";v=\"99\", \"Chromium\";v=\"98\", \"Google Chrome\";v=\"98\"\'\n\'sec-ch-ua-mobile\', \'?0\'\n\'sec-ch-ua-platform\', \'\"macOS\"\'\n\'upgrade-insecure-requests\', \'1\'\n\'user-agent\', \'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36\'\n\'accept\', \'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9\'\n\'sec-fetch-site\', \'none\'\n\'sec-fetch-mode\', \'navigate\'\n\'sec-fetch-user\', \'?1\'\n\'sec-fetch-dest\', \'document\'\n\'accept-encoding\', \'gzip, deflate, br\'\n\'accept-language\', \'en-US,en;q=0.9\' name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772][S4499411489346191405] request end stream name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772][S4499411489346191405] no cluster match for URL \'/\' name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772][S4499411489346191405] Sending local reply with details route_not_found name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772][S4499411489346191405] encoding headers via codec (end_stream=true):\n\':status\', \'404\'\n\'strict-transport-security\', \'max-age=31536000; includeSubDomains; preload\'\n\'x-frame-options\', \'SAMEORIGIN\'\n\'x-xss-protection\', \'1; mode=block\'\n\'date\', \'Mon, 21 Mar 2022 20:01:49 GMT\'\n\'server\', \'envoy\' name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C772] stream closed: 0 name=http2 service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG http-request authority=10.0.15.93:443 duration=2.675483 forwarded-for=10.0.10.189 method=GET path=/ping referer= request-id=3a4352c6-f65c-405e-a52a-a709981d8851 response-code=200 response-code-details=via_upstream service=envoy size=3 upstream-cluster=pomerium-control-plane-http user-agent=kube-probe/1.21+
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 1000 ms name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG http-request authority=10.0.15.93:443 duration=5.026816 forwarded-for=10.0.10.189 method=GET path=/ping referer= request-id=3b4576bc-a87d-497e-8e67-969d984f1431 response-code=200 response-code-details=via_upstream service=envoy size=3 upstream-cluster=pomerium-control-plane-http user-agent=kube-probe/1.21+
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM INF http-request authority=verify.dev.sw.io duration=1.156126 forwarded-for=10.0.40.4 method=GET path=/ referer= request-id=2f41385a-21a3-46c1-add6-3300665d91e7 response-code=404 response-code-details=route_not_found service=envoy size=0 upstream-cluster= user-agent="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36"
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774] new connection from 10.0.10.189:57080 name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773] new connection from 10.0.10.189:57082 name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774] updating connection-level initial window size to 268435456 name=http2 service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774] new stream name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774][S5291142556699549941] request headers complete (end_stream=true):\n\':authority\', \'10.0.2.189:443\'\n\':method\', \'GET\'\n\':path\', \'/ping\'\n\':scheme\', \'https\'\n\'user-agent\', \'kube-probe/1.21+\'\n\'accept\', \'*/*\' name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774][S5291142556699549941] request end stream name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy

And part 3

pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774][S5291142556699549941] cluster \'pomerium-control-plane-http\' match for URL \'/ping\' name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774][S5291142556699549941] router decoding headers:\n\':authority\', \'10.0.2.189:443\'\n\':method\', \'GET\'\n\':path\', \'/ping\'\n\':scheme\', \'https\'\n\'user-agent\', \'kube-probe/1.21+\'\n\'accept\', \'*/*\'\n\'x-forwarded-for\', \'10.0.10.189\'\n\'x-forwarded-proto\', \'https\'\n\'x-envoy-internal\', \'true\'\n\'x-request-id\', \'4dfbf281-2b8f-4777-ade6-04f320dd79bb\'\n\'x-envoy-expected-rq-timeout-ms\', \'15000\' name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C60] using existing connection name=pool service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C60] creating stream name=pool service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774][S5291142556699549941] pool ready name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773] updating connection-level initial window size to 268435456 name=http2 service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773] new stream name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773][S13714257857131245702] request headers complete (end_stream=true):\n\':authority\', \'10.0.2.189:443\'\n\':method\', \'GET\'\n\':path\', \'/ping\'\n\':scheme\', \'https\'\n\'user-agent\', \'kube-probe/1.21+\'\n\'accept\', \'*/*\' name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773][S13714257857131245702] request end stream name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773][S13714257857131245702] cluster \'pomerium-control-plane-http\' match for URL \'/ping\' name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773][S13714257857131245702] router decoding headers:\n\':authority\', \'10.0.2.189:443\'\n\':method\', \'GET\'\n\':path\', \'/ping\'\n\':scheme\', \'https\'\n\'user-agent\', \'kube-probe/1.21+\'\n\'accept\', \'*/*\'\n\'x-forwarded-for\', \'10.0.10.189\'\n\'x-forwarded-proto\', \'https\'\n\'x-envoy-internal\', \'true\'\n\'x-request-id\', \'a5707a4d-17b5-45bc-9997-b892eb7a7ac7\'\n\'x-envoy-expected-rq-timeout-ms\', \'15000\' name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C33] using existing connection name=pool service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C33] creating stream name=pool service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773][S13714257857131245702] pool ready name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG http-request X-Forwarded-For=["10.0.10.189"] X-Forwarded-Proto=["https"] duration=0.076781 host=10.0.2.189:443 ip=127.0.0.1 method=GET path=/ping request-id=a5707a4d-17b5-45bc-9997-b892eb7a7ac7 size=3 status=200 user_agent=kube-probe/1.21+
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG http-request X-Forwarded-For=["10.0.10.189"] X-Forwarded-Proto=["https"] duration=0.045432 host=10.0.2.189:443 ip=127.0.0.1 method=GET path=/ping request-id=4dfbf281-2b8f-4777-ade6-04f320dd79bb size=3 status=200 user_agent=kube-probe/1.21+
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773][S13714257857131245702] upstream headers complete: end_stream=false name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774][S5291142556699549941] upstream headers complete: end_stream=false name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773][S13714257857131245702] encoding headers via codec (end_stream=false):\n\':status\', \'200\'\n\'content-type\', \'text/plain\'\n\'date\', \'Mon, 21 Mar 2022 20:01:50 GMT\'\n\'content-length\', \'3\'\n\'x-envoy-upstream-service-time\', \'2\'\n\'strict-transport-security\', \'max-age=31536000; includeSubDomains; preload\'\n\'x-frame-options\', \'SAMEORIGIN\'\n\'x-xss-protection\', \'1; mode=block\'\n\'server\', \'envoy\' name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C33] response complete name=client service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774][S5291142556699549941] encoding headers via codec (end_stream=false):\n\':status\', \'200\'\n\'content-type\', \'text/plain\'\n\'date\', \'Mon, 21 Mar 2022 20:01:50 GMT\'\n\'content-length\', \'3\'\n\'x-envoy-upstream-service-time\', \'3\'\n\'strict-transport-security\', \'max-age=31536000; includeSubDomains; preload\'\n\'x-frame-options\', \'SAMEORIGIN\'\n\'x-xss-protection\', \'1; mode=block\'\n\'server\', \'envoy\' name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C60] response complete name=client service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774] stream closed: 0 name=http2 service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773] stream closed: 0 name=http2 service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C33] response complete name=pool service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C33] destroying stream: 0 remaining name=pool service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C60] response complete name=pool service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C60] destroying stream: 0 remaining name=pool service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773] remote close name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773] closing socket: 0 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773] SSL shutdown: rc=1 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C773] adding to cleanup list name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774] remote close name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774] closing socket: 0 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774] SSL shutdown: rc=1 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C774] adding to cleanup list name=conn_handler service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 1000 ms name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG http-request authority=10.0.2.189:443 duration=4.462195 forwarded-for=10.0.10.189 method=GET path=/ping referer= request-id=4dfbf281-2b8f-4777-ade6-04f320dd79bb response-code=200 response-code-details=via_upstream service=envoy size=3 upstream-cluster=pomerium-control-plane-http user-agent=kube-probe/1.21+
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-authenticate-6cff4664b8-2cm4k pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-databroker-6fb496b479-nbqnh pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG http-request authority=10.0.2.189:443 duration=3.271466 forwarded-for=10.0.10.189 method=GET path=/ping referer= request-id=a5707a4d-17b5-45bc-9997-b892eb7a7ac7 response-code=200 response-code-details=via_upstream service=envoy size=3 upstream-cluster=pomerium-control-plane-http user-agent=kube-probe/1.21+
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium {"level":"error","time":"2022-03-21T20:01:51Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"authenticate.tools.dev.sw.io","error":"no information found to solve challenge for identifier: authenticate.tools.dev.sw.io"}
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls inspector: new connection accepted name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG tls:onServerName(), requestedServerName: authenticate.tools.dev.sw.io name=filter service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] new connection from 10.0.13.146:50883 name=conn_handler service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] new stream name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775][S15921319083277809224] request headers complete (end_stream=true):\n\':authority\', \'authenticate.tools.dev.sw.io\'\n\':path\', \'/.well-known/acme-challenge/pZMc3KoTBK2q2xpyy9rVXmi--iSHAaZNaUpcdi0BVEw\'\n\':method\', \'GET\'\n\'user-agent\', \'cert-manager/v1.7.0 (clean)\'\n\'referer\', \'http://authenticate.tools.dev.sw.io/.well-known/acme-challenge/pZMc3KoTBK2q2xpyy9rVXmi--iSHAaZNaUpcdi0BVEw\'\n\'accept-encoding\', \'gzip\'\n\'connection\', \'close\' name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775][S15921319083277809224] request end stream name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775][S15921319083277809224] no cluster match for URL \'/.well-known/acme-challenge/pZMc3KoTBK2q2xpyy9rVXmi--iSHAaZNaUpcdi0BVEw\' name=router service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775][S15921319083277809224] Sending local reply with details route_not_found name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG coroutine finished name=lua service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775][S15921319083277809224] closing connection due to connection close header name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775][S15921319083277809224] encoding headers via codec (end_stream=true):\n\':status\', \'404\'\n\'strict-transport-security\', \'max-age=31536000; includeSubDomains; preload\'\n\'x-frame-options\', \'SAMEORIGIN\'\n\'x-xss-protection\', \'1; mode=block\'\n\'date\', \'Mon, 21 Mar 2022 20:01:51 GMT\'\n\'server\', \'envoy\'\n\'connection\', \'close\' name=http service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] closing data_to_write=251 type=2 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] setting delayed close timer with timeout 1000 ms name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] closing data_to_write=251 type=2 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] write flush complete name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] remote early close name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] closing socket: 0 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] SSL shutdown: rc=0 name=connection service=envoy
pomerium-proxy-7d4ccc78cd-ghsm4 pomerium 8:01PM DBG [C775] adding to cleanup list name=conn_handler service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG transport socket match, socket ts-2I60SP1ZD7E9TQILNX0ILYLEQFSIH8467G9Z9172RVRPE4MEMP selected for host with address 10.0.8.60:443 name=upstream service=envoy
pomerium-authorize-76744d9464-hp7cp pomerium 8:01PM DBG DNS refresh rate reset for pomerium-databroker.pomerium.svc.cluster.local, refresh rate 5000 ms name=upstream service=envoy

This part confuses me: the authenticate service URL is what the user gets redirected to, but the DNS record / route should point to the Pomerium proxy service.

Thanks for providing the additional logs. Since my ideas didn’t pan out, I’m going to raise this internally with folks who know the 0s and 1s better than I do.

But in the meantime, I’d like to hear more about your confusion with the authenticate route. Regarding the redirect loop, I would get that myself a lot when setting up a new instance of Pomerium. I would double check not only your idp_* config keys, but also the values you gave Google for the callback URL.
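
When you check the Google side, the authorized redirect URI should point at the authenticate route’s /oauth2/callback path. A minimal sketch, assuming authenticate.dev.sw.io is your authenticate domain:

# Authorized redirect URI registered in the Google Cloud console
https://authenticate.dev.sw.io/oauth2/callback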

Hi,

I believe that indicates you’ve changed your sharedSecret since you initialized redis. Data inside redis is encrypted with the secret, and you will need to clear redis if you’ve changed it. This could easily lead to 404s (the ingress controller and proxy can’t talk to the databroker). It would also cause problems for authorize and authenticate.
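
If it helps, one way to keep the secret stable across reinstalls is to generate it once and pin it in your values rather than letting the chart generate a new one each time. A rough sketch, assuming the key is named config.sharedSecret in your chart version:

# generate a shared secret once and reuse it across helm upgrades
head -c32 /dev/urandom | base64

# values.yaml (key name is an assumption; check your chart version)
config:
  sharedSecret: "<output of the command above>"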

I’m sure we’re close here. I disabled redis for the time being to rule it out of the equation. Now I’m getting a 403 Forbidden when I navigate to verify.dev.sw.io, and “upstream connect error or disconnect/reset before headers. reset reason: connection failure” when I go to authenticate.dev.sw.io. I ruled out certificate errors by using a production Let’s Encrypt issuer.

Logs:

pomerium-authorize-d87bfd878-dvfxq pomerium {"level":"info","service":"authorize","request-id":"f1644a25-0aca-4b0b-a677-a96c11edd46b","check-request-id":"c64ea404-3cd8-4437-a8a7-5b1704482312","method":"GET","path":"/","host":"verify.tools.dev.sw.io","query":"","allow":false,"allow-why-false":["non-pomerium-route"],"deny":false,"deny-why-false":["valid-client-certificate-or-none-required"],"user":"","email":"","databroker_server_version":2427717062922184798,"databroker_record_version":18,"time":"2022-03-22T15:01:00Z","message":"authorize check"}
pomerium-authorize-d87bfd878-dvfxq pomerium {"level":"info","service":"authorize","request-id":"e0cb356c-27a1-4426-bb3a-de5b62e97c6f","check-request-id":"5835ff2a-7e70-4641-8d40-9f890209b87b","method":"GET","path":"/","host":"verify.tools.dev.sw.io","query":"","allow":false,"allow-why-false":["non-pomerium-route"],"deny":false,"deny-why-false":["valid-client-certificate-or-none-required"],"user":"","email":"","databroker_server_version":2427717062922184798,"databroker_record_version":18,"time":"2022-03-22T15:01:13Z","message":"authorize check"}
pomerium-authorize-d87bfd878-dvfxq pomerium {"level":"info","service":"authorize","request-id":"33851a3f-8404-4f3b-9468-ff297aeabd9f","check-request-id":"4070ccee-ae16-457c-a004-946c0867ed92","method":"GET","path":"/","host":"verify.tools.dev.sw.io","query":"","allow":false,"allow-why-false":["non-pomerium-route"],"deny":false,"deny-why-false":["valid-client-certificate-or-none-required"],"user":"","email":"","databroker_server_version":2427717062922184798,"databroker_record_version":18,"time":"2022-03-22T15:03:56Z","message":"authorize check"}
pomerium-proxy-77448fd8b6-wngpl pomerium {"level":"info","service":"envoy","upstream-cluster":"pomerium-control-plane-http","method":"GET","authority":"verify.tools.dev.sw.io","path":"/.pomerium/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.25.163","request-id":"999cb853-d268-4e3b-8fa6-ae442f432976","duration":0.717767,"size":302,"response-code":302,"response-code-details":"via_upstream","time":"2022-03-22T15:04:10Z","message":"http-request"}
pomerium-proxy-77448fd8b6-wngpl pomerium {"level":"info","service":"envoy","upstream-cluster":"pomerium-pomerium-authenticate-authenticate-tools-dev-sw-io-eb4679af2e931dfc","method":"GET","authority":"authenticate.tools.dev.sw.io","path":"/.pomerium/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.47.110","request-id":"6a3824de-e66d-4db9-b817-4863820d407c","duration":0.450253,"size":91,"response-code":503,"response-code-details":"upstream_reset_before_response_started{connection_failure}","time":"2022-03-22T15:04:10Z","message":"http-request"}

Update: I got further by turning off TLS in the backend, so it seems that there is some issue with the backend certificates. Specifically, I set config.insecure to true temporarily and I got my redirect to Google Auth. Much further than I’ve gotten before!

Update 2: I confirmed that the 403 Forbidden when navigating to https://verify.dev.sw.io happens regardless of whether insecure is set to true, so there is still something else going on. The authorize check is failing and not redirecting to my authentication service.

pomerium-authorize-658d577748-qbzxt pomerium {"level":"info","service":"authorize","request-id":"446c751e-4fc2-4394-81dd-4f4c6a6c4635","check-request-id":"99386579-d746-4559-b5f1-e8b37b63a349","method":"GET","path":"/","host":"verify.tools.dev.sw.io","query":"","allow":false,"allow-why-false":["non-pomerium-route"],"deny":false,"deny-why-false":["valid-client-certificate-or-none-required"],"user":"","email":"","databroker_server_version":9387271978945274956,"databroker_record_version":50,"time":"2022-03-22T19:54:09Z","message":"authorize check"}
pomerium-proxy-b74d99bfb-9745j pomerium {"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"verify.tools.dev.sw.io","path":"/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.2.171","request-id":"99386579-d746-4559-b5f1-e8b37b63a349","duration":3.854942,"size":11849,"response-code":403,"response-code-details":"ext_authz_denied","time":"2022-03-22T19:54:09Z","message":"http-request"}

I tried this with the pomerium-verify app and got the same results with the following command:

helm upgrade --install pomerium-verify pomerium/pomerium-verify --set ingress.host=verify.dev.sw.io --set ingress.className=pomerium

That mostly rules out the possibility that the test app itself is broken.

Can you please post your current values, ingress and any cert-manager certificate configuration? It appears there’s been a large amount of drift since your original post and it isn’t clear what the current state is. Please also post the helm chart version you’ve deployed; the TLS issue might be something that was addressed very recently.

Also, other potential problems:

  • you’re referring to verify.dev.sw.io but the logs mention verify.tools.dev.sw.io. I’m not clear why there’s a difference.
  • the 403 indicates the policy is not allowing you access to that route. Please check that your policy definition actually allows your identity. I see an earlier iteration had '[{"allow":{"and":[{"domain":{"is":"localhost.pomerium.io"}}]}}]', which is not likely your IdP e-mail domain and won’t work (a corrected sketch follows below).
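
For example, a policy annotation on the verify ingress might look like the following sketch, assuming sw.com is the e-mail domain of your Google accounts:

metadata:
  annotations:
    ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"sw.com"}}]}}]'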

Thanks @travisgroth, that is a good point. I switched to using tools.dev.sw.io as the root domain instead of dev.sw.io so I could attempt pointing tools.dev.sw.io to the proxy as well (which didn’t do anything).

As for the ingress policy I am getting a 403 without any attempt at login - shouldn’t the route redirect me to the authentication screen if I’m not logged in at all?

Here is my current configuration:

Pomerium Helm chart version 30.1.0
Verify Helm chart version 0.1.0

values.yaml

authenticate:
  idp:
    provider: "google"
    clientID: ${client_id}
    clientSecret: ${client_secret}
  existingTLSSecret: pomerium-tls
  ingress:
    annotations:
      cert-manager.io/issuer: letsencrypt-staging
    tls:
      secretName: authenticate-tools-tls

forwardAuth:
  enabled: false

ingressController:
  enabled: true
#  config:
#    operatorMode: true
#  image:
#    tag: "v0.15.3"

config:
  # routes under this wildcard domain are handled by pomerium
  rootDomain: tools.dev.sw.io
  existingCASecret: pomerium-tls
  generateTLS: false
  insecure: false

#  routes:
#    - from: https://verify.dev.sw.io
#      to: https://verify:80
#      allowed_domains:
#        - sw.com
#        - sw.io

proxy:
  existingTLSSecret: pomerium-tls
#  service:
#    type: LoadBalancer
#
extraEnv:
  AUTOCERT: false
#  LOG_LEVEL: debug
#  POMERIUM_DEBUG: true

databroker:
  existingTLSSecret: pomerium-tls
  storage:
    # connectionString: rediss://pomerium-redis-master.pomerium.svc.cluster.local
    type: memory
    clientTLS:
      existingSecretName: pomerium-tls
      existingCASecretKey: ca.crt

authorize:
  existingTLSSecret: pomerium-tls

redis:
  enabled: false
  auth:
    enabled: false
  usePassword: false
  generateTLS: false
  tls:
    certificateSecret: pomerium-redis-tls

Verify Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    meta.helm.sh/release-name: pomerium-verify
    meta.helm.sh/release-namespace: pomerium
  creationTimestamp: "2022-03-22T21:07:12Z"
  generation: 2
  labels:
    app.kubernetes.io/instance: pomerium-verify
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pomerium-verify
    app.kubernetes.io/version: 0.0.1
    helm.sh/chart: pomerium-verify-0.1.0
  name: pomerium-verify
  namespace: pomerium
  resourceVersion: "10657711"
  uid: cffc4c9a-bbd0-435a-aa6b-9cc63384c15c
spec:
  ingressClassName: pomerium
  rules:
  - host: verify.tools.dev.sw.io
    http:
      paths:
      - backend:
          service:
            name: pomerium-verify
            port:
              number: 80
        path: /
        pathType: Prefix

Authenticate Ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/issuer: letsencrypt-staging
    ingress.pomerium.io/allow_public_unauthenticated_access: "true"
    ingress.pomerium.io/secure_upstream: "true"
    meta.helm.sh/release-name: pomerium
    meta.helm.sh/release-namespace: pomerium
  creationTimestamp: "2022-03-22T19:32:18Z"
  generation: 3
  labels:
    app.kubernetes.io/instance: pomerium
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pomerium
    helm.sh/chart: pomerium-30.1.1
  name: pomerium-authenticate
  namespace: pomerium
  resourceVersion: "10664150"
  uid: f9151161-4106-41a1-8765-ffc77eec624c
spec:
  ingressClassName: pomerium
  rules:
  - host: authenticate.tools.dev.sw.io
    http:
      paths:
      - backend:
          service:
            name: pomerium-authenticate
            port:
              name: https
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - authenticate.tools.dev.sw.io
    secretName: authenticate-tools-tls

Pomerium Internal Certificates

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-cert
  namespace: pomerium
spec:
  secretName: pomerium-tls
  issuerRef:
    name: pomerium-issuer
    kind: Issuer
  usages:
    - server auth
    - client auth
  dnsNames:
    - pomerium-proxy.pomerium.svc.cluster.local
    - pomerium-authorize.pomerium.svc.cluster.local
    - pomerium-databroker.pomerium.svc.cluster.local
    - pomerium-authenticate.pomerium.svc.cluster.local

Internal Issuer

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pomerium-ca
  namespace: pomerium
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: pomerium-ca
  namespace: pomerium
spec:
  isCA: true
  secretName: pomerium-ca
  commonName: pomerium ca
  issuerRef:
    name: pomerium-ca
    kind: Issuer
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: pomerium-issuer
  namespace: pomerium
spec:
  ca:
    secretName: pomerium-ca

Letsencrypt-staging

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: sara@sw.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging
    # Enable the HTTP-01 challenge provider
    solvers:
      - http01:
          ingress:
            class:  pomerium

Letsencrypt-prod

apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: sara@sw.com
    # Name of a secret used to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod
    # Enable the HTTP-01 challenge provider
    solvers:
    - http01:
        ingress:
          class: pomerium

Progress! I didn’t understand that the policy allow referred to the IdP e-mail domain, so that is helpful. With the policy ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"sw.com"}}]}}]' on the verify ingress I no longer get a 403 but an immediate redirect to the authenticate URL. However, that redirect still ends in “upstream connect error or disconnect/reset before headers. reset reason: connection failure”. I take this to mean there is some issue with the site I am being redirected to?
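
One way to narrow down the upstream connect error is to hit the authenticate service directly from inside the cluster and see whether the connection and TLS handshake succeed at all. A rough sketch, assuming the default service name and namespace from this thread:

# throwaway curl pod; -k skips certificate validation since the backend uses the internal CA
kubectl -n pomerium run curl-debug --rm -it --restart=Never --image=curlimages/curl --command -- \
  curl -vk https://pomerium-authenticate.pomerium.svc.cluster.local/.pomerium/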

Update: Interestingly, I added the policy, reinstalled pomerium with insecure set to true, then reinstalled again with insecure set to false. I deleted all pods to make sure they reset. Now I’m getting the 403 Forbidden again even with the policy added.

Update 2: And it’s redirecting again to the upstream connect error. I pinned the ingress controller image to v0.15.3 and then removed the pin, and now it’s redirecting again.

I see from issue Getting connection_failure errors - #10 by travisgroth that I am running into a bug. I tried installing v30.1.1 (v30.1.10 didn’t exist) with ingress controller v0.15.3 but I still got the 403 and upstream connection failures. Happy to help troubleshoot any way I can.

My current status: I tried disabling TLS in the cluster and I am now getting 404s when the ingress redirects to the authenticate URL.

pomerium-authorize-746998d976-2hvsn pomerium {"level":"info","service":"authorize","request-id":"fba983a0-3069-485c-b12a-018f4daf11c9","check-request-id":"99203b80-7bb7-46e3-9670-52bf50880214","method":"GET","path":"/","host":"verify.tools.dev.sw.io","query":"","allow":false,"allow-why-false":["non-pomerium-route","user-unauthenticated"],"deny":false,"deny-why-false":["valid-client-certificate-or-none-required"],"user":"","email":"","databroker_server_version":8470342037473944555,"databroker_record_version":70,"time":"2022-03-23T19:43:27Z","message":"authorize check"}
pomerium-proxy-dcf76cf56-vls47 pomerium {"level":"info","service":"envoy","upstream-cluster":"pomerium-pomerium-authenticate-authenticate-tools-dev-sw-io-143fef259e8add7","method":"GET","authority":"pomerium-authenticate.pomerium.svc.cluster.local","path":"/.pomerium/sign_in","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.8.126","request-id":"93c318e4-5699-4499-b18b-b0843b18e924","duration":1.517119,"size":19,"response-code":404,"response-code-details":"via_upstream","time":"2022-03-23T19:43:28Z","message":"http-request"}
pomerium-proxy-dcf76cf56-vls47 pomerium {"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"verify.tools.dev.sw.io","path":"/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.40.4","request-id":"99203b80-7bb7-46e3-9670-52bf50880214","duration":4.910571,"size":1299,"response-code":302,"response-code-details":"ext_authz_denied","time":"2022-03-23T19:43:28Z","message":"http-request"}
pomerium-authenticate-5c68cf6cb8-wp7nf pomerium {"level":"info","service":"envoy","upstream-cluster":"pomerium-control-plane-http","method":"GET","authority":"pomerium-authenticate.pomerium.svc.cluster.local","path":"/.pomerium/sign_in","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.8.126,10.0.1.44","request-id":"ee759301-b375-4e22-9502-90ada8389395","duration":0.661432,"size":19,"response-code":404,"response-code-details":"via_upstream","time":"2022-03-23T19:43:28Z","message":"http-request"}

Not sure if this is related to the other issue; it had gone away when I disabled redis, so it’s confusing why it’s cropping up again while redis is still disabled. When I re-enable TLS I get the upstream connect error again.

Found it! By default ingress.enabled is set to true, and this seems to set the authenticate internal service URL incorrectly. By setting ingress.enabled to false I get the redirect and everything works as intended.

Working Helm configs

authenticate:
  idp:
    provider: "google"
    clientID: ${client_id}
    clientSecret: ${client_secret}
  existingTLSSecret: pomerium-tls
  ingress:
    annotations:
      cert-manager.io/issuer: letsencrypt-staging
      ingress.pomerium.io/service_proxy_upstream: "true"
    tls:
      secretName: authenticate-tools-tls

forwardAuth:
  enabled: false

ingress:
  enabled: false

ingressController:
  enabled: true
#  config:
#    operatorMode: true
#  image:
#    tag: "v0.15.3"

#image:
#  repository: "pomerium/pomerium"
#  tag: "v0.15.8"
#  pullPolicy: "IfNotPresent"

config:
  # routes under this wildcard domain are handled by pomerium
  rootDomain: tools.dev.sw.io
  existingCASecret: pomerium-tls
  generateTLS: false
  insecure: false

#  routes:
#    - from: https://verify.dev.sw.io
#      to: https://verify:80
#      allowed_domains:
#        - sw.com
#        - sw.io

proxy:
  existingTLSSecret: pomerium-tls
#  service:
#    type: LoadBalancer
#
extraEnv:
  AUTOCERT: false
#  LOG_LEVEL: debug
#  POMERIUM_DEBUG: true

databroker:
  existingTLSSecret: pomerium-tls
  storage:
    # connectionString: rediss://pomerium-redis-master.pomerium.svc.cluster.local
    type: memory
    clientTLS:
      existingSecretName: pomerium-tls
      existingCASecretKey: ca.crt

authorize:
  existingTLSSecret: pomerium-tls

redis:
  enabled: false
  auth:
    enabled: false
  usePassword: false
  generateTLS: false
  tls:
    certificateSecret: pomerium-redis-tls

I still get a 500 error when redirected from the authentication page when using redis, but we can use the in-memory storage type for now. Thanks!


Just a follow-up about these two points: the ingress.enabled setting and the status 500 errors. The ingress.enabled value is not meant to be set to true when ingressController.enabled is also true; @travisgroth added some validation to the Helm chart to prevent this confusion, which is documented in Ingress enabled set to true breaks authenticate service · Issue #282 · pomerium/pomerium-helm · GitHub. The status 500 errors were caused by the shared secret changing during subsequent reinstallations of Redis, which required a FLUSHALL to be executed against the master Redis pod. That is easier said than done, because the bitnami Redis chart disables the FLUSHALL and FLUSHDB commands by default. The solution is to set redis.master.disableCommands to an empty list [], then exec the FLUSHALL command with the TLS options enabled for a TLS-enabled Redis. These learnings are documented in 500 Errors when redirected from AuthN when Redis is enabled · Issue #3187 · pomerium/pomerium · GitHub and will also be added to the troubleshooting doc.
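
For reference, here is a rough sketch of those two steps; the pod name and certificate mount paths are assumptions based on the bitnami chart defaults and may differ in your cluster:

# 1. re-enable the flush commands on the bitnami Redis subchart (values.yaml)
redis:
  master:
    disableCommands: []

# 2. flush the records encrypted with the old shared secret on the TLS-enabled master
kubectl -n pomerium exec -it pomerium-redis-master-0 -- \
  redis-cli --tls \
    --cert /opt/bitnami/redis/certs/tls.crt \
    --key /opt/bitnami/redis/certs/tls.key \
    --cacert /opt/bitnami/redis/certs/ca.crt \
    FLUSHALL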
