404s adding a new route to existing working Pomerium install

What happened?

I have a working installation of Pomerium, but when I go to add a new route (in this case for Grafana) I get 404s.

What did you expect to happen?

I should be able to add a new route and be redirected to the authentication page.

How’d it happen?

I configured Grafana according to the guide at Grafana | Pomerium. I used Helm to install Grafana as part of the Prometheus Helm chart.

What’s your environment like?

  • Pomerium version (retrieve with pomerium --version): current Helm chart, pomerium-30.1.1
  • Server Operating System/Architecture/Cloud: EKS K8s 1.21 with Google IDP

What’s your config.yaml?

autocert: false
dns_lookup_family: V4_ONLY
address: :443
grpc_address: :443
certificate_authority_file: "/pomerium/ca/ca.crt"
certificates:
authenticate_service_url: https://authenticate.ops.dev.sw.io
authorize_service_url: https://pomerium-authorize.tools.svc.cluster.local
databroker_service_url: https://pomerium-databroker.tools.svc.cluster.local
idp_provider: google
idp_scopes:
idp_provider_url:
idp_client_id: ${client_id}
idp_client_secret: ${client_secret}
idp_service_account: ${base_64_encoded_sa_key}
databroker_storage_tls_skip_verify: true
routes:

What did you see in the logs?

pomerium-proxy-56cbf57cb7-fj6qk pomerium {"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.8.126","request-id":"be87f353-0d8f-413e-9066-4c243e070d53","duration":0.241207,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T15:31:25Z","message":"http-request"}

Additional context

Grafana values:

grafana:
  grafana.ini:
    users:
      allow_sign_up: false
      auto_assign_org: true
      auto_assign_org_role: Editor
    auth.jwt:
      enabled: true
      header_name: X-Pomerium-Jwt-Assertion
      email_claim: email
      cache_ttl: 60m
      jwk_set_url: https://authenticate.ops.dev.sw.io/.well-known/pomerium/jwks.json
  ingress:
    ## If true, Grafana Ingress will be created
    ##
    enabled: true

    ## Annotations for Grafana Ingress
    ##
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-pomerium
      ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"sw.com"}}]}}]'
      ingress.pomerium.io/secure_upstream: "true"
      ingress.pomerium.io/pass_identity_headers: "true"

    ## Labels to be added to the Ingress
    ##
    labels: {}

    ## Hostnames.
    ## Must be provided if Ingress is enabled.
    ##
    # hosts:
    #   - grafana.domain.com
    hosts:
      - monitoring.ops.dev.sw.io

    ## Path for grafana ingress
    path: /

    ## TLS configuration for grafana Ingress
    ## Secret must be manually created in the namespace
    ##
    tls:
     - secretName: grafana-general-tls
       hosts:
       - monitoring.ops.dev.sw.io
    ingressClassName: pomerium

Resulting ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-pomerium
    ingress.pomerium.io/pass_identity_headers: "true"
    ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"sw.com"}}]}}]'
  name: prometheus-stack-1-grafana
  namespace: monitoring
spec:
  ingressClassName: pomerium
  rules:
  - host: monitoring.ops.dev.sw.io
    http:
      paths:
      - backend:
          service:
            name: prometheus-stack-1-grafana
            port:
              number: 80
        path: /?(.*)
        pathType: Prefix
  tls:
  - hosts:
    - monitoring.ops.dev.sw.io
    secretName: grafana-general-tls

The cert isn't getting provisioned since the page keeps returning 404s, and I'm not sure why this route isn't redirecting to the authenticate service. The Pomerium install is already working with Prometheus, so I'm assuming it must be the way I configured this ingress?

This is invalid syntax: a Prefix path is not a regular expression; it is interpreted literally.
You just need / as the path. Please see Ingress | Kubernetes.

Pomerium does support regular expressions in paths defined via an Ingress, but only through a custom annotation, since that mode is not part of the standard Ingress spec. Please see Pomerium Ingress Controller for Kubernetes | Pomerium.
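For reference, here is what the corrected path stanza looks like under the standard Ingress spec. The path_regex annotation in the comment is my recollection of the Pomerium-specific opt-in, so please verify the exact name against the linked docs:

paths:
- backend:
    service:
      name: prometheus-stack-1-grafana
      port:
        number: 80
  path: /
  pathType: Prefix
# To use regular-expression paths instead, the Pomerium ingress controller needs an
# explicit opt-in annotation on the Ingress (annotation name assumed, check the docs):
#   ingress.pomerium.io/path_regex: "true"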


Hi @denis, thanks for the help! Good catch; I have fixed the path to be just path: /, but I am still getting the 404s. What else could be missing here?

That should not matter. If your cert-manager solver answers an HTTP-01 challenge, cert-manager creates a separate Ingress just to answer the challenge. If it is a DNS-01 solver, cert-manager provisions the certificate without interacting with any Ingress at all.

Please run kubectl describe ingress/prometheus-stack-1-grafana to see the events that Pomerium and cert-manager publish about the lifecycle and status of this object.

Also, follow the status of the cert-manager certificate provisioning.
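For example, to walk that chain (assuming the resources live in the monitoring namespace):

kubectl -n monitoring get certificate,certificaterequest,order,challenge
kubectl -n monitoring describe certificate grafana-general-tls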

Hi @saranicole. Does the Proxy service log the 404? There may be some useful deets there.

Could you please find the log line related to that request in the pomerium-proxy logs?
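For example, something like this should pull them (assuming the proxy Deployment is named pomerium-proxy and runs in the tools namespace):

kubectl -n tools logs deploy/pomerium-proxy | grep monitoring.ops.dev.sw.io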

Also, can you open your URL with the /ping or /.pomerium path? i.e. https://monitoring.ops.dev.sw.io/.pomerium/
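For example:

curl -kIL https://monitoring.ops.dev.sw.io/.pomerium/

(-k only because the certificate has not been issued yet.)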

Thanks for the attention guys!

Interestingly, I can open the URL https://monitoring.ops.dev.sw.io/.pomerium/. What does that signify exactly?

Here are the details:

Name:             prometheus-stack-1-grafana
Labels:           app.kubernetes.io/instance=prometheus-stack-1
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=grafana
                  app.kubernetes.io/version=8.3.6
                  helm.sh/chart=grafana-6.22.0
Namespace:        monitoring
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  grafana-general-tls terminates monitoring.ops.dev.sw.io
Rules:
  Host                                   Path  Backends
  ----                                   ----  --------
  monitoring.ops.dev.sw.io
                                         /   prometheus-stack-1-grafana:80 (10.0.12.3:3000)
Annotations:                             cert-manager.io/cluster-issuer: letsencrypt-pomerium
                                         ingress.pomerium.io/pass_identity_headers: true
                                         ingress.pomerium.io/policy: [{"allow":{"and":[{"domain":{"is":"sw.com"}}]}}]
                                         meta.helm.sh/release-name: prometheus-stack-1
                                         meta.helm.sh/release-namespace: monitoring
Events:                                  <none>

The challenge created by the cert looks like this:

    Manager:    controller
    Operation:  Update
    Time:       2022-03-28T17:23:29Z
  Owner References:
    API Version:           acme.cert-manager.io/v1
    Block Owner Deletion:  true
    Controller:            true
    Kind:                  Order
    Name:                  grafana-general-tls-zfkg9-2025356553
    UID:                   75894268-e7a3-460c-a62e-747a78d7f4a2
  Resource Version:        13146094
  UID:                     cb20f3e9-9bca-4c2a-842c-ef6fa0582414
Spec:
  Authorization URL:  https://acme-v02.api.letsencrypt.org/acme/authz-v3/92340725090
  Dns Name:           monitoring.ops.dev.sw.io
  Issuer Ref:
    Group:  cert-manager.io
    Kind:   ClusterIssuer
    Name:   letsencrypt-pomerium
  Key:      4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0.Kl1cJfdnAoKUQpT9TJcpSQdKy7zP7Du4yJMeVUMo5z8
  Solver:
    http01:
      Ingress:
        Class:  pomerium
  Token:        4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0
  Type:         HTTP-01
  URL:          https://acme-v02.api.letsencrypt.org/acme/chall-v3/92340725090/T4EZ7g
  Wildcard:     false
Status:
  Presented:   true
  Processing:  true
  Reason:      Waiting for HTTP-01 challenge propagation: wrong status code '404', expected '200'
  State:       pending
Events:
  Type    Reason     Age   From          Message
  ----    ------     ----  ----          -------
  Normal  Started    71s   cert-manager  Challenge scheduled for processing
  Normal  Presented  71s   cert-manager  Presented challenge using HTTP-01 challenge mechanism

The proxy service logs - the first line is the one relating to the root “/” path:

{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.47.110","request-id":"7a8376f9-7c3b-4de6-91a5-a3ec85ce78b3","duration":0.233702,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:14:14Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:14:15Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.10.189","request-id":"7569e864-0509-4e39-baf0-10a7060b680b","duration":0.242273,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:14:16Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:14:25Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.40.4","request-id":"e68996a6-5d1c-48f4-8e59-adadb2d1c67c","duration":0.211868,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:14:26Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:14:35Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.19.198","request-id":"0a252d61-ea5c-4960-881a-67f881564aef","duration":0.21353,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:14:36Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:14:45Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.29.120","request-id":"24a87911-c228-4cff-82d7-8c342ae29635","duration":0.228997,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:14:46Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:14:55Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.29.79","request-id":"b2d13611-cd86-4b76-bdca-6710ea8e0e71","duration":0.207281,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:14:56Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:15:05Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.8.126","request-id":"18b44a21-d436-4452-b2f3-8e2e61af9849","duration":0.237993,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:15:06Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:15:15Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.25.163","request-id":"7919aba0-26b9-487c-beb4-d53039130bdb","duration":0.257293,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:15:16Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:15:25Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.47.110","request-id":"05daf7da-3aa4-4991-ba17-fe3faa920728","duration":0.214819,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:15:26Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:15:35Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.19.198","request-id":"795ed33b-d2d8-4cf6-be29-a859dbe20a62","duration":0.215905,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:15:36Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:15:46Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.29.120","request-id":"da121b5f-0455-4f91-b7e7-f070bb1230d2","duration":0.227903,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:15:46Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:15:56Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.40.4","request-id":"4efe8733-0c31-4968-be12-c054c0796798","duration":0.214025,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:15:56Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:16:06Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.47.110","request-id":"a0d7c1b2-58f9-45cf-af23-a543bf52348e","duration":0.206795,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:16:06Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:16:16Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.13.146","request-id":"be90862c-48a0-4d9c-b462-21017adc3b93","duration":0.208612,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:16:16Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:16:26Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.25.163","request-id":"9bd74650-3dc8-42aa-84f9-358822321c37","duration":0.201519,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:16:26Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:16:36Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.2.171","request-id":"ba64f65c-9d60-4966-b269-941905ffee4f","duration":0.228688,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:16:36Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:16:46Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.19.198","request-id":"d6df6214-2690-405a-832d-12ad997f3b53","duration":0.228857,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:16:46Z","message":"http-request"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.40.4","request-id":"e19cce9d-6923-4838-b34a-60256aee53e5","duration":0.237447,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:16:54Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:16:56Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.29.120","request-id":"2fa67e20-2888-4854-8580-2e32ee7c2bac","duration":0.228664,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:16:56Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:17:06Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}
{"level":"info","service":"envoy","upstream-cluster":"","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","user-agent":"cert-manager/v1.7.0 (clean)","referer":"http://monitoring.ops.dev.sw.io/.well-known/acme-challenge/4cXA2f7fNTV1pvDzHkgzovRHySKcvDoqH5NjiuNDNt0","forwarded-for":"10.0.29.79","request-id":"33b9c166-6f3c-49b3-b6e5-6ae95725051a","duration":0.251202,"size":0,"response-code":404,"response-code-details":"route_not_found","time":"2022-03-28T17:17:06Z","message":"http-request"}
{"level":"error","time":"2022-03-28T17:17:16Z","msg":"looking up info for HTTP challenge","service":"autocert","host":"monitoring.ops.dev.sw.io","error":"no information found to solve challenge for identifier: monitoring.ops.dev.sw.io"}

Lots of 404s around cert-manager, but it seems it’s not finding the root path either.

You should not have autocert running in Kubernetes; it competes with cert-manager. Could you check your global Pomerium configuration?
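For example, to see what the rendered config actually contains (this assumes the chart writes it to a Secret named pomerium in the release namespace, here tools):

kubectl -n tools get secret/pomerium -o jsonpath='{.data.config\.yaml}' | base64 -d | grep -i autocert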

Here is my global Pomerium configuration. I tried to disable autocert by setting extraEnv.AUTOCERT to false, but maybe there is something else I need to do to disable it?

authenticate:
  idp:
    provider: "google"
    clientID: ${client_id}
    clientSecret: ${client_secret}
    serviceAccount: ${service_account}
  existingTLSSecret: pomerium-tls
  ingress:
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt
      ingress.pomerium.io/service_proxy_upstream: "true"
    tls:
      secretName: authenticate-tools-tls

forwardAuth:
  enabled: false

ingress:
  enabled: false

ingressController:
  enabled: true
  namespaces: [ "tools", "monitoring" ]

config:
  rootDomain: ops.dev.sw.io
  # routes under this wildcard domain are handled by pomerium
  existingCASecret: pomerium-tls
  generateTLS: false
  insecure: false

proxy:
  existingTLSSecret: pomerium-tls

extraEnv:
  AUTOCERT: false
#  LOG_LEVEL: debug
#  POMERIUM_DEBUG: true

databroker:
  existingTLSSecret: pomerium-tls
  storage:
    connectionString: rediss://pomerium-redis-master.tools.svc.cluster.local
    type: redis
    tlsSkipVerify: true
    clientTLS:
      existingSecretName: pomerium-redis-tls
      existingCASecretKey: ca.crt

authorize:
  existingTLSSecret: pomerium-tls

redis:
  enabled: true
  master:
    disableCommands: [ ]
  auth:
    enabled: false
  usePassword: false
  generateTLS: false
  tls:
    enabled: true
    certificateSecret: pomerium-redis-tls

Update: I added autocert: false to my config and I am no longer getting the autocert errors; however, I am still getting the 404s.

The error is that the route does not exist, meaning the Ingress was not reconciled by the ingress controller into Pomerium.
Could you try recreating or renaming the problematic Ingress and observing the events via kubectl describe? You should eventually see either some error or an "updated pomerium configuration" event, which means it was successfully synced with core Pomerium.
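For example (names and namespace taken from your earlier output):

kubectl -n monitoring describe ingress prometheus-stack-1-grafana
kubectl -n monitoring get events --watch \
  --field-selector involvedObject.kind=Ingress,involvedObject.name=prometheus-stack-1-grafana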

I tried deleting the ingress and renaming it and got these results:

Events

9s          Normal    Issuing                  certificate/grafana-general-tls                             Issuing certificate as Secret does not exist
9s          Normal    Generated                certificate/grafana-general-tls                             Stored new private key in temporary Secret resource "grafana-general-tls-ds4b8"
9s          Normal    Requested                certificate/grafana-general-tls                             Created new CertificateRequest resource "grafana-general-tls-6lqx2"
9s          Normal    CreateCertificate        ingress/prometheus-stack-1-grafana-rename                   Successfully created Certificate "grafana-general-tls"

However, the certificate still shows READY as False:

kubectl get certs
NAME                     READY   SECRET                   AGE
grafana-general-tls      False   grafana-general-tls      45s

Logs from the ingress controller

{"level":"error","ts":1648495585.2536097,"logger":"controller.ingress","msg":"obtaining ingress related resources","reconciler group":"networking.k8s.io","reconciler kind":"Ingress","name":"prometheus-stack-1-grafana-rename","namespace":"monitoring","deps":[{"Kind":"Secret","Namespace":"monitoring","Name":"grafana-general-tls"}],"error":"tls: get secret monitoring/grafana-general-tls: Secret \"grafana-general-tls\" not found"}
{"level":"error","ts":1648495585.2537024,"logger":"controller.ingress","msg":"Reconciler error","reconciler group":"networking.k8s.io","reconciler kind":"Ingress","name":"prometheus-stack-1-grafana-rename","namespace":"monitoring","error":"fetch ingress related resources: tls: get secret monitoring/grafana-general-tls: Secret \"grafana-general-tls\" not found"}
{"level":"info","ts":1648495588.786489,"logger":"controller.ingress","msg":"use of deprecated annotation kubernetes.io/ingress.class, please use spec.ingressClassName instead","reconciler group":"networking.k8s.io","reconciler kind":"Ingress","name":"cm-acme-http-solver-brm8v","namespace":"monitoring"}
{"level":"info","ts":1648495588.7865179,"logger":"controller.ingress","msg":"use of deprecated annotation kubernetes.io/ingress.class, please use spec.ingressClassName instead","reconciler group":"networking.k8s.io","reconciler kind":"Ingress","name":"cm-acme-http-solver-brm8v","namespace":"monitoring"}
{"level":"error","ts":1648495588.78653,"logger":"controller.ingress","msg":"obtaining ingress related resources","reconciler group":"networking.k8s.io","reconciler kind":"Ingress","name":"cm-acme-http-solver-brm8v","namespace":"monitoring","deps":[],"error":"tls: spec.TLS.secretName was empty, could not get default cert from ingressClass: default cert secret name: annotation ingress.pomerium.io/default-cert-secret is missing"}
{"level":"error","ts":1648495588.7865624,"logger":"controller.ingress","msg":"Reconciler error","reconciler group":"networking.k8s.io","reconciler kind":"Ingress","name":"cm-acme-http-solver-brm8v","namespace":"monitoring","error":"fetch ingress related resources: tls: spec.TLS.secretName was empty, could not get default cert from ingressClass: default cert secret name: annotation ingress.pomerium.io/default-cert-secret is missing"}

Aha, so I had closed Authenticate service throws 404s in version 17.0 · Issue #172 · pomerium/ingress-controller · GitHub because I couldn't reproduce it, but now I'm realizing this must be the same issue. Downgrading to v0.16.1, I get the successful redirect and no 404s.

I also realized that the Pomerium authenticate ingress is in a different namespace than the one I am trying to create this service in, so this could very well be an issue specific to v0.17.0.

Pomerium-authenticate ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-pomerium
    ingress.pomerium.io/allow_public_unauthenticated_access: "true"
    ingress.pomerium.io/secure_upstream: "true"
    ingress.pomerium.io/service_proxy_upstream: "true"
  labels:
    app.kubernetes.io/instance: pomerium
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/name: pomerium
    helm.sh/chart: pomerium-31.0.0
  name: pomerium-authenticate
  namespace: tools
spec:
  ingressClassName: pomerium
  rules:
  - host: authenticate.ops.dev.sw.io
    http:
      paths:
      - backend:
          service:
            name: pomerium-authenticate
            port:
              name: https
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - authenticate.ops.dev.sw.io
    secretName: authenticate-tools-tls

And the target ingress

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-pomerium
    ingress.pomerium.io/pass_identity_headers: "true"
    ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"sw.com"}}]}}]'
  name: prometheus-stack-1-grafana-rename
  namespace: monitoring
spec:
  ingressClassName: pomerium
  rules:
  - host: monitoring.ops.dev.sw.io
    http:
      paths:
      - backend:
          service:
            name: prometheus-stack-1-grafana
            port:
              number: 80
        path: /
        pathType: Prefix
  tls:
  - hosts:
    - monitoring.ops.dev.sw.io
    secretName: grafana-general-tls

I am reopening Authenticate service throws 404s in version 17.0 · Issue #172 · pomerium/ingress-controller · GitHub since I can now reproduce this again.

The ingress controller won't apply a partial configuration; it waits until all dependencies are met.
That status should be reflected in the events, i.e. check kubectl describe ingress/prometheus-stack-1-grafana.

As you can see, your certificate is not ready, meaning the TLS secret your Ingress refers to (grafana-general-tls) does not exist.

You need to follow the entire chain of cert-manager CRDs to pinpoint the problem. Did you consider using a DNS-01 challenge issuer instead?
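For reference, a minimal DNS-01 ClusterIssuer sketch; Route53 is assumed because this is EKS, the email, name, and region are placeholders, and cert-manager still needs IAM access to the hosted zone:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-pomerium-dns
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com               # placeholder
    privateKeySecretRef:
      name: letsencrypt-pomerium-dns-key
    solvers:
      - dns01:
          route53:
            region: us-east-1              # placeholder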

I upgraded to v0.17.0 and now I'm able to authenticate, since the certificate was already obtained while I was on the downgraded version, but I am getting an upstream disconnect instead. Progress!

I tried adding /.pomerium to the URL and got redirected to https://pomerium-authenticate.tools.svc.cluster.local/.pomerium/, which seems like very strange behavior. Could this indicate that my internal authenticate service URL is being used as my external authenticate service URL somewhere?

Logs:

pomerium-proxy-56cbf57cb7-lrdb2 pomerium {"level":"info","service":"envoy","upstream-cluster":"tools-pomerium-authenticate-authenticate-ops-dev-sw-io-cd9208a432431ad6","method":"GET","authority":"pomerium-authenticate.tools.svc.cluster.local","path":"/.pomerium/sign_in","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.47.110","request-id":"56bafc37-dd18-4879-b206-d147356c9746","duration":7.783832,"size":839,"response-code":302,"response-code-details":"via_upstream","time":"2022-03-29T13:08:24Z","message":"http-request"}
pomerium-proxy-56cbf57cb7-lrdb2 pomerium {"level":"info","service":"envoy","upstream-cluster":"pomerium-control-plane-http","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/.pomerium/callback/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.10.189","request-id":"fec73a4a-43a7-4a47-a030-2a937c22fce4","duration":0.71755,"size":69,"response-code":302,"response-code-details":"via_upstream","time":"2022-03-29T13:08:24Z","message":"http-request"}
pomerium-proxy-56cbf57cb7-lrdb2 pomerium {"level":"info","service":"envoy","upstream-cluster":"monitoring-prometheus-stack-1-grafana-monitoring-ops-dev-sw-io-70653e732c763e7c","method":"GET","authority":"monitoring.ops.dev.sw.io","path":"/","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"","forwarded-for":"10.0.10.189","request-id":"1893f457-96b2-4312-868d-7ddd6b690c74","duration":5.480429,"size":91,"response-code":503,"response-code-details":"upstream_reset_before_response_started{connection_failure}","time":"2022-03-29T13:08:24Z","message":"http-request"}

I already did a FLUSHALL on Redis when I upgraded the ingress controller, so it's not the shared secret.

If I downgrade again from here, I no longer have the upstream disconnect or the 404s but instead get the Grafana login page.

I found something very interesting related to that internal authenticate service URL:

pomerium-authenticate-56d57769b-gfxht pomerium {"level":"info","service":"envoy","upstream-cluster":"pomerium-control-plane-http","method":"GET","authority":"pomerium-authenticate.tools.svc.cluster.local","path":"/oauth2/callback","user-agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/98.0.4758.80 Safari/537.36","referer":"https://accounts.youtube.com/","forwarded-for":"10.0.40.4,10.0.11.227","request-id":"516944a7-403c-4bfd-9b69-59693e367dfd","duration":915.016602,"size":232,"response-code":302,"response-code-details":"via_upstream","time":"2022-03-29T18:21:23Z","message":"http-request"}

Notice that it redirects to the path /oauth2/callback on the internal service name. This should probably be going to the external-facing authenticate URL.

As a side note, I am no longer having issues with the cert; it has been issued and the domain is being served properly over HTTPS.


It feels like you've got authenticate_url set to an internal URL. Could you please dump your current configuration using the following command:

kubectl get secret/pomerium -o jsonpath='{.data.config\.yaml}' | base64 -d

Got it! I had the following annotation on my ingress:

ingress.pomerium.io/secure_upstream: "true"

But my upstream is served over port 80 (plain HTTP), so secure_upstream probably does not make sense in this case. By unsetting secure_upstream, I no longer get the upstream disconnect, in v0.17.0 as well as v0.16.1.
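For completeness, this is the annotation set that ended up working for the Grafana Ingress, with secure_upstream left out because the Grafana service is plain HTTP behind port 80:

annotations:
  cert-manager.io/cluster-issuer: letsencrypt-pomerium
  ingress.pomerium.io/policy: '[{"allow":{"and":[{"domain":{"is":"sw.com"}}]}}]'
  ingress.pomerium.io/pass_identity_headers: "true"
  # ingress.pomerium.io/secure_upstream: "true"   # only for upstreams that serve HTTPS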

For posterity, here is my config.yaml as requested:

autocert: false
dns_lookup_family: V4_ONLY
address: :443
grpc_address: :443
certificate_authority_file: "/pomerium/ca/ca.crt"
certificates:
authenticate_service_url: https://authenticate.ops.dev.sw.io
authorize_service_url: https://pomerium-authorize.tools.svc.cluster.local
databroker_service_url: https://pomerium-databroker.tools.svc.cluster.local
idp_provider: google
idp_scopes:
idp_provider_url:
idp_client_id: ${client_id}
idp_client_secret: ${client_secret}
idp_service_account: ${service_account}
databroker_storage_tls_skip_verify: true
routes:

Thanks so much @denis for your help!