How to use Pomerium in an Istio mesh with mTLS mode STRICT?

What happened?

I have Pomerium v0.17.2 in an RKE2 cluster with Istio installed. All pods are happily talking to each other. I turn on Istio injection in PERMISSIVE mode. Everyone is still happy. Connect to Redis - works like a charm. Turn the Redis namespace to STRICT mTLS - still works. The intent is for Istio to provide ingress and Pomerium to act as the IAP, so no ingress controller is in use.

Set the databroker to use in-memory storage and turn the pomerium namespace to STRICT mTLS: now the Pomerium pods cannot connect to the databroker.
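For reference, setting a namespace to STRICT is done with a PeerAuthentication resource. A minimal sketch, assuming the Pomerium pods live in a namespace called pomerium:

```yaml
# Sketch: enforce mTLS for every workload in the "pomerium" namespace.
# Adjust the namespace to match your install.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: pomerium
spec:
  mtls:
    mode: STRICT
```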

I set annotations for the databroker:

```yaml
deployment:
  podAnnotations:
    inject.istio.io/templates: grpc-agent
    proxy.istio.io/config: '{"holdApplicationUntilProxyStarts": true}'
```

Things start working again. Connect to Redis - trouble again.

What did you expect to happen?

Pods should get to the databroker fine.

How’d it happen?

  1. Enabled Istio sidecar injection (PERMISSIVE mode) - everything worked.
  2. Applied STRICT mTLS to the pomerium namespace.
  3. Saw the Pomerium pods fail to reach the databroker with "upstream connect error" gRPC errors.

What’s your environment like?

  • Pomerium version (retrieve with pomerium --version): v0.17.2
  • Server Operating System/Architecture/Cloud: my own RKE2 cluster running on RHEL 8.4 nodes in AWS

What’s your config.yaml?

```yaml
autocert: false
dns_lookup_family: V4_ONLY
address: :80
grpc_address: :80
authenticate_service_url: https://authenticate.<IP>.nip.io
authorize_service_url: http://pomerium-authorize.pomerium.svc.cluster.local
databroker_service_url: http://pomerium-databroker.pomerium.svc.cluster.local:80
idp_provider: oidc
idp_scopes: 
idp_provider_url: https://<IDP>.<IP>.nip.io/realms/<REALM>
insecure_server: true
grpc_insecure: true
idp_client_id: <REDACTED>
idp_client_secret: <REDACTED>
idp_service_account: <REDACTED>
routes:
  - allow_public_unauthenticated_access: true
    from: https://authenticate.<IP>.nip.io
    to: http://pomerium-authenticate.pomerium.svc.cluster.local
  - allow_any_authenticated_user: true
    from: https://httpbin.ip-<IP>.nip.io
    preserve_host_header: true
    to: https://httpbin.org
```

What did you see in the logs?

```logs
{"level":"error","error":"rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection termination","time":"2022-05-27T20:58:43Z","message":"controlplane: error storing configuration event, retrying"}
{"level":"info","syncer_id":"databroker","syncer_type":"type.googleapis.com/pomerium.config.Config","time":"2022-05-27T20:59:01Z","message":"initial sync"}
{"level":"error","syncer_id":"databroker","syncer_type":"type.googleapis.com/pomerium.config.Config","error":"rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection termination","time":"2022-05-27T20:59:01Z","message":"error during initial sync"}
{"level":"error","syncer_id":"databroker","syncer_type":"type.googleapis.com/pomerium.config.Config","error":"rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection termination","time":"2022-05-27T20:59:01Z","message":"sync"}
{"level":"error","error":"rpc error: code = Unavailable desc = upstream connect error or disconnect/reset before headers. reset reason: connection termination","time":"2022-05-27T20:59:22Z","message":"controlplane: error storing configuration event, retrying"}
[2022-05-27T21:00:08.770Z] "POST /databroker.DataBrokerService/SetOptions HTTP/2" 200 UC upstream_reset_before_response_started{connection_termination} - "-" 70 0 0 - "-" "grpc-go/1.44.1-dev" "Ki6yQkEWNuh4MjWS4yg5or" "127.0.0.1:36181" "10.42.1.236:80" PassthroughCluster 10.42.1.235:51008 10.42.1.236:80 10.42.1.235:49340 - allow_any
[2022-05-27T21:00:45.988Z] "POST /databroker.DataBrokerService/SyncLatest HTTP/2" 200 UC upstream_reset_before_response_started{connection_termination} - "-" 49 0 0 - "-" "grpc-go/1.44.1-dev" "9vbj6B5yXBbkVZSvjFTKRG" "127.0.0.1:36181" "10.42.1.236:80" PassthroughCluster 10.42.1.235:51502 10.42.1.236:80 10.42.1.235:49318 - allow_any
[2022-05-27T21:00:49.224Z] "POST /databroker.DataBrokerService/SetOptions HTTP/2" 200 UC upstream_reset_before_response_started{connection_termination} - "-" 70 0 0 - "-" "grpc-go/1.44.1-dev" "srDZBVV4zZqYrBEgki4t7" "127.0.0.1:36181" "10.42.1.236:80" PassthroughCluster 10.42.1.235:51554 10.42.1.236:80 10.42.1.235:49340 - allow_any
```



To answer my own question: it's Istio's infamous port-name convention on the Service. So far I have seen port names tcp and https work; http and the incumbent grpc do not work in STRICT mTLS mode. Fortunately, the Helm chart already has an override for this value - just populate it.
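To illustrate, here is a sketch of what the databroker Service looks like with the port renamed. The metadata and selector labels are assumptions; only the port name tcp reflects what I actually verified:

```yaml
# Sketch: naming the port "tcp" makes Istio treat the traffic as opaque TCP
# instead of sniffing it as HTTP/gRPC, which is what broke under STRICT mTLS.
apiVersion: v1
kind: Service
metadata:
  name: pomerium-databroker   # matches databroker_service_url above
  namespace: pomerium
spec:
  selector:
    app.kubernetes.io/name: pomerium          # assumed label
    app.kubernetes.io/component: databroker   # assumed label
  ports:
    - name: tcp        # tcp and https worked; http and grpc did not
      port: 80
      targetPort: 80
      protocol: TCP
```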
