What happened?
I am attempting to migrate from buzzfeed-sso (which is an unmaintained project) to the Pomerium proxy in my Kubernetes clusters.
I have it all working except for one problem.
I’m using the pomerium-proxy to protect the Prometheus GUI.
In Prometheus 3 they’ve added Server-Sent Events, and the GUI connects to the `/api/v1/notifications/live` URI path for these.
These responses have a content type of `text/event-stream` and are persistent connections; for them to work, the data needs to be flushed promptly rather than buffered for too long.
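For reference, an SSE response is a stream of newline-delimited events on a single long-lived connection; if a proxy buffers the body instead of flushing each event through, the GUI sees nothing until the buffer fills. Roughly (payload illustrative):

```
HTTP/1.1 200 OK
Content-Type: text/event-stream

data: {"notifications": []}

data: {"notifications": []}
```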
In buzzfeed-sso I used a `flush_interval` setting to enable this on its route handling Prometheus.
I haven’t found an equivalent setting in Pomerium. I tried the `idle_timeout` setting, but that closes the connection, resulting in an error appearing in the Prometheus GUI.
Is there a setting for flushing connections regularly that I am missing?
What did you expect to happen?
I’d like the `/api/v1/notifications/live` endpoint, which uses Server-Sent Events, to work in the Prometheus GUI when traffic to Prometheus is routed via the Pomerium proxy; instead it shows an error.
How’d it happen?
I added a route for Prometheus like so (note this is templated; the `{{ }}` part is expanded before deployment):
- from: https://prometheus.{{ requiredEnv "CLUSTER_DOMAIN" }}
  to: http://kube-prometheus-stack-prometheus.prometheus.svc.cluster.local:9090
The above also has a `policy` setting, but I’ve left it out since it isn’t relevant to the issue.
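For completeness, the `idle_timeout` variant I mentioned above looked roughly like this (the `30s` value is illustrative, not the exact one I used); it cuts the SSE connection after the timeout elapses rather than flushing it:

```yaml
- from: https://prometheus.{{ requiredEnv "CLUSTER_DOMAIN" }}
  to: http://kube-prometheus-stack-prometheus.prometheus.svc.cluster.local:9090
  # Closes idle streams after 30s of no data, which is what
  # produces the error in the Prometheus GUI.
  idle_timeout: 30s
```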
What’s your environment like?
- Pomerium version (retrieve with `pomerium --version`): v0.29.2
- Server Operating System/Architecture/Cloud:
Self-provisioned Kubernetes 1.31.8 cluster in AWS.
The Ingress for Prometheus points to the pomerium-proxy service so that Pomerium can take care of Authentication and Authorization tasks before routing traffic through to the Prometheus service in the cluster.
What’s your config.yaml?
The test cluster I used has since been deleted, so I don’t have this on hand right now.
I can supply it later if necessary.
What did you see in the logs?
There are no errors in the logs.
Additional context
nil