What happened?
I see no (apparent) way to test a changed configuration before restarting pomerium.
I tried running a second instance of the service on a different port, to see whether it would start without errors.
The existing instance creates a socket file and some other files under /tmp. I’m guessing these block a second instance from starting, unless there’s some way to override their paths in the config…?
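For reference, something like this lists what the running instance keeps under /tmp (the pomerium-envoy* directory name is just what I observe on my host, not documented behaviour):

#!/usr/bin/env python3
# List what the running pomerium instance keeps under /tmp.
# The pomerium-envoy* prefix is only what I see on my host.
from pathlib import Path

for d in sorted(Path("/tmp").glob("pomerium-envoy*")):
    for p in sorted(d.rglob("*")):
        print(p, "(unix socket)" if p.is_socket() else "")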
What did you expect to happen?
Either a command line option (e.g. -verify_config) or the ability to interactively start pomerium on a different TCP port to verify that the config is OK.
How’d it happen?
- Copied working config.yaml to configtest.yaml
- Changed address to 0.0.0.0:4443
- Ran ‘pomerium -config configtest.yaml’
What’s your environment like?
- Pomerium version (retrieve with pomerium --version): 0.20.0-1668445494+9413123c
- Server Operating System/Architecture/Cloud: RHEL 8.7
What’s your config.yaml?
# This is the test config file. It's similar to the original config, except that 443 has been changed to 4443
address: 0.0.0.0:4443
authenticate_service_url: https://auth.my_internal_service.com
# https://www.pomerium.com/docs/reference/certificates.html
autocert: false
certificates:
  - cert: /etc/pki/http/my_configured_cert.crt
    key: /etc/pki/http/my_configured_cert.key
shared_secret: <generated secret>
cookie_secret: <another generated secret>
idp_provider: oidc
idp_provider_url: https://sso.my_internal_service.com/auth/realms/pomerium
idp_client_id: pomerium-client-001
idp_client_secret: <sso client secret>
routes:
  - from: https://test.my_internal_service.com
    to: https://internal_test.my_internal_service.com
    tls_skip_verify: true
    policy:
      - allow:
          or:
            - domain:
                is: my_internal_domain.com
What did you see in the logs?
{"service":"envoy","name":"envoy","time":"2023-01-10T12:47:13+01:00","message":"unable to bind domain socket with base_id=86667480, id=0, errno=98 (see --base-id option)"}
{"level":"info","address":"127.0.0.1:34647","time":"2023-01-10T12:47:13+01:00","message":"grpc: dialing"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"time":"2023-01-10T12:47:13+01:00","message":"enabled authorize service"}
{"level":"info","Algorithm":"ES256","KeyID":"<REDACTED>","Public Key":{"use":"sig","kty":"EC","kid":"<REDACTED>","crv":"P-256","alg":"ES256","x":"<REDACTED>","y":"<REDACTED>"},"time":"2023-01-10T12:47:13+01:00","message":"authorize: signing key"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"time":"2023-01-10T12:47:13+01:00","message":"enabled databroker service"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"address":"127.0.0.1:34647","time":"2023-01-10T12:47:13+01:00","message":"grpc: dialing"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"time":"2023-01-10T12:47:13+01:00","message":"enabled proxy service"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"addr":"127.0.0.1:40931","time":"2023-01-10T12:47:13+01:00","message":"starting control-plane gRPC server"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"addr":"127.0.0.1:33583","time":"2023-01-10T12:47:13+01:00","message":"starting control-plane http server"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"addr":"127.0.0.1:42417","time":"2023-01-10T12:47:13+01:00","message":"starting control-plane debug server"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"addr":"127.0.0.1:39099","time":"2023-01-10T12:47:13+01:00","message":"starting control-plane metrics server"}
{"level":"info","name":"identity_manager","duration":30000,"time":"2023-01-10T12:47:13+01:00","message":"acquire lease"}
{"level":"info","time":"2023-01-10T12:47:13+01:00","message":"using in-memory store"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"service":"identity_manager","syncer_id":"identity_manager","syncer_type":"","time":"2023-01-10T12:47:13+01:00","message":"initial sync"}
{"level":"info","type":"","time":"2023-01-10T12:47:13+01:00","message":"sync latest"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"service":"identity_manager","syncer_id":"identity_manager","syncer_type":"","time":"2023-01-10T12:47:13+01:00","message":"listening for updates"}
{"level":"info","config_file_source":"/etc/pomerium/configtest.yaml","bootstrap":true,"service":"identity_manager","sessions":0,"users":0,"time":"2023-01-10T12:47:13+01:00","message":"initial sync complete"}
{"level":"info","server_version":16683166791686608810,"record_version":0,"time":"2023-01-10T12:47:13+01:00","message":"sync"}
{"level":"fatal","pid":1059873,"time":"2023-01-10T12:47:14+01:00","message":"envoy: subprocess exited"}
Additional context
We use an external configuration management system that edits config.yaml and restarts the pomerium service if the YAML contents have changed. I would like to be able to verify that no breaking changes have been introduced to the file, so that we can either restart the service or roll back the changes.
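As a stop-gap, the only pre-flight check I can run from the configuration management hook today is a plain YAML parse plus a check for the handful of top-level keys our deployment uses. A minimal sketch (the key list is just taken from our own config.yaml, not from any schema pomerium publishes, and it obviously can’t catch bad certificates, unknown option names or broken routes the way a real verify option could):

#!/usr/bin/env python3
# Stop-gap pre-flight check run before the config management system restarts
# pomerium. It only proves the file is valid YAML and contains the top-level
# keys our deployment happens to use; it cannot validate option values the
# way pomerium itself could.
import sys
import yaml  # PyYAML

# Assumption: these are simply the keys from our own config.yaml, not a published schema.
REQUIRED_KEYS = {
    "address",
    "authenticate_service_url",
    "certificates",
    "shared_secret",
    "cookie_secret",
    "idp_provider",
    "routes",
}

def main(path):
    try:
        with open(path) as f:
            cfg = yaml.safe_load(f)
    except (OSError, yaml.YAMLError) as exc:
        print(f"config check failed: {exc}", file=sys.stderr)
        return 1
    if not isinstance(cfg, dict):
        print("config check failed: top level is not a mapping", file=sys.stderr)
        return 1
    missing = REQUIRED_KEYS - cfg.keys()
    if missing:
        print(f"config check failed: missing keys: {sorted(missing)}", file=sys.stderr)
        return 1
    return 0

if __name__ == "__main__":
    sys.exit(main(sys.argv[1] if len(sys.argv) > 1 else "/etc/pomerium/config.yaml"))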
I don’t know whether changing the socket path alone is enough to run two instances of the service on one server. I can see the relevant settings in /tmp/pomerium-envoyxxxxxxxx/envoy-config.yaml, but I’ve been unable to override them with the bootstrap options (Envoy Bootstrap Options | Pomerium).
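In case it helps, this is the kind of thing I’ve been poking at to figure out which paths two instances would fight over: walk the generated envoy-config.yaml and print every value stored under a key literally named "path". The file name and location are simply what I see on my host; I’m not assuming anything else about the envoy bootstrap schema.

#!/usr/bin/env python3
# Print path-like values from the envoy bootstrap config that pomerium
# generates under /tmp, to see which of them two instances would collide on.
from pathlib import Path
import yaml  # PyYAML

def walk(node, trail=""):
    if isinstance(node, dict):
        for k, v in node.items():
            if k == "path" and isinstance(v, str):
                print(f"{trail}/{k}: {v}")
            else:
                walk(v, f"{trail}/{k}")
    elif isinstance(node, list):
        for i, v in enumerate(node):
            walk(v, f"{trail}[{i}]")

for cfg in sorted(Path("/tmp").glob("pomerium-envoy*/envoy-config.yaml")):
    print(f"== {cfg}")
    with open(cfg) as f:
        walk(yaml.safe_load(f))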