Pomerium-databroker pod CrashLoopBackOff

Hi experts, sorry to bother you.

What happened?

After the Helm installation, the pomerium-databroker pod and the Redis pods fail to start.

What did you expect to happen?

All pods should reach the Running state after installation.

How’d it happen?

The pod statuses are shown below:

databroker: (screenshot)

redis pod: (screenshot)

What’s your environment like?

  • Pomerium version (retrieve with pomerium --version):
    Pomerium v0.17.3 image

  • Server Operating System/Architecture/Cloud:
CentOS 7

What’s your config.yaml?

My values.yaml is shown below:

authenticate:
  ingress:
    tls:
      secretName: pomerium-tls
  existingTLSSecret: pomerium-tls
  idp:
    provider: "okta"
    clientID: "0oa1xxxxxxxxxxxxxx4Z697"
    clientSecret: "b6_QkzxxxxxxxxxxxxxxxxxxxxxxxxxxsV"
    serviceAccount: "ewogICJhcGlfxxxxxxxxxxxxxxxxxxxxxTWVHdnViWFNRxxxxxxxxxxxxxIKfQo="
  proxied: false

proxy:
  existingTLSSecret: pomerium-tls

databroker:
  existingTLSSecret: pomerium-tls
  storage:
    connectionString: redis://pomerium-redis-master.pomerium.svc.cluster.local
    type: redis
    clientTLS:
      existingSecretName: pomerium-tls
      existingCASecretKey: ca.crt

authorize:
  existingTLSSecret: pomerium-tls

redis:
  enabled: true
  auth:
    enabled: false
  usePassword: false
  generateTLS: false
  tls:
    certificateSecret: pomerium-redis-tls

ingressController:
  enabled: true

ingress:
  enabled: false

config:
  rootDomain: localhost.pomerium.io
  existingCASecret: pomerium-tls
  generateTLS: false # On by default, disabled when cert-manager or another solution is in place.
# The policy block isn't required when using the Pomerium Ingress Controller, as routes are defined
# by the addition of Ingress Resources.
#  routes:
#      # This will be our testing app, to confirm that Pomerium is authenticating and routing traffic.
#    - from: https://authenticate.localhost.pomerium.io
#      to: https://pomerium-authenticate.pomerium.svc.cluster.local
#      preserve_host_header: true
#      allow_public_unauthenticated_access: true
#      policy:
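
A note on the databroker.storage block above (this is my assumption from Pomerium's docs, not something confirmed in this thread): when clientTLS is configured, the connection string typically uses the rediss:// scheme so the databroker actually negotiates TLS with Redis; with a plain redis:// scheme the TLS settings may go unused. A sketch of the TLS variant:

databroker:
  existingTLSSecret: pomerium-tls
  storage:
    type: redis
    # rediss:// (double "s") tells the databroker to connect over TLS
    connectionString: rediss://pomerium-redis-master.pomerium.svc.cluster.local
    clientTLS:
      existingSecretName: pomerium-tls
      existingCASecretKey: ca.crt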

Appreciate it!

The databroker depends on Redis.
Redis seems to be unable to start because a persistent volume claim cannot be fulfilled.
Please check the PVC documentation for your Kubernetes distribution and environment and adjust accordingly.
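
For example, a minimal sketch assuming the bundled Bitnami Redis subchart, which exposes persistence settings under master and replica (the StorageClass name here is an assumption; use one your cluster can actually provision, see kubectl get storageclass):

redis:
  master:
    persistence:
      storageClass: "standard"  # assumption: replace with a StorageClass available in your cluster
  replica:
    persistence:
      storageClass: "standard"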

Appreciate it, Denis!! I created 2 PVs; the YAML is shown below:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    path: /ifs/share_folder
    server: 192.168.31.227

Currently the PVCs are in Bound status, but the 2 Redis pods and the databroker pod are still in CrashLoopBackOff:

databroker issue: (screenshot)

redis-master: (screenshot)

redis-replicas: (screenshot)

Much appreciated!
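
A follow-up thought, since the PVCs are Bound but the pods still crash (this is an assumption, not confirmed from the logs above): with NFS-backed volumes, the Bitnami Redis containers run as a non-root user and can fail if the export's ownership doesn't allow writes. The subchart ships a volumePermissions toggle that runs an init container to fix ownership of the data directory; a minimal sketch:

redis:
  volumePermissions:
    enabled: true  # assumption: Bitnami subchart option that chowns the data dir via an init container

The databroker itself will keep crashing until Redis is reachable, so fixing Redis first should be the priority.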