# Helm Deployment
A Helm chart for Kubernetes deployment lives in `charts/pgsense/` in the source repository.
## Layout
The chart renders the standard set of resources for a long-running service:
- A workload resource running `ghcr.io/mrehbr/pgsense-rs:<tag>`.
- A `ConfigMap` holding the rendered `config.toml` and `rules.toml`.
- `Secret` mounts for any file-backed credentials.
- An optional `Service` exposing the health/metrics port.
- An optional `ServiceMonitor` for Prometheus Operator scraping.
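To see exactly which of these resources your values produce, the chart can be rendered locally with `helm template` before anything touches the cluster (a standard Helm workflow; the output depends on your `my-values.yaml`):

```shell
# Render the chart locally and summarize the resource kinds it emits.
# Requires the repository checked out and Helm installed; no cluster needed.
helm template pgsense ./charts/pgsense \
  --namespace pgsense \
  --values my-values.yaml \
  | grep '^kind:' | sort | uniq -c
```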
## Install
```shell
git clone https://github.com/MrEhbr/pgsense-rs
cd pgsense-rs
helm install pgsense ./charts/pgsense \
  --namespace pgsense \
  --create-namespace \
  --values my-values.yaml
```
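After installing, a quick way to confirm the release came up healthy (standard `helm`/`kubectl` commands; the label selector is an assumption based on common chart conventions, so check the chart's rendered labels if it matches nothing):

```shell
# Check the release status and pod health in the install namespace.
helm status pgsense --namespace pgsense
kubectl get pods --namespace pgsense
# Tail logs from the workload; the app.kubernetes.io/name label is assumed.
kubectl logs --namespace pgsense -l app.kubernetes.io/name=pgsense --tail=50
```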
## Values
The chart's `values.yaml` mirrors the structure of `config.toml`. See `charts/pgsense/values.yaml` in the repository for the full set of options and defaults; the chart is the source of truth, and this page only sketches the shape.
A minimal `my-values.yaml`:
```yaml
image:
  # Defaults to chart appVersion; override only if you need a specific tag.
  tag: ""

databases:
  - name: app
    host: "primary.db.svc.cluster.local"
    port: 5432
    dbname: "app"
    username: "pgsense"
    publication: "pgsense_pub"
    # Reference an existing Kubernetes Secret containing the password.
    # The chart mounts it as a file and wires the config to read from it.
    passwordSecret:
      name: "pgsense-db-credentials"
      key: "password"

pipeline:
  store: postgres

alerts:
  log:
    enabled: true

server:
  enabled: true
  port: 9090

serviceMonitor:
  enabled: true
```
## Secrets
> [!IMPORTANT]
> Database and Postgres-alert passwords are supplied via `passwordSecret: { name, key }`, referencing an existing Kubernetes `Secret`. The chart mounts the secret as a file and points the generated `config.toml` at it. This avoids putting plaintext credentials in the `ConfigMap` and avoids env-var inheritance leaking secrets into child processes.
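The referenced `Secret` has to exist before the pod starts. One way to create it (the secret name and key here match the `passwordSecret` example above; `kubectl create secret generic` is the standard mechanism):

```shell
# Create the Secret referenced by passwordSecret in my-values.yaml.
# Reading the password from a prompt keeps it out of shell history.
read -rs -p "Database password: " PGSENSE_DB_PASSWORD; echo
kubectl create secret generic pgsense-db-credentials \
  --namespace pgsense \
  --from-literal=password="$PGSENSE_DB_PASSWORD"
```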
## Observability
The chart exposes three independent observability surfaces, all disabled by default:
```yaml
service:
  type: ClusterIP    # Service in front of the metrics/health port
  port: 9090

serviceMonitor:
  enabled: false     # Prometheus Operator ServiceMonitor for /metrics
  interval: 30s
  labels: {}         # extra labels (typically the Prometheus selector label)

grafanaDashboard:
  enabled: false     # ConfigMap carrying a sidecar-discoverable dashboard
  labels: {}         # selector labels for the Grafana dashboard sidecar
  annotations: {}

telemetry:
  enabled: false     # OTLP tracing exporter
  endpoint: "http://localhost:4317"
  protocol: grpc     # "grpc" | "http"
  serviceName: pgsense
  sampleRate: 1.0

profiling:
  enabled: false     # per-rule + per-phase scan histograms
```
`service` is required for `serviceMonitor` to scrape the pod. The `telemetry` and `profiling` sections render straight into the same `config.toml` blocks documented in Telemetry and Profiling.
> [!NOTE]
> `serviceMonitor.labels` typically needs to match your Prometheus Operator's `serviceMonitorSelector`. `grafanaDashboard.labels` likewise needs to match the Grafana sidecar's `dashboardLabel`/`folderAnnotation` configuration. Without those, the objects are created but never picked up.
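As an illustration, deployments based on the common kube-prometheus-stack chart select `ServiceMonitor` objects by a `release` label, so a matching configuration might look like the following (the `release: prometheus` and `grafana_dashboard: "1"` values are assumptions; check your own Prometheus `serviceMonitorSelector` and Grafana sidecar settings):

```yaml
serviceMonitor:
  enabled: true
  labels:
    release: prometheus     # must match the Prometheus serviceMonitorSelector

grafanaDashboard:
  enabled: true
  labels:
    grafana_dashboard: "1"  # must match the Grafana sidecar's dashboard label
```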
## Upgrade
```shell
helm upgrade pgsense ./charts/pgsense \
  --namespace pgsense \
  --values my-values.yaml
```
A `helm upgrade` triggers a rolling restart.
> [!CAUTION]
> With `pipeline.store = memory` (the default), events written during the restart window are missed because the new pod starts from PostgreSQL's current LSN. Use `store = postgres` in any environment where missed findings would matter.