# pgsense-rs rules
Inspect, test, and benchmark detection rules without starting the scanner.
## Subcommands

    pgsense-rs rules list  [-c CONFIG] [-r RULES]
    pgsense-rs rules test  [-c CONFIG] [-r RULES] --input <VALUE>
    pgsense-rs rules bench [-c CONFIG] [-r RULES] [INPUT_OPTIONS] [--iterations N] [--format table|json]
## Common flags

| Flag | Description |
|---|---|
| `-c, --config <FILE>` | Path to the configuration TOML. |
| `-r, --rules <FILE>` | Path to the rules TOML. Overrides `rules_file` from the config. |
| `--help` | Show subcommand help. |
## rules list
Prints every loaded rule with its ID, type, severity, category, and description. Use this to verify what’s actually active after editing the rules file.

    pgsense-rs rules list -r config/rules.toml
Output is a human-readable fixed-width table; pipe it through `grep` for ad-hoc filtering.
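For orientation, a rule entry might look something like the sketch below. The schema is not documented on this page: the field names `id`, `type`, `severity`, `category`, and `description` are inferred from the columns `rules list` prints, and `pattern` is purely an assumption.

```toml
# Hypothetical rules.toml entry -- field names inferred from the columns
# that `rules list` prints; the real schema may differ.
[[rules]]
id = "credit_card"
type = "regex"                # assumed rule type
severity = "high"
category = "financial"
description = "Payment card numbers"
pattern = '\b(?:\d[ -]?){13,16}\b'   # assumed field
```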
## rules test
Evaluates a single value against the loaded rule set and prints which rules match.

    pgsense-rs rules test -r config/rules.toml --input "4111111111111111"
    pgsense-rs rules test -r config/rules.toml --input "jane.doe@example.com"
Each match shows the rule ID, severity, category, and the masked finding. Useful for:
- Verifying a new pattern matches the right inputs.
- Reproducing a false positive locally before tuning the allowlist.
- Spot-checking what gets caught after editing rules.
The command does not connect to PostgreSQL — it operates entirely on the input value.
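The first example value above, `4111111111111111`, is a well-known test card number that passes the Luhn checksum. Credit-card detection rules commonly pair a digit pattern with this checksum to reject arbitrary 16-digit strings; whether pgsense-rs does so is an assumption, and the sketch below only illustrates the check itself.

```rust
// Luhn checksum validation -- an illustrative sketch, not pgsense-rs's
// actual matching logic.
fn luhn_valid(s: &str) -> bool {
    let digits: Vec<u32> = s.chars().filter_map(|c| c.to_digit(10)).collect();
    if digits.len() < 13 {
        return false; // shorter than any real card number
    }
    // From the rightmost digit, double every second digit and subtract 9
    // from any result above 9; the total must be divisible by 10.
    let sum: u32 = digits
        .iter()
        .rev()
        .enumerate()
        .map(|(i, &d)| {
            if i % 2 == 1 {
                let doubled = d * 2;
                if doubled > 9 { doubled - 9 } else { doubled }
            } else {
                d
            }
        })
        .sum();
    sum % 10 == 0
}

fn main() {
    println!("{}", luhn_valid("4111111111111111")); // true
    println!("{}", luhn_valid("4111111111111112")); // false
}
```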
## rules bench
Benchmarks each rule individually against a corpus of test values, plus the combined engine throughput.

    # Single value
    pgsense-rs rules bench -r config/rules.toml --input "test"

    # File of values, one per line
    pgsense-rs rules bench -r config/rules.toml --file inputs.txt

    # Generate 1000 random values
    pgsense-rs rules bench -r config/rules.toml --generate 1000

    # More iterations, JSON output
    pgsense-rs rules bench -r config/rules.toml --iterations 5000 --format json
| Flag | Description |
|---|---|
| `--input <VALUE>` | Single value to benchmark against. |
| `--file <PATH>` | File with one value per line. |
| `--generate <N>` | Generate N synthetic values mixing clean text and known patterns (default 100). |
| `--iterations <N>` | Iterations per rule (default 1000). |
| `--format <table\|json>` | Output format (default table). |
`--input`, `--file`, and `--generate` are mutually exclusive; pick one.
The output sorts rules by mean scan time, slowest first, and reports mean, p50, p95, and p99 per rule. Use it to spot rules that are disproportionately expensive.
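The per-rule statistics can be derived with a simple nearest-rank percentile over the recorded iteration timings. This is a sketch of that calculation, not the tool's actual implementation:

```rust
// Nearest-rank percentile over per-iteration scan timings (nanoseconds).
// Illustrates how mean/p50/p95/p99 summaries like bench's can be computed.
fn percentile(sorted: &[u64], p: f64) -> u64 {
    // Smallest value with at least p% of the samples at or below it.
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.saturating_sub(1).min(sorted.len() - 1)]
}

fn main() {
    // Stand-in timings; a real run would collect these per iteration.
    let mut timings: Vec<u64> = (1..=1000).collect();
    timings.sort_unstable();
    let mean = timings.iter().sum::<u64>() / timings.len() as u64;
    println!(
        "mean={}ns p50={}ns p95={}ns p99={}ns",
        mean,
        percentile(&timings, 50.0),
        percentile(&timings, 95.0),
        percentile(&timings, 99.0),
    );
}
```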
**Tip:** The synthetic corpus generated by `--generate` mixes ~70% random alphanumeric noise with samples of real-world sensitive shapes (credit cards, SSNs, phone numbers, AWS keys, GitHub tokens). It's a reasonable proxy for production traffic.
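A generator in that spirit can be sketched as follows. The mix ratio comes from the tip above; everything else (the PRNG, seed, noise length, and sample values) is an arbitrary illustration choice, not pgsense-rs's actual generator.

```rust
// Sketch of a --generate style corpus: ~70% random alphanumeric noise,
// ~30% known sensitive shapes. Illustrative only.
const SHAPES: [&str; 3] = [
    "4111111111111111",     // well-known test card number
    "jane.doe@example.com", // email address
    "AKIAIOSFODNN7EXAMPLE", // AWS's documented example access key ID
];

// Tiny xorshift64 PRNG so the sketch needs no external crates.
fn xorshift(state: &mut u64) -> u64 {
    *state ^= *state << 13;
    *state ^= *state >> 7;
    *state ^= *state << 17;
    *state
}

fn generate(n: usize, seed: u64) -> Vec<String> {
    let mut s = seed;
    (0..n)
        .map(|_| {
            if xorshift(&mut s) % 10 < 7 {
                // ~70%: random alphanumeric noise
                (0..12)
                    .map(|_| {
                        b"abcdefghijklmnopqrstuvwxyz0123456789"
                            [(xorshift(&mut s) % 36) as usize] as char
                    })
                    .collect()
            } else {
                // ~30%: a known sensitive shape
                SHAPES[(xorshift(&mut s) as usize) % SHAPES.len()].to_string()
            }
        })
        .collect()
}

fn main() {
    let corpus = generate(100, 0x9E37_79B9_7F4A_7C15);
    println!("{} values", corpus.len());
}
```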