
@squatboy

📑 Description

This PR fixes false positive reports from the ConfigMap analyzer when ConfigMaps are dynamically loaded by sidecar containers.

Problem:
The current analyzer only detects ConfigMaps directly referenced in Pod specs (volumes, env, envFrom). However, many operators (Grafana, Prometheus, Fluentd) use sidecar containers that load ConfigMaps dynamically via label selectors and Kubernetes API watches, causing false "unused" reports.
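
For context, the direct-reference scan amounts to roughly the following (a minimal sketch over the k8s.io/api/core/v1 types, not the actual k8sgpt source; init containers and projected volumes are elided):

package analyzer

import (
	corev1 "k8s.io/api/core/v1"
)

// collectReferencedConfigMaps gathers every ConfigMap name a Pod spec
// references directly: ConfigMap volumes, env valueFrom, and envFrom.
// A ConfigMap fetched by a sidecar through an API watch never appears
// here, which is the root cause of the false positives.
func collectReferencedConfigMaps(spec corev1.PodSpec) map[string]struct{} {
	used := map[string]struct{}{}
	for _, vol := range spec.Volumes {
		if vol.ConfigMap != nil {
			used[vol.ConfigMap.Name] = struct{}{}
		}
	}
	for _, container := range spec.Containers {
		for _, env := range container.Env {
			if env.ValueFrom != nil && env.ValueFrom.ConfigMapKeyRef != nil {
				used[env.ValueFrom.ConfigMapKeyRef.Name] = struct{}{}
			}
		}
		for _, envFrom := range container.EnvFrom {
			if envFrom.ConfigMapRef != nil {
				used[envFrom.ConfigMapRef.Name] = struct{}{}
			}
		}
	}
	return used
}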

Solution:
Added detection for common sidecar patterns (a sketch of the matching helpers follows this list):

  • grafana_dashboard label (Grafana dashboard sidecar)
  • grafana_datasource label (Grafana datasource sidecar)
  • prometheus_rule label (Prometheus Operator)
  • fluentd_config label (Fluentd config reloader)
  • User-defined k8sgpt.ai/dynamically-loaded label for custom patterns
  • Opt-out via k8sgpt.ai/skip-usage-check annotation
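
A sketch of what the two new helpers could look like (the function names come from this PR; the bodies are illustrative, and whether label or annotation values are validated is an assumption):

package analyzer

import (
	corev1 "k8s.io/api/core/v1"
)

// sidecarLabels are label keys whose presence signals that a sidecar
// loads the ConfigMap dynamically via the Kubernetes API.
var sidecarLabels = []string{
	"grafana_dashboard",
	"grafana_datasource",
	"prometheus_rule",
	"fluentd_config",
	"k8sgpt.ai/dynamically-loaded",
}

// isKnownSidecarPattern reports whether the ConfigMap carries one of the
// well-known sidecar labels. Only key presence is checked here; the real
// implementation may also inspect values.
func isKnownSidecarPattern(cm *corev1.ConfigMap) bool {
	for _, key := range sidecarLabels {
		if _, ok := cm.Labels[key]; ok {
			return true
		}
	}
	return false
}

// shouldSkipUsageCheck reports whether the user opted this ConfigMap out
// of the unused check. The accepted value ("true") is assumed here.
func shouldSkipUsageCheck(cm *corev1.ConfigMap) bool {
	return cm.Annotations["k8sgpt.ai/skip-usage-check"] == "true"
}

The analyzer consults both helpers before emitting an "unused" report, so the existing direct-reference path stays untouched.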

✅ Checks

  • My pull request adheres to the code style of this project
  • My code requires changes to the documentation
  • I have updated the documentation as required
  • All the tests have passed

ℹ Additional Information

Backward compatibility:
✅ No breaking changes: the existing detection logic is fully preserved, and the new checks are either opt-in or keyed to well-known community labels.

Unit tests passed:

$ go test -v ./pkg/analyzer -run TestConfigMap
=== RUN   TestConfigMapAnalyzer
=== RUN   TestConfigMapAnalyzer/unused_configmap
=== RUN   TestConfigMapAnalyzer/empty_configmap
=== RUN   TestConfigMapAnalyzer/large_configmap
=== RUN   TestConfigMapAnalyzer/used_configmap
--- PASS: TestConfigMapAnalyzer (0.00s)
    --- PASS: TestConfigMapAnalyzer/unused_configmap (0.00s)
    --- PASS: TestConfigMapAnalyzer/empty_configmap (0.00s)
    --- PASS: TestConfigMapAnalyzer/large_configmap (0.00s)
    --- PASS: TestConfigMapAnalyzer/used_configmap (0.00s)
=== RUN   TestConfigMapAnalyzer_SidecarPatterns
=== RUN   TestConfigMapAnalyzer_SidecarPatterns/grafana_dashboard_configmap_should_not_be_flagged_as_unused
=== RUN   TestConfigMapAnalyzer_SidecarPatterns/configmap_with_skip_annotation_should_be_ignored
=== RUN   TestConfigMapAnalyzer_SidecarPatterns/normal_unused_configmap_should_still_be_flagged
=== RUN   TestConfigMapAnalyzer_SidecarPatterns/prometheus_rule_configmap_should_not_be_flagged
=== RUN   TestConfigMapAnalyzer_SidecarPatterns/custom_dynamically-loaded_label_should_work
--- PASS: TestConfigMapAnalyzer_SidecarPatterns (0.00s)
    --- PASS: TestConfigMapAnalyzer_SidecarPatterns/grafana_dashboard_configmap_should_not_be_flagged_as_unused (0.00s)
    --- PASS: TestConfigMapAnalyzer_SidecarPatterns/configmap_with_skip_annotation_should_be_ignored (0.00s)
    --- PASS: TestConfigMapAnalyzer_SidecarPatterns/normal_unused_configmap_should_still_be_flagged (0.00s)
    --- PASS: TestConfigMapAnalyzer_SidecarPatterns/prometheus_rule_configmap_should_not_be_flagged (0.00s)
    --- PASS: TestConfigMapAnalyzer_SidecarPatterns/custom_dynamically-loaded_label_should_work (0.00s)
PASS
ok      github.com/k8sgpt-ai/k8sgpt/pkg/analyzer        6.097s

Cluster verification (kube-prometheus-stack deployed)

Before fix:

$ k8sgpt analyze -n monitoring --filter ConfigMap
AI Provider: AI not used; --explain not set

0: ConfigMap monitoring/kps-kube-prometheus-stack-alertmanager-overview()
- Error: ConfigMap kps-kube-prometheus-stack-alertmanager-overview is not used by any pods in the namespace

1: ConfigMap monitoring/kps-kube-prometheus-stack-apiserver()
- Error: ConfigMap kps-kube-prometheus-stack-apiserver is not used by any pods in the namespace

...  (26 more Grafana dashboard ConfigMaps)

28: ConfigMap monitoring/kps-kube-prometheus-stack-grafana-datasource()
- Error: ConfigMap kps-kube-prometheus-stack-grafana-datasource is not used by any pods in the namespace

Total: 29 ConfigMaps flagged as unused

After fix:

$ ./bin/k8sgpt analyze -n monitoring --filter ConfigMap
AI Provider: AI not used; --explain not set

No problems detected

==> Verification of ConfigMap labels

$ kubectl get configmap -n monitoring kps-kube-prometheus-stack-alertmanager-overview -o jsonpath='{.metadata.labels}'
{"app":"kube-prometheus-stack-grafana","grafana_dashboard":"1", ... }

$ kubectl get configmap -n monitoring kps-kube-prometheus-stack-grafana-datasource -o jsonpath='{.metadata.labels}'
{"app":"kube-prometheus-stack-grafana","grafana_datasource":"1", ...}

Result: ✅ 29 false positives eliminated (100% reduction)

  • All 28 Grafana dashboard ConfigMaps now correctly recognized
  • 1 Grafana datasource ConfigMap now correctly recognized
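
For ConfigMaps that are loaded dynamically but don't carry one of the well-known community labels, the opt-in label and opt-out annotation introduced above can be applied directly (the ConfigMap name and namespace below are placeholders, and the value true is an assumption):

$ kubectl label configmap my-app-config -n my-namespace k8sgpt.ai/dynamically-loaded=true
$ kubectl annotate configmap my-app-config -n my-namespace k8sgpt.ai/skip-usage-check=true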

Files changed:

  • pkg/analyzer/configmap.go: Added isKnownSidecarPattern() and shouldSkipUsageCheck() helper functions
  • pkg/analyzer/configmap_test.go: Added TestConfigMapAnalyzer_SidecarPatterns with 5 test cases
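
The new tests are table-driven; their shape is roughly as follows (the real suite drives the full analyzer against fake cluster objects, while this sketch exercises the label helper directly):

package analyzer

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func TestIsKnownSidecarPattern(t *testing.T) {
	cases := []struct {
		name string
		cm   *corev1.ConfigMap
		want bool
	}{
		{
			name: "grafana dashboard configmap is recognized",
			cm: &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{
				Labels: map[string]string{"grafana_dashboard": "1"},
			}},
			want: true,
		},
		{
			name: "unlabeled configmap remains a candidate for the unused check",
			cm:   &corev1.ConfigMap{},
			want: false,
		},
	}
	for _, tc := range cases {
		t.Run(tc.name, func(t *testing.T) {
			if got := isKnownSidecarPattern(tc.cm); got != tc.want {
				t.Errorf("isKnownSidecarPattern() = %v, want %v", got, tc.want)
			}
		})
	}
}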

@squatboy squatboy requested review from a team as code owners January 19, 2026 07:13
@github-project-automation github-project-automation bot moved this to Proposed in Backlog Jan 19, 2026
@squatboy squatboy changed the title from "fix(analyzer): improve ConfigMap usage detection for sidecar patterns" to "fix: improve ConfigMap usage detection for sidecar patterns" Jan 19, 2026
- Add detection for dynamically loaded ConfigMaps (Grafana sidecar)
- Support grafana_dashboard and grafana_datasource labels
- Support prometheus_rule and fluentd_config labels
- Add k8sgpt.ai/dynamically-loaded label for custom patterns
- Add k8sgpt.ai/skip-usage-check annotation to opt-out
- Add comprehensive test cases for sidecar patterns

Fixes false positives where ConfigMaps loaded dynamically by sidecar
containers (via Kubernetes API watches with label selectors) were
incorrectly flagged as unused.

Tested on production cluster with kube-prometheus-stack:
- Before: 29 ConfigMaps incorrectly flagged as unused
- After:  No false positives (29 eliminated - 100% reduction)

Signed-off-by: squatboy <[email protected]>
@squatboy squatboy force-pushed the fix/configmap-sidecar-false-positives branch from 9ae04d2 to 32c8516 on January 19, 2026 07:19
Copilot AI added a commit that referenced this pull request Jan 30, 2026