chore(main): Pass namespace info to config loaders #3238
ntrevino-virtru wants to merge 4 commits into main from
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the configuration management system by allowing configuration loaders to be aware of the active namespaces and services. By passing this contextual information during the watch phase, loaders can make more informed decisions about whether or not to monitor specific configuration sources, improving the efficiency and precision of configuration updates.

Highlights
The config needs to know the scope,
To watch the files with better hope.
We pass the namespaces along,
To keep the system running strong.
📝 Walkthrough
Adds namespace/service metadata types and propagates them into config watching.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Server
    participant Registry
    participant Config
    participant Loader
    participant OnChange
    Server->>Registry: GetNamespaces()
    Registry-->>Server: []NamespaceInfo
    Server->>Config: WatchWithNamespaces(ctx, []NamespaceInfo)
    loop for each loader
        Config->>Loader: Watch(ctx, cfg, onChange, []NamespaceInfo)
        Loader-->>Config: (start watching)
    end
    Note right of Loader: on config change (e.g., file)
    Loader->>OnChange: onChange(ctx)
    OnChange->>Config: Reload(ctx)
    Config->>Config: Invoke OnChange callbacks (c.OnChange(ctx))
```

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Code Review
This pull request introduces NamespaceInfo and ServiceInfo structures to provide configuration loaders with context about registered services during the watching phase. It updates the Loader interface and the server startup sequence to extract and propagate this metadata. Review feedback identifies a lifecycle gap where loaders lack this context during the initial Load call and points out that service enablement status may become stale if the configuration mode changes. Additionally, it is suggested to encapsulate the verbose mapping logic in the server initialization into a helper function to improve maintainability.
```go
var watchInfo []config.NamespaceInfo
for _, nsInfo := range svcRegistry.GetNamespaces() {
	var services []config.ServiceInfo
	for _, svc := range nsInfo.Namespace.Services {
		services = append(services, config.ServiceInfo{
			Namespace: svc.GetNamespace(),
			Name:      svc.GetServiceDesc().ServiceName,
		})
	}
	watchInfo = append(watchInfo, config.NamespaceInfo{
		Name:     nsInfo.Name,
		Enabled:  nsInfo.Namespace.IsEnabled(cfg.Mode),
		Services: services,
	})
}
```
The logic for building watchInfo by mapping svcRegistry data to config.NamespaceInfo is quite verbose and adds to the already high complexity of the Start function. This mapping logic should be encapsulated in a helper function or a method on the service registry to improve the readability and maintainability of the server initialization code.
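A sketch of such a helper follows. The `buildWatchInfo` name and the `registeredNamespace`/`registeredService` stand-in types are assumptions for illustration; the real types live in the service registry and config packages:

```go
package main

import "fmt"

// Simplified stand-ins for the real config/serviceregistry types; the
// actual definitions live in the opentdf service packages.
type ServiceInfo struct {
	Namespace string
	Name      string
}

type NamespaceInfo struct {
	Name     string
	Enabled  bool
	Services []ServiceInfo
}

type registeredService struct {
	namespace   string
	serviceName string
}

type registeredNamespace struct {
	Name     string
	Modes    []string
	Services []registeredService
}

// isEnabled reports whether the namespace is active for the given server mode.
func (n registeredNamespace) isEnabled(mode string) bool {
	for _, m := range n.Modes {
		if m == mode || m == "all" {
			return true
		}
	}
	return false
}

// buildWatchInfo encapsulates the registry-to-config mapping so the server's
// Start function needs a single call instead of the nested loops above.
func buildWatchInfo(namespaces []registeredNamespace, mode string) []NamespaceInfo {
	watchInfo := make([]NamespaceInfo, 0, len(namespaces))
	for _, ns := range namespaces {
		services := make([]ServiceInfo, 0, len(ns.Services))
		for _, svc := range ns.Services {
			services = append(services, ServiceInfo{
				Namespace: svc.namespace,
				Name:      svc.serviceName,
			})
		}
		watchInfo = append(watchInfo, NamespaceInfo{
			Name:     ns.Name,
			Enabled:  ns.isEnabled(mode),
			Services: services,
		})
	}
	return watchInfo
}

func main() {
	nss := []registeredNamespace{
		{Name: "policy", Modes: []string{"all"}, Services: []registeredService{{"policy", "policy.attributes.AttributesService"}}},
	}
	info := buildWatchInfo(nss, "core")
	fmt.Println(info[0].Name, info[0].Enabled, len(info[0].Services))
}
```

With this shape, the server startup reduces to `cfg.WatchWithNamespaces(ctx, buildWatchInfo(...))`, and the mapping logic can be unit-tested in isolation.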
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@service/pkg/config/config.go`:
- Around line 179-181: When starting multiple watchers with loader.Watch(ctx, c,
onChangeCallback, namespaces), ensure you roll back any watchers already started
if a subsequent loader.Watch returns an error: change the startup loop to
collect a cleanup for each successful start (either by having loader.Watch
return (cleanup func, error) or by calling a known stop method on the
loader/Loader interface), and if a later Watch fails iterate the collected
cleanups and invoke them (or call the loader's Stop/Close/Unwatch) to stop those
watchers before returning the error; update the code paths around loader.Watch,
onChangeCallback, and namespaces to use the new cleanup handling.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: ca0b1b1d-5faa-40bd-828f-51cb30c5b61a
📒 Files selected for processing (1)
service/pkg/config/config.go
```go
if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
	return err
}
```
Rollback already-started watchers when one loader fails to watch.
If a later loader.Watch(...) fails, earlier loaders may still be actively watching, leaving the system in a partially-started state despite returning an error.
Suggested fix
```diff
-	for _, loader := range c.loaders {
+	for i, loader := range c.loaders {
 		// onChangeCallback is the function that will be called by loaders when a change is detected.
 		// It orchestrates the reloading of the entire configuration and then triggers the registered hooks.
 		loaderName := loader.Name()
 		onChangeCallback := func(ctx context.Context) error {
@@
-		if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
-			return err
+		if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
+			for j := 0; j < i; j++ {
+				if closeErr := c.loaders[j].Close(); closeErr != nil {
+					slog.WarnContext(ctx, "failed to close loader after watch startup failure",
+						slog.String("loader", c.loaders[j].Name()),
+						slog.Any("error", closeErr),
+					)
+				}
+			}
+			return fmt.Errorf("loader %s failed to watch: %w", loader.Name(), err)
 		}
 	}
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
for i, loader := range c.loaders {
	// onChangeCallback is the function that will be called by loaders when a change is detected.
	// It orchestrates the reloading of the entire configuration and then triggers the registered hooks.
	loaderName := loader.Name()
	onChangeCallback := func(ctx context.Context) error {
		// ... existing callback implementation ...
	}
	if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
		for j := 0; j < i; j++ {
			if closeErr := c.loaders[j].Close(); closeErr != nil {
				slog.WarnContext(ctx, "failed to close loader after watch startup failure",
					slog.String("loader", c.loaders[j].Name()),
					slog.Any("error", closeErr),
				)
			}
		}
		return fmt.Errorf("loader %s failed to watch: %w", loader.Name(), err)
	}
}
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@service/pkg/config/config.go` around lines 179 - 181, When starting multiple
watchers with loader.Watch(ctx, c, onChangeCallback, namespaces), ensure you
roll back any watchers already started if a subsequent loader.Watch returns an
error: change the startup loop to collect a cleanup for each successful start
(either by having loader.Watch return (cleanup func, error) or by calling a
known stop method on the loader/Loader interface), and if a later Watch fails
iterate the collected cleanups and invoke them (or call the loader's
Stop/Close/Unwatch) to stop those watchers before returning the error; update
the code paths around loader.Watch, onChangeCallback, and namespaces to use the
new cleanup handling.
```diff
 	defer shutdown()

-	logger.Debug("config loaded", slog.Any("config", cfg.LogValue()))
+	logger.Trace("config loaded", slog.Any("config", cfg.LogValue()))
```
@coderabbitai does this LogValue() struct method redact all sensitive config values?
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Find LogValue implementation in the config package
rg -n "LogValue" --type go -C 10
```

Length of output: 32949
🏁 Script executed:

```bash
#!/bin/bash
# Get full SDKConfig.LogValue() and surrounding context
rg -n "SDKConfig" --type go -C 3 service/pkg/config/config.go | head -80
# Get full Config.LogValue for SDKConfig (lines around 304)
sed -n '300,330p' service/pkg/config/config.go
# Check if auth.Config (server auth) has a LogValue implementation
rg -n "LogValue" service/internal/auth/ -C 10
# Also check server config auth LogValue
rg -n "LogValue" service/internal/server/ -C 15
```

Length of output: 4401
🏁 Script executed:

```bash
#!/bin/bash
# Check what type Auth is in server config and its fields
rg -n "Auth\s" service/internal/server/server.go -C 3
# Find the auth config struct definition
rg -n "type.*Config.*struct" service/internal/auth/ -A 40
# Check if there's a LogValue on auth config
rg -rn "LogValue" service/internal/auth/
```

Length of output: 11250
@jakedoublev — Good question! Here's a breakdown of what LogValue() covers across the config tree:
Well-redacted fields ✓
- db.Config.LogValue() → password logged as [REDACTED]
- SDKConfig.LogValue() → client_secret logged as [REDACTED]; client_id logged in plaintext (not a secret)
- KASConfig.LogValue() → RootKey logged as [REDACTED len=N]
- Services (ServicesMap) → entirely excluded from (*Config).LogValue() with the comment "We exclude logging service configuration as it may contain sensitive information"
Potential gaps
server.Config.LogValue() logs c.Auth via slog.Any("auth", c.Auth), and auth.Config (service/internal/auth/config.go) has no custom LogValue() implementation. This means slog falls back to reflection-based logging, which importantly does not respect json:"-" tags. Two fields in the auth config tree could be of concern:
- PolicyConfig.Csv and PolicyConfig.Extension: these contain full Casbin access control policy rules, which could expose your authorization model.
- RolesProviderConfig.Config (map[string]any): a freeform map that could contain provider-specific credentials depending on the configured roles provider.
Risk is mitigated by the change in this PR from Debug → Trace (Line 127), since trace-level logging would typically only be enabled during active debugging sessions, not in production.
It may be worth adding a LogValue() to auth.Config (or at minimum RolesProviderConfig) to ensure the Casbin policy content and any roles provider secrets are redacted explicitly, independent of the log level used.
Yeah, this can likely be removed IMO
Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!
service/pkg/config/config.go (Outdated)
```diff
 // Watch starts watching the configuration for changes in all config loaders.
-func (c *Config) Watch(ctx context.Context) error {
+func (c *Config) Watch(ctx context.Context, namespaces []NamespaceInfo) error {
```
Add a new function WatchWithNamespace()
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
service/pkg/config/config.go (1)
158-189: ⚠️ Potential issue | 🟠 Major: Provide evidence that required Go checks passed on this change.
The repository enforces Go format, lint, and test checks via CI (.github/workflows/checks.yaml for the service directory). These checks must pass before merging:
- golangci-lint run (via golangci-lint-action)
- go test ./... -short
- go fmt ./... and go mod tidy

Show the GitHub Actions logs confirming all checks passed, or if running locally, provide the output of:

```bash
make fmt
golangci-lint run ./service/pkg/config/config.go
go test ./...
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@service/pkg/config/config.go` around lines 158 - 189, The reviewer requested proof that Go format, lint and tests passed for the change in service/pkg/config/config.go (specifically the WatchWithNamespaces/Watch functions); run and capture the CI/local check outputs and attach them to the PR: run make fmt (or go fmt ./...), run golangci-lint run ./service/pkg/config/config.go (or the repo-wide lint command), run go test ./... -short, and run go mod tidy if needed, then paste the full terminal logs (or the GitHub Actions checks logs) showing successful completion of each step in the PR conversation so reviewers can verify all checks passed.
♻️ Duplicate comments (1)
service/pkg/config/config.go (1)
179-181: ⚠️ Potential issue | 🟠 Major: Missing rollback for already-started watchers on later watch failure remains unresolved.
If loader.Watch(...) fails at line 179, previously started loaders can remain active in a partially-started state. This was already raised in prior review comments and still appears unresolved.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@service/pkg/config/config.go` around lines 179 - 181, The call to loader.Watch(ctx, c, onChangeCallback, namespaces) can fail after earlier watcher instances have already been started, leaving them running; update the watcher startup so you track each started watcher and, if a subsequent loader.Watch returns an error, immediately rollback by stopping/closing/unwatching all previously started watchers (e.g., call the watcher Stop/Unwatch/Close method or cancel the per-watcher context) before returning the error; modify the code around loader.Watch to collect started watcher handles and perform the cleanup loop on error to ensure no partially-started watchers remain.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@service/pkg/config/config.go`:
- Around line 186-189: The Watch method only accepts (ctx context.Context) but
callers are passing namespace info; update callers to use
WatchWithNamespaces(ctx, watchInfo) instead of Watch(ctx, watchInfo) so
namespace arguments are forwarded; specifically replace cfg.Watch(ctx,
watchInfo) with cfg.WatchWithNamespaces(ctx, watchInfo) in the server startup
call and update the test calls in config_test.go to call
config.WatchWithNamespaces(ctx, ...) (keep the same watchInfo variable names and
types and no changes to the Config implementation).
---
Outside diff comments:
In `@service/pkg/config/config.go`:
- Around line 158-189: The reviewer requested proof that Go format, lint and
tests passed for the change in service/pkg/config/config.go (specifically the
WatchWithNamespaces/Watch functions); run and capture the CI/local check outputs
and attach them to the PR: run make fmt (or go fmt ./...), run golangci-lint run
./service/pkg/config/config.go (or the repo-wide lint command), run go test
./... -short, and run go mod tidy if needed, then paste the full terminal logs
(or the GitHub Actions checks logs) showing successful completion of each step
in the PR conversation so reviewers can verify all checks passed.
---
Duplicate comments:
In `@service/pkg/config/config.go`:
- Around line 179-181: The call to loader.Watch(ctx, c, onChangeCallback,
namespaces) can fail after earlier watcher instances have already been started,
leaving them running; update the watcher startup so you track each started
watcher and, if a subsequent loader.Watch returns an error, immediately rollback
by stopping/closing/unwatching all previously started watchers (e.g., call the
watcher Stop/Unwatch/Close method or cancel the per-watcher context) before
returning the error; modify the code around loader.Watch to collect started
watcher handles and perform the cleanup loop on error to ensure no
partially-started watchers remain.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: fe4e9b46-202f-4c2b-ac71-6db74d70b794
📒 Files selected for processing (3)
- service/kas/kas.go
- service/pkg/config/config.go
- service/pkg/serviceregistry/serviceregistry.go
X-Test Failure Report: ✅ go-v0.9.0
cd328f7 to eb41ec6 (Compare)
♻️ Duplicate comments (1)
service/pkg/config/config.go (1)
164-180: ⚠️ Potential issue | 🟠 Major: Rollback already-started watchers when a later loader fails.
If one loader starts watching and a later Watch returns an error, this exits immediately without stopping the earlier watchers. service/pkg/server/start.go only defers cfg.Close(ctx) after WatchWithNamespaces succeeds, so a startup failure can leak active watchers and leave the process partially initialized. At minimum, close the already-started loaders here; if Watch can fail after partial setup, include the current loader in that cleanup path too.
Suggested fix
```diff
-	for _, loader := range c.loaders {
+	for i, loader := range c.loaders {
 		// onChangeCallback is the function that will be called by loaders when a change is detected.
 		// It orchestrates the reloading of the entire configuration and then triggers the registered hooks.
 		loaderName := loader.Name()
 		onChangeCallback := func(ctx context.Context) error {
@@
-		if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
-			return err
+		if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
+			for j := 0; j < i; j++ {
+				if closeErr := c.loaders[j].Close(); closeErr != nil {
+					slog.WarnContext(ctx, "failed to close loader after watch startup failure",
+						slog.String("loader", c.loaders[j].Name()),
+						slog.Any("error", closeErr),
+					)
+				}
+			}
+			return fmt.Errorf("loader %s failed to watch: %w", loader.Name(), err)
 		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@service/pkg/config/config.go` around lines 164 - 180, When iterating c.loaders and calling loader.Watch, capture the subset of loaders that have successfully started (e.g., started := []Loader{}) and append each loader after Watch returns nil; if any subsequent loader.Watch returns an error, iterate the started slice (in reverse order) and call the loader shutdown method (e.g., Close(ctx) or Stop/Unwatch on the loader interface) for each to rollback started watchers, optionally logging but preserving and returning the original Watch error; also include the current loader in the cleanup if Watch may partially succeed before returning an error.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@service/pkg/config/config.go`:
- Around line 164-180: When iterating c.loaders and calling loader.Watch,
capture the subset of loaders that have successfully started (e.g., started :=
[]Loader{}) and append each loader after Watch returns nil; if any subsequent
loader.Watch returns an error, iterate the started slice (in reverse order) and
call the loader shutdown method (e.g., Close(ctx) or Stop/Unwatch on the loader
interface) for each to rollback started watchers, optionally logging but
preserving and returning the original Watch error; also include the current
loader in the cleanup if Watch may partially succeed before returning an error.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: ASSERTIVE
Plan: Pro
Run ID: 3202191b-8e4b-42f4-ba0e-ad544f8cf9cc
📒 Files selected for processing (3)
- service/pkg/config/config.go
- service/pkg/config/config_test.go
- service/pkg/server/start.go
Proposed Changes
Checklist
Testing Instructions
Summary by CodeRabbit
Refactor
Tests