chore(main): Pass namespace info to config loaders#3238

Open
ntrevino-virtru wants to merge 4 commits into main from chore/dspx-2669/quieter-config-loaders

Conversation


@ntrevino-virtru ntrevino-virtru commented Apr 1, 2026

Proposed Changes

  • Pass the service information to config loaders

Checklist

  • I have added or updated unit tests
  • I have added or updated integration tests (if appropriate)
  • I have added or updated documentation

Testing Instructions

Summary by CodeRabbit

  • Refactor

    • Config watching can now be scoped with namespace/service metadata and forwards that info to loaders.
    • Startup now assembles and supplies targeted namespace/service info when enabling config watches.
    • Config load/reload logging lowered to trace level and made context-aware.
  • Tests

    • Tests updated to verify namespace/service information is forwarded into the watch flow.

@ntrevino-virtru ntrevino-virtru requested a review from a team as a code owner April 1, 2026 16:44
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the configuration management system by allowing configuration loaders to be aware of the active namespaces and services. By passing this contextual information during the watch phase, loaders can make more informed decisions about whether or not to monitor specific configuration sources, improving the efficiency and precision of configuration updates.

Highlights

  • Namespace and Service Information: Introduced new data structures NamespaceInfo and ServiceInfo to encapsulate configuration context.
  • Interface Update: Updated the Loader interface and all implementations to accept []NamespaceInfo in the Watch method.
  • Server Initialization: Updated the server startup logic to collect and pass namespace/service information to the configuration watcher.
  • Logging Adjustments: Changed specific log levels from Debug to Trace in the server startup process.


Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | `/gemini review` | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | `/gemini summary` | Provides a summary of the current pull request in its current state. |
| Comment | `@gemini-code-assist` | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | `/gemini help` | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.


The config needs to know the scope,
To watch the files with better hope.
We pass the namespaces along,
To keep the system running strong.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


coderabbitai bot commented Apr 1, 2026

📝 Walkthrough


Adds namespace/service metadata types and propagates them into config watching: introduces NamespaceInfo and ServiceInfo, extends Loader.Watch to accept []NamespaceInfo, adds Config.WatchWithNamespaces (with Watch delegating to it), updates loader implementations and tests, and passes computed namespaces from server startup into WatchWithNamespaces. Logging calls adjusted to trace level and made context-aware.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| **Loader Types & Interface**<br>`service/pkg/config/loader.go` | Add exported `ServiceInfo` and `NamespaceInfo` types; change the `Loader` interface `Watch` to accept `namespaces []NamespaceInfo`. |
| **Config orchestration**<br>`service/pkg/config/config.go` | Introduce `WatchWithNamespaces(ctx, namespaces)`; make `Watch(ctx)` a thin wrapper that calls it with an empty `[]NamespaceInfo`; forward namespaces into each loader's `Watch` call. |
| **Loader implementations**<br>`service/pkg/config/config_file_loader.go`, `service/pkg/config/default_settings_loader.go`, `service/pkg/config/environment_value_loader.go`, `service/pkg/config/legacy_loader.go` | Update `Watch` signatures to accept a trailing `[]NamespaceInfo` parameter. Implementations ignore the new parameter; runtime behavior unchanged. |
| **Tests**<br>`service/pkg/config/config_test.go` | Update `MockLoader.watchFn` and `MockLoader.Watch` signatures to accept `namespaces []NamespaceInfo`; update tests and add a subtest verifying the provided namespaces are forwarded to loaders. |
| **Server startup changes**<br>`service/pkg/server/start.go` | Build `[]config.NamespaceInfo` from the service registry (`svcRegistry.GetNamespaces()`), then call `cfg.WatchWithNamespaces(ctx, watchInfo)`; change pre-load and post-load logs to trace level and make them context-aware. |
| **Logging adjustments**<br>`service/kas/kas.go`, `service/pkg/serviceregistry/serviceregistry.go` | Change several debug logs to trace-level and make them context-aware (use provided `ctx`); adjust a config update hook to use the passed `ctx` and trace logging. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
  participant Server
  participant Registry
  participant Config
  participant Loader
  participant OnChange

  Server->>Registry: GetNamespaces()
  Registry-->>Server: []NamespaceInfo
  Server->>Config: WatchWithNamespaces(ctx, []NamespaceInfo)
  loop for each loader
    Config->>Loader: Watch(ctx, cfg, onChange, []NamespaceInfo)
    Loader-->>Config: (start watching)
  end
  Note right of Loader: on config change (e.g., file)
  Loader->>OnChange: onChange(ctx)
  OnChange->>Config: Reload(ctx)
  Config->>Config: Invoke OnChange callbacks (c.OnChange(ctx))
```

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Poem

🐇 I hop from registry to config with glee,
Namespaces tucked in my satchel, neat and free,
I nudge the loaders: "Listen, watch with me!"
A trace-level whisper wakes the config tree,
Cheers from the rabbit — metadata jubilee.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 0.00%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The title accurately reflects the main change: passing namespace information to configuration loaders through a new WatchWithNamespaces method and updated loader interfaces. |

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


@github-actions github-actions bot added the size/s label Apr 1, 2026
@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces NamespaceInfo and ServiceInfo structures to provide configuration loaders with context about registered services during the watching phase. It updates the Loader interface and the server startup sequence to extract and propagate this metadata. Review feedback identifies a lifecycle gap where loaders lack this context during the initial Load call and points out that service enablement status may become stale if the configuration mode changes. Additionally, it is suggested to encapsulate the verbose mapping logic in the server initialization into a helper function to improve maintainability.

Comment on lines +302 to +316

```go
	var watchInfo []config.NamespaceInfo
	for _, nsInfo := range svcRegistry.GetNamespaces() {
		var services []config.ServiceInfo
		for _, svc := range nsInfo.Namespace.Services {
			services = append(services, config.ServiceInfo{
				Namespace: svc.GetNamespace(),
				Name:      svc.GetServiceDesc().ServiceName,
			})
		}
		watchInfo = append(watchInfo, config.NamespaceInfo{
			Name:     nsInfo.Name,
			Enabled:  nsInfo.Namespace.IsEnabled(cfg.Mode),
			Services: services,
		})
	}
```
Contributor

Severity: medium

The logic for building watchInfo by mapping svcRegistry data to config.NamespaceInfo is quite verbose and adds to the already high complexity of the Start function. This mapping logic should be encapsulated in a helper function or a method on the service registry to improve the readability and maintainability of the server initialization code.

@github-actions
Contributor

github-actions bot commented Apr 1, 2026

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 203.286303ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 97.116235ms |

Benchmark Statistics

| Name | № Requests | Avg Duration | Min Duration | Max Duration |
| --- | --- | --- | --- | --- |

Bulk Benchmark Results

| Metric | Value |
| --- | --- |
| Total Decrypts | 100 |
| Successful Decrypts | 100 |
| Failed Decrypts | 0 |
| Total Time | 381.006593ms |
| Throughput | 262.46 requests/second |

TDF3 Benchmark Results:

| Metric | Value |
| --- | --- |
| Total Requests | 5000 |
| Successful Requests | 5000 |
| Failed Requests | 0 |
| Concurrent Requests | 50 |
| Total Time | 40.051239358s |
| Average Latency | 398.394516ms |
| Throughput | 124.84 requests/second |


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: ca0b1b1d-5faa-40bd-828f-51cb30c5b61a

📥 Commits

Reviewing files that changed from the base of the PR and between 131073d and 1e0bfb2.

📒 Files selected for processing (1)
  • service/pkg/config/config.go

Comment on lines +179 to 181

```go
	if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
		return err
	}
```


⚠️ Potential issue | 🟠 Major

Rollback already-started watchers when one loader fails to watch.

If a later loader.Watch(...) fails, earlier loaders may still be actively watching, leaving the system in a partially-started state despite returning an error.

Suggested fix

```diff
-	for _, loader := range c.loaders {
+	for i, loader := range c.loaders {
 		// onChangeCallback is the function that will be called by loaders when a change is detected.
 		// It orchestrates the reloading of the entire configuration and then triggers the registered hooks.
 		loaderName := loader.Name()
 		onChangeCallback := func(ctx context.Context) error {
@@
-		if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
-			return err
+		if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
+			for j := 0; j < i; j++ {
+				if closeErr := c.loaders[j].Close(); closeErr != nil {
+					slog.WarnContext(ctx, "failed to close loader after watch startup failure",
+						slog.String("loader", c.loaders[j].Name()),
+						slog.Any("error", closeErr),
+					)
+				}
+			}
+			return fmt.Errorf("loader %s failed to watch: %w", loader.Name(), err)
 		}
 	}
```
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change

```go
for i, loader := range c.loaders {
	// onChangeCallback is the function that will be called by loaders when a change is detected.
	// It orchestrates the reloading of the entire configuration and then triggers the registered hooks.
	loaderName := loader.Name()
	onChangeCallback := func(ctx context.Context) error {
		// ... existing callback implementation ...
	}
	if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
		for j := 0; j < i; j++ {
			if closeErr := c.loaders[j].Close(); closeErr != nil {
				slog.WarnContext(ctx, "failed to close loader after watch startup failure",
					slog.String("loader", c.loaders[j].Name()),
					slog.Any("error", closeErr),
				)
			}
		}
		return fmt.Errorf("loader %s failed to watch: %w", loader.Name(), err)
	}
}
```

@github-actions
Contributor
github-actions bot commented Apr 1, 2026

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 209.703696ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 100.830063ms |

Benchmark Statistics

| Name | № Requests | Avg Duration | Min Duration | Max Duration |
| --- | --- | --- | --- | --- |

Bulk Benchmark Results

| Metric | Value |
| --- | --- |
| Total Decrypts | 100 |
| Successful Decrypts | 100 |
| Failed Decrypts | 0 |
| Total Time | 379.493647ms |
| Throughput | 263.51 requests/second |

TDF3 Benchmark Results:

| Metric | Value |
| --- | --- |
| Total Requests | 5000 |
| Successful Requests | 5000 |
| Failed Requests | 0 |
| Concurrent Requests | 50 |
| Total Time | 40.333615984s |
| Average Latency | 401.858616ms |
| Throughput | 123.97 requests/second |

Contributor
github-actions bot commented Apr 1, 2026

```diff
 	defer shutdown()

-	logger.Debug("config loaded", slog.Any("config", cfg.LogValue()))
+	logger.Trace("config loaded", slog.Any("config", cfg.LogValue()))
```
Contributor

@coderabbitai does this LogValue() struct method redact all sensitive config values?


🧩 Analysis chain

🏁 Script executed:

```bash
#!/bin/bash
# Find LogValue implementation in the config package
rg -n "LogValue" --type go -C 10
```

Length of output: 32949


🏁 Script executed:

```bash
#!/bin/bash
# Get full SDKConfig.LogValue() and surrounding context
rg -n "SDKConfig" --type go -C 3 service/pkg/config/config.go | head -80

# Get full Config.LogValue for SDKConfig (lines around 304)
sed -n '300,330p' service/pkg/config/config.go

# Check if auth.Config (server auth) has a LogValue implementation
rg -n "LogValue" service/internal/auth/ -C 10

# Also check server config auth LogValue
rg -n "LogValue" service/internal/server/ -C 15
```

Length of output: 4401


🏁 Script executed:

```bash
#!/bin/bash
# Check what type Auth is in server config and its fields
rg -n "Auth\s" service/internal/server/server.go -C 3

# Find the auth config struct definition
rg -n "type.*Config.*struct" service/internal/auth/ -A 40

# Check if there's a LogValue on auth config
rg -rn "LogValue" service/internal/auth/
```

Length of output: 11250


@jakedoublev — Good question! Here's a breakdown of what LogValue() covers across the config tree:

Well-redacted fields ✓

  • `db.Config.LogValue()` → `password` logged as `[REDACTED]`
  • `SDKConfig.LogValue()` → `client_secret` logged as `[REDACTED]`; `client_id` logged in plaintext (not a secret)
  • `KASConfig.LogValue()` → `RootKey` logged as `[REDACTED len=N]`
  • Services (`ServicesMap`) → entirely excluded from `(*Config).LogValue()` with the comment "We exclude logging service configuration as it may contain sensitive information"

Potential gaps ⚠️
server.Config.LogValue() logs c.Auth via slog.Any("auth", c.Auth), and auth.Config (service/internal/auth/config.go) has no custom LogValue() implementation. This means slog falls back to reflection-based logging, which importantly does not respect json:"-" tags. Two fields in the auth config tree could be of concern:

  1. PolicyConfig.Csv and PolicyConfig.Extension — these contain full Casbin access control policy rules, which could expose your authorization model.
  2. RolesProviderConfig.Config (map[string]any) — a freeform map that could contain provider-specific credentials depending on the configured roles provider.

Risk is mitigated by the change in this PR from Debug → Trace (Line 127), since trace-level logging would typically only be enabled during active debugging sessions, not in production.

It may be worth adding a LogValue() to auth.Config (or at minimum RolesProviderConfig) to ensure the Casbin policy content and any roles provider secrets are redacted explicitly, independent of the log level used.

Contributor Author

Yeah, this can likely be removed IMO


Seems like the humans are having a chat. I'll hop back into my burrow for now. If you need me again, just tag @coderabbitai in a new comment, and I'll come hopping out!


```diff
 // Watch starts watching the configuration for changes in all config loaders.
-func (c *Config) Watch(ctx context.Context) error {
+func (c *Config) Watch(ctx context.Context, namespaces []NamespaceInfo) error {
```
Member

Add a new function WatchWithNamespace()

@ntrevino-virtru ntrevino-virtru requested review from a team as code owners April 3, 2026 20:17

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
service/pkg/config/config.go (1)

158-189: ⚠️ Potential issue | 🟠 Major

Provide evidence that required Go checks passed on this change.

The repository enforces Go format, lint, and test checks via CI (.github/workflows/checks.yaml for the service directory). These checks must pass before merging:

  • golangci-lint run (via golangci-lint-action)
  • go test ./... -short
  • go fmt ./... and go mod tidy

Show the GitHub Actions logs confirming all checks passed, or if running locally, provide the output of:

```bash
make fmt
golangci-lint run ./service/pkg/config/config.go
go test ./...
```
♻️ Duplicate comments (1)
service/pkg/config/config.go (1)

179-181: ⚠️ Potential issue | 🟠 Major

Missing rollback for already-started watchers on later watch failure remains unresolved.

If loader.Watch(...) fails at Line 179, previously started loaders can remain active in a partially-started state. This was already raised in prior review comments and still appears unresolved.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@service/pkg/config/config.go`:
- Around line 186-189: The Watch method only accepts (ctx context.Context) but
callers are passing namespace info; update callers to use
WatchWithNamespaces(ctx, watchInfo) instead of Watch(ctx, watchInfo) so
namespace arguments are forwarded; specifically replace cfg.Watch(ctx,
watchInfo) with cfg.WatchWithNamespaces(ctx, watchInfo) in the server startup
call and update the test calls in config_test.go to call
config.WatchWithNamespaces(ctx, ...) (keep the same watchInfo variable names and
types and no changes to the Config implementation).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: fe4e9b46-202f-4c2b-ac71-6db74d70b794

📥 Commits

Reviewing files that changed from the base of the PR and between 1e0bfb2 and cd328f7.

📒 Files selected for processing (3)
  • service/kas/kas.go
  • service/pkg/config/config.go
  • service/pkg/serviceregistry/serviceregistry.go

@github-actions
Contributor

github-actions bot commented Apr 3, 2026

@ntrevino-virtru ntrevino-virtru force-pushed the chore/dspx-2669/quieter-config-loaders branch from cd328f7 to eb41ec6 Compare April 3, 2026 21:01
@github-actions
Contributor
github-actions bot commented Apr 3, 2026

Benchmark results

Benchmark authorization.GetDecisions Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 203.908059ms |

Benchmark authorization.v2.GetMultiResourceDecision Results:

| Metric | Value |
| --- | --- |
| Approved Decision Requests | 1000 |
| Denied Decision Requests | 0 |
| Total Time | 101.934677ms |

Benchmark Statistics

| Name | № Requests | Avg Duration | Min Duration | Max Duration |
| --- | --- | --- | --- | --- |

Bulk Benchmark Results

| Metric | Value |
| --- | --- |
| Total Decrypts | 100 |
| Successful Decrypts | 100 |
| Failed Decrypts | 0 |
| Total Time | 387.138564ms |
| Throughput | 258.31 requests/second |

TDF3 Benchmark Results:

| Metric | Value |
| --- | --- |
| Total Requests | 5000 |
| Successful Requests | 5000 |
| Failed Requests | 0 |
| Concurrent Requests | 50 |
| Total Time | 41.38256758s |
| Average Latency | 412.303158ms |
| Throughput | 120.82 requests/second |


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
service/pkg/config/config.go (1)

164-180: ⚠️ Potential issue | 🟠 Major

Rollback already-started watchers when a later loader fails.

If one loader starts watching and a later Watch returns an error, this exits immediately without stopping the earlier watchers. service/pkg/server/start.go only defers cfg.Close(ctx) after WatchWithNamespaces succeeds, so a startup failure can leak active watchers and leave the process partially initialized. At minimum, close the already-started loaders here; if Watch can fail after partial setup, include the current loader in that cleanup path too.

Suggested fix

```diff
-	for _, loader := range c.loaders {
+	for i, loader := range c.loaders {
 		// onChangeCallback is the function that will be called by loaders when a change is detected.
 		// It orchestrates the reloading of the entire configuration and then triggers the registered hooks.
 		loaderName := loader.Name()
 		onChangeCallback := func(ctx context.Context) error {
@@
-		if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
-			return err
+		if err := loader.Watch(ctx, c, onChangeCallback, namespaces); err != nil {
+			for j := 0; j < i; j++ {
+				if closeErr := c.loaders[j].Close(); closeErr != nil {
+					slog.WarnContext(ctx, "failed to close loader after watch startup failure",
+						slog.String("loader", c.loaders[j].Name()),
+						slog.Any("error", closeErr),
+					)
+				}
+			}
+			return fmt.Errorf("loader %s failed to watch: %w", loader.Name(), err)
 		}
 	}
```

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository UI

Review profile: ASSERTIVE

Plan: Pro

Run ID: 3202191b-8e4b-42f4-ba0e-ad544f8cf9cc

📥 Commits

Reviewing files that changed from the base of the PR and between cd328f7 and eb41ec6.

📒 Files selected for processing (3)
  • service/pkg/config/config.go
  • service/pkg/config/config_test.go
  • service/pkg/server/start.go
