prevents oracle price staleness

Signed-off-by: Artur Troian <troian@users.noreply.github.com>
Walkthrough

Migration of the oracle module v1 → v2: moved from height-based to timestamp+sequence price records, added epoch-driven pruning and timestamp encoding, refactored keeper APIs to accept explicit timestamps, updated Pyth contract/message encodings to v2, and added a v2.1.0 upgrade/migration with related scripts/docs/tests updates.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Pyth as Pyth<br/>Contract
    participant MsgServer as MsgServer
    participant Oracle as Oracle<br/>Keeper
    participant Store as Price<br/>Store
    participant Epochs as Epochs<br/>Module

    Pyth->>MsgServer: MsgAddPriceEntry(price, timestamp)
    activate MsgServer
    MsgServer->>Oracle: AddPriceEntry(source, id, price, timestamp)
    activate Oracle
    Oracle->>Oracle: Validate timestamp vs latest / staleness
    Oracle->>Store: Persist PriceDataRecordID(timestamp, sequence, price)
    Oracle-->>MsgServer: OK
    deactivate Oracle
    MsgServer-->>Pyth: Response
    deactivate MsgServer

    Note over Epochs,Oracle: Epoch-driven pruning
    Epochs->>Oracle: AfterEpochEnd(epochIdentifier)
    activate Oracle
    Oracle->>Store: Enumerate keys older than (now - PriceRetention)
    Store-->>Oracle: List stale keys
    Oracle->>Store: Delete up to MaxPrunePerEpoch
    Oracle-->>Epochs: Pruning done
    deactivate Oracle
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
Actionable comments posted: 9
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
_docs/pyth-integration.md (1)
696-717: ⚠️ Potential issue | 🟠 Major — Remove `feed_contracts_params` from v2 oracle examples; this field is not part of the v2 schema.

The documentation at lines 696-717 and 760-792 includes `feed_contracts_params` with v1 nested types (`/akash.oracle.v1.PythContractParams`, `/akash.oracle.v1.WormholeContractParams`) inside `/akash.oracle.v2.MsgUpdateParams`. However, the v2 oracle params schema does not include this field. Actual v2 params support only: `sources`, `min_price_sources`, `max_price_staleness_period`, `twap_window`, `max_price_deviation_bps`, `price_retention`, `prune_epoch`, and `max_prune_per_epoch`. Remove the entire `feed_contracts_params` block from both examples.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 696 - 717, The examples showing "/akash.oracle.v2.MsgUpdateParams" incorrectly include the "feed_contracts_params" block with v1 nested types ("/akash.oracle.v1.PythContractParams" and "/akash.oracle.v1.WormholeContractParams"); remove the entire "feed_contracts_params" array from both example MsgUpdateParams objects and any references to those v1 types so the JSON only contains the valid v2 fields (sources, min_price_sources, max_price_staleness_period, twap_window, max_price_deviation_bps, price_retention, prune_epoch, max_prune_per_epoch).
🧹 Nitpick comments (13)
contracts/pyth/src/integration_tests.rs (1)
369-384: Add an explicit assertion for the Field 4 (timestamp) tag.

The comment documents Field 4, but the test only checks Fields 2 and 3. Adding a `0x22` tag assertion will tighten regression coverage.

Suggested test hardening:

```diff
 // Find field 3 tag (0x1a)
 let field3_pos = binary.iter().position(|&b| b == 0x1a);
 assert!(field3_pos.is_some(), "Field 3 (price) tag not found");

+// Find field 4 tag (0x22)
+let field4_pos = binary.iter().position(|&b| b == 0x22);
+assert!(field4_pos.is_some(), "Field 4 (timestamp) tag not found");
+
 // Verify field order
 assert!(field2_pos.unwrap() < field3_pos.unwrap());
+assert!(field3_pos.unwrap() < field4_pos.unwrap());
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/integration_tests.rs` around lines 369 - 384, The test currently searches for tags for Field 2 and Field 3 using binary.iter().position and asserts their ordering; add the same check for Field 4 by locating the 0x22 tag (e.g., create field4_pos using binary.iter().position(|&b| b == 0x22)), assert that field4_pos.is_some() with a helpful message ("Field 4 (timestamp) tag not found"), and include it in the ordering assertion (ensure field3_pos.unwrap() < field4_pos.unwrap()) so the test verifies presence and order of the timestamp tag alongside field2_pos and field3_pos.

x/wasm/bindings/akash_query.go (1)
52-53: Update the doc comment to the new unit.

`MaxPriceStalenessPeriod` now reflects the time-based API, but the comment still says “in blocks.” Leaving the old unit in the exported docs makes this migration easy to misread for binding consumers.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/wasm/bindings/akash_query.go` around lines 52 - 53, The doc comment for MaxPriceStalenessPeriod incorrectly states "in blocks" but the field now uses a time-based unit; update the comment for the exported field MaxPriceStalenessPeriod to reflect the correct time unit (e.g., "maximum price staleness period in seconds" or the actual time unit used by the API) so consumers aren't misled by the old "blocks" wording and ensure the comment matches the JSON-tagged field.

x/oracle/keeper/timestamp.go (1)
49-67: Consider adding format validation in `DecodeTimestamp`.

`DecodeTimestamp` only validates buffer length but doesn't verify that:
- Delimiter positions contain '.'
- Digit positions contain valid ASCII digits ('0'-'9')

Corrupted data could produce incorrect timestamps without returning an error. However, since this is used internally with data written by `EncodeTimestamp`, the risk is low.
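A minimal sketch of the suggested check, with an assumed 15-byte `YYYYMMDD.HHMMSS` layout standing in for the real `TimestampEncodedSize` and delimiter positions from `timestamp.go` (the constants and names below are illustrative, not the module's API):

```go
package main

import "fmt"

// Illustrative only: substitute TimestampEncodedSize and the actual
// delimiter positions from x/oracle/keeper/timestamp.go.
const encodedSize = 15 // assumed "YYYYMMDD.HHMMSS" layout

var delimPos = map[int]bool{8: true} // assumed '.' position

// validateEncoded rejects buffers whose delimiter bytes are not '.' or whose
// remaining bytes are not ASCII digits, instead of decoding garbage silently.
func validateEncoded(buf []byte) error {
	if len(buf) != encodedSize {
		return fmt.Errorf("invalid length %d, want %d", len(buf), encodedSize)
	}
	for i, b := range buf {
		switch {
		case delimPos[i]:
			if b != '.' {
				return fmt.Errorf("position %d: want '.', got %q", i, b)
			}
		case b < '0' || b > '9':
			return fmt.Errorf("position %d: not an ASCII digit: %q", i, b)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateEncoded([]byte("20240102.150405"))) // <nil>
	fmt.Println(validateEncoded([]byte("2024x102.150405")) != nil) // true
}
```

Plugging the module's real constants into `encodedSize` and `delimPos` turns this into the check the comment asks for.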
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/timestamp.go` around lines 49 - 67, DecodeTimestamp currently only checks buffer length and then calls readDigits blindly, so corrupted input can produce wrong values; update DecodeTimestamp to validate that the buffer length equals TimestampEncodedSize, that expected delimiter positions (e.g., bytes at indexes used between year/month/day/hour/minute/second/msec/usec/nsec) contain the '.' or other fixed separators used by EncodeTimestamp, and that every byte passed to readDigits (or before calling readDigits) is an ASCII digit ('0'..'9'); on any invalid delimiter or non-digit, return a descriptive error instead of proceeding; reference the DecodeTimestamp function, TimestampEncodedSize constant, and the readDigits usage to locate where to add these checks.

x/oracle/keeper/prune.go (2)
116-118: Unsafe type assertion could panic.

`k.(*keeper)` will panic if a different implementation of the `Keeper` interface is passed. Consider using a safe type assertion with a check.

♻️ Proposed fix

```diff
 func EpochHooksFor(k Keeper) epochstypes.EpochHooks {
-	return k.(*keeper)
+	kp, ok := k.(*keeper)
+	if !ok {
+		panic("EpochHooksFor requires *keeper implementation")
+	}
+	return kp
 }
```

Alternatively, if this is always expected to be `*keeper`, consider adding the `EpochHooks()` method to the `Keeper` interface to avoid the type assertion entirely.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/prune.go` around lines 116 - 118, EpochHooksFor performs an unsafe type assertion k.(*keeper) which can panic if a different Keeper implementation is passed; change this to use a safe assertion (ok := k.(*keeper)) and return nil or a no-op epochstypes.EpochHooks when !ok, or better yet add an EpochHooks() epochstypes.EpochHooks method to the Keeper interface and call k.EpochHooks() to avoid any assertion; update the EpochHooksFor function (and any callers) to handle the nil/no-op case or to rely on the new interface method.
95-103: Silently ignoring errors during pruning could mask issues.

The `Walk` and `Remove` errors are discarded with `_ =`. While pruning is non-critical, silently ignoring errors could mask storage corruption or other issues that operators should be aware of.

♻️ Proposed fix - log errors instead of ignoring

```diff
-	_ = k.prices.Walk(ctx, rng, func(key types.PriceDataRecordID, _ types.PriceDataState) (bool, error) {
+	err := k.prices.Walk(ctx, rng, func(key types.PriceDataRecordID, _ types.PriceDataState) (bool, error) {
 		toDelete = append(toDelete, key)
 		count++
 		return count >= maxDelete, nil
 	})
+	if err != nil {
+		k.Logger(ctx).Error("failed to walk prices for pruning", "error", err)
+	}

 	for _, key := range toDelete {
-		_ = k.prices.Remove(ctx, key)
+		if err := k.prices.Remove(ctx, key); err != nil {
+			k.Logger(ctx).Error("failed to remove price during pruning", "key", key, "error", err)
+		}
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/prune.go` around lines 95 - 103, The pruning code currently discards errors from k.prices.Walk and k.prices.Remove; change it to capture and log those errors instead of ignoring them: assign the Walk return value to err (err := k.prices.Walk(...)), and if err != nil log a warning/error with context (include rng, count/maxDelete or other identifying info) rather than discarding; likewise, for each Remove call check its error (err := k.prices.Remove(ctx, key)) and log failures including the PriceDataRecordID key so operators can investigate, but continue the loop so pruning still proceeds on best-effort. Ensure you use the existing logger on the keeper (e.g., k.logger or k.Logger) or a context-aware logger when emitting messages.

x/oracle/keeper/abci.go (2)
40-42: Consider extracting the block time constant.

The magic number `6` (seconds per block) is used here to compute TWAP duration. This assumption should be documented or extracted as a named constant for clarity and maintainability.

```diff
+const secondsPerBlock = 6
+
 func (k *keeper) EndBlocker(ctx context.Context) error {
 	// ...
-	twapDuration := time.Duration(params.TwapWindow) * 6 * time.Second
+	twapDuration := time.Duration(params.TwapWindow) * secondsPerBlock * time.Second
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/abci.go` around lines 40 - 42, Replace the magic number 6 used to compute twapDuration with a named constant (e.g., BlockTimeSeconds or BlockTime) and add a short comment explaining it represents the assumed seconds per block; update the computation twapDuration := time.Duration(params.TwapWindow) * BlockTimeSeconds * time.Second (and any other occurrences) to use the constant instead of 6, and ensure the constant is defined near related code (abci keeper file) so references like params.TwapWindow, twapDuration, and twapStart remain clear and maintainable.
33-37: Returning nil on params error prevents chain halt but silently skips aggregation.

When `GetParams` fails, the function logs and returns `nil`, which means no price aggregation occurs for that block. While this prevents a chain halt, consider whether emitting a metric or a more visible alert would help operators detect this condition.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/abci.go` around lines 33 - 37, When GetParams(sctx) fails inside abci.go currently you only log via sctx.Logger().Error and return nil; instead, keep the non-halting behavior but also emit an operator-visible signal: increment/record a metric and emit an ABCI event or alert before returning. Update the error branch for GetParams to call your telemetry/metrics helper (e.g. k.metrics.ParamsFetchFailures.Inc() or telemetry.Record("oracle.params_fetch_failures")) and create an event on the context (e.g. sctx.EventManager().EmitEvent or sctx.Event(...)) with the error details alongside the existing sctx.Logger().Error call, then return nil as before.

x/bme/keeper/keeper.go (1)
139-145: Transient schema is built but never used.

The `tschema` is created from `tsb.Build()` and stored on the keeper (lines 140, 145), but `tschema` is never referenced anywhere in the codebase. No collections are added to the transient schema builder `tsb`, making this unused scaffolding. Either:
- Add a TODO comment if this is reserved for future transient state
- Remove the transient store setup (`tkey`, `tsb`, `tschema`) if not needed

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/bme/keeper/keeper.go` around lines 139 - 145, The transient schema (tschema) is built from tsb.Build() and assigned to k.tschema but never used; either remove the transient store setup (tkey, tsb, the tsb.Build() call and assignment k.tschema = tschema) to eliminate unused scaffolding, or explicitly mark it as intentional for future use by adding a TODO comment next to tsb/tschema (and leave k.tschema assignment) clarifying that transient collections will be added later; locate the tsb, tschema, tkey symbols and the k.tschema assignment in keeper.go to apply one of these two fixes.

x/oracle/keeper/keeper.go (2)
376-379: Consider graceful error handling instead of panic.

Panicking on `Walk` errors could cause node crashes in production. While panicking may be acceptable for truly unrecoverable store corruption, consider returning an error or logging and returning an empty result for transient failures.

♻️ Proposed error handling

```diff
 err := k.prices.Walk(ctx, rng, func(key types.PriceDataRecordID, val types.PriceDataState) (stop bool, err error) {
 	res = append(res, types.PriceData{
 		ID:    key,
 		State: val,
 	})
 	return false, nil
 })
 if err != nil {
-	panic(err.Error())
+	ctx.Logger().Error("failed to walk prices", "error", err)
+	return nil // or return res with partial data
 }
 return res
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/keeper.go` around lines 376 - 379, The panic on Walk error should be replaced with graceful error handling: locate the code that calls Walk (the block that currently does `if err != nil { panic(err.Error()) }`) and change it to return the error to the caller or log it and return a safe empty result; use the keeper's logger (e.g., k.logger or a provided logger) or wrap the error with context (e.g., "keeper: Walk failed") before returning it from the containing function (or convert to a non-fatal path if the function signature expects a result), ensuring callers can handle transient filesystem/store errors instead of crashing the process.
546-561: Potential overflow with nanosecond arithmetic for long durations.

Using `Nanoseconds()` for `timeWeight` (lines 554, 556) could overflow `int64` for durations exceeding ~292 years, which is unlikely in practice. However, if `TwapWindow` is ever misconfigured to an extremely large value, this could cause unexpected behavior.

The current usage appears safe given typical oracle parameters, but consider documenting the expected bounds.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/keeper.go` around lines 546 - 561, The loop computes timeWeight via duration.Nanoseconds() which can overflow int64 for extremely large durations; update the calculation in the loop that references dataPoints, timeWeight, now and weightedSum to safely clamp the duration before converting to int64: compute the duration (either now.Sub(current.ID.Timestamp) or dataPoints[i-1].ID.Timestamp.Sub(current.ID.Timestamp)), check against math.MaxInt64 and set timeWeight to math.MaxInt64 if it would overflow, otherwise use duration.Nanoseconds(); keep the rest of the weightedSum logic unchanged and add a short comment documenting the clamp and expected TWAP bounds.

upgrades/software/v2.1.0/upgrade.go (1)
61: Hardcoded contract address reduces flexibility.

The contract address is hardcoded, which ties this upgrade to a specific deployment. Consider documenting why this specific address is used or extracting it to a constant with an explanatory comment for future maintainers.

💡 Suggested improvement

```diff
+// pythContractAddr is the mainnet Pyth oracle contract deployed during the v2.0.0 upgrade.
+// This address is used to migrate the contract bytecode during the v2.1.0 upgrade.
+const pythContractAddr = "akash1nc5tatafv6eyq7llkr2gv50ff9e22mnf70qgjlv737ktmt4eswrqyagled"
+
 func (up *upgrade) UpgradeHandler() upgradetypes.UpgradeHandler {
 	return func(ctx context.Context, plan upgradetypes.Plan, fromVM module.VersionMap) (module.VersionMap, error) {
 		// ...
-		contractAddr := "akash1nc5tatafv6eyq7llkr2gv50ff9e22mnf70qgjlv737ktmt4eswrqyagled"
+		contractAddr := pythContractAddr
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.1.0/upgrade.go` at line 61, The hardcoded contract address assigned to variable contractAddr reduces flexibility; extract this value into a named constant (e.g., DefaultContractAddr) or load it from configuration/environment and add an explanatory comment why this particular address is used (or under what conditions it should be changed) so future maintainers understand and can override it; update any references to contractAddr in upgrade.go to use the new constant/config accessor and ensure tests or upgrade docs reflect the change.

x/oracle/keeper/grpc_query_test.go (1)
175-178: Clarify the expected zero-timestamp record.

Line 177 asserts that the second result has a zero timestamp (`IsZero()`). This suggests an initial/sentinel record exists. Consider adding a brief comment explaining why this record is expected, to aid future maintainers.

📝 Suggested documentation

```diff
 res, err = suite.queryClient.Prices(ctx, req)
 require.NoError(t, err)
 require.NotNil(t, res)
 require.Len(t, res.Prices, 2)
 require.True(t, res.Prices[0].ID.Timestamp.Equal(ts1))
+// Second result is the initial/sentinel record with zero timestamp
 require.True(t, res.Prices[1].ID.Timestamp.IsZero())
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/grpc_query_test.go` around lines 175 - 178, Add a brief inline comment above the assertion that checks res.Prices[1].ID.Timestamp.IsZero() explaining that the second price entry is an expected sentinel/initial record (hence a zero timestamp) and why it appears (e.g., inserted as an initial/default record by the keeper/genesis setup); update the test around the require.True(t, res.Prices[1].ID.Timestamp.IsZero()) line next to references to res.Prices and ts1 so future maintainers understand this expected zero-timestamp entry.

contracts/pyth/src/oracle.rs (1)
149-165: Casting signed integers to u64 may produce incorrect encoding for negative values.

Lines 155 and 161 cast `i64` and `i32` directly to `u64` for varint encoding. While `google.protobuf.Timestamp` values should be non-negative in practice (seconds since epoch, nanos 0-999999999), this cast will produce incorrect wire format if negative values are ever passed.

For signed protobuf fields, zigzag encoding (`sint64`/`sint32`) or two's complement handling is required. However, since Timestamp fields are defined as `int64`/`int32` (not `sint64`/`sint32`) in protobuf, and Go's protobuf library encodes them as unsigned varints for non-negative values, this works correctly as long as values are non-negative.

Consider adding a debug assertion or early return to guard against invalid negative timestamps:

🛡️ Proposed defensive guard

```diff
 fn encode_timestamp(&self) -> Vec<u8> {
     let mut buf = Vec::new();

+    // google.protobuf.Timestamp requires non-negative seconds and nanos in [0, 999999999]
+    debug_assert!(self.timestamp_seconds >= 0, "timestamp_seconds must be non-negative");
+    debug_assert!(self.timestamp_nanos >= 0 && self.timestamp_nanos < 1_000_000_000, "timestamp_nanos must be in [0, 999999999]");
+
     // Field 1: seconds (tag = 0x08 = (1 << 3) | 0 for varint)
     if self.timestamp_seconds != 0 {
         buf.push(0x08);
         encode_varint(&mut buf, self.timestamp_seconds as u64);
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/oracle.rs` around lines 149 - 165, The encode_timestamp method currently casts timestamp_seconds and timestamp_nanos directly to u64 before varint encoding which yields incorrect wire bytes if negatives slip in; update encode_timestamp to assert and guard against negative values: in the encode_timestamp function add a debug_assert!(self.timestamp_seconds >= 0 && self.timestamp_nanos >= 0) and an explicit runtime check (if self.timestamp_seconds < 0 || self.timestamp_nanos < 0 { panic!("encode_timestamp: negative timestamp values not allowed") }) before calling encode_varint, referencing the encode_timestamp method and the timestamp_seconds/timestamp_nanos fields and leaving encode_varint usage unchanged for the normal non-negative path.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@contracts/pyth/src/contract.rs`:
- Around line 250-268: The current update logic in the contract accepts and
overwrites on equal publish_time, causing last-on-chain wins; modify the price
feed state (struct price_feed and storage usage in PRICE_FEED) to include a
monotonic tie-breaker field from the attestation (e.g., publish_slot or
sequence_number) and update the conditional in the price update path to only
replace stored values (price, conf, expo, publish_time, prev_publish_time, etc.)
when either publish_time > price_feed.publish_time OR (publish_time ==
price_feed.publish_time AND attestation_slot > price_feed.publish_slot); ensure
you also set prev_publish_slot when replacing and persist the new publish_slot
to storage via PRICE_FEED.save, and add any necessary initialization/default
handling for the new field.
In `@script/upgrades.sh`:
- Around line 599-601: The unconditional cp of "$srcval/wasm" causes the script
to fail when the wasm directory is absent; update the block that currently calls
rm -rf "wasm" and cp -r "$srcval/wasm" ./ to first test for the existence of
"$srcval/wasm" (e.g., check if it's a directory) and only perform the cp when
present, leaving the rm and cp of "$srcval/data" unchanged; reference the cp
invocation using the variable srcval and the target directory wasm so you modify
the correct lines.
In `@script/wasm2go.sh`:
- Line 12: Update the script header comment to reflect the new OUTPUT_FILE path
(OUTPUT_FILE="upgrades/software/v2.1.0/contracts.go"); find the top-of-file
usage/usage comment that still references v2.0.0 and change it to v2.1.0 so the
comment matches the actual OUTPUT_FILE value and avoids operator confusion
(search for the header "usage" or any occurrence of "v2.0.0" in wasm2go.sh and
update to "v2.1.0").
In `@upgrades/software/v2.1.0/init.go`:
- Line 14: The migration registration uses the wrong version number; update the
RegisterMigration call to register the oracle migration under the module
ConsensusVersion by changing the second argument from 1 to 2 so that
utypes.RegisterMigration(otypes.ModuleName, 2, newOracleMigration) matches the
oracle module's ConsensusVersion (refer to otypes.ModuleName and
newOracleMigration).
In `@upgrades/software/v2.1.0/oracle.go`:
- Around line 27-40: The iterator keys are being appended by reference
(iter.Key()) which is backed by iterator-owned memory and gets reused on
iter.Next(), so copy each key's bytes before appending; in the loop that uses
store.Iterator and iter.Key()/iter.Next(), create a fresh byte slice (e.g., via
append([]byte(nil), iter.Key()...) or bytes.Clone) and append that copy to keys,
then Close the iterator as before and call store.Delete on the copied keys.
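The aliasing hazard described in this prompt can be reproduced outside the store; a self-contained Go sketch simulating an iterator that reuses its key buffer (the `shared` slice is a stand-in for iterator-owned memory):

```go
package main

import "fmt"

// Simulates the hazard: an iterator that reuses its key buffer between
// calls. The fix in the real code is copying iter.Key() before appending.
func main() {
	shared := []byte("key-1")

	aliased := shared                        // appending iter.Key() directly keeps a live alias
	copied := append([]byte(nil), shared...) // defensive copy (or bytes.Clone in Go 1.20+)

	copy(shared, "key-2") // "iterator advances" and reuses the buffer

	fmt.Println(string(aliased)) // key-2 (silently mutated)
	fmt.Println(string(copied))  // key-1 (stable)
}
```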
In `@x/bme/keeper/abci.go`:
- Line 22: BeginBlocker currently ignores the error returned by
k.ledgerSequence.Set(ctx, 0); capture the returned error (err :=
k.ledgerSequence.Set(ctx, 0)) and handle it instead of discarding it: if the
write fails, log the error with context (using k.Logger(ctx) or module logger)
and abort (panic/return) so state-reset failures aren't masked and sequencing
guarantees aren't broken; update the BeginBlocker code path around
k.ledgerSequence.Set to check err and react accordingly.
In `@x/bme/keeper/keeper.go`:
- Around line 874-878: The error branch after calling k.calculateCR(sctx) only
checks for otypes.ErrPriceStalled and leaves other errors (notably
bmetypes.ErrZeroPrice returned by calculateCR) unhandled, causing cb.Status to
remain inconsistent; update the error handling in the calculateCR result block
to explicitly check for bmetypes.ErrZeroPrice (using errors.Is(err,
bmetypes.ErrZeroPrice)) and set cb.Status to the appropriate halt status (e.g.,
bmetypes.MintStatusHaltOracle) or otherwise handle/propagate the error
consistently, ensuring cb.Status is always set for non-nil errs from
k.calculateCR.
In `@x/oracle/keeper/abci.go`:
- Line 29: The EndBlocker is using the wrong telemetry key: change the call to
telemetry.ModuleMeasureSince(types.ModuleName, start,
telemetry.MetricKeyBeginBlocker) inside EndBlocker to use the proper end-block
key (telemetry.MetricKeyEndBlocker) so metrics are recorded under the correct
metric; locate the telemetry.ModuleMeasureSince call in the EndBlocker function
and replace telemetry.MetricKeyBeginBlocker with telemetry.MetricKeyEndBlocker.
In `@x/oracle/keeper/keeper.go`:
- Around line 207-209: Replace the hardcoded fallback "time.Second * 12" used
when setting latestID.Timestamp with a configurable parameter; add a
configurable value (e.g., Keeper.DefaultBlockTime or module param
Params.DefaultBlockTime) and use that parameter in place of the magic number
where latestID.Timestamp is set (the code referencing latestID.Timestamp and
ctx.BlockTime()); update the keeper constructor/initializer to accept or load
the default value (and set a sensible default if nil), and wire the module
params (Params struct + Get/Set) or keeper field so chain operators can tune the
fallback instead of the hardcoded 12s.
---
Outside diff comments:
In `@_docs/pyth-integration.md`:
- Around line 696-717: The examples showing "/akash.oracle.v2.MsgUpdateParams"
incorrectly include the "feed_contracts_params" block with v1 nested types
("/akash.oracle.v1.PythContractParams" and
"/akash.oracle.v1.WormholeContractParams"); remove the entire
"feed_contracts_params" array from both example MsgUpdateParams objects and any
references to those v1 types so the JSON only contains the valid v2 fields
(sources, min_price_sources, max_price_staleness_period, twap_window,
max_price_deviation_bps, price_retention, prune_epoch, max_prune_per_epoch).
---
Nitpick comments:
In `@contracts/pyth/src/integration_tests.rs`:
- Around line 369-384: The test currently searches for tags for Field 2 and
Field 3 using binary.iter().position and asserts their ordering; add the same
check for Field 4 by locating the 0x22 tag (e.g., create field4_pos using
binary.iter().position(|&b| b == 0x22)), assert that field4_pos.is_some() with a
helpful message ("Field 4 (timestamp) tag not found"), and include it in the
ordering assertion (ensure field3_pos.unwrap() < field4_pos.unwrap()) so the
test verifies presence and order of the timestamp tag alongside field2_pos and
field3_pos.
In `@contracts/pyth/src/oracle.rs`:
- Around line 149-165: The encode_timestamp method currently casts
timestamp_seconds and timestamp_nanos directly to u64 before varint encoding
which yields incorrect wire bytes if negatives slip in; update encode_timestamp
to assert and guard against negative values: in the encode_timestamp function
add a debug_assert!(self.timestamp_seconds >= 0 && self.timestamp_nanos >= 0)
and an explicit runtime check (if self.timestamp_seconds < 0 ||
self.timestamp_nanos < 0 { panic!("encode_timestamp: negative timestamp values
not allowed") }) before calling encode_varint, referencing the encode_timestamp
method and the timestamp_seconds/timestamp_nanos fields and leaving
encode_varint usage unchanged for the normal non-negative path.
In `@upgrades/software/v2.1.0/upgrade.go`:
- Line 61: The hardcoded contract address assigned to variable contractAddr
reduces flexibility; extract this value into a named constant (e.g.,
DefaultContractAddr) or load it from configuration/environment and add an
explanatory comment why this particular address is used (or under what
conditions it should be changed) so future maintainers understand and can
override it; update any references to contractAddr in upgrade.go to use the new
constant/config accessor and ensure tests or upgrade docs reflect the change.
In `@x/bme/keeper/keeper.go`:
- Around line 139-145: The transient schema (tschema) is built from tsb.Build()
and assigned to k.tschema but never used; either remove the transient store
setup (tkey, tsb, the tsb.Build() call and assignment k.tschema = tschema) to
eliminate unused scaffolding, or explicitly mark it as intentional for future
use by adding a TODO comment next to tsb/tschema (and leave k.tschema
assignment) clarifying that transient collections will be added later; locate
the tsb, tschema, tkey symbols and the k.tschema assignment in keeper.go to
apply one of these two fixes.
In `@x/oracle/keeper/abci.go`:
- Around line 40-42: Replace the magic number 6 used to compute twapDuration
with a named constant (e.g., BlockTimeSeconds or BlockTime) and add a short
comment explaining it represents the assumed seconds per block; update the
computation twapDuration := time.Duration(params.TwapWindow) * BlockTimeSeconds
* time.Second (and any other occurrences) to use the constant instead of 6, and
ensure the constant is defined near related code (abci keeper file) so
references like params.TwapWindow, twapDuration, and twapStart remain clear and
maintainable.
- Around line 33-37: When GetParams(sctx) fails inside abci.go currently you
only log via sctx.Logger().Error and return nil; instead, keep the non-halting
behavior but also emit an operator-visible signal: increment/record a metric and
emit an ABCI event or alert before returning. Update the error branch for
GetParams to call your telemetry/metrics helper (e.g.
k.metrics.ParamsFetchFailures.Inc() or
telemetry.Record("oracle.params_fetch_failures")) and create an event on the
context (e.g. sctx.EventManager().EmitEvent or sctx.Event(...)) with the error
details alongside the existing sctx.Logger().Error call, then return nil as
before.
In `@x/oracle/keeper/grpc_query_test.go`:
- Around line 175-178: Add a brief inline comment above the assertion that
checks res.Prices[1].ID.Timestamp.IsZero() explaining that the second price
entry is an expected sentinel/initial record (hence a zero timestamp) and why it
appears (e.g., inserted as an initial/default record by the keeper/genesis
setup); update the test around the require.True(t,
res.Prices[1].ID.Timestamp.IsZero()) line next to references to res.Prices and
ts1 so future maintainers understand this expected zero-timestamp entry.
In `@x/oracle/keeper/keeper.go`:
- Around line 376-379: The panic on Walk error should be replaced with graceful
error handling: locate the code that calls Walk (the block that currently does
`if err != nil { panic(err.Error()) }`) and change it to return the error to the
caller or log it and return a safe empty result; use the keeper's logger (e.g.,
k.logger or a provided logger) or wrap the error with context (e.g., "keeper:
Walk failed") before returning it from the containing function (or convert to a
non-fatal path if the function signature expects a result), ensuring callers can
handle transient filesystem/store errors instead of crashing the process.
- Around line 546-561: The loop computes timeWeight via duration.Nanoseconds()
which can overflow int64 for extremely large durations; update the calculation
in the loop that references dataPoints, timeWeight, now and weightedSum to
safely clamp the duration before converting to int64: compute the duration
(either now.Sub(current.ID.Timestamp) or
dataPoints[i-1].ID.Timestamp.Sub(current.ID.Timestamp)), check against
math.MaxInt64 and set timeWeight to math.MaxInt64 if it would overflow,
otherwise use duration.Nanoseconds(); keep the rest of the weightedSum logic
unchanged and add a short comment documenting the clamp and expected TWAP
bounds.
In `@x/oracle/keeper/prune.go`:
- Around line 116-118: EpochHooksFor performs an unsafe type assertion
k.(*keeper) which can panic if a different Keeper implementation is passed;
change this to use a safe assertion (ok := k.(*keeper)) and return nil or a
no-op epochstypes.EpochHooks when !ok, or better yet add an EpochHooks()
epochstypes.EpochHooks method to the Keeper interface and call k.EpochHooks() to
avoid any assertion; update the EpochHooksFor function (and any callers) to
handle the nil/no-op case or to rely on the new interface method.
- Around line 95-103: The pruning code currently discards errors from
k.prices.Walk and k.prices.Remove; change it to capture and log those errors
instead of ignoring them: assign the Walk return value to err (err :=
k.prices.Walk(...)), and if err != nil log a warning/error with context (include
rng, count/maxDelete or other identifying info) rather than discarding;
likewise, for each Remove call check its error (err := k.prices.Remove(ctx,
key)) and log failures including the PriceDataRecordID key so operators can
investigate, but continue the loop so pruning still proceeds on best-effort.
Ensure you use the existing logger on the keeper (e.g., k.logger or k.Logger) or
a context-aware logger when emitting messages.
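The best-effort pattern in the comment above (log per-key failures and keep going, never discard them) can be sketched independently of the keeper. The kvStore type and pruneKeys helper here are hypothetical stand-ins, not the collections-backed store or the keeper's real method names.

```go
package main

import (
	"errors"
	"fmt"
)

// kvStore is a hypothetical stand-in for the collections-backed price store.
type kvStore struct {
	data    map[string]string
	failKey string // simulate a Remove failure for one key
}

func (s *kvStore) Remove(key string) error {
	if key == s.failKey {
		return errors.New("simulated store error")
	}
	delete(s.data, key)
	return nil
}

// pruneKeys deletes up to maxDelete keys, collecting (not discarding) each
// failure and continuing, so pruning stays best-effort. It returns how many
// keys were actually removed plus the errors encountered.
func pruneKeys(s *kvStore, keys []string, maxDelete int) (int, []error) {
	removed := 0
	var errs []error
	for _, k := range keys {
		if removed >= maxDelete {
			break
		}
		if err := s.Remove(k); err != nil {
			// In the keeper this would be a logger call that includes the
			// PriceDataRecordID so operators can investigate.
			errs = append(errs, fmt.Errorf("prune %q: %w", k, err))
			continue
		}
		removed++
	}
	return removed, errs
}

func main() {
	s := &kvStore{data: map[string]string{"a": "1", "b": "2", "c": "3"}, failKey: "b"}
	n, errs := pruneKeys(s, []string{"a", "b", "c"}, 10)
	fmt.Println(n, len(errs)) // 2 1
}
```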
In `@x/oracle/keeper/timestamp.go`:
- Around line 49-67: DecodeTimestamp currently only checks buffer length and
then calls readDigits blindly, so corrupted input can produce wrong values;
update DecodeTimestamp to validate that the buffer length equals
TimestampEncodedSize, that expected delimiter positions (e.g., bytes at indexes
used between year/month/day/hour/minute/second/msec/usec/nsec) contain the '.'
or other fixed separators used by EncodeTimestamp, and that every byte passed to
readDigits (or before calling readDigits) is an ASCII digit ('0'..'9'); on any
invalid delimiter or non-digit, return a descriptive error instead of
proceeding; reference the DecodeTimestamp function, TimestampEncodedSize
constant, and the readDigits usage to locate where to add these checks.
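The delimiter-and-digit validation asked for above can be checked in isolation. This sketch assumes a fixed-width layout of the form "YYYY-MM-DDTHH:MM:SS.mmm.uuu.nnn" (31 bytes); the real EncodeTimestamp layout and separator positions may differ, so the positions in sepAt and the encodedSize constant are illustrative, not the keeper's actual values.

```go
package main

import "fmt"

// Hypothetical fixed-width layout: "YYYY-MM-DDTHH:MM:SS.mmm.uuu.nnn".
const encodedSize = 31

// sepAt maps byte offsets to the separator expected at that position.
var sepAt = map[int]byte{4: '-', 7: '-', 10: 'T', 13: ':', 16: ':', 19: '.', 23: '.', 27: '.'}

// validateEncoded rejects buffers of the wrong size, with wrong separators,
// or with non-digit bytes where digits are expected, instead of letting a
// lenient parser normalize garbage into a plausible time.
func validateEncoded(buf []byte) error {
	if len(buf) != encodedSize {
		return fmt.Errorf("timestamp: want %d bytes, got %d", encodedSize, len(buf))
	}
	for i, b := range buf {
		if want, isSep := sepAt[i]; isSep {
			if b != want {
				return fmt.Errorf("timestamp: byte %d: want separator %q, got %q", i, want, b)
			}
			continue
		}
		if b < '0' || b > '9' {
			return fmt.Errorf("timestamp: byte %d: non-digit %q", i, b)
		}
	}
	return nil
}

func main() {
	fmt.Println(validateEncoded([]byte("2024-06-01T12:30:45.123.456.789"))) // <nil>
	fmt.Println(validateEncoded([]byte("2024/06/01T12:30:45.123.456.789")))
}
```

A follow-up range check (month 1-12, hour 0-23, and so on) would run after this structural pass, before handing the digits to time.Date.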
In `@x/wasm/bindings/akash_query.go`:
- Around line 52-53: The doc comment for MaxPriceStalenessPeriod incorrectly
states "in blocks" but the field now uses a time-based unit; update the comment
for the exported field MaxPriceStalenessPeriod to reflect the correct time unit
(e.g., "maximum price staleness period in seconds" or the actual time unit used
by the API) so consumers aren't misled by the old "blocks" wording and ensure
the comment matches the JSON-tagged field.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 3bcc2f09-3bfd-43c3-a9b8-61f061da3586
⛔ Files ignored due to path filters (2)
- `Cargo.lock` is excluded by `!**/*.lock`
- `go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (49)
Cargo.toml, _docs/pyth-integration.md, _run/init.sh, _run/node/prop.json, app/types/app.go, contracts/pyth/Cargo.toml, contracts/pyth/src/contract.rs, contracts/pyth/src/integration_tests.rs, contracts/pyth/src/oracle.rs, go.mod, meta.json, script/upgrades.sh, script/wasm2go.sh, tests/e2e/oracle_cli_test.go, tests/e2e/oracle_grpc_test.go, tests/e2e/pyth_contract_test.go, tests/upgrade/test-cases.json, testutil/oracle/price_feeder.go, testutil/state/suite.go, upgrades/software/v2.0.0/upgrade.go, upgrades/software/v2.1.0/contracts.go, upgrades/software/v2.1.0/init.go, upgrades/software/v2.1.0/oracle.go, upgrades/software/v2.1.0/upgrade.go, upgrades/upgrades.go, x/bme/keeper/abci.go, x/bme/keeper/grpc_query.go, x/bme/keeper/keeper.go, x/bme/keeper/key.go, x/bme/simulation/genesis.go, x/bme/simulation/proposals.go, x/deployment/keeper/grpc_query.go, x/market/keeper/grpc_query.go, x/oracle/alias.go, x/oracle/handler/server.go, x/oracle/keeper/abci.go, x/oracle/keeper/codec.go, x/oracle/keeper/genesis.go, x/oracle/keeper/grpc_query.go, x/oracle/keeper/grpc_query_test.go, x/oracle/keeper/keeper.go, x/oracle/keeper/key.go, x/oracle/keeper/prune.go, x/oracle/keeper/timestamp.go, x/oracle/module.go, x/oracle/simulation/genesis.go, x/oracle/simulation/proposals.go, x/wasm/bindings/akash_query.go, x/wasm/keeper/msg_filter.go
rm -rf "wasm"
cp -r "$srcval/data" ./
cp -r "$srcval/wasm" ./
Guard optional wasm copy to prevent hard failure.
Line 601 unconditionally copies $srcval/wasm; with set -e, this aborts the script when wasm is absent.
Proposed fix
- rm -rf "data"
- rm -rf "wasm"
- cp -r "$srcval/data" ./
- cp -r "$srcval/wasm" ./
+ rm -rf "data" "wasm"
+ cp -r "$srcval/data" ./
+ if [[ -d "$srcval/wasm" ]]; then
+ cp -r "$srcval/wasm" ./
+ fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@script/upgrades.sh` around lines 599 - 601, The unconditional cp of
"$srcval/wasm" causes the script to fail when the wasm directory is absent;
update the block that currently calls rm -rf "wasm" and cp -r "$srcval/wasm" ./
to first test for the existence of "$srcval/wasm" (e.g., check if it's a
directory) and only perform the cp when present, leaving the rm and cp of
"$srcval/data" unchanged; reference the cp invocation using the variable srcval
and the target directory wasm so you modify the correct lines.
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
x/oracle/keeper/key.go (1)
26-42: ⚠️ Potential issue | 🟠 Major
`BuildPricePrefix` no longer matches the v2 price-key layout.
`keeper.prices` is now keyed as `Source` (4-byte big-endian) + `Denom` + `BaseDenom` + `Timestamp` + `Sequence`, but this helper still writes a uvarint `id` and skips `BaseDenom`. Any caller using this prefix to scan `prices` will miss v2 entries entirely.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/key.go` around lines 26 - 42, BuildPricePrefix currently writes a uvarint id and omits BaseDenom, so it doesn't produce the v2 key layout; update BuildPricePrefix (and its use of PricesPrefix.Bytes() and EncodeTimestamp) to emit the v2 format: write Source as a 4-byte big-endian (use binary.BigEndian.PutUint32 or equivalent) instead of the uvarint id, then write Denom (length+bytes), then write BaseDenom (length+bytes), and finally write Timestamp (if non-zero) and leave space for Sequence as appropriate; remove the uvarint logic and adjust the function signature/parameters to accept a baseDenom (and a uint32 source) so callers produce keys matching v2 price-key layout.
♻️ Duplicate comments (1)
x/bme/keeper/keeper.go (1)
874-880: ⚠️ Potential issue | 🟠 Major
`MintStatusHaltOracle` never survives this branch. Line 879 overwrites the stalled-oracle status set on Line 877, so every `calculateCR` error is reported as `MintStatusHaltCR`.
Suggested fix
  cr, err := k.calculateCR(sctx)
  if err != nil {
  	if errors.Is(err, otypes.ErrPriceStalled) {
  		cb.Status = bmetypes.MintStatusHaltOracle
- 	}
- 	cb.Status = bmetypes.MintStatusHaltCR
+ 	} else {
+ 		cb.Status = bmetypes.MintStatusHaltCR
+ 	}
  } else {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/bme/keeper/keeper.go` around lines 874 - 880, The error handling after calling k.calculateCR(sctx) incorrectly overwrites the MintStatusHaltOracle status because cb.Status is set unconditionally to MintStatusHaltCR; update the branch in the function containing k.calculateCR so that when errors.Is(err, otypes.ErrPriceStalled) sets cb.Status = bmetypes.MintStatusHaltOracle it is not immediately replaced—either use an if/else (set MintStatusHaltCR only when the error is not ErrPriceStalled) or return/exit after setting MintStatusHaltOracle; ensure the logic around k.calculateCR, the err check, and cb.Status assignments preserves MintStatusHaltOracle when appropriate.
🧹 Nitpick comments (3)
contracts/pyth/src/oracle.rs (3)
262-282: Consider adding test coverage for non-zero nanoseconds.
The tests use `timestamp_nanos = 0` in all cases. Consider adding a test case with non-zero nanoseconds to verify the encoding path at lines 159-162 works correctly.
📋 Suggested test addition
#[test]
fn test_msg_add_price_entry_with_nanos() {
    let msg = MsgAddPriceEntry::new(
        "akash1abc123".to_string(),
        "akt".to_string(),
        "usd".to_string(),
        "524683000000000000".to_string(),
        1234567890,
        500000000, // 0.5 seconds in nanos
    );
    assert_eq!(msg.timestamp_nanos, 500000000);
    let binary = msg.encode_to_protobuf();
    assert!(!binary.is_empty());
    // Verify field 4 (timestamp) is present with nanos
    assert!(binary.iter().any(|&b| b == 0x22)); // Field 4 tag
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/oracle.rs` around lines 262 - 282, Add a unit test that covers non-zero nanoseconds by creating a MsgAddPriceEntry with timestamp_nanos != 0 (e.g., 500_000_000), assert msg.timestamp_nanos equals that value, call msg.encode_to_protobuf(), assert the returned binary is non-empty and verify the protobuf encoding contains the timestamp field (for example by checking the binary contains the field 4 tag 0x22); place the test alongside existing tests (e.g., name it test_msg_add_price_entry_with_nanos) to exercise MsgAddPriceEntry::new and encode_to_protobuf paths that handle timestamp_nanos.
149-165: Manual protobuf encoding is correct but fragile.
The signed-to-unsigned casting for varint encoding (`self.timestamp_seconds as u64`) produces correct protobuf wire format for `int64` fields. However, manual protobuf encoding is error-prone and could diverge from the proto definition during future changes.
Consider using `prost` or another protobuf library for encoding if this becomes more complex.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/oracle.rs` around lines 149 - 165, The encode_timestamp method manually builds protobuf varints and casts signed fields to u64 (timestamp_seconds as u64, timestamp_nanos as u64), which is fragile and risks diverging from the .proto; replace this hand-rolled encoding by using a protobuf library (e.g., prost) to serialize the timestamp message or at minimum move encoding into a single helper that uses prost::Message::encode for a Timestamp struct so encode_timestamp, timestamp_seconds, timestamp_nanos and encode_varint are removed/rewired to call the generated/standard encoder; ensure the produced bytes match the original wire format by using prost-generated types or prost::Message::encode_to_vec.
118-124: Consider behavior when timestamp is Unix epoch (0, 0).
When `timestamp_seconds` and `timestamp_nanos` are both 0, `encode_timestamp()` returns an empty vector, causing field 4 to be omitted entirely. This is standard protobuf behavior for default values, but if the Unix epoch (1970-01-01 00:00:00 UTC) is a valid price timestamp, it would be indistinguishable from "no timestamp provided" on the receiver side.
If epoch timestamps should never occur in practice for price data, this is fine. Otherwise, consider always emitting the timestamp field when you have a valid price entry.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@contracts/pyth/src/oracle.rs` around lines 118 - 124, The current serialization omits field 4 when encode_timestamp() returns an empty Vec, which makes a true Unix epoch (timestamp_seconds == 0 && timestamp_nanos == 0) indistinguishable from “no timestamp”; change the logic so that the timestamp field is always emitted for valid price entries: either update encode_timestamp() to return a non-empty encoded Timestamp even for zero values (so seconds/nanos are encoded explicitly) or remove the is_empty() check in the block that writes field tag 0x22 and instead always push 0x22 and the length/bytes returned by encode_timestamp(); refer to encode_timestamp(), timestamp_seconds, timestamp_nanos, and the serialization code that pushes 0x22 and extends buf to locate and apply the change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@_docs/pyth-integration.md`:
- Around line 696-702: The docs introduce the renamed parameter
max_price_staleness_period but later references still use the old
staleness_threshold; update all occurrences (including the troubleshooting table
and the block around lines referencing MsgUpdateParams / params and the section
at 733-738) to consistently use max_price_staleness_period, and ensure any
explanatory text and examples (e.g., suggested operator actions) mention the new
parameter name and its units/semantics instead of staleness_threshold.
In `@x/epochs/keeper/abci.go`:
- Around line 12-15: The BeginBlocker function currently calls
telemetry.ModuleMeasureSince(types.ModuleName, start,
telemetry.MetricKeyEndBlocker) which incorrectly attributes begin-block timing
to the end-block metric; update the call in BeginBlocker to use
telemetry.MetricKeyBeginBlocker instead so
telemetry.ModuleMeasureSince(types.ModuleName, start,
telemetry.MetricKeyBeginBlocker) is invoked.
In `@x/oracle/keeper/abci.go`:
- Around line 39-42: The TWAP window is being approximated by multiplying
params.TwapWindow by 6 seconds, which reintroduces block-time cadence
assumptions; change the logic around twapDuration/twapStart so TwapWindow is
treated as an explicit duration (e.g., time.Duration(params.TwapWindow) *
time.Second) or, better, compute the TWAP start from actual sample timestamps
instead of a fixed multiplier. Update the code that sets twapDuration and
twapStart (currently using params.TwapWindow and now.Add(...)) to either (a) use
an explicit duration unit for params.TwapWindow and remove the "* 6 *
time.Second" scaling, or (b) derive twapStart by walking/inspecting the oracle
price samples' timestamps to find the correct start time for the TWAP; also
ensure cutoffTime calculation using MaxPriceStalenessPeriod remains in seconds
(time.Duration(params.MaxPriceStalenessPeriod) * time.Second).
- Around line 80-90: The TWAP history call is currently only scoped by denom
causing cross-market mixing when the same asset has multiple baseDenoms; update
getTWAPHistory to accept and range on baseDenom as well (signature change to
include baseDenom and use (source, denom, baseDenom, timestamp) when
iterating/storing samples), then update the call site in the loop in abci.go
(where types.PriceDataRecordID is built using si.source, did.Denom,
did.BaseDenom, si.latestTimestamp) to pass did.BaseDenom into getTWAPHistory and
adjust any other callers of getTWAPHistory accordingly so histories are scoped
per (source, denom, baseDenom, timestamp).
- Around line 51-52: latestByDenom is iterated non-deterministically; collect
the map keys (DataID) into a slice, sort that slice, and then iterate over the
sorted keys when processing entries instead of using "for did, sources := range
latestByDenom". Update the loop that calls setPriceHealth, aggregatedPrices.Set,
and emits events to use the sorted key order so state mutations and event
emission are deterministic; keep references to latestByDenom, DataID (or
types.DataID), setPriceHealth, and aggregatedPrices.Set to locate and modify the
existing code.
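The sorted-keys loop described above is easy to verify on its own. This sketch uses a stand-in DataID struct and a hypothetical sortedDataIDs helper; the real keeper would sort on whatever fields types.DataID actually has.

```go
package main

import (
	"fmt"
	"sort"
)

// DataID is a stand-in for types.DataID.
type DataID struct{ Denom, BaseDenom string }

// sortedDataIDs returns map keys in a stable order so per-denom state writes
// and event emission happen identically on every validator, regardless of
// Go's randomized map iteration order.
func sortedDataIDs(m map[DataID][]string) []DataID {
	keys := make([]DataID, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Slice(keys, func(i, j int) bool {
		if keys[i].Denom != keys[j].Denom {
			return keys[i].Denom < keys[j].Denom
		}
		return keys[i].BaseDenom < keys[j].BaseDenom
	})
	return keys
}

func main() {
	latestByDenom := map[DataID][]string{
		{Denom: "akt", BaseDenom: "usd"}:  {"pyth"},
		{Denom: "akt", BaseDenom: "eur"}:  {"pyth"},
		{Denom: "atom", BaseDenom: "usd"}: {"pyth"},
	}
	for _, did := range sortedDataIDs(latestByDenom) {
		fmt.Println(did.Denom, did.BaseDenom)
	}
}
```

The body of the original range loop (setPriceHealth, aggregatedPrices.Set, event emission) then iterates `for _, did := range sortedDataIDs(latestByDenom)` and looks up `latestByDenom[did]` inside.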
In `@x/oracle/keeper/grpc_query_test.go`:
- Around line 172-177: The test currently expects two entries on page 2 which is
wrong for Limit:2 after inserting three prices; update the assertions in the
Prices pagination test (the call to suite.queryClient.Prices with req and the
variables res, req, ts1) so that page 2 asserts require.Len(t, res.Prices, 1)
and require.True(t, res.Prices[0].ID.Timestamp.Equal(ts1)) and remove the
assertion that the second entry has a zero timestamp; keep the existing call to
suite.queryClient.Prices and only adjust these expectations to match the
remaining single record.
In `@x/oracle/keeper/grpc_query.go`:
- Around line 29-37: The change removed the previous default of reverse
(newest-first) pagination by passing a zero-value PageRequest to
query.CollectionFilteredPaginate; restore the original behavior by ensuring
pageReq.Reverse is set to true when req.Pagination is nil (i.e., after
creating/copying pageReq from req.Pagination, if req.Pagination == nil then set
pageReq.Reverse = true) so that keeper.prices continues to be paginated
newest-first unless the caller explicitly supplied a Pagination.
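The nil-default logic above is a two-branch helper. This sketch mirrors the shape of the SDK's query.PageRequest with a local struct; the field set and the default Limit of 100 are assumptions for illustration, not the SDK's actual defaults.

```go
package main

import "fmt"

// PageRequest mirrors the shape of query.PageRequest for this sketch.
type PageRequest struct {
	Limit   uint64
	Reverse bool
}

// normalizePageRequest restores the old newest-first default: a nil request
// becomes Reverse=true, while an explicitly supplied request is honored
// untouched (even if Reverse is false).
func normalizePageRequest(req *PageRequest) *PageRequest {
	if req == nil {
		return &PageRequest{Limit: 100, Reverse: true} // assumed default limit
	}
	return req
}

func main() {
	fmt.Println(normalizePageRequest(nil).Reverse)                          // true
	fmt.Println(normalizePageRequest(&PageRequest{Reverse: false}).Reverse) // false
}
```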
In `@x/oracle/keeper/prune.go`:
- Around line 57-66: The current code ignores errors from k.latestPriceID.Walk
and from k.pruneSourcePrices (and inner delete calls), causing silent partial
pruning; update the Walk callback to capture and propagate errors returned by
k.pruneSourcePrices and by the Walk itself (i.e., return the error from the
callback so Walk surfaces it), and modify callers (e.g., AfterEpochEnd) to check
Walk's error and return it instead of treating pruning as successful; also apply
the same treatment to the other walk block around pruneSourcePrices (lines
~95-103) so all delete/Walk errors are not discarded but returned up the call
chain.
- Around line 73-103: pruneSourcePrices currently deletes only k.prices entries
leaving latestPriceID (the index used by EndBlocker) potentially pointing to a
removed record; update pruneSourcePrices to reconcile latestPriceID for the
affected source/denom/baseDenom after deletions: after removing keys (or if
deletions stopped early due to maxDelete), walk/find the newest remaining
types.PriceDataRecordID for that source+denom+baseDenom (or detect none remain)
and update or delete the stored latestPriceID entry accordingly (use the same
key types.PriceDataRecordID, k.prices iteration helpers, and the keeper
field/method that persists latestPriceID so EndBlocker will not read a dangling
index). Ensure this update happens whether you removed all returned keys or only
a prefix due to maxDelete.
In `@x/oracle/keeper/timestamp.go`:
- Around line 49-66: DecodeTimestamp currently only checks buffer length and
uses readDigits which doesn't validate characters, allowing malformed buffers to
produce normalized (but incorrect) times; update DecodeTimestamp to verify that
separator bytes at positions 4,7,10,13,16,19,23,27,30 match expected ASCII
separators (e.g. '-', 'T', ':', '.', etc.), ensure each numeric slice contains
only ASCII digits before calling readDigits, and validate numeric ranges
(year>0, month 1-12, day within month's valid range considering leap years, hour
0-23, minute 0-59, second 0-59, subsecond parts 0-999) returning an error on any
invalid input; apply the same validation logic to the other timestamp decoder(s)
that use readDigits so malformed buffers fail instead of being normalized by
time.Date.
---
Outside diff comments:
In `@x/oracle/keeper/key.go`:
- Around line 26-42: BuildPricePrefix currently writes a uvarint id and omits
BaseDenom, so it doesn't produce the v2 key layout; update BuildPricePrefix (and
its use of PricesPrefix.Bytes() and EncodeTimestamp) to emit the v2 format:
write Source as a 4-byte big-endian (use binary.BigEndian.PutUint32 or
equivalent) instead of the uvarint id, then write Denom (length+bytes), then
write BaseDenom (length+bytes), and finally write Timestamp (if non-zero) and
leave space for Sequence as appropriate; remove the uvarint logic and adjust the
function signature/parameters to accept a baseDenom (and a uint32 source) so
callers produce keys matching v2 price-key layout.
---
Duplicate comments:
In `@x/bme/keeper/keeper.go`:
- Around line 874-880: The error handling after calling k.calculateCR(sctx)
incorrectly overwrites the MintStatusHaltOracle status because cb.Status is set
unconditionally to MintStatusHaltCR; update the branch in the function
containing k.calculateCR so that when errors.Is(err, otypes.ErrPriceStalled)
sets cb.Status = bmetypes.MintStatusHaltOracle it is not immediately
replaced—either use an if/else (set MintStatusHaltCR only when the error is not
ErrPriceStalled) or return/exit after setting MintStatusHaltOracle; ensure the
logic around k.calculateCR, the err check, and cb.Status assignments preserves
MintStatusHaltOracle when appropriate.
---
Nitpick comments:
In `@contracts/pyth/src/oracle.rs`:
- Around line 262-282: Add a unit test that covers non-zero nanoseconds by
creating a MsgAddPriceEntry with timestamp_nanos != 0 (e.g., 500_000_000),
assert msg.timestamp_nanos equals that value, call msg.encode_to_protobuf(),
assert the returned binary is non-empty and verify the protobuf encoding
contains the timestamp field (for example by checking the binary contains the
field 4 tag 0x22); place the test alongside existing tests (e.g., name it
test_msg_add_price_entry_with_nanos) to exercise MsgAddPriceEntry::new and
encode_to_protobuf paths that handle timestamp_nanos.
- Around line 149-165: The encode_timestamp method manually builds protobuf
varints and casts signed fields to u64 (timestamp_seconds as u64,
timestamp_nanos as u64), which is fragile and risks diverging from the .proto;
replace this hand-rolled encoding by using a protobuf library (e.g., prost) to
serialize the timestamp message or at minimum move encoding into a single helper
that uses prost::Message::encode for a Timestamp struct so encode_timestamp,
timestamp_seconds, timestamp_nanos and encode_varint are removed/rewired to call
the generated/standard encoder; ensure the produced bytes match the original
wire format by using prost-generated types or prost::Message::encode_to_vec.
- Around line 118-124: The current serialization omits field 4 when
encode_timestamp() returns an empty Vec, which makes a true Unix epoch
(timestamp_seconds == 0 && timestamp_nanos == 0) indistinguishable from “no
timestamp”; change the logic so that the timestamp field is always emitted for
valid price entries: either update encode_timestamp() to return a non-empty
encoded Timestamp even for zero values (so seconds/nanos are encoded explicitly)
or remove the is_empty() check in the block that writes field tag 0x22 and
instead always push 0x22 and the length/bytes returned by encode_timestamp();
refer to encode_timestamp(), timestamp_seconds, timestamp_nanos, and the
serialization code that pushes 0x22 and extends buf to locate and apply the
change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 0b06d4e4-237a-4bb0-b668-d6c246ccf260
⛔ Files ignored due to path filters (1)
- `go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (47)
_docs/pyth-integration.md, _run/init.sh, _run/node/prop.json, app/types/app.go, contracts/pyth/src/contract.rs, contracts/pyth/src/integration_tests.rs, contracts/pyth/src/oracle.rs, go.mod, meta.json, script/upgrades.sh, script/wasm2go.sh, tests/e2e/oracle_cli_test.go, tests/e2e/oracle_grpc_test.go, tests/e2e/pyth_contract_test.go, tests/upgrade/test-cases.json, testutil/oracle/price_feeder.go, testutil/state/suite.go, upgrades/software/v2.0.0/upgrade.go, upgrades/software/v2.1.0/contracts.go, upgrades/software/v2.1.0/init.go, upgrades/software/v2.1.0/oracle.go, upgrades/software/v2.1.0/upgrade.go, upgrades/upgrades.go, x/bme/keeper/abci.go, x/bme/keeper/keeper.go, x/bme/keeper/key.go, x/bme/simulation/genesis.go, x/bme/simulation/proposals.go, x/deployment/keeper/grpc_query.go, x/epochs/keeper/abci.go, x/market/keeper/grpc_query.go, x/oracle/alias.go, x/oracle/handler/server.go, x/oracle/keeper/abci.go, x/oracle/keeper/codec.go, x/oracle/keeper/genesis.go, x/oracle/keeper/grpc_query.go, x/oracle/keeper/grpc_query_test.go, x/oracle/keeper/keeper.go, x/oracle/keeper/key.go, x/oracle/keeper/prune.go, x/oracle/keeper/timestamp.go, x/oracle/module.go, x/oracle/simulation/genesis.go, x/oracle/simulation/proposals.go, x/wasm/bindings/akash_query.go, x/wasm/keeper/msg_filter.go
✅ Files skipped from review due to trivial changes (5)
- go.mod
- x/oracle/simulation/genesis.go
- meta.json
- tests/upgrade/test-cases.json
- x/bme/keeper/key.go
🚧 Files skipped from review as they are similar to previous changes (23)
- upgrades/upgrades.go
- x/wasm/keeper/msg_filter.go
- x/market/keeper/grpc_query.go
- tests/e2e/oracle_grpc_test.go
- x/bme/simulation/genesis.go
- tests/e2e/oracle_cli_test.go
- upgrades/software/v2.0.0/upgrade.go
- contracts/pyth/src/contract.rs
- _run/node/prop.json
- x/wasm/bindings/akash_query.go
- x/bme/keeper/abci.go
- testutil/state/suite.go
- script/wasm2go.sh
- upgrades/software/v2.1.0/oracle.go
- x/oracle/handler/server.go
- testutil/oracle/price_feeder.go
- x/oracle/keeper/genesis.go
- _run/init.sh
- x/bme/simulation/proposals.go
- upgrades/software/v2.1.0/init.go
- script/upgrades.sh
- upgrades/software/v2.1.0/upgrade.go
- x/oracle/keeper/keeper.go
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (3)
x/oracle/handler/server.go (1)
28-37: ⚠️ Potential issue | 🟠 Major
Reject invalid client timestamps before calling the keeper.
`x/oracle/keeper/keeper.go` only enforces monotonicity against the latest stored timestamp. With the direct pass-through here, an omitted timestamp now fails as "too old", and a future timestamp can move `latestPriceID` ahead of block time so later normal submissions are rejected until the clock catches up. Validate `req.Timestamp` here (required + not after `sctx.BlockTime()` / bounded skew) before persisting it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/handler/server.go` around lines 28 - 37, Add validation in msgServer.AddPriceEntry: before calling ms.keeper.AddPriceEntry, ensure req.Timestamp is present and within allowed bounds (not in the future relative to sctx.BlockTime() and not older than an acceptable skew/threshold); if invalid, return an error. Locate the AddPriceEntry handler (uses sdk.UnwrapSDKContext to get sctx and currently calls ms.keeper.AddPriceEntry) and add the checks on req.Timestamp versus sctx.BlockTime() (and a configured/constant max skew) to reject omitted, too-old, or future timestamps so keeper state (e.g., latestPriceID) cannot be moved ahead of block time.
x/oracle/keeper/abci.go (1)
129-138: ⚠️ Potential issue | 🟠 Major
Don't emit a success event after a failed state write.
If `aggregatedPrices.Set` fails, the code logs the error and still appends `EventAggregatedPrice`. That lets indexers observe a price update that never reached state. Stop processing this denom when the write fails.
🐛 Minimal fix
  if health.IsHealthy && len(latestPrices) > 0 {
  	err = k.aggregatedPrices.Set(sctx, did, aggregatedPrice)
  	if err != nil {
- 		sctx.Logger().Error(
- 			"set aggregated price",
- 			"reason", err.Error(),
- 		)
+ 		return err
  	}
  	evts = append(evts, &types.EventAggregatedPrice{Price: aggregatedPrice})
  }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/abci.go` around lines 129 - 138, The code logs an error from k.aggregatedPrices.Set(sctx, did, aggregatedPrice) but still appends EventAggregatedPrice to evts, allowing indexers to see an update that wasn't persisted; update the flow in the handler (the block using aggregatedPrices.Set, sctx.Logger().Error, and evts slice) so that if Set returns an error you stop processing this denom (e.g., return or continue out of the loop) and do not append EventAggregatedPrice, and ensure the error is logged as before and the function/loop exits for that denom to prevent emitting a success event for a failed write.
x/oracle/keeper/key.go (1)
26-42: ⚠️ Potential issue | 🟠 Major
Make `BuildPricePrefix` match `PriceDataRecordIDKey`'s binary layout.
`x/oracle/keeper/codec.go` now encodes `prices` keys as `source` (uint32-be) / denom / baseDenom / timestamp / sequence, but this helper still writes `source` as a uvarint and appends `timestamp` immediately after `denom`. Any raw scan built from this prefix will miss the new keys once `ts` is set, and the source portion no longer lines up either. Align this helper with the codec layout (likely including `baseDenom`) or remove it before anything starts relying on it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/key.go` around lines 26 - 42, BuildPricePrefix currently writes source as a uvarint and places timestamp immediately after denom, but PriceDataRecordIDKey/codec.go encodes keys as source (uint32 big-endian) / denom / baseDenom / timestamp / sequence; update BuildPricePrefix to produce the same binary layout by writing id as a 4-byte big-endian uint32 (not uvarint), writing denom then baseDenom (handle empty baseDenom consistently with codec), and then appending EncodeTimestamp(ts) only after baseDenom; ensure byte-length prefixes (if used) match codec conventions and remove the old uvarint/incorrect ordering so raw scans align with PriceDataRecordIDKey.
♻️ Duplicate comments (1)
script/upgrades.sh (1)
599-601: ⚠️ Potential issue | 🟠 Major
Guard optional `wasm` copy to avoid aborting setup.
Line 601 unconditionally runs `cp -r "$srcval/wasm" ./`; with `set -e`, this exits when `wasm` is absent in the source state.
Proposed fix
  rm -rf "data"
  rm -rf "wasm"
  cp -r "$srcval/data" ./
- cp -r "$srcval/wasm" ./
+ if [[ -d "$srcval/wasm" ]]; then
+ 	cp -r "$srcval/wasm" ./
+ fi
Verification script
#!/bin/bash
set -euo pipefail
tmp="$(mktemp -d)"
trap 'rm -rf "$tmp"' EXIT
mkdir -p "$tmp/src/data" "$tmp/dst"
cp -r "$tmp/src/data" "$tmp/dst"/
if cp -r "$tmp/src/wasm" "$tmp/dst"/; then
  echo "unexpected: copy succeeded"
else
  echo "expected: copy fails when source wasm directory is missing"
fi
Verify each finding against the current code and only fix it if needed. In `@script/upgrades.sh` around lines 599 - 601, The unconditional cp of "$srcval/wasm" (the line cp -r "$srcval/wasm" ./) can abort under set -e if the source wasm directory is missing; change it to a guarded copy: check existence of the directory (e.g., [ -d "$srcval/wasm" ] or test -e) before running cp, or perform the cp in a conditional (if cp -r "$srcval/wasm" ./; then ... fi) so the script continues when wasm is absent; keep the existing rm -rf "wasm" and the cp -r "$srcval/data" ./ behavior unchanged and ensure any success/failure is handled without causing the script to exit.
🧹 Nitpick comments (1)
x/bme/keeper/keeper.go (1)
109-145: Move `ledgerSequence` to the transient schema.
`ledgerSequence` is reset to `0` at the start of every block in `BeginBlocker` (abci.go:22), making it per-block state. Persisting it to the persistent store only adds unnecessary state churn; use `tsb` (the transient schema already defined in keeper.go:110-113) instead.
Verify each finding against the current code and only fix it if needed. In `@x/bme/keeper/keeper.go` around lines 109 - 145, The ledgerSequence item is created on the persistent schema builder (sb) but should be transient; change the creation of ledgerSequence from collections.NewItem(sb, LedgerSequenceKey, "ledger_sequence", collections.Int64Value) to use the transient builder tsb instead (i.e., call collections.NewItem(tsb, LedgerSequenceKey, "ledger_sequence", collections.Int64Value)), remove it from sb’s map of items, rebuild tschema and schema as shown, and ensure the keeper struct still assigns k.schema = schema and k.tschema = tschema so ledgerSequence lives in the transient store accessed via tschema/tsb.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@tests/e2e/pyth_contract_test.go`:
- Line 578: The comment for TestWormholeResponseParsing contains a stray
trailing backtick; open the test file and edit the comment text for the
TestWormholeResponseParsing declaration to remove the trailing backtick so the
comment reads "// TestWormholeResponseParsing tests parsing of Wormhole contract
responses" without the backtick.
In `@upgrades/software/v2.1.0/upgrade.go`:
- Around line 53-79: Before calling RunMigrations or immediately after it
returns, read the pre-wipe oracle state (params and latest price snapshots) and
persist them into the v2 store instead of overwriting with only
otypes.DefaultParams(); specifically: capture existing params via the old oracle
keeper APIs, translate/map fields into the v2 params shape (don’t discard
governance-tuned values) and use up.Keepers.Akash.Oracle.SetParams to write the
mapped params; additionally seed the v2 aggregatedPrices, pricesHealth,
latestPriceID, prices and pricesSequence with the last known snapshots (or
initialize them deterministically from the captured prices) so the system won’t
be left empty until the next submission, and ensure this logic occurs around
RunMigrations/up.MM.RunMigrations and before relying on the new oracle state.
In `@x/bme/keeper/keeper.go`:
- Around line 874-879: The current error handling after calling
calculateCR(sctx) incorrectly overwrites MintStatusHaltOracle because cb.Status
= bmetypes.MintStatusHaltCR runs unconditionally; change the logic in the error
branch so that when errors.Is(err, otypes.ErrPriceStalled) you set cb.Status =
bmetypes.MintStatusHaltOracle and return or use an else (or switch) for other
errors to set cb.Status = bmetypes.MintStatusHaltCR; update the block around
calculateCR(sctx) so the HaltOracle assignment is not immediately overwritten.
In `@x/oracle/keeper/abci.go`:
- Around line 34-38: The EndBlocker currently calls k.GetParams(sctx) and on
error only logs and returns nil, which silently masks a consensus-path store
read failure; instead, change the error handling in EndBlocker so it does not
treat a failed k.GetParams(sctx) as a no-op—either propagate the error or panic.
Specifically, replace the sctx.Logger().Error + return nil branch handling the
error from k.GetParams(sctx) with a hard failure (e.g., panic with the wrapped
error or return the error to the caller) so that GetParams failures are not
ignored and execution halts (refer to k.GetParams, EndBlocker, and the current
sctx.Logger().Error usage to locate the code).
In `@x/wasm/bindings/akash_query.go`:
- Around line 52-53: The comment for the MaxPriceStalenessPeriod field is
incorrect (it says "in blocks") while the code treats this value as a time
period in seconds; update the comment above MaxPriceStalenessPeriod to state
that the unit is seconds (e.g., "maximum price staleness period in seconds") so
it matches the JSON tag/field name and the usage in the oracle keeper (abci.go)
where it's handled as seconds.
---
Outside diff comments:
In `@x/oracle/handler/server.go`:
- Around line 28-37: Add validation in msgServer.AddPriceEntry: before calling
ms.keeper.AddPriceEntry, ensure req.Timestamp is present and within allowed
bounds (not in the future relative to sctx.BlockTime() and not older than an
acceptable skew/threshold); if invalid, return an error. Locate the
AddPriceEntry handler (uses sdk.UnwrapSDKContext to get sctx and currently calls
ms.keeper.AddPriceEntry) and add the checks on req.Timestamp versus
sctx.BlockTime() (and a configured/constant max skew) to reject omitted,
too-old, or future timestamps so keeper state (e.g., latestPriceID) cannot be
moved ahead of block time.
In `@x/oracle/keeper/abci.go`:
- Around line 129-138: The code logs an error from k.aggregatedPrices.Set(sctx,
did, aggregatedPrice) but still appends EventAggregatedPrice to evts, allowing
indexers to see an update that wasn't persisted; update the flow in the handler
(the block using aggregatedPrices.Set, sctx.Logger().Error, and evts slice) so
that if Set returns an error you stop processing this denom (e.g., return or
continue out of the loop) and do not append EventAggregatedPrice — ensure the
error is logged as before and the function/loop exits for that denom to prevent
emitting a success event for a failed write.
In `@x/oracle/keeper/key.go`:
- Around line 26-42: BuildPricePrefix currently writes source as a uvarint and
places timestamp immediately after denom, but PriceDataRecordIDKey/codec.go
encodes keys as source (uint32 big-endian) / denom / baseDenom / timestamp /
sequence; update BuildPricePrefix to produce the same binary layout by writing
id as a 4-byte big-endian uint32 (not uvarint), writing denom then baseDenom
(handle empty baseDenom consistently with codec), and then appending
EncodeTimestamp(ts) only after baseDenom; ensure byte-length prefixes (if used)
match codec conventions and remove the old uvarint/incorrect ordering so raw
scans align with PriceDataRecordIDKey.
---
Duplicate comments:
In `@script/upgrades.sh`:
- Around line 599-601: The unconditional cp of "$srcval/wasm" (the line cp -r
"$srcval/wasm" ./) can abort under set -e if the source wasm directory is
missing; change it to a guarded copy: check existence of the directory (e.g., [
-d "$srcval/wasm" ] or test -e) before running cp, or perform the cp in a
conditional (if cp -r "$srcval/wasm" ./; then ... fi) so the script continues
when wasm is absent; keep the existing rm -rf "wasm" and the cp -r
"$srcval/data" ./ behavior unchanged and ensure any success/failure is handled
without causing the script to exit.
---
Nitpick comments:
In `@x/bme/keeper/keeper.go`:
- Around line 109-145: The ledgerSequence item is created on the persistent
schema builder (sb) but should be transient; change the creation of
ledgerSequence from collections.NewItem(sb, LedgerSequenceKey,
"ledger_sequence", collections.Int64Value) to use the transient builder tsb
instead (i.e., call collections.NewItem(tsb, LedgerSequenceKey,
"ledger_sequence", collections.Int64Value)), remove it from sb’s map of items,
rebuild tschema and schema as shown, and ensure the keeper struct still assigns
k.schema = schema and k.tschema = tschema so ledgerSequence lives in the
transient store accessed via tschema/tsb.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 7ff53d15-b327-40d9-8b02-9bf307da26a9
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (48)
- _docs/pyth-integration.md
- _run/init.sh
- _run/node/prop.json
- app/types/app.go
- contracts/pyth/src/contract.rs
- contracts/pyth/src/integration_tests.rs
- contracts/pyth/src/oracle.rs
- go.mod
- meta.json
- script/upgrades.sh
- script/wasm2go.sh
- tests/e2e/oracle_cli_test.go
- tests/e2e/oracle_grpc_test.go
- tests/e2e/pyth_contract_test.go
- tests/upgrade/test-cases.json
- testutil/oracle/price_feeder.go
- testutil/state/suite.go
- upgrades/software/v2.0.0/upgrade.go
- upgrades/software/v2.1.0/contracts.go
- upgrades/software/v2.1.0/init.go
- upgrades/software/v2.1.0/oracle.go
- upgrades/software/v2.1.0/upgrade.go
- upgrades/upgrades.go
- x/bme/keeper/abci.go
- x/bme/keeper/keeper.go
- x/bme/keeper/key.go
- x/bme/simulation/genesis.go
- x/bme/simulation/proposals.go
- x/deployment/keeper/grpc_query.go
- x/epochs/keeper/abci.go
- x/market/keeper/grpc_query.go
- x/oracle/alias.go
- x/oracle/handler/server.go
- x/oracle/keeper/abci.go
- x/oracle/keeper/codec.go
- x/oracle/keeper/genesis.go
- x/oracle/keeper/grpc_query.go
- x/oracle/keeper/grpc_query_test.go
- x/oracle/keeper/keeper.go
- x/oracle/keeper/key.go
- x/oracle/keeper/prune.go
- x/oracle/keeper/timestamp.go
- x/oracle/keeper/timestamp_test.go
- x/oracle/module.go
- x/oracle/simulation/genesis.go
- x/oracle/simulation/proposals.go
- x/wasm/bindings/akash_query.go
- x/wasm/keeper/msg_filter.go
✅ Files skipped from review due to trivial changes (7)
- upgrades/upgrades.go
- x/oracle/simulation/genesis.go
- go.mod
- tests/upgrade/test-cases.json
- upgrades/software/v2.1.0/init.go
- meta.json
- x/oracle/keeper/timestamp.go
🚧 Files skipped from review as they are similar to previous changes (20)
- x/wasm/keeper/msg_filter.go
- x/market/keeper/grpc_query.go
- x/bme/simulation/genesis.go
- x/bme/simulation/proposals.go
- x/epochs/keeper/abci.go
- tests/e2e/oracle_grpc_test.go
- contracts/pyth/src/contract.rs
- x/oracle/alias.go
- tests/e2e/oracle_cli_test.go
- script/wasm2go.sh
- x/oracle/simulation/proposals.go
- upgrades/software/v2.0.0/upgrade.go
- x/bme/keeper/abci.go
- _docs/pyth-integration.md
- x/bme/keeper/key.go
- testutil/oracle/price_feeder.go
- _run/init.sh
- _run/node/prop.json
- contracts/pyth/src/oracle.rs
- x/oracle/keeper/keeper.go
Signed-off-by: Artur Troian <troian@users.noreply.github.com>
There was a problem hiding this comment.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
_docs/pyth-integration.md (1)
769-790: ⚠️ Potential issue | 🟠 Major
Sync this guardian-set example with the repo's v5 list.
`_run/init.sh` lines 45–68 are annotated as Wormhole mainnet guardian set 5, but this example still uses different 1st/7th/15th/16th entries (58CC…, 54Ce…, 71AA…, 8192…). Copy-pasting this proposal will configure the wrong guardians and break VAA verification.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_docs/pyth-integration.md` around lines 769 - 790, The guardian_addresses array in the WormholeContractParams example contains outdated entries (notably the 1st, 7th, 15th, and 16th addresses) and must be synced to the repo's Wormhole guardian set v5; update the guardian_addresses array to exactly match the canonical v5 list used in the repo (source of truth: the v5 guardian set in _run/init.sh) so copy the correct addresses from that v5 listing and replace the mismatched entries in this example.
♻️ Duplicate comments (1)
script/wasm2go.sh (1)
7-12: ⚠️ Potential issue | 🟡 Minor
Header comment still points to `v2.0.0`; update it to `v2.1.0`.
Line 12 writes to `upgrades/software/v2.1.0/contracts.go`, but line 7 still documents `v2.0.0`. Please keep the usage header aligned with the actual output path.
Suggested fix

```diff
-# to upgrades/software/v2.0.0/contracts.go with byte slice variables.
+# to upgrades/software/v2.1.0/contracts.go with byte slice variables.
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@script/wasm2go.sh` around lines 7 - 12, Update the script header comment that currently references "v2.0.0" so it matches the OUTPUT_FILE target "upgrades/software/v2.1.0/contracts.go"; locate the top-of-file comment near the set -euo pipefail and replace the v2.0.0 text with v2.1.0 (ensure consistency with the ARTIFACTS_DIR and OUTPUT_FILE variables).
🧹 Nitpick comments (4)
_run/init.sh (1)
311-323: Avoid hard-coding proposal ID `1`.
Line 319 is only correct on a pristine chain. Any earlier proposal, or a partially rerun setup, makes this function vote on the wrong item. Reading the created ID from the `submit-proposal` output would make the flow resilient.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@_run/init.sh` around lines 311 - 323, The script currently hard-codes proposal ID 1 after running `akash tx gov submit-proposal /tmp/oracle-params.json`; change it to capture the created proposal ID from the `akash tx gov submit-proposal` output (e.g., parse stdout for the proposal id or use `--output json`) into a variable like PROPOSAL_ID and then use that variable in `akash tx gov vote $PROPOSAL_ID yes --from validator` and in the subsequent wait/check logic (replace the fixed `sleep 70` with a short loop that polls `akash query gov proposal $PROPOSAL_ID` until the proposal reaches a final status). Ensure references to `submit-proposal`, `/tmp/oracle-params.json`, `akash tx gov vote`, and the new PROPOSAL_ID variable are updated accordingly.
testutil/oracle/price_feeder.go (1)
9-9: Consider renaming the import alias to reflect v2.
The alias `oraclev1` is misleading since it now imports `pkg.akt.dev/go/node/oracle/v2`. Consider renaming to `oracletypes` or `oraclev2` for clarity.

```diff
- oraclev1 "pkg.akt.dev/go/node/oracle/v2"
+ oracletypes "pkg.akt.dev/go/node/oracle/v2"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@testutil/oracle/price_feeder.go` at line 9, The import alias oraclev1 is misleading because it points to pkg.akt.dev/go/node/oracle/v2; rename the alias (e.g., to oraclev2 or oracletypes) in the import statement and update all usages of the symbol oraclev1 (functions, types, constants) in this file (e.g., any oraclev1.X references) to the new alias so the code and identifiers reflect the v2 package name.
x/bme/keeper/keeper.go (1)
139-145: Remove the unused `tschema` assignment or the entire transient schema setup if not needed.
The `tkey` parameter and transient store service are used correctly. However, `tschema` is built from `tsb` but never used after assignment to `k.tschema`. If transient collections are not needed for BME, remove the `tsb` and `tschema` setup entirely. Otherwise, clarify the intended use of the transient schema.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/bme/keeper/keeper.go` around lines 139 - 145, The code builds a transient schema (tschema := tsb.Build()) and assigns it to k.tschema but never uses it; either remove the transient schema setup (tsb, tschema, k.tschema) if BME does not require transient collections, or retain it and document/use k.tschema where transient collections are needed. Locate the transient builder variable named tsb and the assignment k.tschema, then either delete the tsb construction and the tschema/assignment lines and any unused tkey/transient-related variables, or add the intended usage of k.tschema (e.g., passing it into transient store initialization or methods) and ensure tschema comes from tsb.Build() and its error is handled consistently with other schema logic.
x/oracle/keeper/keeper.go (1)
537-570: Consider handling the single-price edge case gracefully.
When there's only one data point with timestamp equal to `now`, `totalWeight` becomes 0 and `ErrTWAPZeroWeight` is returned. This could occur when a price is submitted and aggregation runs in the same block. Consider returning the single price directly when `len(dataPoints) == 1` and `totalWeight == 0`:
♻️ Proposed fix

```diff
 func calculateTWAP(now time.Time, dataPoints []types.PriceData) (sdkmath.LegacyDec, error) {
 	if len(dataPoints) == 0 {
 		return sdkmath.LegacyZeroDec(), errorsmod.Wrap(
 			sdkerrors.ErrNotFound,
 			"no price data for requested source",
 		)
 	}
 	weightedSum := sdkmath.LegacyZeroDec()
 	totalWeight := int64(0)
 	for i := 0; i < len(dataPoints); i++ {
 		current := dataPoints[i]
 		var timeWeight int64
 		if i > 0 {
 			// dataPoints are in descending order, so dataPoints[i-1] is newer than current
 			timeWeight = dataPoints[i-1].ID.Timestamp.Sub(current.ID.Timestamp).Nanoseconds()
 		} else {
 			timeWeight = now.Sub(current.ID.Timestamp).Nanoseconds()
 		}
 		weightedSum = weightedSum.Add(current.State.Price.Mul(sdkmath.LegacyNewDec(timeWeight)))
 		totalWeight += timeWeight
 	}
 	if totalWeight == 0 {
+		// Single price at current time - return it directly
+		if len(dataPoints) == 1 {
+			return dataPoints[0].State.Price, nil
+		}
 		return sdkmath.LegacyZeroDec(), types.ErrTWAPZeroWeight
 	}
 	return weightedSum.Quo(sdkmath.LegacyNewDec(totalWeight)), nil
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/keeper.go` around lines 537 - 570, The calculateTWAP function can return ErrTWAPZeroWeight when a single dataPoint has timestamp == now; update calculateTWAP to handle this edge case by checking if len(dataPoints) == 1 and totalWeight == 0 (or compute the single-point weight condition) and return that single data point's price (current.State.Price) directly as the TWAP instead of dividing by zero; modify the loop or add a post-loop conditional in calculateTWAP to detect this scenario and return current.State.Price as the result with nil error.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@x/oracle/handler/server.go`:
- Line 36: The keeper's AddPriceEntry currently only checks that new timestamps
are not before the latest stored entry but accepts arbitrarily future
timestamps; update keeper.AddPriceEntry to reject timestamps that exceed the
current block time by a tolerance (use the provided context's BlockTime(), e.g.,
ctx.BlockTime() + tolerance like 1 hour or a configurable parameter) and return
a descriptive error when req.Timestamp > ctx.BlockTime().Add(tolerance). Ensure
this validation runs before persisting the entry and reference the server call
site ms.keeper.AddPriceEntry(sctx, source, req.ID, req.Price, req.Timestamp) so
submitted future-dated entries are blocked at keeper level.
In `@x/oracle/keeper/abci.go`:
- Around line 40-43: The code treats params.TwapWindow (an int64 block count) as
a time.Duration when computing twapStart; change twapDuration to a time.Duration
by converting the block count into a real time interval (multiply
params.TwapWindow by the per-block time duration) before calling
sctx.BlockTime().Add. Locate the twapDuration/twapStart computation in abci.go
(where now := sctx.BlockTime(), twapDuration := params.TwapWindow, twapStart :=
now.Add(-twapDuration)) and replace twapDuration with a computed duration like
time.Duration(params.TwapWindow) * <perBlockDuration> (use your node/config
value or block spacing constant) so twapStart uses a time.Duration argument.
---
Outside diff comments:
In `@_docs/pyth-integration.md`:
- Around line 769-790: The guardian_addresses array in the
WormholeContractParams example contains outdated entries (notably the 1st, 7th,
15th, and 16th addresses) and must be synced to the repo's Wormhole guardian set
v5; update the guardian_addresses array to exactly match the canonical v5 list
used in the repo (source of truth: the v5 guardian set in _run/init.sh) so copy
the correct addresses from that v5 listing and replace the mismatched entries in
this example.
---
Duplicate comments:
In `@script/wasm2go.sh`:
- Around line 7-12: Update the script header comment that currently references
"v2.0.0" so it matches the OUTPUT_FILE target
"upgrades/software/v2.1.0/contracts.go"; locate the top-of-file comment near the
set -euo pipefail and replace the v2.0.0 text with v2.1.0 (ensure consistency
with the ARTIFACTS_DIR and OUTPUT_FILE variables).
---
Nitpick comments:
In `@_run/init.sh`:
- Around line 311-323: The script currently hard-codes proposal ID 1 after
running `akash tx gov submit-proposal /tmp/oracle-params.json`; change it to
capture the created proposal ID from the `akash tx gov submit-proposal` output
(e.g., parse stdout for the proposal id or use `--output json`) into a variable
like PROPOSAL_ID and then use that variable in `akash tx gov vote $PROPOSAL_ID
yes --from validator` and in the subsequent wait/check logic (replace the fixed
`sleep 70` with a short loop that polls `akash query gov proposal $PROPOSAL_ID`
until the proposal reaches a final status). Ensure references to
`submit-proposal`, `/tmp/oracle-params.json`, `akash tx gov vote`, and the new
PROPOSAL_ID variable are updated accordingly.
In `@testutil/oracle/price_feeder.go`:
- Line 9: The import alias oraclev1 is misleading because it points to
pkg.akt.dev/go/node/oracle/v2; rename the alias (e.g., to oraclev2 or
oracletypes) in the import statement and update all usages of the symbol
oraclev1 (functions, types, constants) in this file (e.g., any oraclev1.X
references) to the new alias so the code and identifiers reflect the v2 package
name.
In `@x/bme/keeper/keeper.go`:
- Around line 139-145: The code builds a transient schema (tschema :=
tsb.Build()) and assigns it to k.tschema but never uses it; either remove the
transient schema setup (tsb, tschema, k.tschema) if BME does not require
transient collections, or retain it and document/use k.tschema where transient
collections are needed. Locate the transient builder variable named tsb and the
assignment k.tschema, then either delete the tsb construction and the
tschema/assignment lines and any unused tkey/transient-related variables, or add
the intended usage of k.tschema (e.g., passing it into transient store
initialization or methods) and ensure tschema comes from tsb.Build() and its
error is handled consistently with other schema logic.
In `@x/oracle/keeper/keeper.go`:
- Around line 537-570: The calculateTWAP function can return ErrTWAPZeroWeight
when a single dataPoint has timestamp == now; update calculateTWAP to handle
this edge case by checking if len(dataPoints) == 1 and totalWeight == 0 (or
compute the single-point weight condition) and return that single data point's
price (current.State.Price) directly as the TWAP instead of dividing by zero;
modify the loop or add a post-loop conditional in calculateTWAP to detect this
scenario and return current.State.Price as the result with nil error.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 1b3b90ea-7147-4a75-a91e-ba46fc17bdad
⛔ Files ignored due to path filters (1)
`go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (49)
- _docs/pyth-integration.md
- _run/init.sh
- _run/node/prop.json
- app/ante.go
- app/types/app.go
- contracts/pyth/src/contract.rs
- contracts/pyth/src/integration_tests.rs
- contracts/pyth/src/oracle.rs
- go.mod
- meta.json
- script/upgrades.sh
- script/wasm2go.sh
- tests/e2e/oracle_cli_test.go
- tests/e2e/oracle_grpc_test.go
- tests/e2e/pyth_contract_test.go
- tests/upgrade/test-cases.json
- testutil/oracle/price_feeder.go
- testutil/state/suite.go
- upgrades/software/v2.0.0/upgrade.go
- upgrades/software/v2.1.0/contracts.go
- upgrades/software/v2.1.0/init.go
- upgrades/software/v2.1.0/oracle.go
- upgrades/software/v2.1.0/upgrade.go
- upgrades/upgrades.go
- x/bme/keeper/abci.go
- x/bme/keeper/keeper.go
- x/bme/keeper/key.go
- x/bme/simulation/genesis.go
- x/bme/simulation/proposals.go
- x/deployment/keeper/grpc_query.go
- x/epochs/keeper/abci.go
- x/market/keeper/grpc_query.go
- x/oracle/alias.go
- x/oracle/handler/server.go
- x/oracle/keeper/abci.go
- x/oracle/keeper/codec.go
- x/oracle/keeper/genesis.go
- x/oracle/keeper/grpc_query.go
- x/oracle/keeper/grpc_query_test.go
- x/oracle/keeper/keeper.go
- x/oracle/keeper/key.go
- x/oracle/keeper/prune.go
- x/oracle/keeper/timestamp.go
- x/oracle/keeper/timestamp_test.go
- x/oracle/module.go
- x/oracle/simulation/genesis.go
- x/oracle/simulation/proposals.go
- x/wasm/bindings/akash_query.go
- x/wasm/keeper/msg_filter.go
✅ Files skipped from review due to trivial changes (12)
- x/epochs/keeper/abci.go
- tests/e2e/oracle_cli_test.go
- x/bme/simulation/proposals.go
- x/oracle/simulation/genesis.go
- x/oracle/simulation/proposals.go
- meta.json
- x/oracle/keeper/timestamp_test.go
- x/deployment/keeper/grpc_query.go
- tests/upgrade/test-cases.json
- x/oracle/keeper/genesis.go
- upgrades/software/v2.1.0/upgrade.go
- x/oracle/keeper/prune.go
🚧 Files skipped from review as they are similar to previous changes (15)
- x/bme/simulation/genesis.go
- upgrades/upgrades.go
- script/upgrades.sh
- testutil/state/suite.go
- x/market/keeper/grpc_query.go
- upgrades/software/v2.0.0/upgrade.go
- contracts/pyth/src/contract.rs
- x/oracle/alias.go
- upgrades/software/v2.1.0/init.go
- go.mod
- x/bme/keeper/key.go
- x/oracle/keeper/grpc_query.go
- _run/node/prop.json
- x/oracle/keeper/timestamp.go
- contracts/pyth/src/integration_tests.rs
Signed-off-by: Artur Troian <troian@users.noreply.github.com>
There was a problem hiding this comment.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
x/deployment/module.go (1)
157-160: ⚠️ Potential issue | 🟡 Minor
Stale docstring: EndBlock no longer processes denom migrations.
The comment states that `EndBlock` processes deferred denom migrations, but the implementation in `abci.go` is now a no-op. Update the comment to reflect the current behavior.
📝 Suggested fix

```diff
-// EndBlock processes deferred uakt→uact denom migrations using oracle price.
+// EndBlock performs no-op as denom migrations are now handled during upgrade.
 func (am AppModule) EndBlock(ctx context.Context) error {
 	return am.keeper.EndBlocker(ctx)
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/deployment/module.go` around lines 157 - 160, Update the stale comment for AppModule.EndBlock to reflect current behavior: replace the mention of processing deferred uakt→uact denom migrations with a concise note that EndBlock simply delegates to keeper.EndBlocker and is currently a no-op (or performs module-specific end-of-block tasks if any). Reference the AppModule.EndBlock method and the keeper.EndBlocker call so the maintainer can find and edit the docstring accordingly.
♻️ Duplicate comments (2)
x/oracle/keeper/prune.go (2)
33-45: ⚠️ Potential issue | 🟠 Major
Return prune failures so x/epochs can drop the cache write.
`x/epochs/keeper/abci.go` only commits the cached hook context when `AfterEpochEnd` returns `nil`. Logging the error here and still returning success means a walk/remove failure can partially prune state and still get written.
🩹 Suggested change

```diff
 	pruned, err := k.prunePrices(sctx, cutoff, params.MaxPrunePerEpoch)
 	if err != nil {
-		k.Logger(sctx).Error("failed to prune price records", "error", err)
+		return err
 	}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@x/oracle/keeper/prune.go` around lines 33 - 45, The prune handler currently logs errors from k.prunePrices(sctx, cutoff, params.MaxPrunePerEpoch) but always returns nil, allowing x/epochs to commit despite prune failures; change the control flow in the surrounding function (the caller of k.prunePrices) so that if err != nil it returns that error (propagating the failure to x/epochs) instead of only logging it, while still logging successes (the existing Info when pruned > 0) before returning nil on success. Ensure you update the branch that checks err from k.prunePrices to return err and keep the Logger calls for context.
76-112: ⚠️ Potential issue | 🟠 Major
Reconcile `latestPriceID` after deleting a prefix.
This only removes entries from `k.prices`. If the deleted set includes the current latest record, or the prefix becomes empty, `latestPriceID` is left pointing at missing data. After the delete pass, either drop the latest entry for empty prefixes or move it to the newest surviving record.
🤖 Prompt for AI Agents
🧹 Nitpick comments (1)
upgrades/software/v2.1.0/deployment.go (1)
30-30: Unsafe type assertion may panic during upgrade.
The unchecked type assertion will panic if `StoreKey()` returns something other than `*storetypes.KVStoreKey`. While this is typically the case for module stores, a defensive check prevents cryptic panics during upgrade execution.
🛡️ Suggested defensive check

```diff
-	skey := m.StoreKey().(*storetypes.KVStoreKey)
+	skey, ok := m.StoreKey().(*storetypes.KVStoreKey)
+	if !ok {
+		return fmt.Errorf("deployment migration: expected *KVStoreKey, got %T", m.StoreKey())
+	}
```

You'll need to add `"fmt"` to the imports.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@upgrades/software/v2.1.0/deployment.go` at line 30, Replace the unsafe type assertion on StoreKey() with a safe comma-ok check: call m.StoreKey(), assert to *storetypes.KVStoreKey using "skey, ok := m.StoreKey().(*storetypes.KVStoreKey)", and handle the !ok case by returning or bubbling a clear error (e.g., return fmt.Errorf("unexpected store key type: %T", m.StoreKey())) instead of allowing a panic; also add "fmt" to imports to format the error. This change targets the skey assignment in deployment.go (the StoreKey() call) and ensures upgrade execution fails with a clear error rather than panicking.
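The comma-ok form is standard Go. A self-contained sketch of failing with a clear error instead of panicking (the interface and key types here are illustrative, not the SDK's):

```go
package main

import "fmt"

type storeKey interface{ Name() string }

type kvStoreKey struct{ name string }

func (k *kvStoreKey) Name() string { return k.name }

type memStoreKey struct{ name string }

func (k *memStoreKey) Name() string { return k.name }

// asKVStoreKey converts with a comma-ok assertion so an unexpected concrete
// type surfaces as a descriptive error rather than a runtime panic mid-upgrade.
func asKVStoreKey(k storeKey) (*kvStoreKey, error) {
	skey, ok := k.(*kvStoreKey)
	if !ok {
		return nil, fmt.Errorf("expected *kvStoreKey, got %T", k)
	}
	return skey, nil
}

func main() {
	skey, err := asKVStoreKey(&kvStoreKey{name: "deployment"})
	fmt.Println(skey.Name(), err) // deployment <nil>

	_, err = asKVStoreKey(&memStoreKey{name: "mem"})
	fmt.Println(err) // expected *kvStoreKey, got *main.memStoreKey
}
```

During an upgrade handler, the error path gives operators an actionable message, whereas a bare assertion aborts the node with a stack trace.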
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@x/deployment/module.go`:
- Around line 157-160: Update the stale comment for AppModule.EndBlock to
reflect current behavior: replace the mention of processing deferred uakt→uact
denom migrations with a concise note that EndBlock simply delegates to
keeper.EndBlocker and is currently a no-op (or performs module-specific
end-of-block tasks if any). Reference the AppModule.EndBlock method and the
keeper.EndBlocker call so the maintainer can find and edit the docstring
accordingly.
---
Duplicate comments:
In `@x/oracle/keeper/prune.go`:
- Around line 33-45: The prune handler currently logs errors from
k.prunePrices(sctx, cutoff, params.MaxPrunePerEpoch) but always returns nil,
allowing x/epochs to commit despite prune failures; change the control flow in
the surrounding function (the caller of k.prunePrices) so that if err != nil it
returns that error (propagating the failure to x/epochs) instead of only logging
it, while still logging successes (the existing Info when pruned > 0) before
returning nil on success. Ensure you update the branch that checks err from
k.prunePrices to return err and keep the Logger calls for context.
- Around line 76-112: pruneSourcePrices currently removes entries from k.prices
but never updates latestPriceID, leaving it pointing at deleted/missing data;
after the deletion loop in pruneSourcePrices, check the prefix (same
Source/Denom/BaseDenom) and if no records remain remove the latestPriceID entry
for that prefix, otherwise find the newest surviving record (e.g., by walking
the same range in reverse or querying the last key) and update latestPriceID to
that record; ensure you reference and update the keeper state that stores
latestPriceID (the field/methods that read/write latestPriceID) and handle
errors from the lookup/update paths.
---
Nitpick comments:
In `@upgrades/software/v2.1.0/deployment.go`:
- Line 30: Replace the unsafe type assertion on StoreKey() with a safe comma-ok
check: call m.StoreKey(), assert to *storetypes.KVStoreKey using "skey, ok :=
m.StoreKey().(*storetypes.KVStoreKey)", and handle the !ok case by returning or
bubbling a clear error (e.g., return fmt.Errorf("unexpected store key type: %T",
m.StoreKey())) instead of allowing a panic; also add "fmt" to imports to format
the error. This change targets the skey assignment in deployment.go (the
StoreKey() call) and ensures upgrade execution fails with a clear error rather
than panicking.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 267b53a0-3733-4a8e-b51c-60089954f195
⛔ Files ignored due to path filters (1)
go.sumis excluded by!**/*.sum
📒 Files selected for processing (15)
- app/ante.go
- app/types/app.go
- go.mod
- tests/upgrade/test-cases.json
- upgrades/software/v2.0.0/contracts.go
- upgrades/software/v2.0.0/deployment.go
- upgrades/software/v2.0.0/init.go
- upgrades/software/v2.0.0/upgrade.go
- upgrades/software/v2.0.0/wasm.go
- upgrades/software/v2.1.0/deployment.go
- upgrades/software/v2.1.0/init.go
- x/deployment/keeper/abci.go
- x/deployment/keeper/keeper.go
- x/deployment/module.go
- x/oracle/keeper/prune.go
💤 Files with no reviewable changes (5)
- upgrades/software/v2.0.0/init.go
- x/deployment/keeper/keeper.go
- upgrades/software/v2.0.0/wasm.go
- upgrades/software/v2.0.0/deployment.go
- upgrades/software/v2.0.0/upgrade.go
✅ Files skipped from review due to trivial changes (1)
- tests/upgrade/test-cases.json
🚧 Files skipped from review as they are similar to previous changes (3)
- app/ante.go
- upgrades/software/v2.1.0/init.go
- go.mod
Signed-off-by: Artur Troian <troian@users.noreply.github.com>
remove sequence resent on BeginBlocker Signed-off-by: Artur Troian <troian@users.noreply.github.com>
Signed-off-by: Artur Troian <troian@users.noreply.github.com>
Description
Closes: #XXXX
Author Checklist
All items are required. Please add a note to the item if the item is not applicable and
please add links to any relevant follow-up issues.
I have...
- added `!` to the type prefix if API or client breaking change
- added an entry to CHANGELOG.md