Merged
25 changes: 25 additions & 0 deletions CHANGELOG.md
@@ -7,6 +7,31 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

## [Unreleased]

### Breaking
- `ClockRing::with_hasher` / `ClockRing::try_with_hasher` and the `ConcurrentClockRing` equivalents now require a third argument: a `KeysAreTrusted` marker. This forces every caller that opts out of the default DoS-resistant `RandomState` hasher to acknowledge the trade-off at the call site, making hash-collision-DoS exposure reviewable via `grep KeysAreTrusted`. Callers using `ClockRing::new` / `try_new` are unaffected. Migration: add `, KeysAreTrusted::new()` to existing `with_hasher` / `try_with_hasher` calls.
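
  The marker-argument pattern can be illustrated in isolation. This is a minimal self-contained sketch of the idea, not the real cachekit definitions; the struct fields and generic defaults here are assumptions:

  ```rust
  use std::collections::hash_map::RandomState;
  use std::hash::BuildHasher;

  // Illustrative ZST acknowledgement marker, modeled on the description
  // above; the real cachekit type may differ in detail.
  #[derive(Default)]
  pub struct KeysAreTrusted(());

  impl KeysAreTrusted {
      pub fn new() -> Self {
          KeysAreTrusted(())
      }
  }

  pub struct ClockRing<S = RandomState> {
      capacity: usize,
      hasher: S,
  }

  impl<S: BuildHasher> ClockRing<S> {
      // The marker argument makes every opt-out of the DoS-resistant
      // default hasher visible and greppable at the call site.
      pub fn with_hasher(capacity: usize, hasher: S, _ack: KeysAreTrusted) -> Self {
          let _ = &hasher;
          ClockRing { capacity, hasher }
      }
  }

  fn main() {
      // Migration: append `KeysAreTrusted::new()` to existing calls.
      let ring = ClockRing::with_hasher(8, RandomState::new(), KeysAreTrusted::new());
      assert_eq!(ring.capacity, 8);
  }
  ```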

### Added
- `ClockRing::KeysAreTrusted`: ZST acknowledgement type required by hasher-configurable constructors. Constructible via `KeysAreTrusted::new()` or `KeysAreTrusted::default()`; see its type-level docs for guidance on when a non-randomized hasher is appropriate.
- Re-export `KeysAreTrusted` from `cachekit::ds`.
- `#[track_caller]` on `ClockRing::with_hasher` so `MAX_CAPACITY` panics blame the call site rather than the constructor body.
- `ClockRing::try_clone`: fallible clone that mirrors `try_new` / `try_with_hasher` by routing every backing allocation through `try_reserve_exact` / `try_reserve`, surfacing allocator failure as `ClockRingError::AllocationFailed` instead of aborting the process. Prefer this over `Clone::clone` when cloning with attacker-influenced capacity.
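
  The fallible-allocation routing behind `try_clone` can be sketched with the standard library alone. This illustrates the `try_reserve_exact` pattern only; it is not the cachekit implementation:

  ```rust
  use std::collections::TryReserveError;

  // Clone a slice into a fresh Vec, surfacing allocator failure as an
  // error instead of aborting, mirroring the try_clone contract above.
  fn try_clone_vec<T: Clone>(src: &[T]) -> Result<Vec<T>, TryReserveError> {
      let mut out = Vec::new();
      out.try_reserve_exact(src.len())?; // fallible allocation up front
      out.extend_from_slice(src);        // capacity reserved: no reallocation
      Ok(out)
  }

  fn main() {
      let cloned = try_clone_vec(&[1u32, 2, 3]).expect("tiny allocation should succeed");
      assert_eq!(cloned, vec![1, 2, 3]);
  }
  ```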

### Fixed
- **ClockRing `insert_swap` silent value drop**: when the index mapped a key to a slot that had been emptied (a broken invariant reachable only via a malformed `Hash`/`Eq`/`Clone` impl on `K`), the caller's value was silently discarded. The repair path now places the value in the empty slot, refreshes the index entry, and recomputes `len` from slot occupancy; a `debug_assert!` surfaces the root-cause corruption in debug/test/fuzz builds.
- **ClockRing `insert_swap` silent value drop on out-of-bounds index corruption**: the sibling of the repair path above also silently discarded the caller's value when the index mapped a key to a slot index that was *out of bounds* for `slots` (reachable only via state corruption, never from safe use of this module). For `V` owning a file descriptor, lock, `Arc<_>`, or any resource with observable `Drop`, this leaked the resource across a single corruption event without surfacing any error. `insert_swap` now hands the value back via the `replaced` return position and removes the stale index entry, preserving the no-silent-value-loss contract; `ClockRing::insert` drops it at the caller boundary (matching the update path) and `ConcurrentClockRing::insert` drops it after releasing the write lock. A `debug_assert!` surfaces the root-cause corruption in debug/test/fuzz builds. Regression test `insert_swap_returns_value_when_index_points_out_of_bounds`.
- **ClockRing `step` helper now total for `cap == 0`**: previously protected only by a `debug_assert!` (stripped in release), so a future caller that forgot the capacity-zero guard would hit a release-mode division-by-zero panic. `step(_, 0)` now returns `0`.
- **ClockRing `insert_swap` eviction-path exception safety**: a panicking `K::Clone` (most plausibly OOM in a heap-allocating `Clone` impl) reached during full-ring eviction previously left the ring with an empty slot while `len == capacity`, turning the next `insert` into an `unreachable!("occupied slot missing under full ring")` panic. For `ConcurrentClockRing` this converted a single transient failure into a persistent-panic DoS for every caller sharing the ring. The fix clones the new key *before* any destructive state change, so a panic there leaves invariants intact. Regression tests `insert_swap_eviction_panic_in_clone_preserves_invariants` and `insert_swap_eviction_panic_in_clone_does_not_leak_evictee` guard both the invariant and the index integrity.
- **ClockRing `insert_swap` / `pop_victim` eviction-path exception safety vs panicking `Hash` / `Eq`**: both sites previously ran `self.slots[idx].take()` *before* `self.index.remove(&evictee.key)`. A panic in a user-supplied `Hash` or `Eq` impl during the subsequent `HashMap::remove` would then leave the ring with an empty slot while `len == capacity` (for `insert_swap`) or `len` un-decremented (for `pop_victim`), again tripping `unreachable!("occupied slot missing under full ring")` on the next `insert`, a persistent-panic DoS amplified across every thread sharing a `ConcurrentClockRing` handle. The fix reads the evictee key *by borrow* and removes it from the index **before** emptying the slot, so a panic in the hasher leaves slot / `len` / `referenced` untouched. Regression tests `insert_swap_eviction_panic_in_hash_preserves_invariants` and `pop_victim_panic_in_hash_preserves_invariants` install a `Hash` impl with a caller-controlled panic budget and verify ring invariants and follow-up-insert recovery.
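
  The ordering principle behind these exception-safety fixes can be sketched in isolation: run the panic-prone user-supplied `Hash`/`Eq`/`Clone` code before any destructive mutation, so a panic unwinds through unmodified state. The struct and method below are illustrative, not the actual module internals, and this sketch clones the evictee key for simplicity where the real fix reads it by borrow:

  ```rust
  use std::collections::HashMap;

  struct Ring {
      slots: Vec<Option<(String, u32)>>,
      index: HashMap<String, usize>,
      len: usize,
  }

  impl Ring {
      fn pop_victim(&mut self, idx: usize) -> Option<(String, u32)> {
          // 1. Read the evictee key without emptying the slot.
          let key = self.slots[idx].as_ref()?.0.clone();
          // 2. Panic-prone step first: a user Hash/Eq impl runs inside
          //    `remove`. If it panics here, slots/len are still consistent.
          self.index.remove(&key);
          // 3. Only now perform the destructive mutation.
          self.len -= 1;
          self.slots[idx].take()
      }
  }

  fn main() {
      let mut ring = Ring {
          slots: vec![Some(("a".to_string(), 1))],
          index: HashMap::from([("a".to_string(), 0usize)]),
          len: 1,
      };
      assert_eq!(ring.pop_victim(0), Some(("a".to_string(), 1)));
      assert_eq!(ring.len, 0);
      assert!(ring.index.is_empty());
  }
  ```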

### Changed
- `ClockRing::clone` is now a hand-written impl (replacing the `#[derive(Clone)]`) so the abort-on-OOM failure mode inherited from `Vec`/`HashMap` is documented on the impl itself and callers are directed to `try_clone` when recovery matters. Behavior and trait bounds (`K: Clone, V: Clone, S: Clone`) are unchanged.
- `ClockRing` / `ConcurrentClockRing` `Debug` impls are now hand-written (replacing the derived `Debug`) and redact all stored keys and values, exposing only `len`, `capacity`, `hand`, and metrics counters when the `metrics` feature is enabled. The derived `Debug` recursed through every `Entry<K, V>` and `HashMap<K, usize, S>` entry, turning `tracing::debug!`, `dbg!`, and panic-unwind backtraces into a secret-exposure channel for caches keyed on session tokens / API keys / auth headers or holding sensitive values. Callers that need full contents can iterate via `ClockRing::iter` / `into_inner().iter()` and print entries they've vetted. No trait bounds were added: `Debug` is now implementable for *any* `ClockRing<K, V, S>` regardless of whether `K` / `V` / `S` implement `Debug`. Regression tests `debug_impl_does_not_leak_keys_or_values` and `concurrent_debug_impl_does_not_leak_keys_or_values`.
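
  The redaction pattern looks roughly like the following (field names are assumptions for illustration; this is not the verbatim cachekit impl):

  ```rust
  use std::fmt;

  struct ClockRing<K, V> {
      slots: Vec<Option<(K, V)>>,
      hand: usize,
      len: usize,
  }

  // No `K: Debug` / `V: Debug` bounds: contents are never formatted,
  // so Debug is available for any key/value types.
  impl<K, V> fmt::Debug for ClockRing<K, V> {
      fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
          f.debug_struct("ClockRing")
              .field("len", &self.len)
              .field("capacity", &self.slots.len())
              .field("hand", &self.hand)
              .finish_non_exhaustive() // trailing `..` marks redacted contents
      }
  }

  fn main() {
      let ring = ClockRing {
          slots: vec![Some(("session-token-XYZ".to_string(), vec![1u8, 2]))],
          hand: 0,
          len: 1,
      };
      let out = format!("{ring:?}");
      assert!(out.contains("len: 1"));
      assert!(!out.contains("session-token")); // the key is never printed
  }
  ```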

### Documentation
- New `## Memory Budgeting` module-level section documenting that `approx_bytes` counts `size_of::<V>()` only and does not follow heap pointers; workloads with variable-sized values should enforce a byte budget at the call site.
- New `## Timing Side Channels` module-level section documenting that `HashMap`-backed lookup timing reveals key presence, and recommending `ClockRing` not be used as the backing store for caches whose key set must remain confidential against a co-located attacker.
- `Extend<(K, V)> for ClockRing` impl now carries a memory-budget caveat pointing callers at `pop_victim`-based byte accounting when values are attacker-influenced; evicted entries are silently dropped by `Extend` and the aggregate memory use during a bulk insert is bounded only by `capacity × max(size_of(V_i))` across the iterator.
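
  Call-site byte accounting for variable-sized values can be sketched as follows. The accounting helper, the budget, and the plain-`Vec` stand-in for the ring are all illustrative assumptions, not cachekit API:

  ```rust
  use std::mem::size_of;

  // Approximate the true footprint of a Vec<u8> value: the inline struct
  // plus its heap allocation. `approx_bytes` would count only the former.
  fn heap_bytes(v: &Vec<u8>) -> usize {
      size_of::<Vec<u8>>() + v.capacity()
  }

  fn main() {
      let budget = 128usize; // bytes
      let mut used = 0usize;
      // Stand-in container; a real caller would pair this with
      // pop_victim-style eviction as the caveat above suggests.
      let mut store: Vec<(u32, Vec<u8>)> = Vec::new();
      for (k, v) in [(1u32, vec![0u8; 16]), (2, vec![0u8; 64]), (3, vec![0u8; 48])] {
          let cost = heap_bytes(&v);
          // Evict oldest entries until the new value fits the budget.
          while used + cost > budget && !store.is_empty() {
              let (_, old) = store.remove(0);
              used -= heap_bytes(&old);
          }
          used += cost;
          store.push((k, v));
      }
      assert!(used <= budget);
  }
  ```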

## [0.7.0] - 2026-04-09

### Breaking
2 changes: 1 addition & 1 deletion bench-support/src/metrics.rs
@@ -747,7 +747,7 @@ where
let cache_size = std::mem::size_of_val(cache);
MemoryEstimate {
total_bytes: cache_size,
-        bytes_per_entry: if entries > 0 { cache_size / entries } else { 0 },
+        bytes_per_entry: cache_size.checked_div(entries).unwrap_or(0),
entry_count: entries,
}
}
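
The `checked_div` form behaves identically to the branch it replaces for unsigned integers, while making the zero-entries case explicit:

```rust
fn main() {
    let cache_size = 1024usize;
    // Non-zero divisor: same result as plain division.
    assert_eq!(cache_size.checked_div(8).unwrap_or(0), 128);
    // Zero entries: yields 0 instead of panicking with a divide-by-zero.
    assert_eq!(cache_size.checked_div(0).unwrap_or(0), 0);
}
```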