Conversation
👋 Thanks for assigning @tnull as a reviewer!
Force-pushed b3cd70f to 0df2c6b
| }; |
| // Recreate the table to ensure the correct PRIMARY KEY regardless of migration history. |
| // Tables migrated from v1 have PK (primary_namespace, key) only — missing |
Huh, I'm confused, why is this true? We explicitly have

// Add new 'secondary_namespace' column
let sql = format!(
	"ALTER TABLE {}
	ADD secondary_namespace TEXT DEFAULT \"\" NOT NULL;",
	kv_table_name
);

in migrate_v1_to_v2? Am I missing something, or is this Claude hallucinating?
In the v1 to v2 migration, secondary_namespace is added, but it's not part of the primary key, so we can't use ON CONFLICT on writes to just update the value. Before, we could use REPLACE, but now we want to keep the created_at date. This rewrites the table to make secondary_namespace part of the primary key; otherwise we'd need an extra query to look up the created_at date when replacing. We could add a unique index instead, but this felt cleaner.
Still confused
This rewrites the table to make it a part of the primary key
But it doesn't?
PRIMARY KEY (primary_namespace, secondary_namespace, key)
And arguably created_at really shouldn't be part of the key?
The v1->v2 migration added secondary_namespace but didn't update the primary key, so databases that started with v1 have PK (primary_namespace, key) instead of PK (primary_namespace, secondary_namespace, key).

This didn't cause problems before because we used INSERT OR REPLACE, so we'd silently overwrite rows without errors (though potentially losing data). Now, however, we want to use ON CONFLICT so that we don't replace created_at and only update the value. That breaks for databases that started on v1: we get conflicts even when the secondary namespace is different, because (primary_namespace, key) alone is the primary key. So instead of silently replacing data, we now actually error when attempting the ON CONFLICT update.
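The INSERT OR REPLACE vs. ON CONFLICT difference described above can be sketched with Python's stdlib sqlite3 (Python is used here purely for illustration; the store itself is Rust/rusqlite, and the `kv` table shape is a simplified assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    """CREATE TABLE kv (
        primary_namespace TEXT NOT NULL,
        secondary_namespace TEXT NOT NULL DEFAULT '',
        key TEXT NOT NULL,
        value BLOB,
        created_at INTEGER NOT NULL,
        PRIMARY KEY (primary_namespace, secondary_namespace, key)
    );"""
)
con.execute("INSERT INTO kv VALUES ('ns', '', 'k', X'00', 1000);")

# INSERT OR REPLACE deletes the conflicting row first, so created_at is clobbered.
con.execute("INSERT OR REPLACE INTO kv VALUES ('ns', '', 'k', X'01', 2000);")
after_replace = con.execute("SELECT created_at FROM kv;").fetchone()[0]

# Reset created_at, then upsert: only `value` changes, created_at survives.
con.execute("UPDATE kv SET created_at = 1000;")
con.execute(
    """INSERT INTO kv VALUES ('ns', '', 'k', X'02', 3000)
       ON CONFLICT(primary_namespace, secondary_namespace, key)
       DO UPDATE SET value = excluded.value;"""
)
after_upsert = con.execute("SELECT created_at FROM kv;").fetchone()[0]
print(after_replace, after_upsert)  # 2000 1000
```

Note that the ON CONFLICT target list must name a set of columns covered by a uniqueness constraint, which is exactly why the v1-origin PK matters here.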
Hmm, right. But can we check whether we already have the right primary key before we initiate a potentially costly/laggy migration? Nodes that still have the v1 schema would need to have been upgraded since LDK Node v0.2. I doubt there would be (m)any nodes requiring this migration at all, given that this is such an early version that almost nobody ran in production.
AFAICT you should be able to use PRAGMA table_info(table_name) and check the pk column.
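The suggested PRAGMA check can be sketched with Python's stdlib sqlite3 (illustrative only; the table name `kv` and column set are assumptions). It also demonstrates the underlying bug: ALTER TABLE adds a column but can never change the PRIMARY KEY.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# v1 shape: no secondary_namespace yet, PK is (primary_namespace, key).
con.execute(
    """CREATE TABLE kv (
        primary_namespace TEXT NOT NULL,
        key TEXT NOT NULL,
        value BLOB,
        PRIMARY KEY (primary_namespace, key)
    );"""
)
# What the v1->v2 migration did: the column appears, the PK is untouched.
con.execute("ALTER TABLE kv ADD secondary_namespace TEXT DEFAULT '' NOT NULL;")

# PRAGMA table_info rows are (cid, name, type, notnull, dflt_value, pk);
# pk is 0 for non-key columns, else the column's 1-based position in the key.
info = con.execute("PRAGMA table_info(kv);").fetchall()
pk_cols = [name for (_, name, _, _, _, pk) in sorted(info, key=lambda r: r[5]) if pk > 0]
print(pk_cols)  # ['primary_namespace', 'key']
```

If `pk_cols` already equals `['primary_namespace', 'secondary_namespace', 'key']`, the v3 migration could skip the table rebuild entirely.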
src/io/sqlite_store/migrations.rs
Outdated
| tx.execute( |
|     &format!( |
|         "CREATE TABLE {} ( |
Honestly not sure if we need all that recreation logic, why can't we just do:

let sql = format!(
	"ALTER TABLE {}
	ADD created_at INTEGER NOT NULL DEFAULT 0;",
	kv_table_name
);

?
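The reason a plain ALTER TABLE isn't enough in the v1-origin case is that SQLite cannot modify a PRIMARY KEY in place; the standard workaround is the rebuild dance (create replacement table, copy, drop, rename). A minimal sketch in Python's stdlib sqlite3, with assumed table/column names:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# A v1-origin table after the v2 migration: secondary_namespace exists,
# but is still missing from the PRIMARY KEY.
con.execute(
    """CREATE TABLE kv (
        primary_namespace TEXT NOT NULL,
        key TEXT NOT NULL,
        value BLOB,
        secondary_namespace TEXT NOT NULL DEFAULT '',
        PRIMARY KEY (primary_namespace, key)
    );"""
)
con.execute("INSERT INTO kv VALUES ('ns', 'k', X'00', '');")

# Rebuild with the desired PK plus the new created_at column.
con.executescript(
    """
    CREATE TABLE kv_new (
        primary_namespace TEXT NOT NULL,
        secondary_namespace TEXT NOT NULL DEFAULT '',
        key TEXT NOT NULL,
        value BLOB,
        created_at INTEGER NOT NULL DEFAULT 0,
        PRIMARY KEY (primary_namespace, secondary_namespace, key)
    );
    INSERT INTO kv_new (primary_namespace, secondary_namespace, key, value)
        SELECT primary_namespace, secondary_namespace, key, value FROM kv;
    DROP TABLE kv;
    ALTER TABLE kv_new RENAME TO kv;
    """
)
pk_cols = [r[1] for r in con.execute("PRAGMA table_info(kv);") if r[5] > 0]
rows = con.execute("SELECT primary_namespace, secondary_namespace, key FROM kv;").fetchall()
```

For tables that began life on v2 and already have the correct PK, the single ALTER TABLE suggested above would indeed suffice.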
tnull left a comment
Thanks! Can we also extend the persistence_backwards_compatibility test to set up an ldk_node_070::Node and check that we can continue to upgrade from it, too? (This could be refactored into do_persistence_backwards_compatibility taking the necessary parameters, to DRY up the test logic.)
Force-pushed 0df2c6b to 143d16f
Added test
| "INSERT INTO {} (primary_namespace, secondary_namespace, key, value, created_at) \ |
| VALUES (:primary_namespace, :secondary_namespace, :key, :value, :created_at) \ |
| ON CONFLICT(primary_namespace, secondary_namespace, key) DO UPDATE SET value = excluded.value;", |
With ON CONFLICT DO UPDATE, updating a record won't move it in the list. Was insertion-order intentional, or do we want last updated first like the old INSERT OR REPLACE gave us?
I don't know if this really matters? The only place I'd see it mattering is the payment store, but we load all of those into a hashmap when initializing. I could see us moving to the paginated version in the future anyway, which will have its own sorting already.
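The ordering difference raised above is observable via SQLite's rowid: INSERT OR REPLACE deletes and reinserts, assigning a fresh rowid, while ON CONFLICT DO UPDATE edits in place. A small sketch using Python's stdlib sqlite3 (simplified single-column key, illustrative only):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT);")
con.execute("INSERT INTO kv VALUES ('a', '1'), ('b', '2');")

# INSERT OR REPLACE deletes + reinserts, so 'a' gets a fresh rowid and
# moves to the end of a rowid-ordered scan: last-updated-last becomes visible.
con.execute("INSERT OR REPLACE INTO kv VALUES ('a', '1b');")
replace_order = [k for (k,) in con.execute("SELECT k FROM kv ORDER BY rowid;")]

# ON CONFLICT DO UPDATE modifies the existing row, so the original rowid
# (and hence insertion order) is preserved.
con.execute("DELETE FROM kv;")
con.execute("INSERT INTO kv VALUES ('a', '1'), ('b', '2');")
con.execute("INSERT INTO kv VALUES ('a', '1c') ON CONFLICT(k) DO UPDATE SET v = excluded.v;")
upsert_order = [k for (k,) in con.execute("SELECT k FROM kv ORDER BY rowid;")]
print(replace_order, upsert_order)  # ['b', 'a'] ['a', 'b']
```

With the new created_at column driving the listing order instead of rowid, the upsert keeps rows at their original creation position, which matches the stated goal of tracking insertion order.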
Force-pushed 143d16f to 4dbc5ab
Gave a better explanation around the v2 migration bug and why v3 is the way it is. Also changed
Force-pushed 4dbc5ab to 5374af2
This branch will likely need a rebase at this point. The previously failing VSS integration CI jobs are now passing.
Implement rust-lightning's PaginatedKVStore/PaginatedKVStoreSync traits on SqliteStore with a v2→v3 schema migration that adds a created_at column for tracking insertion order.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed 5374af2 to 619525c
done
🔔 1st Reminder Hey @tnull! This PR has been waiting for your review.
🔔 2nd Reminder Hey @tnull! This PR has been waiting for your review.
🔔 3rd Reminder Hey @tnull! This PR has been waiting for your review.
GPT reviewed this PR:
High: list_paginated returns a next_page_token even when there is no next page at an exact page boundary. In src/io/sqlite_store/mod.rs:638, the token is emitted whenever keys.len() == PAGE_SIZE, so a namespace with exactly 50/100/... entries produces a bogus token and forces callers into an extra empty query. The new test at src/io/sqlite_store/mod.rs:956 codifies that behavior, but it conflicts with the trait contract (next_page_token should be None when there are no more pages) and with the filesystem reference implementation. The usual fix is to read PAGE_SIZE + 1 rows and only emit a token if the extra row exists.

Medium: creation order is tracked with wall-clock time, which is not monotonic. In src/io/sqlite_store/mod.rs:416, created_at comes from SystemTime::now(). If the clock jumps backward (NTP correction, VM resume, manual clock change), later-created keys can sort behind older ones, violating the "reverse creation order" guarantee and making pagination unstable. A monotonic DB-backed sequence (INTEGER PRIMARY KEY/autoincrement-style ordering) would avoid that.

Both seem worth addressing?
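The PAGE_SIZE + 1 fix for the boundary case can be sketched with Python's stdlib sqlite3. The offset-based token, the PAGE_SIZE value, and the table shape below are simplifying assumptions, not the PR's actual token semantics:

```python
import sqlite3

PAGE_SIZE = 3  # hypothetical; kept small so the boundary case is visible

def list_page(con, offset):
    # Fetch one row beyond the page: the extra row's existence is the
    # only reliable signal that another page follows.
    rows = con.execute(
        "SELECT key FROM kv ORDER BY created_at DESC, key LIMIT ? OFFSET ?;",
        (PAGE_SIZE + 1, offset),
    ).fetchall()
    keys = [k for (k,) in rows[:PAGE_SIZE]]
    next_token = offset + PAGE_SIZE if len(rows) > PAGE_SIZE else None
    return keys, next_token

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (key TEXT PRIMARY KEY, created_at INTEGER NOT NULL);")
con.executemany("INSERT INTO kv VALUES (?, ?);", [(f"k{i}", i) for i in range(6)])

# Exactly two full pages: the second call must report next_token = None
# rather than handing the caller a bogus token for an empty third page.
keys1, tok1 = list_page(con, 0)
keys2, tok2 = list_page(con, tok1)
print(keys1, tok1, keys2, tok2)
```

A `keys.len() == PAGE_SIZE` check, by contrast, would emit a token after the second page here even though no rows remain.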
Closes #804