Conversation
Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Replace the sequential per-record insert loop (which committed after each row) with a shared cursor approach matching the sync implementation. _insert_single_record_async handles SQL building and execution on the shared cursor; a single _maybe_commit() and cache invalidation follow. Adds three tests covering the FK, non-FK IntegrityError, and generic sqlite3.Error rollback paths. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
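The batch pattern this commit describes can be sketched with plain `sqlite3`; the table, column, and function names below are illustrative, not SQLiter's actual `_insert_single_record_async` / `_maybe_commit()` helpers:

```python
import sqlite3

def bulk_insert(conn: sqlite3.Connection, rows: list[tuple[str, int]]) -> None:
    # Hypothetical stand-alone sketch of the shared-cursor approach:
    # every row executes on one cursor, with a single commit at the end.
    cursor = conn.cursor()  # one shared cursor for the whole batch
    try:
        for name, qty in rows:
            cursor.execute(
                "INSERT INTO items (name, qty) VALUES (?, ?)", (name, qty)
            )
        conn.commit()  # single commit after all rows, not per record
    except sqlite3.Error:
        conn.rollback()  # any failure undoes the entire batch
        raise
```

Because no per-row commit happens, an IntegrityError midway leaves the table exactly as it was before the call, which is the rollback behaviour the new tests exercise.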
Add lookup_cache() and store_cache() to QueryBuilder, encapsulating the cache_bypassed guard, key generation, and cache_get/set calls that were previously duplicated verbatim in both _fetch_result and _fetch_projection_result across the sync and async query paths. Both paths now call the shared helpers, removing ~30 lines of duplicate cache logic. The three public accessors (cache_bypassed, query_cache_ttl, make_cache_key) that existed solely to expose internals to the async layer are removed as they are no longer needed. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
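A toy, dict-backed illustration of that refactor (this is not the real `QueryBuilder`; the key format and attribute names are assumptions):

```python
class CachingBuilder:
    """Toy sketch: lookup_cache()/store_cache() own the bypass guard and
    key generation, so the fetch paths no longer duplicate that logic."""

    def __init__(self, bypass_cache: bool = False) -> None:
        self._bypass = bypass_cache
        self._cache: dict = {}

    def _make_key(self, sql: str, params: tuple) -> str:
        return f"{sql}|{params!r}"

    def lookup_cache(self, sql: str, params: tuple = ()):
        if self._bypass:
            return None  # the guard lives in exactly one place
        return self._cache.get(self._make_key(sql, params))

    def store_cache(self, sql: str, params: tuple, result) -> None:
        if self._bypass:
            return
        self._cache[self._make_key(sql, params)] = result
```

Both the result and projection fetch paths can then call the same pair of helpers instead of repeating the guard/key/get/set sequence inline.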
#5 - Add optional db_context parameter to _create_instance_from_data / create_instance_from_data so callers can supply the binding directly. AsyncSqliterDB._create_instance_from_data now delegates in a single call instead of delegating then overwriting the db_context attribute. #6 - Remove the redundant _in_transaction getter/setter property pair. in_transaction is now a single direct-read property; the two write sites in __aenter__/__aexit__ call _sync.set_in_transaction() explicitly. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
- Improve AsyncSqliterDB and AsyncQueryBuilder class docstrings from internal implementation notes to user-facing descriptions - Add async mention to pyproject.toml project description Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Add a 15th TUI demo category 'Async Support' with 9 runnable demos covering: connection, context manager, CRUD, bulk insert, queries, FK lazy loading (with explicit wrong/correct patterns), FK eager loading, reverse relationships, and transactions with rollback. Key implementation details: - Demos run via a ThreadPoolExecutor to avoid conflicting with Textual's running event loop - Display code is sanitised to strip infra boilerplate (_ASYNC_AVAILABLE guard, _run_async call, file cleanup) before showing to the user - asyncio.run() unavailable when aiosqlite is not installed; each demo returns a helpful install message instead Also adds docs/tui-demo/async.md mirroring the existing tui-demo page style, with code snippets, What Happens sections, gotcha admonitions, and a full sync-vs-async quick-reference table. Updates mkdocs.yml nav and the tui-demo index page. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Add a 'mypy and Static Type Checking' section to the async guide explaining the AsyncForeignKey typing trade-off (typed as T for eager loading ergonomics, but lazy-loaded runtime value is AsyncLazyLoader[T]) and the reverse accessor limitation (installed via setattr, so __getattribute__ returns object under strict mypy). Update the TUI async demos to show the required cast() workarounds in the FK lazy and reverse demos, with multi-line explanatory comments. Update docs/tui-demo/async.md to mirror the cast patterns with matching !!! note admonitions linking back to the guide. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
… icons Remove the '⚡' icon from the Async Support demo category so all 15 categories are consistent (all use icon=""). Add a TODO item to revisit adding suitable icons to all categories in one pass. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
📝 Walkthrough

Adds an optional asynchronous API under

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor App as Application
    participant AsyncDB as AsyncSqliterDB
    participant Builder as AsyncQueryBuilder
    participant Conn as aiosqlite.Connection
    participant SQLite as SQLite
    App->>AsyncDB: await AsyncSqliterDB.create(db_file)
    AsyncDB->>Conn: await connect()
    Conn->>SQLite: open DB
    SQLite-->>Conn: ready
    Conn-->>AsyncDB: connected
    AsyncDB-->>App: instance
    App->>AsyncDB: await insert(instance)
    AsyncDB->>AsyncDB: build InsertPlan (sync helper)
    AsyncDB->>Conn: await execute(sql, values)
    Conn->>SQLite: write
    SQLite-->>Conn: result
    Conn-->>AsyncDB: cursor
    AsyncDB->>AsyncDB: invalidate cache
    AsyncDB-->>App: instance with pk
    App->>Builder: Builder = AsyncDB.select(Model)
    Builder->>AsyncDB: build_execution_plan()
    Builder->>Conn: await execute(select_sql)
    Conn->>SQLite: read rows
    SQLite-->>Conn: rows
    Conn-->>Builder: cursor
    Builder->>Builder: convert rows -> models
    alt prefetch requested
        Builder->>AsyncDB: await prefetch queries
        AsyncDB->>Conn: await execute(prefetch_sql)
    end
    Builder-->>App: list of models
    App->>AsyncDB: await AsyncDB.close()
    AsyncDB->>AsyncDB: maybe_commit()
    AsyncDB->>Conn: await close()
    Conn->>SQLite: close
    SQLite-->>Conn: closed
    Conn-->>AsyncDB: closed
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Possibly related PRs
🚥 Pre-merge checks: ✅ 3 passed checks
Up to standards ✅

🟢 Issues
| Metric | Results |
|---|---|
| Complexity | 701 |
| Duplication | 68 |
🟢 Coverage: 100.00% diff coverage · +0.00% coverage variation

| Metric | Result |
|---|---|
| Coverage variation | ✅ +0.00% (-1.00%) |
| Diff coverage | ✅ 100.00% |

Coverage variation details:

| Commit | Coverable lines | Covered lines | Coverage |
|---|---|---|---|
| Common ancestor (a27ead8) | 4869 | 4869 | 100.00% |
| Head (9e1fa52) | 6409 (+1540) | 6409 (+1540) | 100.00% (+0.00%) |

Coverage variation is the difference between the coverage for the head and common ancestor commits of the pull request branch: `<coverage of head commit> - <coverage of common ancestor commit>`

Diff coverage details:

| | Coverable lines | Covered lines | Diff coverage |
|---|---|---|---|
| Pull request (#149) | 1777 | 1777 | 100.00% |

Diff coverage is the percentage of lines that are covered by tests out of the coverable lines that the pull request added or modified: `<covered lines added or modified> / <coverable lines added or modified> * 100%`
Use isinstance() instead of type() for bulk_insert type checking (both sync and async), and move AsyncLazyLoader to TYPE_CHECKING block since it's only used in a string annotation. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Update bulk_insert mixed-model tests to expect TypeError instead of ValueError after isinstance refactor. Add full coverage for async_demos.py including execution, ImportError fallback, and guard clause paths. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Fix TestImportErrorFallback to use monkeypatch for all sys.modules cleanup, and also save/restore the parent package's async_demos attribute (importlib.import_module updates it, which patch() uses for resolution). Replace AsyncSqliterDB.get patch with full mock using SimpleNamespace return values to avoid dangling aiosqlite connections. Use real function in test_leading_blank_lines_removed instead of fragile MagicMock. Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Actionable comments posted: 20
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
docs/index.md (1)
10-17: ⚠️ Potential issue | 🟡 Minor

Clarify that async support requires an optional runtime dependency.
The documentation currently advertises async APIs (line 17) whilst stating that Pydantic is the only runtime dependency (line 13). This is misleading because the optional async extra requires the additional `aiosqlite` package. Update the wording to clarify that core synchronous usage only needs Pydantic, whilst async support is optional and requires the async extra.

📝 Proposed wording
```diff
 SQLite databases using Pydantic models. The only external run-time dependency
-is Pydantic itself.
+for core synchronous usage is Pydantic itself. Async support is available via
+the optional async extra, which installs the additional async SQLite driver.
@@
-projects. It supports both synchronous and asynchronous APIs.
+projects. It supports synchronous APIs and optional asynchronous APIs.
@@
-- Optional async database, query, and ORM support
+- Optional async database, query, and ORM support via the async extra
@@
-- No external dependencies other than Pydantic
+- No required external dependencies other than Pydantic for core sync usage
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/index.md` around lines 10 - 17, Update the paragraph that mentions "Pydantic" as the only runtime dependency and the sentence that advertises "async APIs" to clarify that core synchronous usage only requires Pydantic, while async support is optional and provided via an extra that requires the additional "aiosqlite" package; specifically change the lines containing the phrases "The only external run-time dependency is Pydantic itself." and "It supports both synchronous and asynchronous APIs." to state that async support requires installing the optional async extra (e.g., "pip install sqliter[async]") and list "aiosqlite" as the additional runtime dependency for async usage.

README.md (1)
54-65: ⚠️ Potential issue | 🟡 Minor

Update the “no external dependencies” feature wording.
With optional async support now documented, line 65 reads as if optional extras do not add external packages. “No required external dependencies…” avoids contradicting the new `aiosqlite` extra.

Proposed wording tweak
```diff
-- No external dependencies other than Pydantic
+- No required external dependencies other than Pydantic
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 54 - 65, Update the README bullet that currently reads "No external dependencies other than Pydantic" to clarify that extras are optional; replace it with wording such as "No required external dependencies other than Pydantic (optional extras like aiosqlite may be needed for async features)" so the statement aligns with the documented optional async support and the aiosqlite extra.
🧹 Nitpick comments (5)
sqliter/orm/m2m.py (1)
1041-1048: Index creation: granularity change swallows subsequent indexes on first failure, and conn/cursor are re-acquired per iteration.

Two minor points on the new loop:
- Previously each index had its own `try/except` (per the PR summary), so a failure on the first FK index would still allow the second to be attempted. With the single outer `try/except sqlite3.Error`, the first error aborts the loop and silently skips remaining indexes. Given that creation failures are already tolerated, this is a low-severity reliability regression rather than a bug — but worth a conscious choice. Logging the swallowed exception (even at `debug`) would help diagnose missing indexes later.
- `conn = db.connect()` and `cursor = conn.cursor()` are re-acquired inside the loop for each `index_sql`. `db.connect()` returns the already-open connection, so this is not incorrect, just redundant; hoisting them outside the loop (as done for the `CREATE TABLE` above) keeps the two blocks symmetric.

♻️ Suggested tidy-up
```diff
     # Create indexes on both FK columns
+    conn = db.connect()
+    cursor = conn.cursor()
     try:
         for index_sql in build_junction_index_sqls(junction_table, columns):
-            conn = db.connect()
-            cursor = conn.cursor()
             db._execute(cursor, index_sql)  # noqa: SLF001
             conn.commit()
     except sqlite3.Error:
         pass  # Non-critical: index creation failure
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sqliter/orm/m2m.py` around lines 1041 - 1048, The current index-creation loop wraps all indexes in a single try/except and reacquires conn/cursor each iteration; change it to acquire conn and cursor once (hoist conn = db.connect() and cursor = conn.cursor() outside the loop) and run each index_sql inside its own try/except sqlite3.Error so one failing index doesn't stop subsequent ones; in each per-index except block log the caught exception (e.g. via process logger or db logger) including the index_sql to aid diagnosis, and continue to the next index; keep using db._execute(cursor, index_sql) and conn.commit() after each successful create.

docs/quickstart.md (1)
49-55: Consider also showing the `pip` equivalent alongside `uv`.

Readers who don't use `uv` won't have a direct command to copy; a one-line `pip install 'sqliter-py[async]'` alternative (as done in the installation page) would make this self-contained. The link to `guide/asyncio.md` is fine.

📝 Suggested addition
````diff
 ```bash
 uv add 'sqliter-py[async]'
+# or
+pip install 'sqliter-py[async]'
 ```
````

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/quickstart.md` around lines 49 - 55, Update the code block that currently shows the uv command "uv add 'sqliter-py[async]'" to include the pip equivalent on the next line; specifically, in the same fenced bash block that references "uv add 'sqliter-py[async]'", add a second line "pip install 'sqliter-py[async]'" (with a comment or separator like "# or" if you prefer) so readers who don't use uv have a direct pip install command.

.pre-commit-config.yaml (1)
29-36: LGTM — mypy dependency added for async typing.

Adding `aiosqlite` to `additional_dependencies` ensures the strict mypy hook can resolve types imported from the new `sqliter.asyncio` modules.

One optional consideration: the lockfile pins `aiosqlite==0.22.1`, but this hook installs unpinned. If you've seen type-check drift from dep bumps elsewhere, pinning here too would keep the pre-commit check reproducible with the lockfile.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.pre-commit-config.yaml around lines 29 - 36, The pre-commit hook lists `additional_dependencies` including `aiosqlite` unpinned which can diverge from the locked `aiosqlite==0.22.1`; update the `additional_dependencies` entry to pin `aiosqlite==0.22.1` (i.e., replace `aiosqlite` with `aiosqlite==0.22.1`) so the strict mypy hook runs with the same version as the lockfile and keeps type-checking reproducible.

tests/tui/unit/test_demos_async.py (1)
175-189: Report the function that actually failed.

Line 188 always reports `_run_async_conn`, which makes failures from the other demo extractors harder to diagnose.

Proposed fix
```diff
-        f"_demo_code({_run_async_conn.__name__}) is empty"
+        f"_demo_code({func.__name__}) is empty"
```
Verify each finding against the current code and only fix it if needed. In `@tests/tui/unit/test_demos_async.py` around lines 175 - 189, The assertion message always references _run_async_conn even when another extractor failed; update the assertion in the loop that iterates over functions (the list containing _run_async_conn, _run_async_context, _run_async_crud, etc.) to use the current loop variable's name (func.__name__) and/or include the value of code, so _demo_code(func) failures report the actual failing function; locate the loop and replace the hardcoded _run_async_conn reference in the f-string with func.__name__ (and optionally show code snippet or its length) to make failures diagnosable.

sqliter/asyncio/db.py (1)
438-447: `table_name` computed twice.

Line 438 already computes `table_name`; line 442 immediately rebinds it from the tuple returned by `_build_create_table_sql(...)`. Either drop the first assignment or destructure only `create_table_sql, fk_columns` from the helper.

♻️ Suggested tweak
```diff
-        table_name = model_class.get_table_name()
-        if force:
-            await self._execute_sql(f"DROP TABLE IF EXISTS {table_name}")
-
-        table_name, create_table_sql, fk_columns = (
-            self._sync._build_create_table_sql(  # noqa: SLF001
-                model_class,
-                exists_ok=exists_ok,
-            )
-        )
+        table_name, create_table_sql, fk_columns = (
+            self._sync._build_create_table_sql(  # noqa: SLF001
+                model_class,
+                exists_ok=exists_ok,
+            )
+        )
+        if force:
+            await self._execute_sql(f"DROP TABLE IF EXISTS {table_name}")
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sqliter/asyncio/db.py` around lines 438 - 447, The local variable table_name is computed twice: once from model_class.get_table_name() and again when unpacking the tuple returned by _build_create_table_sql(model_class, exists_ok=exists_ok). Remove the redundant first assignment or change the unpacking of _build_create_table_sql to only extract create_table_sql and fk_columns (leaving table_name from the initial model_class.get_table_name()), updating any subsequent uses accordingly (refer to symbols: model_class, table_name, _build_create_table_sql, exists_ok, force, and _execute_sql).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/api-reference/async-sqliterdb.md`:
- Around line 291-292: The example dereferences the optional result of
db.get(Book, 1) without guarding for None; update the snippet around the get()
call (the Book variable and subsequent book.author.fetch() usage) to check that
book is not None (e.g., add assert book is not None or an explicit if book is
None: branch returning/handling the missing case) before calling
book.author.fetch() so the example matches the documented T | None return of
get().
In `@pyproject.toml`:
- Around line 31-35: The optional-dependencies block has a mismatch where
"extras" pins inflect==7.0.0 but "full" allows inflect>=7.0.0; update the "full"
entry in pyproject.toml's [project.optional-dependencies] so its inflect
constraint matches the "extras" pin (use inflect==7.0.0) to ensure
sqliter-py[full] and sqliter-py[extras] resolve the same pluralisation
dependency.
In `@sqliter/asyncio/__init__.py`:
- Around line 7-30: The current try/except sets _IMPORT_ERROR for any
ImportError from importing AsyncSqliterDB/AsyncQueryBuilder which masks real
import-time bugs; change the exception handling so you only capture
ModuleNotFoundError for the missing dependency "aiosqlite" and re-raise other
import problems. Concretely, in the import block that references AsyncSqliterDB
and AsyncQueryBuilder, replace the broad except ImportError as exc with an
except ModuleNotFoundError as exc that checks exc.name (or exc.args) for
"aiosqlite" and assigns _IMPORT_ERROR = exc only in that case; for any other
ImportError/ModuleNotFoundError re-raise immediately so real errors in
sqliter.asyncio.db or .query surface. Ensure __getattr__ and the _IMPORT_ERROR
symbol remain unchanged.
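The narrowed guard for `sqliter/asyncio/__init__.py` can be sketched generically with `importlib`; the standalone helper shape here is an assumption (the real code guards a relative import), but the `ModuleNotFoundError.name` check is the point:

```python
import importlib

def guarded_import(module_name: str, optional_dep: str):
    """Import module_name, tolerating only a missing `optional_dep`.

    A ModuleNotFoundError naming the optional dependency is captured and
    returned; any other import failure is a real bug and propagates.
    """
    try:
        return importlib.import_module(module_name), None
    except ModuleNotFoundError as exc:
        if exc.name == optional_dep:
            return None, exc  # optional extra missing: degrade gracefully
        raise  # broken code inside the package: surface it
```

With a broad `except ImportError`, a typo inside `sqliter.asyncio.db` would be silently reported as "aiosqlite not installed"; the `exc.name` check keeps those two failure modes distinct.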
In `@sqliter/asyncio/db.py`:
- Around line 685-707: The code currently writes negative cache entries (None)
in the get-path when result is falsy, which can mask cross-process inserts; in
the branch where result is None (inside the method that uses
get_plan.table_name, cache_key, cache_ttl, bypass_cache and _cache_set), change
behavior to avoid storing a negative entry unless an explicit negative-cache TTL
was provided (or unless cache_ttl is explicitly set), i.e., skip calling
_cache_set with None when cache_ttl is the default/unspecified; alternatively
add a separate negative_ttl parameter and only cache None when that is non-null,
and leave a short comment referencing invalidate_table_cache to explain
cross-process invalidation limitations.
- Around line 571-582: The exception handlers perform a rollback when not
self.in_transaction but ignore self.auto_commit, which can discard a caller's
pending work; change the rollback guard to require auto_commit as well (i.e.,
only rollback when self.auto_commit and not self.in_transaction and self.conn is
not None). Implement this once as a small helper (e.g., _maybe_rollback
mirroring _maybe_commit) and call it from the except blocks in insert (the block
using insert_plan), bulk_insert, update, and delete so the same correct guard is
applied everywhere.
- Around line 527-535: The _execute_sql helper unconditionally calls await
conn.commit(), which breaks transactions started by async with db:; change it to
call the existing transaction-aware commit helper (use await
self._maybe_commit(conn) or self._maybe_commit() depending on signature) after
executing SQL so commits only occur when not inside an explicit transaction.
Locate _execute_sql (uses self.connect(), conn.cursor(),
self._execute_async(cursor, sql)) and replace the direct await conn.commit()
with the transaction-aware call, leaving exception handling (raise
SqlExecutionError(sql) from exc) intact; this preserves create_table, FK/index,
and M2M helpers behavior without flushing enclosing transactions.
- Around line 355-376: get_table_names() currently auto-closes a connection it
opened, which breaks in-memory DBs; modify the auto-close logic so that if a
connection was opened by this call (was_connected is False) you only call
self.close() afterwards when the database is not in-memory (check
self.is_memory), i.e. keep the existing connect() behavior but change the final
conditional to: close only if not was_connected and not self.is_memory; refer to
get_table_names, is_connected, connect, close, and is_memory when making the
change.
- Around line 798-822: The __aexit__ implementation currently closes the DB
connection and unconditionally clears cache/stats, which destroys in-memory DBs
and defeats cache across transactional blocks; change __aexit__ (paired with
__aenter__) to only perform transaction semantics: call await
self.conn.rollback() on error or await self.conn.commit() on success, call
self._sync.set_in_transaction(value=False), and return without calling await
self.conn.close() or setting self.conn = None; remove the unconditional
self._sync.clear_cache() and self._sync.reset_cache_stats() calls here and
instead invalidate cache only for tables modified during the transaction (e.g.
add or call a helper like _invalidate_modified_tables() that clears entries for
affected tables), leaving full connection teardown and global cache clears to
the explicit close() method and preserving behavior for memory=True and
long-lived connections.
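The `_maybe_rollback` helper suggested for `sqliter/asyncio/db.py` can be sketched with a toy synchronous class (a hypothetical stand-in, not `AsyncSqliterDB`; the attribute names follow the review text):

```python
import sqlite3

class MiniDb:
    """Toy stand-in showing the guard: roll back only when this object
    owns transaction boundaries."""

    def __init__(self, conn: sqlite3.Connection, *, auto_commit: bool = True):
        self.conn = conn
        self.auto_commit = auto_commit
        self.in_transaction = False

    def _maybe_rollback(self) -> bool:
        # Mirrors a _maybe_commit-style helper: skip the rollback when the
        # caller opened an explicit transaction or disabled auto_commit.
        if self.auto_commit and not self.in_transaction and self.conn is not None:
            self.conn.rollback()
            return True
        return False
```

Centralising the guard in one helper means insert, bulk_insert, update, and delete cannot drift apart in how they treat a caller's pending work.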
In `@sqliter/asyncio/orm/fields.py`:
- Around line 42-46: The fetch method currently treats only None as a missing FK
and will attempt to load when _fk_id == 0; change the presence check to use
truthiness (e.g., if not self._fk_id) so that 0 is treated as an unsaved/missing
FK and sets self._cached = None and returns None; apply the same fix to the
other similar check around lines 117-119 (the other loader/resolver method
referencing _fk_id/_cached) so all relationship loading uses the repo’s
truthiness convention for pk.
- Around line 48-65: The AsyncLazyLoader code currently swallows all
SqliterError exceptions in the getter (the block using self._db.get,
self._to_model, self._fk_id and setting self._cached), making real DB failures
appear as missing relationships; remove the broad except SqliterError handler
(or if you want to log, log and re-raise) so that SqliterError propagates
instead of being converted to None, leaving the existing behavior where db.get()
returns None for a genuinely missing row.
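The falsy-PK guard suggested for `fields.py` can be shown with a tiny synchronous stand-in (the real loader is async and hits the database; `FakeDb` and `LazyRef` here are illustrative names only):

```python
class FakeDb:
    """Hypothetical in-memory lookup standing in for a real DB get()."""

    def get(self, pk):
        return {1: "alice"}.get(pk)

class LazyRef:
    """Synchronous toy version of the loader guard."""

    def __init__(self, db, fk_id):
        self._db = db
        self._fk_id = fk_id
        self._cached = None

    def fetch(self):
        if not self._fk_id:  # falsy pk: None *and* 0 mean "no saved FK"
            self._cached = None
            return None
        self._cached = self._db.get(self._fk_id)
        return self._cached
```

An `is not None` check here would let `fk_id == 0` through and issue a pointless query for `pk=0`; the truthiness check treats it as an unsaved reference.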
In `@sqliter/asyncio/orm/m2m.py`:
- Around line 404-418: The add/remove/clear/set methods currently delegate to
self._manager but leave the wrapper's cached snapshot self._items stale; after
each awaited call to self._manager.add/remove/clear/set, invalidate or refresh
self._items (e.g., set self._items = None or re-fetch into self._items) so
subsequent fetch_all(), count(), or exists() on the same wrapper see current
data; update the async methods add, remove, clear, and set to refresh or clear
self._items immediately after the manager operation completes.
- Around line 304-308: The set method currently calls clear() before
validating/writing the replacement instances which can lose the original links
if add() fails; change set(self, *instances) to first validate/obtain all
replacement primary keys (ensure each instance has a saved PK or resolve it) and
then perform clear() followed by add() inside a single transaction so any
failure during add() rolls back; use the existing clear, add and set method
names and the ORM's transaction context manager to wrap the clear/add sequence
atomically.
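A runnable sketch of that validate-first, transactional replace for `set()`, using raw `sqlite3` and an illustrative junction schema rather than SQLiter's M2M manager:

```python
import sqlite3

class Rec:
    """Stand-in for a saved model instance; only the pk attribute matters."""

    def __init__(self, pk):
        self.pk = pk

def set_links(conn, junction, src_col, dst_col, src_pk, instances):
    # 1) Validate every replacement PK *before* touching the table.
    pks = []
    for inst in instances:
        pk = getattr(inst, "pk", None)
        if not pk:
            raise ValueError("instance has no saved primary key")
        pks.append(pk)
    # 2) clear + add share one transaction: the sqlite3 connection context
    #    manager commits on success and rolls back on any exception.
    with conn:
        conn.execute(f"DELETE FROM {junction} WHERE {src_col} = ?", (src_pk,))
        conn.executemany(
            f"INSERT INTO {junction} ({src_col}, {dst_col}) VALUES (?, ?)",
            [(src_pk, pk) for pk in pks],
        )
```

Because validation happens before the delete, a bad replacement instance leaves the original links untouched instead of clearing them and then failing.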
In `@sqliter/asyncio/orm/model.py`:
- Around line 29-42: The FK cache keyed by name can become stale when the
underlying FK value changes; in the branch handling cached = cache.get(name)
(and when cached is an AsyncLazyLoader or a loaded model), compare the cached
item's primary key (for AsyncLazyLoader check cached.fk_id or for loaded model
check its PK via fk_descriptors[name].to_model) against the current fk_id, and
if they differ rebuild the cache entry (create a new AsyncLazyLoader with
instance=self, to_model=object.__getattribute__(self,
"fk_descriptors")[name].to_model, fk_id=fk_id, db_context=db_context) before
returning; also handle the case where cached is a loaded model by replacing it
with a fresh AsyncLazyLoader when PK mismatch.
- Around line 22-24: The current check only treats fk_id == None as missing, so
when fk_id == 0 an AsyncLazyLoader will be created and may query pk=0; change
the presence check after fk_id = object.__getattribute__(self, f"{name}_id") to
treat falsy values as unloaded (use a truthiness check like "if not fk_id") so
that unsaved/missing FKs (pk == 0) are handled the same as None and the
AsyncLazyLoader for the relation (where the code constructs/returns an
AsyncLazyLoader) is not created for missing FK values.
In `@sqliter/asyncio/orm/query.py`:
- Around line 117-124: The _build_query currently only checks fk_value for None,
which allows unsaved parent instances with pk==0 to run reverse queries; update
the condition in _build_query to use a truthiness check on fk_id (e.g., if not
fk_id or self._db is None: return None) so fk_value==0 is treated as missing.
Locate the _build_query method and the fk_value/fk_field usage to adjust the
guard before calling self._db.select(self.to_model).filter(...), ensuring
reverse queries are skipped for falsy PKs.
In `@sqliter/asyncio/query.py`:
- Around line 506-515: The validation in async query update should raise
InvalidUpdateError instead of RecordUpdateError: when checking primary key via
model_class.get_primary_key() (pk_field) and when validating unknown fields
using model_class.model_fields (invalid_fields), replace the raised
RecordUpdateError instances with InvalidUpdateError and preserve the same error
messages (e.g., "Cannot update the primary key '...'" and the comma-separated
unknown field names) so callers can catch validation errors the same way as the
sync QueryBuilder.update().
In `@sqliter/query/query.py`:
- Around line 2258-2261: You're mutating self._fields while building the
execution plan which changes the cache key; instead, create a local copy (e.g.,
selected_fields = list(self._fields) or tuple(self._fields)) and ensure "pk" is
appended to that local selected_fields when needed, then use selected_fields for
the join/fields variable and for any cache-key construction so self._fields
remains unchanged; this prevents the cache being written under a different key
for calls like db.select(Model, fields=["name"]).
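The copy-before-append fix for the execution plan reduces to a few lines (names here are illustrative, not the builder's real attributes):

```python
def plan_fields(fields, include_pk=True):
    """Build the projection list on a copy so the builder's own field list
    (and therefore any cache key derived from it) never mutates."""
    selected = list(fields)  # copy instead of appending to the original
    if include_pk and "pk" not in selected:
        selected.append("pk")
    return selected
```

If `pk` were appended to the builder's stored list instead, the cache key computed before and after plan-building would differ, so a repeated `db.select(Model, fields=["name"])` would never hit its own cache entry.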
In `@sqliter/sqliter.py`:
- Around line 1418-1428: The code currently uses isinstance(inst, model_class)
in the bulk-insert validation which permits subclasses and can mix table caches
between calls; change the check to require exact type equality (e.g., type(inst)
is model_class or type(inst) != model_class) so every element in instances has
the same concrete class as the first one; this ensures _insert_single_record()
and bulk_insert() operate on the exact same model class and table cache.
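The difference between the two checks in a self-contained example (`Base`/`Child` are hypothetical models, not SQLiter classes):

```python
class Base:
    pass

class Child(Base):
    pass

def same_concrete_class(instances):
    """Exact-type check: isinstance() would let a Child slip into a batch
    validated against Base; `type(x) is cls` requires the same class."""
    first_cls = type(instances[0])
    return all(type(inst) is first_cls for inst in instances)
```

Since each concrete class maps to its own table (and table cache), accepting subclasses in a single batch would silently write rows for one class against another's table metadata.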
In `@sqliter/tui/demos/async_demos.py`:
- Around line 266-298: The demo functions (async main and _run_async_reverse)
create an AsyncSqliterDB instance and currently return early on None results
without calling await db.close(); wrap the core demo body after creating db in a
try/finally so that await db.close() is always executed in the finally block
(keep early returns inside the try), e.g., ensure the logic around fresh = await
db.get(Book, book.pk) and the author is None guard in _run_async_reverse are
inside the try and that the finally calls await db.close() to guarantee cleanup.
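The try/finally cleanup pattern for the demos, with a fake stand-in for `AsyncSqliterDB` so it runs anywhere (class and function names here are illustrative):

```python
import asyncio

class FakeAsyncDb:
    """Minimal hypothetical stand-in for the demo's AsyncSqliterDB."""

    def __init__(self):
        self.closed = False

    async def get(self, pk):
        return None  # simulate the record-not-found early-return path

    async def close(self):
        self.closed = True

async def demo(db):
    try:
        book = await db.get(1)
        if book is None:
            return "missing"  # early return still runs the finally block
        return "found"
    finally:
        await db.close()  # guaranteed cleanup on every exit path
```

Without the finally, the `book is None` early return would leave the connection open, which is exactly the leak the review flags.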
In `@tests/test_asyncio_core.py`:
- Around line 1342-1372: The test creates a dynamic model Sale without restoring
the global ModelRegistry, which can leak metadata; wrap the dynamic model
registration so you save the current ModelRegistry before creating Sale and
restore it after the test (e.g., capture the registry state, run the code that
defines Sale and uses AsyncSqliterDB, then reassign the saved registry back),
ensuring the ModelRegistry global is restored regardless of test outcome (use
try/finally around the block that defines/uses Sale).
---
Outside diff comments:
In `@docs/index.md`:
- Around line 10-17: Update the paragraph that mentions "Pydantic" as the only
runtime dependency and the sentence that advertises "async APIs" to clarify that
core synchronous usage only requires Pydantic, while async support is optional
and provided via an extra that requires the additional "aiosqlite" package;
specifically change the lines containing the phrases "The only external run-time
dependency is Pydantic itself." and "It supports both synchronous and
asynchronous APIs." to state that async support requires installing the optional
async extra (e.g., "pip install sqliter[async]") and list "aiosqlite" as the
additional runtime dependency for async usage.
In `@README.md`:
- Around line 54-65: Update the README bullet that currently reads "No external
dependencies other than Pydantic" to clarify that extras are optional; replace
it with wording such as "No required external dependencies other than Pydantic
(optional extras like aiosqlite may be needed for async features)" so the
statement aligns with the documented optional async support and the aiosqlite
extra.
---
Nitpick comments:
In @.pre-commit-config.yaml:
- Around line 29-36: The pre-commit hook lists `additional_dependencies`
including `aiosqlite` unpinned which can diverge from the locked
`aiosqlite==0.22.1`; update the `additional_dependencies` entry to pin
`aiosqlite==0.22.1` (i.e., replace `aiosqlite` with `aiosqlite==0.22.1`) so the
strict mypy hook runs with the same version as the lockfile and keeps
type-checking reproducible.
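A hypothetical excerpt of what the pinned hook entry could look like (the repo/rev and sibling dependencies are illustrative; only the pin matters):

```yaml
- repo: https://github.com/pre-commit/mirrors-mypy
  rev: v1.14.0
  hooks:
    - id: mypy
      additional_dependencies:
        - pydantic
        - aiosqlite==0.22.1  # pinned to match uv.lock
```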
In `@docs/quickstart.md`:
- Around line 49-55: Update the code block that currently shows the uv command
"uv add 'sqliter-py[async]'" to include the pip equivalent on the next line;
specifically, in the same fenced bash block that references "uv add
'sqliter-py[async]'", add a second line "pip install 'sqliter-py[async]'" (with
a comment or separator like "# or" if you prefer) so readers who don't use uv
have a direct pip install command.
In `@sqliter/asyncio/db.py`:
- Around line 438-447: The local variable table_name is computed twice: once
from model_class.get_table_name() and again when unpacking the tuple returned by
_build_create_table_sql(model_class, exists_ok=exists_ok). Remove the redundant
first assignment or change the unpacking of _build_create_table_sql to only
extract create_table_sql and fk_columns (leaving table_name from the initial
model_class.get_table_name()), updating any subsequent uses accordingly (refer
to symbols: model_class, table_name, _build_create_table_sql, exists_ok, force,
and _execute_sql).
In `@sqliter/orm/m2m.py`:
- Around line 1041-1048: The current index-creation loop wraps all indexes in a
single try/except and reacquires conn/cursor each iteration; change it to
acquire conn and cursor once (hoist conn = db.connect() and cursor =
conn.cursor() outside the loop) and run each index_sql inside its own try/except
sqlite3.Error so one failing index doesn't stop subsequent ones; in each
per-index except block log the caught exception (e.g. via process logger or db
logger) including the index_sql to aid diagnosis, and continue to the next
index; keep using db._execute(cursor, index_sql) and conn.commit() after each
successful create.
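A standalone sketch of that per-index loop, using an in-memory SQLite database and a deliberately broken index to show the loop continuing past a failure:

```python
import logging
import sqlite3

logger = logging.getLogger("sqliter.m2m")

conn = sqlite3.connect(":memory:")
cursor = conn.cursor()  # acquired once, outside the loop
cursor.execute("CREATE TABLE link (a INTEGER, b INTEGER)")

index_sqls = [
    "CREATE INDEX idx_link_a ON link (a)",
    "CREATE INDEX idx_link_bad ON link (missing_col)",  # will fail
    "CREATE INDEX idx_link_b ON link (b)",
]

created = []
for index_sql in index_sqls:
    try:
        cursor.execute(index_sql)
        conn.commit()
        created.append(index_sql)
    except sqlite3.Error as exc:
        # One failing index no longer aborts the rest; log SQL for diagnosis
        logger.error("index creation failed: %s (%s)", index_sql, exc)

assert len(created) == 2  # the two valid indexes were still created
```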
In `@tests/tui/unit/test_demos_async.py`:
- Around line 175-189: The assertion message always references _run_async_conn
even when another extractor failed; update the assertion in the loop that
iterates over functions (the list containing _run_async_conn,
_run_async_context, _run_async_crud, etc.) to use the current loop variable's
name (func.__name__) and/or include the value of code, so _demo_code(func)
failures report the actual failing function; locate the loop and replace the
hardcoded _run_async_conn reference in the f-string with func.__name__ (and
optionally show code snippet or its length) to make failures diagnosable.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: fb1d9fed-75c3-46d5-96a8-f25460702891
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (43)
.markdownlint.yaml, .markdownlintrc, .pre-commit-config.yaml, README.md, TODO.md, docs/api-reference/async-orm.md, docs/api-reference/async-query-builder.md, docs/api-reference/async-sqliterdb.md, docs/api-reference/index.md, docs/guide/asyncio.md, docs/guide/foreign-keys/orm.md, docs/guide/guide.md, docs/index.md, docs/installation.md, docs/quickstart.md, docs/tui-demo/async.md, docs/tui-demo/index.md, mkdocs.yml, pyproject.toml, requirements-dev.txt, sqliter/asyncio/__init__.py, sqliter/asyncio/db.py, sqliter/asyncio/orm/__init__.py, sqliter/asyncio/orm/fields.py, sqliter/asyncio/orm/m2m.py, sqliter/asyncio/orm/model.py, sqliter/asyncio/orm/query.py, sqliter/asyncio/query.py, sqliter/orm/m2m.py, sqliter/query/query.py, sqliter/sqliter.py, sqliter/tui/demos/__init__.py, sqliter/tui/demos/async_demos.py, tests/test_aggregates.py, tests/test_asyncio_core.py, tests/test_asyncio_orm.py, tests/test_bulk_insert.py, tests/test_indexes.py, tests/test_prefetch_related.py, tests/test_query.py, tests/test_sqliter.py, tests/test_unique.py, tests/tui/unit/test_demos_async.py
💤 Files with no reviewable changes (1)
- .markdownlintrc
Signed-off-by: Grant Ramsay <seapagan@gmail.com>
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
sqliter/sqliter.py (1)
161-224: ⚠️ Potential issue | 🟠 Major — Track transaction depth before exposing context state.
`__enter__` sets `_in_transaction = True`, but every `__exit__` commits/rolls back and resets it to `False`. Nested `with db:` blocks can therefore commit inside an outer transaction; later outer writes may auto-commit, and an outer exception can no longer roll back the full unit of work.
Proposed nested-context guard

```diff
         self._in_transaction = False
+        self._transaction_depth = 0
+        self._rollback_requested = False
@@
     def __enter__(self) -> Self:
@@
         self.connect()
+        self._transaction_depth += 1
         self._in_transaction = True
         return self
@@
-        if self.conn:
+        if exc_type is not None:
+            self._rollback_requested = True
+
+        self._transaction_depth = max(0, self._transaction_depth - 1)
+
+        if self.conn and self._transaction_depth == 0:
             try:
-                if exc_type:
+                if self._rollback_requested:
                     # Roll back the transaction if there was an exception
                     self.conn.rollback()
                 else:
                     self.conn.commit()
             finally:
+                self._rollback_requested = False
                 self._in_transaction = False
```

Also applies to: 1658-1708
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sqliter/sqliter.py` around lines 161 - 224, The code uses a boolean _in_transaction toggled in __enter__ and __exit__, which breaks nested with db: blocks; change to a transaction depth counter (e.g. _transaction_depth: int = 0), increment it in __enter__ and decrement it in __exit__, set _in_transaction (or replace its uses) to True when _transaction_depth > 0, and only perform the actual commit/rollback and connection state changes when _transaction_depth reaches 0; update the in_transaction property to return _transaction_depth > 0 and ensure __enter__, __exit__, and any auto-commit logic consult the depth so inner contexts do not prematurely commit/rollback (refer to __enter__, __exit__, in_transaction, and the current _in_transaction usage).
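A runnable sketch of the depth-counter guard on a minimal class (not the real `SqliterDB`), showing that an inner block no longer commits prematurely and an outer exception rolls back the whole unit of work:

```python
import sqlite3


class DB:
    """Minimal sketch of the transaction-depth guard."""

    def __init__(self) -> None:
        self.conn = sqlite3.connect(":memory:")
        self._transaction_depth = 0
        self._rollback_requested = False

    @property
    def in_transaction(self) -> bool:
        return self._transaction_depth > 0

    def __enter__(self) -> "DB":
        self._transaction_depth += 1
        return self

    def __exit__(self, exc_type, exc, tb) -> bool:
        if exc_type is not None:
            self._rollback_requested = True
        self._transaction_depth -= 1
        # Only the outermost exit commits or rolls back
        if self._transaction_depth == 0:
            if self._rollback_requested:
                self.conn.rollback()
            else:
                self.conn.commit()
            self._rollback_requested = False
        return False  # propagate any exception


db = DB()
db.conn.execute("CREATE TABLE t (v INTEGER)")
db.conn.commit()

try:
    with db:
        db.conn.execute("INSERT INTO t VALUES (1)")
        with db:  # nested block: no premature commit
            db.conn.execute("INSERT INTO t VALUES (2)")
        raise RuntimeError("outer failure")
except RuntimeError:
    pass

count = db.conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
assert count == 0  # the whole unit of work rolled back
```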
🧹 Nitpick comments (3)
sqliter/tui/demos/async_demos.py (1)
421-453: Clean up the transaction demo file even on failures.
If `_run_async(main())` raises before line 453, the temporary SQLite file is left behind. Wrap the runner and unlink in a `finally`.
Proposed fix

```diff
-    _run_async(main())
-    Path(db_path).unlink(missing_ok=True)
+    try:
+        _run_async(main())
+    finally:
+        Path(db_path).unlink(missing_ok=True)
     return output.getvalue()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sqliter/tui/demos/async_demos.py` around lines 421 - 453, The temporary DB file created via tempfile.NamedTemporaryFile (db_path) can be left behind if _run_async(main()) raises; wrap the call to _run_async(main()) in a try/finally so Path(db_path).unlink(missing_ok=True) is executed in the finally block to always remove the temp file; keep the current delete=False behavior and ensure unlink uses missing_ok=True to avoid secondary errors.
sqliter/query/query.py (2)
2788-2828: Reuse the new statement builders in sync mutations.
`build_delete_statement()` and `build_update_statement()` are added for shared planning, but the sync terminal methods still rebuild the same SQL manually. Using the builders here keeps sync and async mutation behaviour aligned.
Proposed refactor

```diff
     def delete(self) -> int:
@@
-        sql = f'DELETE FROM "{self.table_name}"'
-
-        # Build the WHERE clause with special handling for None (NULL in SQL)
-        values, where_clause = self._parse_filter()
-
-        if self.filters:
-            sql += f" WHERE {where_clause}"
+        sql, values = self.build_delete_statement()
@@
     def update(self, values: dict[str, Any]) -> int:
@@
-        # Build SET clause
-        set_clauses: list[str] = []
-        set_values: list[Any] = []
-
-        # Auto-set updated_at timestamp if the field exists and wasn't
-        # explicitly provided
-        if "updated_at" in valid_fields and "updated_at" not in values:
-            current_timestamp = int(time.time())
-            set_clauses.append('"updated_at" = ?')
-            set_values.append(current_timestamp)
-
-        for field_name, value in values.items():
-            # Serialize the value if needed
-            serialized = self.model_class.serialize_field(value)
-            db_column = self._model_field_to_db_column(
-                self.model_class, field_name
-            )
-            set_clauses.append(f'"{db_column}" = ?')
-            set_values.append(serialized)
-
-        set_clause = ", ".join(set_clauses)
-
-        # Build the full UPDATE SQL
-        sql = f'UPDATE "{self.table_name}" SET {set_clause}'
-
-        # Build the WHERE clause
-        where_values, where_clause = self._parse_filter()
-        if self.filters:
-            sql += f" WHERE {where_clause}"
-
-        # Combine SET values with WHERE values
-        all_values = set_values + where_values
+        sql, all_values = self.build_update_statement(
+            values,
+            current_timestamp=int(time.time()),
+        )
```

Also applies to: 2963-3073
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sqliter/query/query.py` around lines 2788 - 2828, The sync mutation implementations rebuild DELETE/UPDATE SQL independently which duplicates logic; refactor the sync terminal methods that currently construct SQL manually (the ones corresponding to the async variants) to call the shared builders build_delete_statement() and build_update_statement(values, current_timestamp=...) instead: obtain sql, values = build_delete_statement() for deletes and sql, values = build_update_statement(values_dict, current_timestamp=current_timestamp) for updates, then pass those to the existing execution path (use the returned sql string and values list) so sync and async mutations share the same SQL/planning logic and avoid duplication.
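A sketch of how a shared builder keeps both paths aligned; the `build_delete_statement` signature below is hypothetical, not the library's actual API:

```python
from typing import Any


def build_delete_statement(
    table_name: str,
    filters: list[str],
    values: list[Any],
) -> tuple[str, list[Any]]:
    """Hypothetical shared builder mirroring the suggested refactor."""
    sql = f'DELETE FROM "{table_name}"'
    if filters:
        # Placeholders keep values out of the SQL string
        sql += " WHERE " + " AND ".join(f'"{f}" = ?' for f in filters)
    return sql, list(values)


# Both the sync and async delete paths would call the same builder,
# then feed (sql, params) into their own execution machinery:
sql, params = build_delete_statement("users", ["age"], [30])
assert sql == 'DELETE FROM "users" WHERE "age" = ?'
assert params == [30]
```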
1281-1290: Use the public cursor execution wrapper here.
This module now depends on `db.execute_cursor(...)` elsewhere; this remaining `self.db._execute(...)` call keeps a private-access exception and can drift from the public execution path.
Proposed refactor

```diff
         conn = self.db.connect()
         cursor = conn.cursor()
-        self.db._execute(cursor, sql, values)  # noqa: SLF001
+        self.db.execute_cursor(cursor, sql, values)
         return cursor.fetchall()
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@sqliter/query/query.py` around lines 1281 - 1290, The code calls the private helper self.db._execute(...) in the method that builds and runs the m2m junction query (after build_m2m_junction_query) which bypasses the public execution wrapper; change this to use the public API by calling self.db.execute_cursor(cursor, sql, values) (or self.db.execute_cursor(sql, values) if the helper signature opens/creates the cursor) so execution goes through the same public path, preserves logging/transaction behavior, and removes private-access usage; update the invocation site that currently uses self.db._execute to call execute_cursor with the same sql and values and ensure the cursor/connection handling matches execute_cursor's contract.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/api-reference/async-sqliterdb.md`:
- Around line 87-260: Update the AsyncSqliterDB reference to include the missing
public properties and cache methods and correct the __aenter__ return type: add
the read-only properties db_filename, debug, logger, return_local_time,
is_connected, is_memory, filename, in_transaction to the "Read-Only Properties"
section and add cache_get, cache_set, invalidate_table_cache to the "Cache
Methods" section (alongside clear_cache, reset_cache_stats, get_cache_stats);
also change the documented signature of __aenter__ to return Self (not
AsyncSqliterDB) so subclassing IDE inference matches the implementation (look
for AsyncSqliterDB, __aenter__, db_filename, in_transaction, cache_get,
cache_set, invalidate_table_cache).
In `@docs/tui-demo/async.md`:
- Line 240: The examples use the deprecated direction argument by calling
.order("qty", "DESC"); update these calls (e.g., the expression first = await
db.select(Item).order("qty", "DESC").fetch_first()) to use the keyword-only
reverse boolean instead (e.g., .order("qty", reverse=True)) so the order() call
uses reverse=True rather than passing "DESC", and adjust any similar occurrences
in the async examples accordingly.
In `@sqliter/asyncio/orm/fields.py`:
- Around line 35-54: The fetch method currently uses _cached None to mean both
"not loaded" and "loaded but missing", causing repeated DB queries; change the
Field loader to track loaded state (e.g., add a boolean like _loaded) or use a
sentinel object separate from None, update the class initialization to set that
sentinel/_loaded default, and modify fetch (and any cache-clearing logic) to set
the sentinel/_loaded after the first DB call so subsequent calls return cached
None instead of re-querying; refer to the fetch method, _cached, _fk_id, and _db
to locate where to add and check the loaded flag or sentinel.
In `@sqliter/asyncio/orm/m2m.py`:
- Around line 454-506: The descriptor __get__ currently declares and casts
return types as SyncManyToMany/ManyToManyManager/PrefetchedM2MResult but
actually constructs AsyncManyToManyManager and AsyncPrefetchedM2MResult; update
the overload signatures and the concrete return annotation/casts to use the
async types (e.g., AsyncManyToManyManager and AsyncPrefetchedM2MResult) so type
checkers and IDEs see the correct async API; apply the same fixes to the second
similar block noted (lines ~512-559) and ensure the cast strings and overload
return unions reference the Async* variants rather than the sync ones.
- Around line 240-303: add(), remove(), and clear() perform DB writes but never
invalidate cached M2M queries, so add a cache-invalidation step after successful
commits: after await db.maybe_commit() in add(), call a helper like
self._invalidate_m2m_cache(from_pk, [to_pk...]) (for symmetrical relations
ensure you pass both sides or the sorted pair), in remove() call the same helper
for from_pk and each to_pk you acted on (skip missing pks), and in clear() call
self._invalidate_m2m_cache(from_pk) (or pass any collected related pks if
available) so prefetched/cached queries for both sides are cleared; implement
the helper (e.g. _invalidate_m2m_cache) to evict the appropriate cached query
keys for this relationship.
- Around line 410-428: The add/remove/clear/set methods currently reassign
self._items after calling the manager, which breaks the descriptor prefetch
cache that still references the old list; change each method (add, remove,
clear, set) to fetch the updated items via await self._manager.fetch_all() into
a local variable and then update the existing list in place (e.g., replace
contents via slice assignment) so the original list object referenced by the
descriptor/_prefetch_cache is mutated; if self._items is None, initialize it to
the fetched list instead.
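The in-place update can be illustrated with plain lists: slice assignment mutates the existing object, preserving the identity the descriptor's prefetch cache depends on:

```python
items = ["tag1"]            # list the descriptor's prefetch cache holds
cached_ref = items          # descriptor keeps a reference to this object

fetched = ["tag1", "tag2"]  # fresh result, e.g. from manager.fetch_all()

items[:] = fetched          # mutate in place instead of rebinding
assert cached_ref is items  # same object, so the cached reference stays valid
assert cached_ref == ["tag1", "tag2"]
```

Rebinding with `items = fetched` would instead leave `cached_ref` pointing at the stale one-element list.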
In `@sqliter/sqliter.py`:
- Around line 899-902: The current loop around self._execute_sql(index_sql)
swallows all SqlExecutionError exceptions (using suppress), which can hide
failed junction-index DDL; remove the blanket suppress and instead catch
exceptions per index_sql, log a clear error including the failing SQL and
re-raise (or collect and raise after the loop) so callers/tests see the failure;
specifically update the block that iterates index_sqls (where create_sql,
index_sqls and self._execute_sql are used and SqlExecutionError is referenced)
to stop silently ignoring errors and surface them with contextual logging and/or
an error raised.
In `@sqliter/tui/demos/async_demos.py`:
- Around line 20-30: The broad except ImportError hides unrelated import errors
inside sqliter.asyncio; change the handler so it only swallows the missing
optional dependency (aiosqlite) and re-raises other ImportError instances: wrap
the imports of AsyncSqliterDB, AsyncBaseDBModel, AsyncForeignKey, and
AsyncReverseQuery in a try/except ImportError as e, and in the except check
e.name == "aiosqlite" or "aiosqlite" in str(e) before setting _ASYNC_AVAILABLE =
False; otherwise re-raise the exception so real internal errors in
sqliter.asyncio surface.
- Line 232: The call to order is using the deprecated positional direction
argument (".order(\"qty\", \"DESC\")"); update the usage to the modern API by
replacing the positional direction with the reverse keyword argument (use
".order(\"qty\", reverse=True)") where the db.select(Item).order call occurs so
it no longer triggers DeprecationWarning and matches other demos/tests.
In `@tests/test_m2m.py`:
- Around line 655-674: Update the test_prefetched_wrapper_refreshes_after_writes
to detect a stale cached _items by adding a different tag than the one already
seeded into the PrefetchedM2MResult: seed the initial relationship with tag1 (as
you do), construct PrefetchedM2MResult([tag1], manager), then call
prefetched.add(tag2) and assert prefetched.count() == 2 (instead of adding tag1
and asserting 1). This ensures PrefetchedM2MResult.add() refreshes the cached
_items (see PrefetchedM2MResult.add() / _items and the refresh logic in
sqliter/orm/m2m.py around the 584-590 range).
---
Outside diff comments:
In `@sqliter/sqliter.py`:
- Around line 161-224: The code uses a boolean _in_transaction toggled in
__enter__ and __exit__, which breaks nested with db: blocks; change to a
transaction depth counter (e.g. _transaction_depth: int = 0), increment it in
__enter__ and decrement it in __exit__, set _in_transaction (or replace its
uses) to True when _transaction_depth > 0, and only perform the actual
commit/rollback and connection state changes when _transaction_depth reaches 0;
update the in_transaction property to return _transaction_depth > 0 and ensure
__enter__, __exit__, and any auto-commit logic consult the depth so inner
contexts do not prematurely commit/rollback (refer to __enter__, __exit__,
in_transaction, and the current _in_transaction usage).
---
Nitpick comments:
In `@sqliter/query/query.py`:
- Around line 2788-2828: The sync mutation implementations rebuild DELETE/UPDATE
SQL independently which duplicates logic; refactor the sync terminal methods
that currently construct SQL manually (the ones corresponding to the async
variants) to call the shared builders build_delete_statement() and
build_update_statement(values, current_timestamp=...) instead: obtain sql,
values = build_delete_statement() for deletes and sql, values =
build_update_statement(values_dict, current_timestamp=current_timestamp) for
updates, then pass those to the existing execution path (use the returned sql
string and values list) so sync and async mutations share the same SQL/planning
logic and avoid duplication.
- Around line 1281-1290: The code calls the private helper self.db._execute(...)
in the method that builds and runs the m2m junction query (after
build_m2m_junction_query) which bypasses the public execution wrapper; change
this to use the public API by calling self.db.execute_cursor(cursor, sql,
values) (or self.db.execute_cursor(sql, values) if the helper signature
opens/creates the cursor) so execution goes through the same public path,
preserves logging/transaction behavior, and removes private-access usage; update
the invocation site that currently uses self.db._execute to call execute_cursor
with the same sql and values and ensure the cursor/connection handling matches
execute_cursor's contract.
In `@sqliter/tui/demos/async_demos.py`:
- Around line 421-453: The temporary DB file created via
tempfile.NamedTemporaryFile (db_path) can be left behind if _run_async(main())
raises; wrap the call to _run_async(main()) in a try/finally so
Path(db_path).unlink(missing_ok=True) is executed in the finally block to always
remove the temp file; keep the current delete=False behavior and ensure unlink
uses missing_ok=True to avoid secondary errors.
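A runnable sketch of the try/finally cleanup, with a stand-in `run_demo` that fails mid-run to show the temp file is still removed:

```python
import tempfile
from pathlib import Path


def run_demo() -> None:
    """Stand-in for _run_async(main()) failing partway through."""
    raise RuntimeError("demo failed mid-run")


with tempfile.NamedTemporaryFile(suffix=".db", delete=False) as tmp:
    db_path = tmp.name

try:
    try:
        run_demo()
    finally:
        # Always remove the temp DB, even when the runner raises
        Path(db_path).unlink(missing_ok=True)
except RuntimeError:
    pass

assert not Path(db_path).exists()
```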
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 490f9f96-14f4-4b2a-a8af-d703c2c837aa
📒 Files selected for processing (32)
docs/api-reference/async-sqliterdb.md, docs/api-reference/sqliterdb.md, docs/guide/caching.md, docs/guide/data-operations.md, docs/guide/guide.md, docs/guide/transactions.md, docs/quickstart.md, docs/tui-demo/async.md, docs/tui-demo/connection.md, sqliter/asyncio/__init__.py, sqliter/asyncio/db.py, sqliter/asyncio/orm/fields.py, sqliter/asyncio/orm/m2m.py, sqliter/asyncio/orm/model.py, sqliter/asyncio/orm/query.py, sqliter/asyncio/query.py, sqliter/orm/m2m.py, sqliter/orm/model.py, sqliter/query/query.py, sqliter/sqliter.py, sqliter/tui/demos/async_demos.py, sqliter/tui/demos/connection.py, tests/test_asyncio_core.py, tests/test_asyncio_orm.py, tests/test_bulk_insert.py, tests/test_cache.py, tests/test_context_manager.py, tests/test_foreign_keys_orm.py, tests/test_m2m.py, tests/test_query.py, tests/test_sqliter.py, tests/tui/unit/test_demos_async.py
✅ Files skipped from review due to trivial changes (5)
- docs/quickstart.md
- docs/guide/transactions.md
- docs/guide/data-operations.md
- tests/tui/unit/test_demos_async.py
- tests/test_asyncio_orm.py
🚧 Files skipped from review as they are similar to previous changes (6)
- tests/test_bulk_insert.py
- tests/test_query.py
- sqliter/asyncio/__init__.py
- sqliter/orm/m2m.py
- sqliter/asyncio/orm/query.py
- sqliter/asyncio/db.py
### Async context manager

```python
async def __aenter__(self) -> AsyncSqliterDB:
async def __aexit__(self, exc_type, exc, tb) -> None:
```

Example:

```python
async with AsyncSqliterDB("app.db") as db:
    ...
```

> [!WARNING]
>
> Breaking change in `0.21.0`: `async with` commits or rolls back the
> transaction but does not close the connection on exit. Call `await db.close()`
> explicitly when the async database instance is no longer needed.

---

## Table Methods

### `create_table()`

```python
async def create_table(
    self,
    model_class: type[BaseDBModel],
    *,
    exists_ok: bool = True,
    force: bool = False,
) -> None:
```

Creates tables, indexes, FK constraints, and ORM many-to-many junction tables.

### `drop_table()`

```python
async def drop_table(
    self,
    model_class: type[BaseDBModel],
) -> None:
```

### `get_table_names()`

```python
async def get_table_names(self) -> list[str]:
```

This is a method in the async API instead of a property because async
properties cannot be awaited.

## Read-Only Properties

### `auto_commit`

```python
@property
def auto_commit(self) -> bool:
```

### `is_autocommit`

```python
@property
def is_autocommit(self) -> bool:
```

Compatibility alias for the sync property name.

---

## CRUD Methods

### `insert()`

```python
async def insert(self, model_instance: T) -> T:
```

### `bulk_insert()`

```python
async def bulk_insert(
    self,
    instances: Sequence[T],
    *,
    timestamp_override: bool = False,
) -> list[T]:
```

### `get()`

```python
async def get(
    self,
    model_class: type[T],
    primary_key_value: int,
    *,
    bypass_cache: bool = False,
    cache_ttl: int | None = None,
) -> T | None:
```

### `update()`

```python
async def update(self, model_instance: BaseDBModel) -> None:
```

### `update_where()`

```python
async def update_where(
    self,
    model_class: type[T],
    where: dict[str, Any],
    values: dict[str, Any],
) -> int:
```

### `delete()`

```python
async def delete(
    self,
    model_class: type[BaseDBModel],
    primary_key_value: int | str,
) -> None:
```

### `select()`

```python
def select(
    self,
    model_class: type[T],
    fields: list[str] | None = None,
    exclude: list[str] | None = None,
) -> AsyncQueryBuilder[T]:
```

This is sync to construct the query object. The query's terminal methods are
awaited.

## Cache Methods

### `clear_cache()`

```python
def clear_cache(self) -> None:
```

Clears all cached query results.

### `reset_cache_stats()`

```python
def reset_cache_stats(self) -> None:
```

Resets cache hit/miss counters without changing current cached entries.

### `get_cache_stats()`

```python
def get_cache_stats(self) -> dict[str, int | float]:
```

Returns the same cache statistics as the sync API.
Reference page omits several public properties and cache methods that exist on AsyncSqliterDB.
Per the implementation in sqliter/asyncio/db.py, the following public surface is not documented here and will likely surprise users reading this as the authoritative reference:
- Properties: `db_filename`, `debug`, `logger`, `return_local_time`, `is_connected`, `is_memory`, `filename`, `in_transaction` (only `auto_commit` and `is_autocommit` are listed under "Read-Only Properties").
- Cache methods: `cache_get`, `cache_set`, `invalidate_table_cache` (only `clear_cache`, `reset_cache_stats`, `get_cache_stats` are documented).
Also a minor type-signature nit: __aenter__ is documented as returning AsyncSqliterDB, but the actual implementation returns Self (which matters for subclasses getting correct IDE inference).
Consider adding these to the respective sections for parity with docs/api-reference/sqliterdb.md.
🧰 Tools
🪛 LanguageTool
[style] ~87-~87: Three successive sentences begin with the same word. Consider rewording the sentence or use a thesaurus to find a synonym.
Context: ...sync def commit(self) -> None: ### Async context manager python async def __...
(ENGLISH_WORD_REPEAT_BEGINNING_RULE)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/api-reference/async-sqliterdb.md` around lines 87 - 260, Update the
AsyncSqliterDB reference to include the missing public properties and cache
methods and correct the __aenter__ return type: add the read-only properties
db_filename, debug, logger, return_local_time, is_connected, is_memory,
filename, in_transaction to the "Read-Only Properties" section and add
cache_get, cache_set, invalidate_table_cache to the "Cache Methods" section
(alongside clear_cache, reset_cache_stats, get_cache_stats); also change the
documented signature of __aenter__ to return Self (not AsyncSqliterDB) so
subclassing IDE inference matches the implementation (look for AsyncSqliterDB,
__aenter__, db_filename, in_transaction, cache_get, cache_set,
invalidate_table_cache).
```python
    in_stock = await db.select(Item).filter(qty__gt=4).fetch_all()
    print(f"qty > 4: {[i.name for i in in_stock]}")

    first = await db.select(Item).order("qty", "DESC").fetch_first()
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Description: Inspect order() definitions and call sites to verify whether
# the second argument is a direction string or a reverse boolean.
rg -n -C3 'def order\s*\(' --type=py
rg -n -C2 '\.order\([^)]*"(ASC|DESC)"|\.order\([^)]*,\s*reverse=' --type=py
```

Repository: seapagan/sqliter-py
Length of output: 5070

🏁 Script executed:

```bash
# Get the full order() method implementation from the async query module
sed -n '62,90p' sqliter/asyncio/query.py
# Also check the sync version for reference
sed -n '2026,2055p' sqliter/query/query.py
```

Repository: seapagan/sqliter-py
Length of output: 1872
Update the deprecated direction argument to the `reverse` parameter in the async examples.
The `order()` method signature includes both a deprecated `direction` parameter and a keyword-only `reverse: bool` parameter. Lines 240 and 562 use `.order("qty", "DESC")`, which passes the string to the deprecated `direction` argument and triggers a `DeprecationWarning`. Update these to use the modern API: `.order("qty", reverse=True)`.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docs/tui-demo/async.md` at line 240, The examples use the deprecated
direction argument by calling .order("qty", "DESC"); update these calls (e.g.,
the expression first = await db.select(Item).order("qty", "DESC").fetch_first())
to use the keyword-only reverse boolean instead (e.g., .order("qty",
reverse=True)) so the order() call uses reverse=True rather than passing "DESC",
and adjust any similar occurrences in the async examples accordingly.
```python
        self._cached: Optional[T] = None

    @property
    def db_context(self) -> object:
        """Return the database context."""
        return self._db

    async def fetch(self) -> Optional[T]:
        """Load and return the related object, if present."""
        if not self._fk_id:
            self._cached = None
            return None

        if self._cached is None and self._db is not None:
            result = await self._db.get(
                cast("type[BaseDBModel]", self._to_model),
                self._fk_id,
            )
            self._cached = cast("Optional[T]", result)
        return self._cached
```
Cache missing FK lookups explicitly.
_cached is None currently means both “not fetched yet” and “fetched and missing”, so repeated await loader.fetch() calls re-query the database for a missing related row. Track a loaded flag or use a sentinel so None results are cached too.
Proposed fix

```diff
         self._db = db_context
         self._cached: Optional[T] = None
+        self._loaded = False
@@
-        if self._cached is None and self._db is not None:
+        if not self._loaded and self._db is not None:
             result = await self._db.get(
                 cast("type[BaseDBModel]", self._to_model),
                 self._fk_id,
             )
             self._cached = cast("Optional[T]", result)
+            self._loaded = True
         return self._cached
```

🤖 Prompt for AI Agents
return self._cached🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/asyncio/orm/fields.py` around lines 35 - 54, The fetch method
currently uses _cached None to mean both "not loaded" and "loaded but missing",
causing repeated DB queries; change the Field loader to track loaded state
(e.g., add a boolean like _loaded) or use a sentinel object separate from None,
update the class initialization to set that sentinel/_loaded default, and modify
fetch (and any cache-clearing logic) to set the sentinel/_loaded after the first
DB call so subsequent calls return cached None instead of re-querying; refer to
the fetch method, _cached, _fk_id, and _db to locate where to add and check the
loaded flag or sentinel.
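The loaded-flag pattern the review proposes can be shown in isolation. This is a minimal sketch with hypothetical names (`LazyLookup`, `fetch_missing`), not SQLiter's actual loader: caching the result of the first fetch even when it is `None` prevents repeated queries for a missing row.

```python
from typing import Callable, Optional


class LazyLookup:
    """Toy lazy loader: distinguish "not loaded yet" from "loaded as None"."""

    def __init__(self, fetcher: Callable[[], Optional[object]]) -> None:
        self._fetcher = fetcher
        self._cached: Optional[object] = None
        self._loaded = False  # separate flag, so a cached None is not re-fetched

    def fetch(self) -> Optional[object]:
        if not self._loaded:
            self._cached = self._fetcher()
            self._loaded = True  # cache even a missing (None) result
        return self._cached


calls: list = []


def fetch_missing() -> None:
    calls.append(1)  # count how often the backing store is hit
    return None  # simulate a related row that does not exist


loader = LazyLookup(fetch_missing)
assert loader.fetch() is None
assert loader.fetch() is None
assert len(calls) == 1  # the second call was served from the cache
```

With only `_cached is None` as the guard, the second `fetch()` above would hit the backing store again; the flag makes the "loaded but missing" state explicit.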
async def add(self, *instances: T) -> None:
    """Add related objects."""
    db = self._check_context()
    from_pk = self._get_instance_pk()
    sql = self._insert_sql()

    conn = await db.connect()
    cursor = await conn.cursor()
    try:
        for inst in instances:
            to_pk = getattr(inst, "pk", None)
            if not to_pk:
                self._raise_missing_pk()
            to_pk = cast("int", to_pk)
            if self._sql_metadata.symmetrical:
                left_pk, right_pk = sorted([from_pk, to_pk])
                await db.execute_cursor(cursor, sql, (left_pk, right_pk))
            else:
                await db.execute_cursor(cursor, sql, (from_pk, to_pk))
    except Exception:
        await self._rollback_if_needed(db)
        raise

    await db.maybe_commit()

async def remove(self, *instances: T) -> None:
    """Remove related objects."""
    db = self._check_context()
    from_pk = self._get_instance_pk()
    sql = self._delete_pair_sql()

    conn = await db.connect()
    cursor = await conn.cursor()
    try:
        for inst in instances:
            to_pk = getattr(inst, "pk", None)
            if not to_pk:
                continue
            to_pk = cast("int", to_pk)
            if self._sql_metadata.symmetrical:
                left_pk, right_pk = sorted([from_pk, to_pk])
                await db.execute_cursor(cursor, sql, (left_pk, right_pk))
            else:
                await db.execute_cursor(cursor, sql, (from_pk, to_pk))
    except Exception:
        await self._rollback_if_needed(db)
        raise

    await db.maybe_commit()

async def clear(self) -> None:
    """Remove all related objects for this instance."""
    db = self._check_context()
    sql, params = self._clear_sql()

    conn = await db.connect()
    cursor = await conn.cursor()
    try:
        await db.execute_cursor(cursor, sql, params)
    except Exception:
        await self._rollback_if_needed(db)
        raise
    await db.maybe_commit()
Invalidate related query caches after M2M writes.
add(), remove(), and clear() mutate the junction table but do not clear cached queries for either side of the relationship. With caching enabled, prefetched M2M reads can keep returning stale relationship data after these writes.
Proposed cache invalidation hook
class AsyncManyToManyManager(Generic[T]):
"""Async manager for M2M relationships."""
@@
@property
def sql_metadata(self) -> M2MSQLMetadata:
"""Return read-only SQL metadata."""
return self._sql_metadata
+
+ def _invalidate_related_caches(self, db: AsyncSqliterDB) -> None:
+ """Invalidate cached queries for both sides of the relationship."""
+ for model in (self._from_model, self._to_model):
+ table_name = cast("type[BaseDBModel]", model).get_table_name()
+ db.invalidate_table_cache(table_name)
@@
- await db.maybe_commit()
+ await db.maybe_commit()
+ self._invalidate_related_caches(db)
@@
- await db.maybe_commit()
+ await db.maybe_commit()
+ self._invalidate_related_caches(db)
@@
- await db.maybe_commit()
+ await db.maybe_commit()
+ self._invalidate_related_caches(db)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/asyncio/orm/m2m.py` around lines 240 - 303, add(), remove(), and
clear() perform DB writes but never invalidate cached M2M queries, so add a
cache-invalidation step after successful commits: after await db.maybe_commit()
in add(), call a helper like self._invalidate_m2m_cache(from_pk, [to_pk...])
(for symmetrical relations ensure you pass both sides or the sorted pair), in
remove() call the same helper for from_pk and each to_pk you acted on (skip
missing pks), and in clear() call self._invalidate_m2m_cache(from_pk) (or pass
any collected related pks if available) so prefetched/cached queries for both
sides are cleared; implement the helper (e.g. _invalidate_m2m_cache) to evict
the appropriate cached query keys for this relationship.
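The invalidation the review asks for amounts to dropping cached query results for every table a write touched. A minimal, self-contained sketch (hypothetical `TableCache`, not SQLiter's actual cache implementation):

```python
class TableCache:
    """Minimal per-table query cache with explicit invalidation."""

    def __init__(self) -> None:
        self._store: dict = {}

    def get(self, table: str, key: str):
        return self._store.get(table, {}).get(key)

    def set(self, table: str, key: str, value: object) -> None:
        self._store.setdefault(table, {})[key] = value

    def invalidate_table(self, table: str) -> None:
        # Drop every cached query that touched this table.
        self._store.pop(table, None)


cache = TableCache()
cache.set("articles", "all", ["a1"])
cache.set("tags", "all", ["t1"])

# After an M2M write, invalidate both sides of the relationship,
# since the junction table affects queries against either model.
for table in ("articles", "tags"):
    cache.invalidate_table(table)

assert cache.get("articles", "all") is None
assert cache.get("tags", "all") is None
```

Invalidating both sides matters because a junction-table write changes the relationship as seen from either model, even though neither model's own table was modified.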
async def add(self, *instances: T) -> None:
    """Delegate add."""
    await self._manager.add(*instances)
    self._items = await self._manager.fetch_all()

async def remove(self, *instances: T) -> None:
    """Delegate remove."""
    await self._manager.remove(*instances)
    self._items = await self._manager.fetch_all()

async def clear(self) -> None:
    """Delegate clear."""
    await self._manager.clear()
    self._items = await self._manager.fetch_all()

async def set(self, *instances: T) -> None:
    """Delegate set."""
    await self._manager.set(*instances)
    self._items = await self._manager.fetch_all()
Refresh the stored prefetch cache in place.
These methods reassign self._items, but the instance’s _prefetch_cache still points at the original list. A later instance.relationship access will build a new wrapper from that stale cached list. Update the list in place so both the current wrapper and the descriptor cache stay current.
Proposed in-place refresh
class AsyncPrefetchedM2MResult(Generic[T]):
@@
async def filter(
self,
**kwargs: FilterValue,
) -> AsyncQueryBuilder[Any]:
"""Fall back to a real async query for filtered results."""
return await self._manager.filter(**kwargs)
+
+ async def _refresh_items(self) -> None:
+ """Refresh cached items while preserving the prefetch-cache list."""
+ self._items[:] = await self._manager.fetch_all()
@@
async def add(self, *instances: T) -> None:
"""Delegate add."""
await self._manager.add(*instances)
- self._items = await self._manager.fetch_all()
+ await self._refresh_items()
@@
async def remove(self, *instances: T) -> None:
"""Delegate remove."""
await self._manager.remove(*instances)
- self._items = await self._manager.fetch_all()
+ await self._refresh_items()
@@
async def clear(self) -> None:
"""Delegate clear."""
await self._manager.clear()
- self._items = await self._manager.fetch_all()
+ await self._refresh_items()
@@
async def set(self, *instances: T) -> None:
"""Delegate set."""
await self._manager.set(*instances)
- self._items = await self._manager.fetch_all()
+ await self._refresh_items()

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/asyncio/orm/m2m.py` around lines 410 - 428, The add/remove/clear/set
methods currently reassign self._items after calling the manager, which breaks
the descriptor prefetch cache that still references the old list; change each
method (add, remove, clear, set) to fetch the updated items via await
self._manager.fetch_all() into a local variable and then update the existing
list in place (e.g., replace contents via slice assignment) so the original list
object referenced by the descriptor/_prefetch_cache is mutated; if self._items
is None, initialize it to the fetched list instead.
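The in-place refresh relies on a plain Python property of lists: slice assignment mutates the existing object, so every holder of a reference sees the update, while rebinding the name does not. A quick illustration:

```python
# Two handles to the same list: a "prefetch cache" and a wrapper's view.
prefetch_cache = ["old-item"]
wrapper_items = prefetch_cache

# Rebinding breaks the link: the cache still sees the stale list.
wrapper_items = ["new-item"]
assert prefetch_cache == ["old-item"]

# Slice assignment mutates the shared list, so both handles stay current.
wrapper_items = prefetch_cache
wrapper_items[:] = ["new-item"]
assert prefetch_cache == ["new-item"]
assert wrapper_items is prefetch_cache  # still the same list object
```

This is why `self._items[:] = await self._manager.fetch_all()` keeps the descriptor's `_prefetch_cache` current, while `self._items = ...` silently orphans it.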
@overload
def __get__(
    self,
    instance: None,
    owner: type[object],
) -> SyncManyToMany[T]: ...

@overload
def __get__(
    self,
    instance: object,
    owner: type[object],
) -> Union[
    ManyToManyManager[T],
    PrefetchedM2MResult[T],
]: ...

def __get__(
    self,
    instance: object | None,
    owner: type[object],
) -> Union[
    SyncManyToMany[T],
    ManyToManyManager[T],
    PrefetchedM2MResult[T],
]:
    """Return an async manager or prefetched wrapper."""
    if instance is None:
        return self

    if isinstance(self.to_model, str):
        msg = (
            "ManyToMany target model is unresolved. "
            "Define the target model class before "
            "accessing the relationship."
        )
        raise TypeError(msg)

    manager = AsyncManyToManyManager(
        instance=cast("HasPKAndContext", instance),
        to_model=self.to_model,
        from_model=owner,
        junction_table=self._junction_table or "",
        db_context=getattr(instance, "db_context", None),
        options=ManyToManyOptions(symmetrical=self.m2m_info.symmetrical),
    )
    cache = getattr(instance, "__dict__", {}).get("_prefetch_cache", {})
    if self.name and self.name in cache:
        return cast(
            "PrefetchedM2MResult[T]",
            AsyncPrefetchedM2MResult(cache[self.name], manager),
        )
    return cast("ManyToManyManager[T]", manager)
Return async relationship types from async descriptors.
The runtime returns AsyncManyToManyManager / AsyncPrefetchedM2MResult, but the overloads and casts advertise the sync manager/result types. That makes the new async API misleading to type checkers and IDEs.
Proposed type annotation fix
from sqliter.orm.m2m import (
M2MSQLMetadata,
- ManyToManyManager,
ManyToManyOptions,
- PrefetchedM2MResult,
_build_m2m_sql_metadata,
)
@@
def __get__(
self,
instance: object,
owner: type[object],
) -> Union[
- ManyToManyManager[T],
- PrefetchedM2MResult[T],
+ AsyncManyToManyManager[T],
+ AsyncPrefetchedM2MResult[T],
]: ...
@@
) -> Union[
SyncManyToMany[T],
- ManyToManyManager[T],
- PrefetchedM2MResult[T],
+ AsyncManyToManyManager[T],
+ AsyncPrefetchedM2MResult[T],
]:
@@
return cast(
- "PrefetchedM2MResult[T]",
+ "AsyncPrefetchedM2MResult[T]",
AsyncPrefetchedM2MResult(cache[self.name], manager),
)
- return cast("ManyToManyManager[T]", manager)
+ return manager
@@
def __get__(
self,
instance: object,
owner: type[object],
) -> Union[
- ManyToManyManager[Any],
- PrefetchedM2MResult[Any],
+ AsyncManyToManyManager[Any],
+ AsyncPrefetchedM2MResult[Any],
]: ...
@@
) -> Union[
SyncReverseManyToMany,
- ManyToManyManager[Any],
- PrefetchedM2MResult[Any],
+ AsyncManyToManyManager[Any],
+ AsyncPrefetchedM2MResult[Any],
]:
@@
return cast(
- "PrefetchedM2MResult[Any]",
+ "AsyncPrefetchedM2MResult[Any]",
AsyncPrefetchedM2MResult(cache[self._related_name], manager),
)
- return cast("ManyToManyManager[Any]", manager)
+ return manager

Also applies to: 512-559
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/asyncio/orm/m2m.py` around lines 454 - 506, The descriptor __get__
currently declares and casts return types as
SyncManyToMany/ManyToManyManager/PrefetchedM2MResult but actually constructs
AsyncManyToManyManager and AsyncPrefetchedM2MResult; update the overload
signatures and the concrete return annotation/casts to use the async types
(e.g., AsyncManyToManyManager and AsyncPrefetchedM2MResult) so type checkers and
IDEs see the correct async API; apply the same fixes to the second similar block
noted (lines ~512-559) and ensure the cast strings and overload return unions
reference the Async* variants rather than the sync ones.
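The underlying pattern is standard descriptor typing: the overload for `instance: None` covers class-level access, the overload for `instance: object` covers instance access, and the runtime return must match what the overloads advertise or type checkers will mislead callers. A generic sketch (hypothetical `Descriptor`/`Manager` names, unrelated to SQLiter's classes):

```python
from typing import overload


class Manager:
    """Stand-in for the runtime object returned on instance access."""


class Descriptor:
    @overload
    def __get__(self, instance: None, owner: type) -> "Descriptor": ...

    @overload
    def __get__(self, instance: object, owner: type) -> Manager: ...

    def __get__(self, instance, owner):
        # Class access returns the descriptor itself; instance access must
        # return exactly the type the overloads advertise, otherwise static
        # checkers and IDEs report a different API than the one that exists.
        if instance is None:
            return self
        return Manager()


class Model:
    rel = Descriptor()


assert isinstance(Model.rel, Descriptor)   # class access
assert isinstance(Model().rel, Manager)    # instance access
```

The review's point is exactly this mismatch: the async descriptor constructs `Async*` objects at runtime while the overloads still name the sync types.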
self._execute_sql(create_sql)
for index_sql in index_sqls:
    with suppress(SqlExecutionError):
        self._execute_sql(index_sql)
Do not silently hide junction-index DDL failures.
CREATE INDEX IF NOT EXISTS already handles idempotency. Suppressing every SqlExecutionError can leave M2M junction tables without expected indexes while tests and callers see a successful schema setup.
Proposed safer handling
-from contextlib import suppress
@@
self._execute_sql(create_sql)
for index_sql in index_sqls:
- with suppress(SqlExecutionError):
- self._execute_sql(index_sql)
+ self._execute_sql(index_sql)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/sqliter.py` around lines 899 - 902, The current loop around
self._execute_sql(index_sql) swallows all SqlExecutionError exceptions (using
suppress), which can hide failed junction-index DDL; remove the blanket suppress
and instead catch exceptions per index_sql, log a clear error including the
failing SQL and re-raise (or collect and raise after the loop) so callers/tests
see the failure; specifically update the block that iterates index_sqls (where
create_sql, index_sqls and self._execute_sql are used and SqlExecutionError is
referenced) to stop silently ignoring errors and surface them with contextual
logging and/or an error raised.
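The idempotency point can be verified directly with the standard-library `sqlite3` module: `CREATE INDEX IF NOT EXISTS` is safe to re-run, while a genuinely broken statement still raises, so there is nothing useful left for a blanket suppress to catch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE IF NOT EXISTS link (a INTEGER, b INTEGER)")

# IF NOT EXISTS makes index creation idempotent: running it twice is safe,
# so re-creation does not need error suppression.
for _ in range(2):
    conn.execute("CREATE INDEX IF NOT EXISTS idx_link_a ON link (a)")

# A genuinely broken statement should still surface as an error.
error_seen = ""
try:
    conn.execute("CREATE INDEX idx_bad ON missing_table (a)")
except sqlite3.OperationalError as exc:
    error_seen = str(exc)

assert "no such table" in error_seen
conn.close()
```

With suppression removed, the only errors the loop can raise are real ones (missing table, malformed SQL, locked database), which is exactly what callers and tests should see.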
try:
    from sqliter.asyncio import AsyncSqliterDB
    from sqliter.asyncio.orm import (
        AsyncBaseDBModel,
        AsyncForeignKey,
        AsyncReverseQuery,
    )

    _ASYNC_AVAILABLE = True
except ImportError:
    _ASYNC_AVAILABLE = False
Do not hide unrelated async import failures.
Catching every ImportError makes internal regressions in sqliter.asyncio look like “aiosqlite is not installed”. Only downgrade the known optional-dependency failure; re-raise anything else.
Proposed fix
-except ImportError:
- _ASYNC_AVAILABLE = False
+except ImportError as exc:
+ if getattr(exc, "name", None) != "aiosqlite" and (
+ "aiosqlite is required" not in str(exc)
+ ):
+ raise
+ _ASYNC_AVAILABLE = False

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/tui/demos/async_demos.py` around lines 20 - 30, The broad except
ImportError hides unrelated import errors inside sqliter.asyncio; change the
handler so it only swallows the missing optional dependency (aiosqlite) and
re-raises other ImportError instances: wrap the imports of AsyncSqliterDB,
AsyncBaseDBModel, AsyncForeignKey, and AsyncReverseQuery in a try/except
ImportError as e, and in the except check e.name == "aiosqlite" or "aiosqlite"
in str(e) before setting _ASYNC_AVAILABLE = False; otherwise re-raise the
exception so real internal errors in sqliter.asyncio surface.
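The selective handling can be sketched with `importlib` and the exception's `name` attribute (a hypothetical `optional_import` helper, not the module's actual code): only the named optional dependency is downgraded to "unavailable"; any other import failure propagates.

```python
import importlib


def optional_import(module_name: str, optional: str) -> bool:
    """Import module_name, downgrading only a missing `optional` dependency."""
    try:
        importlib.import_module(module_name)
    except ImportError as exc:
        # ModuleNotFoundError carries the missing module's name; anything
        # other than the known optional dependency is a real regression.
        if getattr(exc, "name", None) != optional:
            raise
        return False
    return True


# The known optional dependency being absent is downgraded to False.
assert optional_import(
    "definitely_not_installed_xyz", "definitely_not_installed_xyz"
) is False

# An unrelated missing module still raises.
try:
    optional_import("definitely_not_installed_xyz", "aiosqlite")
    raised = False
except ImportError:
    raised = True
assert raised
```

This mirrors the proposed fix: a broken internal import inside `sqliter.asyncio` no longer masquerades as "aiosqlite is not installed".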
in_stock = await db.select(Item).filter(qty__gt=4).fetch_all()
output.write(f"qty > 4: {[i.name for i in in_stock]}\n")

first = await db.select(Item).order("qty", "DESC").fetch_first()
🧩 Analysis chain
🏁 Script executed:

#!/bin/bash
# Description: Inspect order() definitions and demo call sites to confirm the
# supported argument style.
rg -n -C3 'def order\s*\(' --type=py
rg -n -C2 '\.order\([^)]*"(ASC|DESC)"|\.order\([^)]*,\s*reverse=' --type=py

Repository: seapagan/sqliter-py
Length of output: 5070

🏁 Script executed:

# View the full order() method implementation
sed -n '2026,2080p' sqliter/query/query.py

Length of output: 1983
Use the modern reverse parameter instead of the deprecated direction argument.
The code passes "DESC" as a positional second argument, which maps to the deprecated direction parameter and triggers a DeprecationWarning. Change .order("qty", "DESC") to .order("qty", reverse=True) to match the modern API used throughout the tests and other demos.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@sqliter/tui/demos/async_demos.py` at line 232, The call to order is using the
deprecated positional direction argument (".order(\"qty\", \"DESC\")"); update
the usage to the modern API by replacing the positional direction with the
reverse keyword argument (use ".order(\"qty\", reverse=True)") where the
db.select(Item).order call occurs so it no longer triggers DeprecationWarning
and matches other demos/tests.
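One common way such a migration is implemented (sketched here with an assumed standalone `order()` signature, not SQLiter's actual method) is to keep the positional argument working, emit a `DeprecationWarning`, and map it onto the keyword:

```python
import warnings
from typing import Optional, Tuple


def order(
    field: str,
    direction: Optional[str] = None,
    *,
    reverse: bool = False,
) -> Tuple[str, bool]:
    """Sketch: accept the legacy positional direction but warn about it."""
    if direction is not None:
        warnings.warn(
            'Passing "ASC"/"DESC" is deprecated; use reverse=True instead.',
            DeprecationWarning,
            stacklevel=2,
        )
        reverse = direction.upper() == "DESC"
    return (field, reverse)


# Modern call: no warning.
assert order("qty", reverse=True) == ("qty", True)

# Legacy call: same result, but emits a DeprecationWarning.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    assert order("qty", "DESC") == ("qty", True)
assert any(issubclass(w.category, DeprecationWarning) for w in caught)
```

The demo code at issue takes the legacy path, which is why updating it to `reverse=True` silences the warning without changing the result.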
def test_prefetched_wrapper_refreshes_after_writes(
    self, db: SqliterDB
) -> None:
    """Prefetched wrapper reflects delegated write operations."""
    article = db.insert(Article(title="Guide"))
    tag1 = db.insert(Tag(name="python"))
    tag2 = db.insert(Tag(name="tutorial"))

    manager = cast("ManyToManyManager[Tag]", article.tags)
    prefetched = PrefetchedM2MResult([tag1], manager)

    assert prefetched.count() == 1
    prefetched.add(tag1)
    assert prefetched.count() == 1
    prefetched.set(tag2)
    fetched = prefetched.fetch_one()
    assert fetched is not None
    assert fetched.name == "tutorial"
    prefetched.clear()
    assert prefetched.fetch_all() == []
Make the add() assertion detect stale prefetched cache.
Line 668 would pass even if PrefetchedM2MResult.add() did not refresh _items, because the wrapper starts with [tag1] and still has count 1 without any refresh. Seed the initial relationship, then add a different tag so the assertion fails on stale cache. This specifically covers the refresh implemented in sqliter/orm/m2m.py:584-590.
🧪 Proposed test adjustment
def test_prefetched_wrapper_refreshes_after_writes(
self, db: SqliterDB
) -> None:
"""Prefetched wrapper reflects delegated write operations."""
article = db.insert(Article(title="Guide"))
tag1 = db.insert(Tag(name="python"))
tag2 = db.insert(Tag(name="tutorial"))
manager = cast("ManyToManyManager[Tag]", article.tags)
- prefetched = PrefetchedM2MResult([tag1], manager)
+ manager.add(tag1)
+ prefetched = PrefetchedM2MResult(manager.fetch_all(), manager)
assert prefetched.count() == 1
- prefetched.add(tag1)
- assert prefetched.count() == 1
+ prefetched.add(tag2)
+ assert prefetched.count() == 2
+ assert {tag.name for tag in prefetched.fetch_all()} == {
+ "python",
+ "tutorial",
+ }
prefetched.set(tag2)
fetched = prefetched.fetch_one()
assert fetched is not None
assert fetched.name == "tutorial"
prefetched.clear()

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
def test_prefetched_wrapper_refreshes_after_writes(
    self, db: SqliterDB
) -> None:
    """Prefetched wrapper reflects delegated write operations."""
    article = db.insert(Article(title="Guide"))
    tag1 = db.insert(Tag(name="python"))
    tag2 = db.insert(Tag(name="tutorial"))

    manager = cast("ManyToManyManager[Tag]", article.tags)
    manager.add(tag1)
    prefetched = PrefetchedM2MResult(manager.fetch_all(), manager)

    assert prefetched.count() == 1
    prefetched.add(tag2)
    assert prefetched.count() == 2
    assert {tag.name for tag in prefetched.fetch_all()} == {
        "python",
        "tutorial",
    }
    prefetched.set(tag2)
    fetched = prefetched.fetch_one()
    assert fetched is not None
    assert fetched.name == "tutorial"
    prefetched.clear()
    assert prefetched.fetch_all() == []
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/test_m2m.py` around lines 655 - 674, Update the
test_prefetched_wrapper_refreshes_after_writes to detect a stale cached _items
by adding a different tag than the one already seeded into the
PrefetchedM2MResult: seed the initial relationship with tag1 (as you do),
construct PrefetchedM2MResult([tag1], manager), then call prefetched.add(tag2)
and assert prefetched.count() == 2 (instead of adding tag1 and asserting 1).
This ensures PrefetchedM2MResult.add() refreshes the cached _items (see
PrefetchedM2MResult.add() / _items and the refresh logic in sqliter/orm/m2m.py
around the 584-590 range).
Summary
This branch adds first-class async support to SQLiter under sqliter.asyncio and sqliter.asyncio.orm. It includes:
- AsyncSqliterDB and AsyncQueryBuilder
- AsyncBaseDBModel, AsyncForeignKey, async reverse relationships, and async many-to-many managers
- bulk_insert
- transaction handling

API notes
Async ORM intentionally differs from sync ORM for lazy FK loading: instead of the sync-style attribute access book.author.name, the related object is fetched explicitly with author = await book.author.fetch(). Reverse relationships and many-to-many accessors keep the same object-oriented shape, but terminal and write operations are awaited.
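The shape of the async lazy-FK API can be illustrated with a toy stand-in (hypothetical `AsyncRelation`/`Book`/`Author` classes, not the real SQLiter API): attribute access yields a loader object, and the related instance comes from an explicit awaited `fetch()`.

```python
import asyncio


class AsyncRelation:
    """Toy stand-in for an async lazy FK loader."""

    def __init__(self, value):
        self._value = value

    async def fetch(self):
        await asyncio.sleep(0)  # simulate the awaited database round trip
        return self._value


class Author:
    def __init__(self, name):
        self.name = name


class Book:
    def __init__(self, author):
        self.author = AsyncRelation(author)


async def main():
    book = Book(Author("Ada"))
    # Sync-style `book.author.name` is not possible here; fetch explicitly:
    author = await book.author.fetch()
    return author.name


assert asyncio.run(main()) == "Ada"
```

The design choice is deliberate: an `await` cannot be hidden behind plain attribute access, so the async API surfaces the database round trip as an explicit `fetch()` call.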
Follow-up notes
A small docs note remains about current mypy limitations around dynamic async relationship typing. The runtime API is implemented and covered; the note is there to set correct expectations for static typing ergonomics.
Summary by CodeRabbit
- New Features
- Documentation
- Tests
- Documentation (Breaking change)