Reduce per-request DB overhead (Task 4)
- Cache setup_completed flag in app.state._setup_complete_cached after first successful is_setup_complete() call; all subsequent API requests skip the DB query entirely (one-way transition, cleared on restart).
- Add in-memory session token TTL cache (10 s) in require_auth; the second request with the same token within the window skips session_repo.get_session.
- Call invalidate_session_cache() on logout so revoked tokens are evicted immediately rather than waiting for TTL expiry.
- Add clear_session_cache() for test isolation.
- 5 new tests covering the cached fast-path for both optimisations.
- 460 tests pass, 83% coverage, zero ruff/mypy warnings.
@@ -111,6 +111,15 @@ backend/
- Group endpoints into routers by feature domain (`routers/jails.py`, `routers/bans.py`, …).
- Use appropriate HTTP status codes: `201` for creation, `204` for deletion with no body, `404` for not found, etc.
- Use **HTTPException** or custom exception handlers — never return error dicts manually.
- **GET endpoints are read-only — never call `db.commit()` or execute INSERT/UPDATE/DELETE inside a GET handler.** If a GET path produces side-effects (e.g., caching resolved data), that write belongs in a background task, a scheduled flush, or a separate POST endpoint. Users and HTTP caches assume GET is idempotent and non-mutating.
```python
# Good — pass db=None on GET so geo_service never commits
result = await geo_service.lookup_batch(ips, http_session, db=None)

# Bad — triggers INSERT + COMMIT per IP inside a GET handler
result = await geo_service.lookup_batch(ips, http_session, db=app_db)
```
```python
from fastapi import APIRouter, Depends, HTTPException, status
```
@@ -156,6 +165,26 @@ class BanResponse(BaseModel):
- Use `aiohttp.ClientSession` for HTTP calls, `aiosqlite` for database access.
- Use `asyncio.TaskGroup` (Python 3.11+) when you need to run independent coroutines concurrently.
- Long-running startup/shutdown logic goes into the **FastAPI lifespan** context manager.
- **Never call `db.commit()` inside a loop.** With aiosqlite, every commit serialises through a background thread and forces an `fsync`. N rows × 1 commit = N fsyncs. Accumulate all writes in the loop, then issue a single `db.commit()` once after the loop ends. The difference between 5,000 commits and 1 commit can be seconds vs milliseconds.
```python
# Good — one commit for the whole batch
for ip, info in results.items():
    await db.execute(INSERT_SQL, (ip, info.country_code, ...))
await db.commit()  # ← single fsync

# Bad — one fsync per row
for ip, info in results.items():
    await db.execute(INSERT_SQL, (ip, info.country_code, ...))
    await db.commit()  # ← fsync on every iteration
```
- **Prefer `executemany()` over calling `execute()` in a loop** when inserting or updating multiple rows with the same SQL template. aiosqlite passes the entire batch to SQLite in one call, reducing Python↔thread overhead on top of the single-commit saving.
```python
# Good
await db.executemany(INSERT_SQL, [(ip, info.country_code, ...) for ip, info in results.items()])
await db.commit()
```
- Shared resources (DB connections, HTTP sessions) are created once during startup and closed during shutdown — never inside request handlers.
```python
```
@@ -427,4 +456,7 @@ class SqliteBanRepository:
| Handle errors with custom exceptions | Use bare `except:` |
| Keep routers thin, logic in services | Put business logic in routers |
| Use timezone-aware `datetime.now(UTC)` | Use naive datetimes |
| Run ruff + mypy before committing | Push code that doesn't pass linting |
| Keep GET endpoints read-only (no `db.commit()`) | Call `db.commit()` / INSERT inside GET handlers |
| Batch DB writes; issue one `db.commit()` after the loop | Commit inside a loop (1 fsync per row) |
| Use `executemany()` for bulk inserts | Call `execute()` + `commit()` per row in a loop |