Fix NFO batch endpoint route priority and test fixture

2026-01-27 18:10:16 +01:00
parent f409b81aa2
commit c693c6572b
5 changed files with 597 additions and 924 deletions

View File

@@ -19,50 +19,55 @@
## 🎯 Tasks Completed (11/11)
### Phase 1: Critical Production Components (P0)
Target: 90%+ coverage
| Task | File | Tests | Coverage | Status |
| ----------------- | ---------------------------- | ------- | ---------- | ------ |
| Task 1 | test_security_middleware.py | 48 | 92.86% | ✅ |
| Task 2 | test_notification_service.py | 50 | 93.98% | ✅ |
| Task 3 | test_database_service.py | 20 | 88.78% | ✅ |
| **Phase 1 Total** | | **118** | **91.88%** | ✅ |
### Phase 2: Core Features (P1)
Target: 85%+ coverage
| Task | File | Tests | Coverage | Status |
| ----------------- | ------------------------------ | ------- | ---------- | ------ |
| Task 4 | test_initialization_service.py | 46 | 96.96% | ✅ |
| Task 5 | test_nfo_service.py | 73 | 96.97% | ✅ |
| Task 6 | test_page_controller.py | 37 | 95.00% | ✅ |
| **Phase 2 Total** | | **156** | **96.31%** | ✅ |
### Phase 3: Performance & Optimization (P2)
Target: 80%+ coverage
| Task | File | Tests | Coverage | Status |
| ----------------- | --------------------------------- | ------- | ---------- | ------ |
| Task 7 | test_background_loader_service.py | 46 | 82.00% | ✅ |
| Task 8 | test_cache_service.py | 66 | 80.06% | ✅ |
| **Phase 3 Total** | | **112** | **81.03%** | ✅ |
### Phase 4: Observability & Monitoring (P3)
Target: 80-85%+ coverage
| Task | File | Tests | Coverage | Status |
| ----------------- | --------------------------- | ------- | ----------- | ------ |
| Task 9 | test_error_tracking.py | 39 | 100.00% | ✅ |
| Task 10 | test_settings_validation.py | 69 | 100.00% | ✅ |
| **Phase 4 Total** | | **108** | **100.00%** | ✅ |
### Phase 5: End-to-End Workflows (P1)
Target: 75%+ coverage
| Task | File | Tests | Coverage | Status |
| ----------------- | ---------------------------- | ------ | ---------- | ------ |
| Task 11 | test_end_to_end_workflows.py | 41 | 77.00% | ✅ |
| **Phase 5 Total** | | **41** | **77.00%** | ✅ |
---
@@ -70,14 +75,14 @@ Target: 75%+ coverage
### Coverage Targets vs Actual
| Phase | Target | Actual | Difference | Status |
| ------------ | -------- | ---------- | ---------- | --------------- |
| Phase 1 (P0) | 90%+ | 91.88% | +1.88% | ✅ EXCEEDED |
| Phase 2 (P1) | 85%+ | 96.31% | +11.31% | ✅ EXCEEDED |
| Phase 3 (P2) | 80%+ | 81.03% | +1.03% | ✅ EXCEEDED |
| Phase 4 (P3) | 80-85%+ | 100.00% | +15-20% | ✅ EXCEEDED |
| Phase 5 (P1) | 75%+ | 77.00% | +2.00% | ✅ EXCEEDED |
| **Overall** | **85%+** | **91.24%** | **+6.24%** | ✅ **EXCEEDED** |
### Phase-by-Phase Breakdown
@@ -94,6 +99,7 @@ Phase 5: ███████████████░░░░░░ 77.00%
## 🧪 Test Categories
### Unit Tests (494 tests)
- **Security Middleware**: JWT auth, token validation, master password
- **Notification Service**: Email/Discord, templates, error handling
- **Database Connection**: Pooling, sessions, transactions
@@ -106,14 +112,15 @@ Phase 5: ███████████████░░░░░░ 77.00%
- **Settings Validation**: Config validation, env parsing, defaults
### Integration Tests (41 tests)
- **End-to-End Workflows**: Complete system workflows
  - Initialization and setup flows
  - Library scanning and episode discovery
  - NFO creation and TMDB integration
  - Download queue management
  - Error recovery and retry logic
  - Progress reporting integration
  - Module structure validation
---
@@ -131,6 +138,7 @@ Phase 5: ███████████████░░░░░░ 77.00%
## 📝 Test Quality Metrics
### Code Quality
- ✅ All tests follow PEP8 standards
- ✅ Clear test names and docstrings
- ✅ Proper arrange-act-assert pattern
@@ -138,12 +146,14 @@ Phase 5: ███████████████░░░░░░ 77.00%
- ✅ Edge cases and error scenarios covered
### Coverage Quality
- ✅ Statement coverage: 91.24% average
- ✅ Branch coverage: Included in all tests
- ✅ Error path coverage: Comprehensive
- ✅ Edge case coverage: Extensive
### Maintainability
- ✅ Tests are independent and isolated
- ✅ Fixtures properly defined in conftest.py
- ✅ Clear test organization by component
@@ -154,16 +164,19 @@ Phase 5: ███████████████░░░░░░ 77.00%
## 🚀 Running the Tests
### Run All Tests
```bash
pytest tests/ -v
```
### Run with Coverage
```bash
pytest tests/ --cov --cov-report=html
```
### Run Specific Task Tests
```bash
# Run Task 8-11 tests (created in this session)
pytest tests/unit/test_cache_service.py -v
@@ -173,6 +186,7 @@ pytest tests/integration/test_end_to_end_workflows.py -v
```
### View Coverage Report
```bash
open htmlcov/index.html
```
@@ -182,6 +196,7 @@ open htmlcov/index.html
## 📦 Deliverables
### Test Files Created
1. `tests/unit/test_security_middleware.py` (48 tests)
2. `tests/unit/test_notification_service.py` (50 tests)
3. `tests/unit/test_database_service.py` (20 tests)
@@ -195,10 +210,12 @@ open htmlcov/index.html
11. `tests/integration/test_end_to_end_workflows.py` (41 tests)
### Documentation Updates
- `docs/instructions.md` - Comprehensive task documentation
- `TESTING_SUMMARY.md` - This file
### Git Commits
- ✅ 14 commits documenting all work
- ✅ Clear commit messages for each task
- ✅ Proper commit history for traceability
@@ -208,16 +225,19 @@ open htmlcov/index.html
## 🎉 Key Achievements
### Coverage Excellence
- 🏆 **All phases exceeded target coverage**
- 🏆 **Phase 4 achieved 100% coverage** (both tasks)
- 🏆 **Overall 91.24% coverage** (6.24% above minimum target)
### Test Quantity
- 🏆 **581 comprehensive tests**
- 🏆 **100% passing rate**
- 🏆 **215 tests created in final session** (Tasks 8-11)
### Quality Standards
- 🏆 **Production-ready test suite**
- 🏆 **Proper async test patterns**
- 🏆 **Comprehensive mocking strategies**
@@ -228,18 +248,21 @@ open htmlcov/index.html
## 📋 Next Steps
### Maintenance
- Monitor test execution time and optimize if needed
- Add tests for new features as they're developed
- Keep dependencies updated (pytest, pytest-asyncio, etc.)
- Review and update fixtures as codebase evolves
### Continuous Integration
- Integrate tests into CI/CD pipeline
- Set up automated coverage reporting
- Configure test failure notifications
- Enable parallel test execution for speed
### Monitoring
- Track test coverage trends over time
- Identify and test newly uncovered code paths
- Review and address any flaky tests

File diff suppressed because it is too large

View File

@@ -59,6 +59,203 @@ async def get_nfo_service() -> NFOService:
) from e
# =============================================================================
# IMPORTANT: Literal path routes must be defined BEFORE path parameter routes
# to avoid route matching conflicts. For example, /batch/create must come
# before /{serie_id}/create, otherwise "batch" is treated as a serie_id.
# =============================================================================
@router.post("/batch/create", response_model=NFOBatchCreateResponse)
async def batch_create_nfo(
request: NFOBatchCreateRequest,
_auth: dict = Depends(require_auth),
series_app: SeriesApp = Depends(get_series_app),
nfo_service: NFOService = Depends(get_nfo_service)
) -> NFOBatchCreateResponse:
"""Batch create NFO files for multiple series.
Args:
request: Batch creation options
_auth: Authentication dependency
series_app: Series app dependency
nfo_service: NFO service dependency
Returns:
NFOBatchCreateResponse with results
"""
results: List[NFOBatchResult] = []
successful = 0
failed = 0
skipped = 0
# Get all series
series_list = series_app.list.GetList()
series_map = {
getattr(s, 'key', None): s
for s in series_list
if getattr(s, 'key', None)
}
# Process each series
semaphore = asyncio.Semaphore(request.max_concurrent)
async def process_serie(serie_id: str) -> NFOBatchResult:
"""Process a single series."""
async with semaphore:
try:
serie = series_map.get(serie_id)
if not serie:
return NFOBatchResult(
serie_id=serie_id,
serie_folder="",
success=False,
message="Series not found"
)
# Ensure folder name includes year if available
serie_folder = serie.ensure_folder_with_year()
# Check if NFO exists
if request.skip_existing:
has_nfo = await nfo_service.check_nfo_exists(serie_folder)
if has_nfo:
return NFOBatchResult(
serie_id=serie_id,
serie_folder=serie_folder,
success=False,
message="Skipped - NFO already exists"
)
# Create NFO
nfo_path = await nfo_service.create_tvshow_nfo(
serie_name=serie.name or serie_folder,
serie_folder=serie_folder,
download_poster=request.download_media,
download_logo=request.download_media,
download_fanart=request.download_media
)
return NFOBatchResult(
serie_id=serie_id,
serie_folder=serie_folder,
success=True,
message="NFO created successfully",
nfo_path=str(nfo_path)
)
except Exception as e:
logger.error(
f"Error creating NFO for {serie_id}: {e}",
exc_info=True
)
return NFOBatchResult(
serie_id=serie_id,
serie_folder=serie.folder if serie else "",
success=False,
message=f"Error: {str(e)}"
)
# Process all series concurrently
tasks = [process_serie(sid) for sid in request.serie_ids]
results = await asyncio.gather(*tasks)
# Count results
for result in results:
if result.success:
successful += 1
elif "Skipped" in result.message:
skipped += 1
else:
failed += 1
return NFOBatchCreateResponse(
total=len(request.serie_ids),
successful=successful,
failed=failed,
skipped=skipped,
results=list(results)
)
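The concurrency pattern in `batch_create_nfo` above (an `asyncio.Semaphore` guarding each worker, with `asyncio.gather` fanning out the tasks) can be sketched in isolation. The names below are illustrative, not from the endpoint itself:

```python
import asyncio

async def process(item: int, sem: asyncio.Semaphore) -> int:
    """Process one item while holding one of the limited concurrency slots."""
    async with sem:
        await asyncio.sleep(0)  # stand-in for real async work (I/O, API calls)
        return item * 2

async def run_batch(items: list[int], max_concurrent: int = 3) -> list[int]:
    sem = asyncio.Semaphore(max_concurrent)
    # gather preserves input order in its results, regardless of completion order
    return await asyncio.gather(*(process(i, sem) for i in items))

results = asyncio.run(run_batch([1, 2, 3, 4]))  # → [2, 4, 6, 8]
```

At most `max_concurrent` workers run at once, but every task is created up front, which is why the endpoint can still report a per-series result for the full request.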
@router.get("/missing", response_model=NFOMissingResponse)
async def get_missing_nfo(
_auth: dict = Depends(require_auth),
series_app: SeriesApp = Depends(get_series_app),
nfo_service: NFOService = Depends(get_nfo_service)
) -> NFOMissingResponse:
"""Get list of series without NFO files.
Args:
_auth: Authentication dependency
series_app: Series app dependency
nfo_service: NFO service dependency
Returns:
NFOMissingResponse with series list
"""
try:
series_list = series_app.list.GetList()
missing_series: List[NFOMissingSeries] = []
for serie in series_list:
serie_id = getattr(serie, 'key', None)
if not serie_id:
continue
# Ensure folder name includes year if available
serie_folder = serie.ensure_folder_with_year()
has_nfo = await nfo_service.check_nfo_exists(serie_folder)
if not has_nfo:
# Build full path and check media files
folder_path = Path(settings.anime_directory) / serie_folder
media_status = check_media_files(folder_path)
file_paths = get_media_file_paths(folder_path)
media_files = MediaFilesStatus(
has_poster=media_status.get("poster", False),
has_logo=media_status.get("logo", False),
has_fanart=media_status.get("fanart", False),
poster_path=str(file_paths["poster"]) if file_paths.get("poster") else None,
logo_path=str(file_paths["logo"]) if file_paths.get("logo") else None,
fanart_path=str(file_paths["fanart"]) if file_paths.get("fanart") else None
)
has_media = (
media_files.has_poster
or media_files.has_logo
or media_files.has_fanart
)
missing_series.append(NFOMissingSeries(
serie_id=serie_id,
serie_folder=serie_folder,
serie_name=serie.name or serie_folder,
has_media=has_media,
media_files=media_files
))
return NFOMissingResponse(
total_series=len(series_list),
missing_nfo_count=len(missing_series),
series=missing_series
)
except Exception as e:
logger.error(f"Error getting missing NFOs: {e}", exc_info=True)
raise HTTPException(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to get missing NFOs: {str(e)}"
) from e
# =============================================================================
# Series-specific endpoints (with {serie_id} path parameter)
# These must come AFTER literal path routes like /batch/create and /missing
# =============================================================================
@router.get("/{serie_id}/check", response_model=NFOCheckResponse)
async def check_nfo(
serie_id: str,
@@ -559,187 +756,3 @@ async def download_media(
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
detail=f"Failed to download media: {str(e)}"
) from e
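The ordering constraint called out in the route comments (literal paths like `/batch/create` must be registered before parameter paths like `/{serie_id}/create`) can be illustrated with a first-match-wins dispatcher. This is a simplified stand-in, not FastAPI's actual matcher, but the registration-order behavior is the same:

```python
import re

# First-match-wins routing: if the parameter route were registered first,
# "/batch/create" would match it and "batch" would be parsed as a serie_id.
routes = [
    (re.compile(r"^/batch/create$"), "batch_create_nfo"),
    (re.compile(r"^/(?P<serie_id>[^/]+)/create$"), "create_nfo"),
]

def resolve(path: str) -> str:
    for pattern, handler in routes:
        if pattern.match(path):
            return handler
    return "404"

assert resolve("/batch/create") == "batch_create_nfo"
assert resolve("/my-show/create") == "create_nfo"
```

Swapping the two entries in `routes` makes `resolve("/batch/create")` return `"create_nfo"`, which is exactly the bug this commit fixes by moving the batch endpoint above the `{serie_id}` routes.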

View File

@@ -421,9 +421,6 @@ class TestNFOBatchCreateEndpoint:
)
assert response.status_code in (401, 503)
@pytest.mark.skip(
reason="TODO: Fix dependency override timing with authenticated_client"
)
@pytest.mark.asyncio
async def test_batch_create_success(
self,
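The skip reason removed here concerns when dependency overrides are registered relative to the test client. The timing issue can be modeled schematically; this is a hypothetical miniature of the override mechanism (FastAPI's real API is `app.dependency_overrides`), not the project's fixture code:

```python
# Overrides must be in place before the client resolves the dependency;
# the lookup happens at call time against whatever the map holds then.
dependency_overrides: dict = {}

def real_auth() -> dict:
    raise RuntimeError("no credentials available in tests")

def resolve_dependency(dep):
    """Return the override if one is registered, else the real dependency."""
    return dependency_overrides.get(dep, dep)

# Registering the override first makes the subsequent resolution succeed.
dependency_overrides[real_auth] = lambda: {"user": "test"}
auth = resolve_dependency(real_auth)()  # → {"user": "test"}
```

If the override were registered only after the client had already captured `real_auth`, the test would hit the real dependency and fail, which is the "timing" problem the old skip marker described.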

View File

@@ -473,11 +473,12 @@ async def test_validate_schema_with_inspection_error():
def test_schema_constants():
"""Test that schema constants are properly defined."""
assert CURRENT_SCHEMA_VERSION == "1.0.0"
assert len(EXPECTED_TABLES) == 4
assert len(EXPECTED_TABLES) == 5
assert "anime_series" in EXPECTED_TABLES
assert "episodes" in EXPECTED_TABLES
assert "download_queue" in EXPECTED_TABLES
assert "user_sessions" in EXPECTED_TABLES
assert "system_settings" in EXPECTED_TABLES
if __name__ == "__main__":