# Aniworld Web Application Development Instructions

This document provides detailed tasks for AI agents to implement a modern web application for the Aniworld anime download manager. All tasks should follow the coding guidelines specified in the project's copilot instructions.

## Project Overview

The goal is to create a FastAPI-based web application that provides a modern interface for the existing Aniworld anime download functionality. The core anime logic should remain in `SeriesApp.py` while the web layer provides REST API endpoints and a responsive UI.

## Architecture Principles

- **Single Responsibility**: Each file/class has one clear purpose
- **Dependency Injection**: Use FastAPI's dependency system
- **Clean Separation**: Web layer calls core logic, never the reverse
- **File Size Limit**: Maximum 500 lines per file
- **Type Hints**: Use comprehensive type annotations
- **Error Handling**: Proper exception handling and logging
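The separation rule can be sketched in plain Python. Names like `SeriesService` are illustrative, not the project's actual API: the web layer receives the core service through a provider callable, mirroring FastAPI's `Depends()` pattern, while the core stays HTTP-agnostic.

```python
# Minimal sketch of the layering rule: the web layer depends on the core
# service through an injected provider; the core never imports web code.
# SeriesService and make_list_endpoint are illustrative names only.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SeriesService:
    """Core logic: knows nothing about HTTP."""
    _series: list[str] = field(default_factory=list)

    def add(self, name: str) -> None:
        self._series.append(name)

    def list_all(self) -> list[str]:
        return list(self._series)

def make_list_endpoint(get_service: Callable[[], SeriesService]):
    """Web layer: receives the core service via a provider callable,
    mirroring FastAPI's Depends() pattern."""
    def endpoint() -> dict:
        return {"series": get_service().list_all()}
    return endpoint

service = SeriesService()
service.add("Example Show")
endpoint = make_list_endpoint(lambda: service)
print(endpoint())  # {'series': ['Example Show']}
```

Because the provider is injected, tests can swap in a fake service without touching the web layer.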
## Additional Implementation Guidelines

### Code Style and Standards

- **Type Hints**: Use comprehensive type annotations throughout all modules
- **Docstrings**: Follow PEP 257 for function and class documentation
- **Error Handling**: Implement custom exception classes with meaningful messages
- **Logging**: Use structured logging with appropriate log levels
- **Security**: Validate all inputs and sanitize outputs
- **Performance**: Use async/await patterns for I/O operations
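A minimal sketch of the error-handling and logging guidelines above; the class and logger names are illustrative, not the project's real hierarchy.

```python
# Sketch: custom exceptions with meaningful messages plus structured
# logging context. AniworldError/DownloadError are illustrative names.
import logging

logger = logging.getLogger("aniworld.web")

class AniworldError(Exception):
    """Base class for application errors."""

class DownloadError(AniworldError):
    def __init__(self, series: str, episode: str, reason: str) -> None:
        self.series, self.episode, self.reason = series, episode, reason
        super().__init__(f"Download failed for {series} {episode}: {reason}")

try:
    raise DownloadError("Example Show", "S01E01", "connection reset")
except DownloadError as exc:
    # Structured context travels in `extra`, not in the message string.
    logger.error("download failed", extra={"series": exc.series,
                                           "episode": exc.episode,
                                           "reason": exc.reason})
    message = str(exc)

print(message)
```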
## 📞 Escalation

If you encounter:

- Architecture issues requiring design decisions
- Tests that conflict with documented requirements
- Breaking changes needed
- Unclear requirements or expectations

**Document the issue and escalate rather than guessing.**

---

## 🔐 Credentials

**Admin Login:**

- Username: `admin`
- Password: `Hallo123!`

---
## 📚 Helpful Commands

```bash
# Run all tests
conda run -n AniWorld python -m pytest tests/ -v --tb=short

# Run specific test file
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py -v

# Run specific test class
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py::TestWebSocketService -v

# Run specific test
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py::TestWebSocketService::test_broadcast_download_progress -v

# Run with extra verbosity
conda run -n AniWorld python -m pytest tests/ -vv

# Run with full traceback
conda run -n AniWorld python -m pytest tests/ -v --tb=long

# Run and stop at first failure
conda run -n AniWorld python -m pytest tests/ -v -x

# Run tests matching pattern
conda run -n AniWorld python -m pytest tests/ -v -k "auth"

# Show all print statements
conda run -n AniWorld python -m pytest tests/ -v -s

# Run app
conda run -n AniWorld python -m uvicorn src.server.fastapi_app:app --host 127.0.0.1 --port 8000 --reload
```

---
## Implementation Notes

1. **Incremental Development**: Implement features incrementally, testing each component thoroughly before moving to the next
2. **Code Review**: Review all generated code for adherence to project standards
3. **Documentation**: Document all public APIs and complex logic
4. **Testing**: Maintain test coverage above 80% for all new code
5. **Performance**: Profile and optimize critical paths, especially download and streaming operations
6. **Security**: Regular security audits and dependency updates
7. **Monitoring**: Implement comprehensive monitoring and alerting
8. **Maintenance**: Plan for regular maintenance and updates

---
## Task Completion Checklist

For each task completed:

- [ ] Implementation follows coding standards
- [ ] Unit tests written and passing
- [ ] Integration tests passing
- [ ] Documentation updated
- [ ] Error handling implemented
- [ ] Logging added
- [ ] Security considerations addressed
- [ ] Performance validated
- [ ] Code reviewed
- [ ] Task marked as complete in instructions.md
- [ ] Infrastructure.md and other affected docs updated
- [ ] Changes committed to git with short, clear commit messages
- [ ] Take the next task

---
## TODO List

### 🔴 TIER 1: Critical Priority (Security & Data Integrity)

#### Test Infrastructure Fixes

- [ ] **Fix authenticated_client + mock_download_service fixture conflict** in tests/conftest.py
  - Refactor fixture dependency chain to prevent conflicts
  - Enable 34 currently failing tests in tests/api/test_download_endpoints.py
  - Verify all downstream tests pass after fix
  - Target: 100% of previously failing download endpoint tests passing
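One plausible shape of the conflict, sketched with stand-in objects rather than the project's real fixtures: two fixtures that each install overrides on the shared app can clobber each other, and a single composed fixture that updates one shared dict avoids the race.

```python
# Illustrative sketch (not the project's real fixtures) of a fixture
# conflict: one fixture REPLACES the shared overrides dict and thereby
# wipes what another fixture installed. The fix composes both into one
# fixture that updates, never replaces, the dict.
class App:
    def __init__(self) -> None:
        self.dependency_overrides = {}

app = App()

def fixture_auth():
    app.dependency_overrides["get_current_user"] = "test-user"

def fixture_mock_download():
    # Bug: wipes every override installed by other fixtures.
    app.dependency_overrides = {"download_service": "mock"}

fixture_auth()
fixture_mock_download()
broken = "get_current_user" in app.dependency_overrides

def fixture_all():
    # Fix: one composed fixture updates the shared dict in place.
    app.dependency_overrides.update(
        get_current_user="test-user",
        download_service="mock",
    )

app.dependency_overrides = {}
fixture_all()
fixed = "get_current_user" in app.dependency_overrides

print(broken, fixed)  # False True
```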
- [ ] **Fix authenticated_client auth issues** in tests/api/test_config_endpoints.py
  - Resolve dependency override timing issues
  - Enable 18 currently failing configuration endpoint tests
  - Verify authentication state properly propagates
  - Target: 100% of config endpoint tests passing

- [ ] **Fix rate limiting state bleeding** in tests/security/test_auth.py
  - Implement proper rate limit reset in fixtures
  - Fix 1 failing test with trio backend
  - Ensure rate limiting state isolated between tests
  - Target: All auth security tests passing
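A sketch of the isolation the fixture needs, using a stand-in `RateLimiter` rather than the project's real limiter: reset the counters between tests so one test's lockout cannot bleed into the next.

```python
# Stand-in rate limiter: the point is the reset() call a fixture must
# make between tests, not the limiter implementation itself.
from collections import defaultdict
import time

class RateLimiter:
    def __init__(self, limit: int, window: float) -> None:
        self.limit, self.window = limit, window
        self.hits = defaultdict(list)

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        self.hits[key] = [t for t in self.hits[key] if now - t < self.window]
        if len(self.hits[key]) >= self.limit:
            return False
        self.hits[key].append(now)
        return True

    def reset(self) -> None:
        self.hits.clear()

limiter = RateLimiter(limit=3, window=60.0)

# "Test 1" exhausts the limit for this client key.
results_1 = [limiter.allow("1.2.3.4") for _ in range(4)]

limiter.reset()  # what the fixture must do between tests

# "Test 2" starts from a clean slate instead of inheriting the lockout.
first_after_reset = limiter.allow("1.2.3.4")
print(results_1, first_after_reset)
```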
#### Scheduler System Tests (0% Coverage)

- [ ] **Create tests/api/test_scheduler_endpoints.py** - Scheduler API endpoint tests
  - Test GET /api/scheduler/config (retrieve current configuration)
  - Test POST /api/scheduler/config (update scheduler settings)
  - Test POST /api/scheduler/trigger-rescan (manual trigger)
  - Test scheduler enable/disable functionality
  - Test interval configuration validation (minimum/maximum values)
  - Test unauthorized access rejection (authentication required)
  - Test invalid configuration rejection (validation errors)
  - Target: 80%+ coverage of src/server/api/scheduler_api.py
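The interval validation test could target logic like the following sketch; the bounds and field names are assumptions, not the project's real schema.

```python
# Sketch of min/max interval validation for the scheduler config.
# MIN/MAX values and field names are assumed, not the actual schema.
from dataclasses import dataclass

MIN_INTERVAL_MINUTES = 5
MAX_INTERVAL_MINUTES = 24 * 60

@dataclass
class SchedulerConfig:
    enabled: bool
    interval_minutes: int

    def __post_init__(self) -> None:
        if not (MIN_INTERVAL_MINUTES <= self.interval_minutes <= MAX_INTERVAL_MINUTES):
            raise ValueError(
                f"interval_minutes must be between {MIN_INTERVAL_MINUTES} "
                f"and {MAX_INTERVAL_MINUTES}, got {self.interval_minutes}"
            )

ok = SchedulerConfig(enabled=True, interval_minutes=60)

try:
    SchedulerConfig(enabled=True, interval_minutes=1)
    rejected = False
except ValueError:
    rejected = True

print(ok.interval_minutes, rejected)
```

An endpoint test would then POST an out-of-range interval and expect a validation error response.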
- [ ] **Create tests/unit/test_scheduler_service.py** - Scheduler service logic tests
  - Test scheduled library rescan execution
  - Test scheduler state persistence across restarts
  - Test background task execution and lifecycle
  - Test scheduler conflict resolution (manual vs automated scans)
  - Test error handling during scheduled operations
  - Target: 80%+ coverage of scheduler service logic

- [ ] **Create tests/integration/test_scheduler_workflow.py** - End-to-end scheduler tests
  - Test scheduler trigger → library rescan → database update workflow
  - Test scheduler configuration changes apply immediately
  - Test scheduler persistence after application restart
  - Test concurrent manual and automated scan handling
  - Target: Full workflow validation

#### NFO Batch Operations Tests (Currently Skipped)

- [ ] **Fix NFO batch creation dependency override** in tests/api/test_nfo_endpoints.py
  - Fix TestNFOBatchCreateEndpoint tests (currently skipped)
  - Resolve dependency override timing with authenticated_client
  - Test POST /api/nfo/batch/create endpoint with multiple series
  - Test max_concurrent parameter enforcement
  - Target: All batch endpoint tests passing

- [ ] **Create tests/unit/test_nfo_batch_operations.py** - NFO batch logic tests
  - Test concurrent NFO creation with max_concurrent limits
  - Test batch operation error handling (partial failures)
  - Test batch operation progress tracking
  - Test batch operation cancellation
  - Target: 80%+ coverage of batch operation logic in src/core/services/nfo_service.py
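The `max_concurrent` enforcement these tests target can be sketched with an `asyncio.Semaphore`; the job body is a placeholder for the real `nfo_service` work.

```python
# Sketch of max_concurrent enforcement: a semaphore caps how many NFO
# jobs run at once. The peak counter is what a unit test would assert on.
import asyncio

async def create_nfo(series: str, sem: asyncio.Semaphore,
                     active: list, peak: list) -> str:
    async with sem:
        active[0] += 1
        peak[0] = max(peak[0], active[0])
        await asyncio.sleep(0.01)  # stand-in for TMDB lookup + file write
        active[0] -= 1
    return series

async def batch_create(series_list: list, max_concurrent: int):
    sem = asyncio.Semaphore(max_concurrent)
    active, peak = [0], [0]
    results = await asyncio.gather(
        *(create_nfo(s, sem, active, peak) for s in series_list))
    return results, peak[0]

names = [f"series-{i}" for i in range(10)]
results, peak = asyncio.run(batch_create(names, max_concurrent=3))
print(len(results), peak)  # 10 jobs done, never more than 3 at once
```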
- [ ] **Create tests/integration/test_nfo_batch_workflow.py** - Batch NFO workflow tests
  - Test creating NFO files for 10+ series simultaneously
  - Test media file download (poster, logo, fanart) in batch
  - Test TMDB API rate limiting during batch operations
  - Test batch operation status updates via WebSocket
  - Target: Full batch workflow validation

#### Download Queue Tests (2/36 Passing)

- [ ] **Fix download queue fixture issues** enabling 34 failing tests
  - Fix mock_download_service fixture conflicts
  - Test GET /api/queue endpoint (retrieve current queue)
  - Test POST /api/queue/start endpoint (manual start)
  - Test POST /api/queue/stop endpoint (manual stop)
  - Test DELETE /api/queue/clear-completed endpoint
  - Test DELETE /api/queue/clear-failed endpoint
  - Test POST /api/queue/retry endpoint (retry failed downloads)
  - Target: 90%+ of download queue endpoint tests passing

- [ ] **Create tests/unit/test_queue_operations.py** - Queue logic tests
  - Test FIFO queue ordering validation
  - Test single download mode enforcement
  - Test queue statistics accuracy (pending/active/completed/failed counts)
  - Test queue reordering functionality
  - Test concurrent queue modifications (race condition prevention)
  - Target: 80%+ coverage of queue management logic
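A sketch of the FIFO ordering and statistics logic the unit tests above would validate; field and status names are assumptions, not the project's real queue model.

```python
# Stand-in download queue: FIFO pick of the next pending item, plus the
# pending/active/completed/failed counts the statistics tests check.
from collections import Counter, deque
from dataclasses import dataclass

@dataclass
class QueueItem:
    episode: str
    status: str = "pending"

class DownloadQueue:
    def __init__(self) -> None:
        self._items = deque()

    def add(self, episode: str) -> None:
        self._items.append(QueueItem(episode))

    def next_pending(self):
        for item in self._items:  # FIFO: first pending item wins
            if item.status == "pending":
                item.status = "active"
                return item
        return None

    def stats(self) -> dict:
        counts = Counter(i.status for i in self._items)
        return {s: counts.get(s, 0)
                for s in ("pending", "active", "completed", "failed")}

q = DownloadQueue()
for ep in ("S01E01", "S01E02", "S01E03"):
    q.add(ep)
first = q.next_pending()
first.status = "completed"
second = q.next_pending()
print(first.episode, second.episode, q.stats())
```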
- [ ] **Create tests/integration/test_queue_persistence.py** - Queue persistence tests
  - Test queue state persists after application restart
  - Test download progress restoration after restart
  - Test failed download state recovery
  - Test completed download history persistence
  - Target: Full persistence workflow validation

#### NFO Auto-Create Integration Tests

- [ ] **Create tests/integration/test_nfo_download_workflow.py** - NFO auto-create during download
  - Test NFO file created automatically before episode download
  - Test media files (poster/logo/fanart) downloaded before episode
  - Test NFO creation failure handling (download continues/aborts based on config)
  - Test NFO auto-create configuration toggle (enable/disable)
  - Test NFO update during library scan (configuration option)
  - Test integration between download_service and nfo_service
  - Target: 100% of NFO auto-create workflow scenarios covered

- [ ] **Create tests/unit/test_nfo_auto_create.py** - NFO auto-create logic tests
  - Test NFO file existence check before creation
  - Test NFO file path resolution
  - Test media file existence checks
  - Test configuration-based behavior (auto-create on/off)
  - Target: 80%+ coverage of auto-create logic
### 🟡 TIER 2: High Priority (Core UX Features)

#### Dark Mode Tests

- [ ] **Set up JavaScript testing framework** (Jest/Vitest + Playwright)
  - Install and configure Vitest for unit tests
  - Install and configure Playwright for E2E tests
  - Create test script commands in package.json
  - Set up CI integration for JavaScript tests
  - Target: Working test infrastructure for frontend code

- [ ] **Create tests/frontend/test_darkmode.js** - Dark mode toggle tests
  - Test dark mode toggle button click event
  - Test theme class applied to document root
  - Test theme persistence in localStorage
  - Test theme loaded from localStorage on page load
  - Test theme switching animation/transitions
  - Test theme affects all UI components (buttons, cards, modals)
  - Target: 80%+ coverage of src/server/web/static/js/darkmode.js

#### Setup Page Tests

- [ ] **Create tests/frontend/e2e/test_setup_page.spec.js** - Setup page E2E tests
  - Test form validation (required fields, password strength)
  - Test password strength indicator updates in real-time
  - Test form submission with valid data
  - Test form submission with invalid data (error messages)
  - Test setup completion redirects to main application
  - Test all configuration sections (general, security, directories, scheduler, logging, backup, NFO)
  - Target: 100% of setup page user flows covered

- [ ] **Create tests/api/test_setup_endpoints.py** - Setup API tests (if not existing)
  - Test POST /api/setup endpoint (initial configuration)
  - Test setup page access when already configured (redirect)
  - Test configuration validation during setup
  - Test setup completion state persists
  - Target: 80%+ coverage of setup endpoint logic
#### Settings Modal Tests

- [ ] **Create tests/frontend/e2e/test_settings_modal.spec.js** - Settings modal E2E tests
  - Test settings modal opens/closes correctly
  - Test all configuration fields editable
  - Test configuration changes saved with feedback
  - Test configuration validation prevents invalid settings
  - Test backup creation from modal
  - Test backup restoration from modal
  - Test export/import configuration
  - Test browse directory functionality
  - Target: 100% of settings modal user flows covered

- [ ] **Create tests/integration/test_config_backup_restore.py** - Configuration backup/restore tests
  - Test backup creation with timestamp
  - Test backup restoration with validation
  - Test backup list retrieval
  - Test backup deletion
  - Test configuration export format (JSON)
  - Test configuration import validation
  - Target: 100% of backup/restore workflows covered

#### WebSocket Reconnection Tests

- [ ] **Create tests/frontend/test_websocket_reconnection.js** - WebSocket client tests
  - Test WebSocket connection established on page load
  - Test WebSocket authentication with JWT token
  - Test WebSocket reconnection after connection loss
  - Test WebSocket connection retry with exponential backoff
  - Test WebSocket error handling (connection refused, timeout)
  - Test WebSocket message parsing and dispatch
  - Target: 80%+ coverage of src/server/web/static/js/websocket.js

- [ ] **Create tests/integration/test_websocket_resilience.py** - WebSocket resilience tests
  - Test multiple concurrent WebSocket clients (stress test 100+ clients)
  - Test WebSocket connection recovery after server restart
  - Test WebSocket authentication token refresh
  - Test WebSocket message ordering guarantees
  - Test WebSocket broadcast filtering (specific clients)
  - Target: Full resilience scenario coverage
#### Queue UI Tests

- [ ] **Create tests/frontend/test_queue_ui.js** - Queue management UI tests
  - Test start/stop button click handlers
  - Test clear completed button functionality
  - Test clear failed button functionality
  - Test retry failed button functionality
  - Test queue item display updates in real-time
  - Test queue statistics display (pending/active/completed/failed counts)
  - Target: 80%+ coverage of src/server/web/static/js/queue/ modules

- [ ] **Create tests/frontend/e2e/test_queue_interactions.spec.js** - Queue E2E tests
  - Test adding items to download queue from library page
  - Test starting download manually
  - Test stopping download manually
  - Test queue reordering (if implemented)
  - Test bulk operations (clear all, retry all)
  - Test queue state persists across page refreshes
  - Target: 100% of queue user interaction flows covered
### 🟢 TIER 3: Medium Priority (Edge Cases & Performance)

#### TMDB Integration Tests

- [ ] **Create tests/unit/test_tmdb_rate_limiting.py** - TMDB rate limiting tests
  - Test TMDB API rate limit detection (429 response)
  - Test exponential backoff retry logic
  - Test TMDB API quota exhaustion handling
  - Test TMDB API error response parsing
  - Test TMDB API timeout handling
  - Target: 80%+ coverage of rate limiting logic in src/core/providers/tmdb_client.py
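The 429 handling under test can be sketched as retry-with-exponential-backoff; the fake client and delay values are illustrative, and sleeping is injected so the test logic stays fast and deterministic.

```python
# Sketch: retry on HTTP 429, doubling the delay each attempt. The fake
# TMDB client below returns two 429s, then a success payload.
def fetch_with_backoff(request, max_retries=5, base_delay=0.5,
                       sleep=lambda s: None):
    """Retry `request` on 429; raise once retries are exhausted."""
    delays = []
    for attempt in range(max_retries + 1):
        status, payload = request()
        if status != 429:
            return payload, delays
        delay = base_delay * (2 ** attempt)
        delays.append(delay)
        sleep(delay)
    raise RuntimeError("TMDB rate limit: retries exhausted")

responses = iter([(429, None), (429, None), (200, {"name": "Example Show"})])
payload, delays = fetch_with_backoff(lambda: next(responses))
print(payload, delays)  # {'name': 'Example Show'} [0.5, 1.0]
```

A unit test can pass a recording `sleep` callable to assert the backoff schedule without real waiting.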
- [ ] **Create tests/integration/test_tmdb_resilience.py** - TMDB API resilience tests
  - Test TMDB API unavailable (503 error)
  - Test TMDB API partial data response
  - Test TMDB API invalid response format
  - Test TMDB API network timeout
  - Test fallback behavior when TMDB unavailable
  - Target: Full error handling coverage

#### Performance Tests

- [ ] **Create tests/performance/test_large_library.py** - Large library scanning performance
  - Test library scan with 1000+ series
  - Test scan completion time benchmarks (< 5 minutes for 1000 series)
  - Test memory usage during large scans (< 500MB)
  - Test database query performance during scan
  - Test concurrent scan operation handling
  - Target: Performance baselines established for large libraries

- [ ] **Create tests/performance/test_nfo_batch_performance.py** - Batch NFO performance tests
  - Test concurrent NFO creation (10, 50, 100 series)
  - Test TMDB API request batching optimization
  - Test media file download concurrency
  - Test memory usage during batch operations
  - Target: Performance baselines for batch operations

- [ ] **Create tests/performance/test_websocket_load.py** - WebSocket performance tests
  - Test WebSocket broadcast to 100+ concurrent clients
  - Test message throughput (messages per second)
  - Test connection pool limits
  - Test progress update throttling (avoid flooding)
  - Target: Performance baselines for WebSocket broadcasting

#### Edge Case Tests

- [ ] **Create tests/unit/test_concurrent_scans.py** - Concurrent scan operation tests
  - Test multiple simultaneous scan requests handled gracefully
  - Test scan cancellation/interruption handling
  - Test database race condition prevention during scans
  - Test scan state consistency with concurrent requests
  - Target: 100% of concurrent operation scenarios covered
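One way the graceful-handling requirement could look, sketched with an `asyncio.Lock` that rejects overlapping scans instead of queueing them; the class and status strings are illustrative.

```python
# Stand-in scan coordinator: the lock serializes scans, and overlapping
# trigger requests are rejected rather than stacked up.
import asyncio

class ScanCoordinator:
    def __init__(self) -> None:
        self._lock = asyncio.Lock()
        self.completed = 0

    async def scan(self) -> str:
        if self._lock.locked():
            return "rejected: scan already running"
        async with self._lock:
            await asyncio.sleep(0.01)  # stand-in for the filesystem walk
            self.completed += 1
            return "completed"

async def main():
    coord = ScanCoordinator()
    results = await asyncio.gather(coord.scan(), coord.scan(), coord.scan())
    return results, coord.completed

results, completed = asyncio.run(main())
print(sorted(results), completed)
```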
- [ ] **Create tests/unit/test_download_retry.py** - Download retry logic tests
  - Test automatic retry after download failure
  - Test retry attempt count tracking
  - Test exponential backoff between retries
  - Test maximum retry limit enforcement
  - Test retry state persistence
  - Target: 80%+ coverage of retry logic in download service
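The attempt counting and maximum-limit enforcement can be sketched as follows; the retry cap and job simulation are assumptions, and backoff delays are omitted to keep the sketch deterministic.

```python
# Stand-in retry bookkeeping: attempts are counted per download and a
# hard cap (1 initial try + MAX_RETRIES retries) marks the item failed.
from dataclasses import dataclass

MAX_RETRIES = 3

@dataclass
class Download:
    episode: str
    attempts: int = 0
    status: str = "pending"

def run_with_retries(download: Download, job) -> Download:
    while download.attempts <= MAX_RETRIES:
        download.attempts += 1
        try:
            job()
            download.status = "completed"
            return download
        except ConnectionError:
            continue
    download.status = "failed"
    return download

# A job that fails twice, then succeeds on the third attempt.
outcomes = iter([ConnectionError, ConnectionError, None])
def flaky_job():
    exc = next(outcomes)
    if exc:
        raise exc()

def always_fail():
    raise ConnectionError("network down")

done = run_with_retries(Download("S01E01"), flaky_job)
dead = run_with_retries(Download("S01E02"), always_fail)
print(done.status, done.attempts, dead.status, dead.attempts)
```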
- [ ] **Create tests/integration/test_series_parsing_edge_cases.py** - Series parsing edge cases
  - Test series folder names with year variations (e.g., "Series (2020)", "Series [2020]")
  - Test series names with special characters
  - Test series names with multiple spaces
  - Test series names in different languages (Unicode)
  - Test malformed folder structures
  - Target: 100% of parsing edge cases covered
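The year-variation cases could be driven through a parser like this sketch; the regex is an assumption about the folder convention, not the project's actual parser.

```python
# Stand-in parser: strip a trailing "(2020)" or "[2020]" year tag from a
# series folder name, handling Unicode names and untagged folders.
import re

YEAR_TAG = re.compile(r"\s*[(\[](19|20)\d{2}[)\]]\s*$")

def split_name_year(folder: str):
    match = YEAR_TAG.search(folder)
    if not match:
        return folder.strip(), None
    year = int(match.group(0).strip(" ()[]"))
    return folder[: match.start()].strip(), year

cases = ["Series (2020)", "Series [2020]", "Just A Series", "Tōkyō 24-ku (2022)"]
parsed = [split_name_year(c) for c in cases]
print(parsed)
```

A parametrized test would feed each edge-case folder name through and compare name/year pairs.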
### 🔵 TIER 4: Low Priority (Polish & Future Features)

#### Internationalization Tests

- [ ] **Create tests/unit/test_i18n.py** - Internationalization tests
  - Test language file loading (src/server/web/static/i18n/)
  - Test language switching functionality
  - Test translation placeholder replacement
  - Test fallback to English for missing translations
  - Test all UI strings translatable
  - Target: 80%+ coverage of i18n implementation

#### Accessibility Tests

- [ ] **Create tests/frontend/e2e/test_accessibility.spec.js** - Accessibility tests
  - Test keyboard navigation (Tab, Enter, Escape)
  - Test screen reader compatibility (ARIA labels)
  - Test focus management (modals, dropdowns)
  - Test color contrast ratios (WCAG AA compliance)
  - Test responsive design breakpoints (mobile, tablet, desktop)
  - Target: WCAG 2.1 AA compliance

#### User Preferences Tests

- [ ] **Create tests/unit/test_user_preferences.py** - User preferences tests
  - Test preferences saved to localStorage
  - Test preferences loaded on page load
  - Test preferences synced across tabs (BroadcastChannel)
  - Test preferences reset to defaults
  - Target: 80%+ coverage of preferences logic

#### Media Server Compatibility Tests

- [ ] **Create tests/integration/test_media_server_compatibility.py** - NFO format compatibility tests
  - Test Kodi NFO parsing (manual validation with Kodi)
  - Test Plex NFO parsing (manual validation with Plex)
  - Test Jellyfin NFO parsing (manual validation with Jellyfin)
  - Test Emby NFO parsing (manual validation with Emby)
  - Test NFO XML schema validation
  - Target: Compatibility verified with all major media servers
---
### 📊 Test Coverage Goals

**Current Coverage:** 36% overall

- NFO Service: 16% (Critical - needs improvement)
- TMDB Client: 30% (Critical - needs improvement)
- Scheduler: 0% (Critical - needs tests)
- Download Queue API: 6% (2/36 tests passing)
- Configuration API: 0% (0/18 tests passing)

**Target Coverage:**

- **Overall:** 80%+
- **Critical Services (Scheduler, NFO, Download):** 80%+
- **High Priority (Config, WebSocket):** 70%+
- **Medium Priority (Edge cases, Performance):** 60%+
- **Frontend JavaScript:** 70%+
---
### 🔄 Test Execution Priority Order

**Week 1 - Infrastructure & Critical:**

1. Fix test fixture conflicts (52 tests enabled)
2. Create scheduler endpoint tests (0% → 80%)
3. Enable NFO batch tests and add unit tests
4. Fix download queue tests (6% → 90%)

**Week 2 - Integration & UX:**

5. Add NFO auto-create integration tests
6. Set up JavaScript test framework
7. Add dark mode and WebSocket reconnection tests
8. Add setup page and settings modal E2E tests

**Week 3 - Performance & Edge Cases:**

9. Add large library performance tests
10. Add TMDB rate limiting tests
11. Add concurrent operation tests
12. Add download retry logic tests

**Week 4+ - Polish:**

13. Add i18n tests
14. Add accessibility tests
15. Add user preferences tests
16. Add media server compatibility tests

---