Aniworld Web Application Development Instructions
This document provides detailed tasks for AI agents to implement a modern web application for the Aniworld anime download manager. All tasks should follow the coding guidelines specified in the project's copilot instructions.
Project Overview
The goal is to create a FastAPI-based web application that provides a modern interface for the existing Aniworld anime download functionality. The core anime logic should remain in SeriesApp.py while the web layer provides REST API endpoints and a responsive UI.
Architecture Principles
- Single Responsibility: Each file/class has one clear purpose
- Dependency Injection: Use FastAPI's dependency system
- Clean Separation: Web layer calls core logic, never the reverse
- File Size Limit: Maximum 500 lines per file
- Type Hints: Use comprehensive type annotations
- Error Handling: Proper exception handling and logging
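The "web layer calls core logic, never the reverse" rule can be sketched without any framework. In this hypothetical example (all names are illustrative, not taken from the codebase), the core service has no knowledge of the web layer, and the handler receives it as an injected dependency; in the real app, FastAPI's `Depends` would wire the provider function in:

```python
from dataclasses import dataclass


@dataclass
class SeriesService:
    """Core logic: imports nothing from, and knows nothing about, the web layer."""

    catalog: dict[str, int]

    def episode_count(self, series: str) -> int:
        if series not in self.catalog:
            raise KeyError(series)
        return self.catalog[series]


def get_series_service() -> SeriesService:
    """Dependency provider; FastAPI would inject it via Depends(get_series_service)."""
    return SeriesService(catalog={"demo-series": 12})


def episodes_endpoint(series: str, service: SeriesService) -> dict[str, int]:
    """Web-layer handler: translates the core result into a response payload."""
    return {"episodes": service.episode_count(series)}
```

Because the handler depends only on the service's interface, tests can pass in a stub service instead of the real one.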
Additional Implementation Guidelines
Code Style and Standards
- Type Hints: Use comprehensive type annotations throughout all modules
- Docstrings: Follow PEP 257 for function and class documentation
- Error Handling: Implement custom exception classes with meaningful messages
- Logging: Use structured logging with appropriate log levels
- Security: Validate all inputs and sanitize outputs
- Performance: Use async/await patterns for I/O operations
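Several of these standards (type hints, custom exceptions with meaningful messages, structured error handling, async I/O) can be shown together in one small sketch. The names below are invented for illustration; the in-memory `catalog` dict stands in for a real async provider call:

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


class EpisodeNotFoundError(Exception):
    """Custom domain exception with a meaningful message."""


async def fetch_episode_title(episode_id: int) -> str:
    """Fetch an episode title, translating low-level errors into domain errors."""
    catalog: dict[int, str] = {1: "Episode One"}  # stand-in for an async provider lookup
    await asyncio.sleep(0)  # yield control, as a real I/O call would
    try:
        return catalog[episode_id]
    except KeyError as exc:
        logger.warning("episode %s not found", episode_id)
        raise EpisodeNotFoundError(f"No episode with id {episode_id}") from exc
```

Note the `raise ... from exc` chaining: the original cause stays attached for logging while callers only ever catch the domain exception.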
📞 Escalation
If you encounter:
- Architecture issues requiring design decisions
- Tests that conflict with documented requirements
- Breaking changes needed
- Unclear requirements or expectations
Document the issue and escalate rather than guessing.
🔑 Credentials
Admin Login:
- Username: admin
- Password: Hallo123!
📚 Helpful Commands
# Run all tests
conda run -n AniWorld python -m pytest tests/ -v --tb=short
# Run specific test file
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py -v
# Run specific test class
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py::TestWebSocketService -v
# Run specific test
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py::TestWebSocketService::test_broadcast_download_progress -v
# Run with extra verbosity
conda run -n AniWorld python -m pytest tests/ -vv
# Run with full traceback
conda run -n AniWorld python -m pytest tests/ -v --tb=long
# Run and stop at first failure
conda run -n AniWorld python -m pytest tests/ -v -x
# Run tests matching pattern
conda run -n AniWorld python -m pytest tests/ -v -k "auth"
# Show all print statements
conda run -n AniWorld python -m pytest tests/ -v -s
# Run app
conda run -n AniWorld python -m uvicorn src.server.fastapi_app:app --host 127.0.0.1 --port 8000 --reload
Implementation Notes
- Incremental Development: Implement features incrementally, testing each component thoroughly before moving to the next
- Code Review: Review all generated code for adherence to project standards
- Documentation: Document all public APIs and complex logic
- Testing: Maintain test coverage above 80% for all new code
- Performance: Profile and optimize critical paths, especially download and streaming operations
- Security: Regular security audits and dependency updates
- Monitoring: Implement comprehensive monitoring and alerting
- Maintenance: Plan for regular maintenance and updates
Task Completion Checklist
For each task completed:
- Implementation follows coding standards
- Unit tests written and passing
- Integration tests passing
- Documentation updated
- Error handling implemented
- Logging added
- Security considerations addressed
- Performance validated
- Code reviewed
- Task marked as complete in instructions.md
- Infrastructure.md updated and other docs
- Changes committed to git (keep commit messages short and clear)
- Take the next task
TODO List:
✅ Task 1: Provider System Integration Tests (Priority: CRITICAL) — COMPLETED (211 tests passing)
Objective: Create unit and integration tests for core provider orchestration system (6 files) that handles provider selection, failover, and health monitoring.
Target Files to Test:
- src/core/providers/base_provider.py - Abstract base class and interface
- src/core/providers/aniworld_provider.py - Main provider (664 lines, core functionality)
- src/core/providers/provider_factory.py - Provider instantiation logic
- src/core/providers/enhanced_provider.py - Enhanced features and caching
- src/core/providers/monitored_provider.py - Monitoring wrapper with metrics
- src/core/providers/config_manager.py - Provider configuration management
Create Test Files:
Unit Tests:
- tests/unit/test_base_provider.py - Abstract methods, interface contracts, inheritance
- tests/unit/test_aniworld_provider.py - Anime catalog scraping, episode listing, streaming link extraction (mock HTML responses)
- tests/unit/test_provider_factory.py - Factory instantiation, dependency injection, provider registration
- tests/unit/test_enhanced_provider.py - Caching behavior, optimization features, decorator patterns
- tests/unit/test_monitored_provider.py - Metrics collection, health checks, monitoring integration
- tests/unit/test_provider_config_manager.py - Configuration loading, validation, defaults
Integration Tests:
- tests/integration/test_provider_failover_scenarios.py - End-to-end provider switching when streaming fails
- tests/integration/test_provider_selection.py - Provider selection based on availability, health status, priority
Test Coverage Requirements:
- Provider instantiation via factory pattern (all provider types)
- Failover from failed provider to healthy backup (3+ provider scenario)
- Health monitoring and circuit breaker patterns
- Configuration loading from config.json and validation
- Aniworld catalog scraping with mocked HTML responses
- Episode listing and metadata extraction
- Multi-provider scenarios with different health states
- Provider priority and selection algorithm
Expected Outcome: ~80 tests total, 90%+ coverage for provider system
Implementation Notes:
- Mock HTML responses for aniworld_provider tests using BeautifulSoup fixtures
- Test factory pattern returns correct provider instances
- Integration tests should test full failover workflow: healthy provider → fails → switches to backup → succeeds
- Use existing test_provider_health.py and test_provider_failover.py as reference
- Mock external dependencies (HTTP, file system, database)
- Test concurrent provider usage scenarios
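To illustrate the "mock HTML responses" approach from the notes above: the project's tests reportedly use BeautifulSoup fixtures, but the shape of a fixture-driven scraping test can be shown with only the stdlib parser. The HTML snippet and the extractor below are invented stand-ins, not the real provider's markup or API:

```python
from html.parser import HTMLParser

# Mocked provider response; real tests would load a captured HTML fixture file instead.
MOCK_EPISODE_PAGE = """
<ul class="episodes">
  <li><a href="/anime/demo/episode-1">Episode 1</a></li>
  <li><a href="/anime/demo/episode-2">Episode 2</a></li>
</ul>
"""


class EpisodeLinkParser(HTMLParser):
    """Collects anchor hrefs, standing in for the provider's scraping logic."""

    def __init__(self) -> None:
        super().__init__()
        self.links: list[str] = []

    def handle_starttag(self, tag: str, attrs: list) -> None:
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)


def extract_episode_links(html: str) -> list[str]:
    """Parse an episode-listing page and return the episode URLs found."""
    parser = EpisodeLinkParser()
    parser.feed(html)
    return parser.links
```

A test then asserts on the extracted links without any network access, which keeps provider tests fast and deterministic.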
✅ Task 2: Security Infrastructure Tests (Priority: CRITICAL) — COMPLETED (75 tests passing)
Objective: Create comprehensive tests for security modules handling encryption and database integrity (2 critical files).
Target Files to Test:
- src/infrastructure/security/config_encryption.py - Configuration encryption/decryption
- src/infrastructure/security/database_integrity.py - Database integrity checks and validation
Create Test Files:
tests/unit/test_config_encryption.py:
- Encryption/decryption of sensitive configuration values
- Key rotation and management lifecycle
- AES-256 encryption validation
- Decrypt failures with wrong key
- Empty/null value handling
- Multiple encryption rounds
- Performance of encryption operations
tests/unit/test_database_integrity.py:
- Database checksum calculation and validation
- Corruption detection mechanisms
- Integrity verification on application startup
- Backup restoration on corruption detection
- Schema validation against expected structure
- Transaction integrity checks
tests/security/test_encryption_security.py:
- Key strength validation (minimum bits)
- Timing attack prevention
- Secure key storage validation
- Environment variable security
- Encrypted data format validation
- Key compromise scenarios
Test Coverage Requirements:
- Encryption algorithm correctness (encrypt → decrypt → original value)
- Key management lifecycle (generation, rotation, revocation)
- Database integrity check mechanisms
- Corruption detection and recovery workflows
- Security edge cases (key compromise, brute force attempts)
- Performance testing for encryption operations (should not slow down app significantly)
Expected Outcome: ~40 tests total, 95%+ coverage for security modules
Implementation Notes:
- Read security module files first to understand cryptography library used
- Test both successful and failed encryption/decryption scenarios
- Mock file system for encrypted key storage tests
- Use in-memory databases for integrity testing
- Simulate database corruption scenarios
- Follow security testing best practices from the tests/security/ directory
- Ensure tests don't expose sensitive data in logs or output
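The core roundtrip requirement (encrypt → decrypt → original value; wrong key must not recover the plaintext) can be sketched independently of the real module. The toy cipher below is NOT the project's AES-256 implementation and must never be used for real security; it exists only so the shape of the roundtrip test is concrete and runnable with the stdlib:

```python
import hashlib
import secrets


def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from key+nonce (toy construction, not secure)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]


def toy_encrypt(key: bytes, plaintext: bytes) -> bytes:
    """XOR the plaintext with a key-derived stream; nonce is prepended to the output."""
    nonce = secrets.token_bytes(16)
    stream = _keystream(key, nonce, len(plaintext))
    return nonce + bytes(a ^ b for a, b in zip(plaintext, stream))


def toy_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Reverse toy_encrypt; a wrong key yields garbage rather than an error."""
    nonce, body = ciphertext[:16], ciphertext[16:]
    stream = _keystream(key, nonce, len(body))
    return bytes(a ^ b for a, b in zip(body, stream))
```

The real tests would exercise the same three assertions against config_encryption.py: roundtrip fidelity, wrong-key failure, and distinct ciphertexts for repeated encryptions of the same value (because of the fresh nonce).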
✅ Task 3: Error Handling Tests (Priority: HIGH) — COMPLETED (74 tests passing)
Objective: Create comprehensive tests for error handling and recovery mechanisms (2 files) to ensure robust error management across the application.
Target Files to Test:
- src/core/error_handler.py - Core error handling and retry logic
- src/server/middleware/error_handler.py - API error handling middleware
Create Test Files:
tests/unit/test_core_error_handler.py:
- Retry logic with exponential backoff
- Maximum retry limits enforcement
- Error classification (transient vs permanent errors)
- Error recovery strategies
- Circuit breaker integration
- Timeout handling
- Resource cleanup on errors
tests/unit/test_middleware_error_handler.py:
- HTTP error response formatting (JSON structure)
- Stack trace sanitization in production mode
- Error logging integration with structlog
- Custom exception handling (AnimeNotFound, ProviderError, etc.)
- 400/404/500 error responses
- Error context preservation
- CORS headers on error responses
tests/integration/test_error_recovery_workflows.py:
- End-to-end error recovery: download fails → retry → success
- Provider failover on errors (primary fails → backup succeeds)
- Database transaction rollback on errors
- User notification on errors via WebSocket
- Cascading error handling (error in one service affects others)
- Error recovery after temporary outages
Test Coverage Requirements:
- Transient vs permanent error distinction
- Retry exhaustion scenarios (max retries reached)
- Error reporting to users (proper messages, no stack traces)
- Error logging with proper context
- Recovery workflows for common errors
- Error handling doesn't leak resources (connections, file handles)
Expected Outcome: ~50 tests total, 90%+ coverage for error handling
Implementation Notes:
- Test retry logic with controlled failure scenarios
- Mock external services to simulate errors
- Verify exponential backoff timing
- Test error message clarity and usefulness
- Integration tests should verify end-to-end recovery
- Use pytest.raises for exception testing
- Mock time.sleep for faster retry tests
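Two of the notes above, verifying exponential backoff timing and mocking time.sleep so retry tests run instantly, combine naturally into one test sketch. The retry helper and error class here are illustrative stand-ins for whatever src/core/error_handler.py actually exposes:

```python
import time
from unittest import mock


class TransientError(Exception):
    """Stand-in for an error the real handler would classify as retryable."""


def retry_with_backoff(func, max_retries: int = 3, base_delay: float = 1.0):
    """Call func, retrying transient failures with exponential backoff."""
    for attempt in range(max_retries):
        try:
            return func()
        except TransientError:
            if attempt == max_retries - 1:
                raise  # retries exhausted: propagate to the caller
            time.sleep(base_delay * (2 ** attempt))


def run_retry_test() -> list[float]:
    """Fail twice, succeed on the third call; return the patched sleep delays."""
    calls = iter([TransientError(), TransientError(), "ok"])

    def flaky():
        item = next(calls)
        if isinstance(item, Exception):
            raise item
        return item

    with mock.patch("time.sleep") as fake_sleep:  # no real waiting during the test
        result = retry_with_backoff(flaky)
    assert result == "ok"
    return [c.args[0] for c in fake_sleep.call_args_list]
```

Asserting on the recorded delays (1.0s, then 2.0s) verifies the backoff doubles per attempt without the test ever sleeping.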
✅ Task 4: Services & Utilities Tests (Priority: MEDIUM) — COMPLETED (64 tests passing)
Objective: Create tests for undertested service and utility modules to increase coverage of business logic and helper functions (5 files).
Target Files to Test:
- src/core/services/series_manager_service.py - Series orchestration logic
- src/core/services/nfo_factory.py - NFO service factory pattern
- src/server/utils/media.py - Media file validation utilities
- src/server/utils/templates.py - Template rendering utilities
- src/server/controllers/error_controller.py - Error page controller
Create Test Files:
tests/unit/test_series_manager_service.py:
- Series orchestration and lifecycle management
- Episode management (add, remove, update)
- Season handling and organization
- Series state management
- Interaction with SeriesApp
tests/unit/test_nfo_factory.py:
- Factory pattern instantiation of NFO services
- Dependency injection setup
- Service lifecycle (singleton vs transient)
- Configuration passing to services
tests/unit/test_media_utils.py:
- Media file validation (video formats)
- Codec detection (H.264, H.265, etc.)
- Metadata extraction (duration, resolution)
- File size checks and validation
- Corrupt file detection
tests/unit/test_templates_utils.py:
- Template rendering with Jinja2
- Context injection and variable passing
- Error page rendering
- Template caching behavior
- Custom filters and functions
tests/unit/test_error_controller.py:
- 404 page rendering with context
- 500 error page with safe error info
- Error context passing to templates
- Static file errors
- API error responses
Test Coverage Requirements:
- Service initialization patterns and dependency setup
- Factory method correctness and proper instance types
- Media file operations with various formats
- Template rendering edge cases (missing variables, errors)
- Error controller response formatting
Expected Outcome: ~60 tests total, 85%+ coverage for each module
Implementation Notes:
- Mock file system for media utility tests
- Use temporary files for media validation tests
- Mock Jinja2 environment for template tests
- Test both success and error paths
- Verify proper resource cleanup
- Use existing service test patterns as reference
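The "use temporary files for media validation tests" note above can be made concrete with a small sketch. The validator and its extension allow-list are assumptions for illustration; the real src/server/utils/media.py may check formats quite differently:

```python
import tempfile
from pathlib import Path

# Assumed allow-list for the sketch; the real utility may accept more formats.
VIDEO_EXTENSIONS = {".mp4", ".mkv", ".avi"}


def is_probable_video_file(path: Path, min_size_bytes: int = 1) -> bool:
    """Cheap validation: extension allow-list plus an existence and minimum-size check."""
    return (
        path.suffix.lower() in VIDEO_EXTENSIONS
        and path.exists()
        and path.stat().st_size >= min_size_bytes
    )


def demo() -> tuple[bool, bool]:
    """Create a plausible and a zero-byte video file in a temp dir, validate both."""
    with tempfile.TemporaryDirectory() as tmp:
        good = Path(tmp) / "episode01.mkv"
        good.write_bytes(b"\x00" * 16)
        empty = Path(tmp) / "broken.mkv"
        empty.touch()  # zero bytes: should fail the size check
        return is_probable_video_file(good), is_probable_video_file(empty)
```

Because TemporaryDirectory cleans up after itself, such tests leave no artifacts behind, which matches the "verify proper resource cleanup" note.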
✅ Task 5: Infrastructure Logging Tests (Priority: MEDIUM) — COMPLETED (49 tests passing)
Objective: Create tests for logging infrastructure to ensure proper log configuration, formatting, and rotation (2 files).
Target Files to Test:
- src/infrastructure/logging/logger.py - Main logger configuration
- src/infrastructure/logging/uvicorn_config.py - Uvicorn logging configuration
Create Test Files:
tests/unit/test_infrastructure_logger.py:
- Logger initialization and setup
- Log level configuration (DEBUG, INFO, WARNING, ERROR)
- Log formatting (JSON, text formats)
- File rotation behavior
- Multiple handler setup (console, file, syslog)
- Structured logging with context
- Logger hierarchy and propagation
tests/unit/test_uvicorn_logging_config.py:
- Uvicorn access log configuration
- Error log configuration
- Log format customization for HTTP requests
- Integration with main application logger
- Log level filtering for Uvicorn logs
- Performance logging (request timing)
Test Coverage Requirements:
- Logger configuration loading from settings
- Log output format validation (JSON structure, fields)
- Log level filtering works correctly
- File rotation behavior (size-based, time-based)
- Integration with structlog for structured logging
- Performance impact is minimal
Expected Outcome: ~30 tests total, 80%+ coverage for logging infrastructure
Implementation Notes:
- Use temporary log files for testing
- Capture log output using logging.handlers.MemoryHandler
- Test log rotation without waiting for actual rotation triggers
- Verify log format matches expected structure
- Mock file system for file handler tests
- Test various log levels and ensure filtering works
- Verify no sensitive data in logs
✅ Task 6: CLI Tool Tests (Priority: LOW) — COMPLETED (25 tests passing)
Objective: Create tests for NFO command-line interface tool used for DevOps and maintenance workflows (1 file).
Target File to Test:
src/cli/nfo_cli.py- NFO management CLI commands
Create Test Files:
tests/unit/test_nfo_cli.py:
- Command parsing (argparse or click)
- Argument validation (required args, types)
- Batch operations (multiple NFO files)
- Error reporting and user-friendly messages
- Output formatting (table, JSON, text)
- Help text generation
- Exit codes (0 for success, non-zero for errors)
tests/integration/test_cli_workflows.py:
- NFO creation via CLI end-to-end
- Batch NFO update workflow
- CLI + database integration
- CLI + API integration (if CLI calls API)
- Error handling in CLI workflows
- File system operations (read/write NFO files)
Test Coverage Requirements:
- CLI argument parsing for all commands
- Batch processing multiple files
- Error messages are clear and actionable
- Output formatting matches specification
- Integration with core services (NFO service)
- File operations work correctly
Expected Outcome: ~35 tests total, 80%+ coverage for CLI module
Implementation Notes:
- Read src/cli/nfo_cli.py first to understand commands
- Use subprocess or click.testing.CliRunner for integration tests
- Mock file system operations
- Test with various command-line arguments
- Verify exit codes are correct
- Test help text generation
- Use temporary directories for file operations
- Follow patterns from existing CLI tests if any exist
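Testing argument parsing and exit codes in-process, without subprocess, looks roughly like the sketch below. The `generate` subcommand and its flags are invented to illustrate the pattern; the actual nfo_cli.py commands must be read first, as the notes above say:

```python
import argparse


def build_parser() -> argparse.ArgumentParser:
    """Hypothetical mirror of the nfo_cli argument surface, for the test sketch."""
    parser = argparse.ArgumentParser(prog="nfo_cli")
    sub = parser.add_subparsers(dest="command", required=True)
    gen = sub.add_parser("generate", help="generate NFO files")
    gen.add_argument("paths", nargs="+")
    gen.add_argument("--format", choices=["xml", "json"], default="xml")
    return parser


def run(argv: list[str]) -> int:
    """Parse argv and return a process exit code (0 = success, 2 = usage error)."""
    try:
        args = build_parser().parse_args(argv)
    except SystemExit as exc:  # argparse exits on bad input or --help; capture the code
        return exc.code if isinstance(exc.code, int) else 2
    return 0 if args.command == "generate" else 1
```

Wrapping parse_args in a SystemExit handler lets unit tests assert on exit codes directly instead of spawning a subprocess for every case.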
✅ Task 7: Edge Case & Regression Tests (Priority: MEDIUM) — COMPLETED (69 tests passing)
Objective: Add edge case coverage and regression tests across existing modules to catch rare bugs and prevent reintroduction of fixed bugs (4 new test files).
Create Test Files:
tests/unit/test_provider_edge_cases.py:
- Malformed HTML responses from providers
- Missing episode data in provider responses
- Invalid streaming URLs (malformed, expired)
- Unicode characters in anime titles
- Special characters in filenames
- Empty responses from providers
- Partial data from providers
- Provider timeout scenarios
tests/integration/test_concurrent_operations.py:
- Concurrent downloads from same provider
- Parallel NFO generation for multiple series
- Race conditions in queue management
- Database lock contention under load
- WebSocket broadcasts during concurrent operations
- Cache consistency with concurrent writes
tests/api/test_rate_limiting_edge_cases.py:
- Rate limiting with multiple IP addresses
- Rate limit reset behavior
- Burst traffic handling
- Rate limit per-user vs per-IP
- Rate limit with authenticated vs anonymous users
- Rate limit bypass attempts
tests/integration/test_database_edge_cases.py:
- Database lock contention scenarios
- Large transaction rollback (100+ operations)
- Connection pool exhaustion
- Slow query handling
- Database file growth and vacuum
- Concurrent write conflicts
- Foreign key constraint violations
Test Coverage Requirements:
- Edge cases that aren't covered by existing tests
- Known bugs that were fixed (regression tests)
- Concurrent operation safety
- Resource exhaustion scenarios
- Boundary conditions (empty data, very large data)
Expected Outcome: ~50 tests total, targeting known edge cases and regression scenarios
Implementation Notes:
- Review git history for bug fixes to create regression tests
- Test boundary conditions (0, 1, max values)
- Simulate resource exhaustion (disk full, memory limit)
- Test concurrent operations with threading/asyncio
- Use property-based testing with hypothesis if appropriate
- Mock external services to simulate edge cases
- Test error recovery from edge cases
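The "test concurrent operations with threading/asyncio" note above can be sketched as a worker-pool test: several workers drain one shared queue, and the test asserts every item was processed exactly once (no losses, no duplicates). Queue items and worker counts are arbitrary; the real tests would drive the actual queue-management code:

```python
import asyncio


async def worker(queue: asyncio.Queue, results: list) -> None:
    """Drain items from a shared queue, simulating one concurrent download worker."""
    while True:
        try:
            item = queue.get_nowait()
        except asyncio.QueueEmpty:
            return
        await asyncio.sleep(0)  # yield control so workers interleave
        results.append(item)
        queue.task_done()


async def run_concurrent_demo(n_items: int = 10, n_workers: int = 3) -> list:
    """Process n_items with n_workers sharing one queue; return everything processed."""
    queue: asyncio.Queue = asyncio.Queue()
    for i in range(n_items):
        queue.put_nowait(f"episode-{i}")
    results: list = []
    await asyncio.gather(*(worker(queue, results) for _ in range(n_workers)))
    return results
```

Within a single event loop, `get_nowait` is effectively atomic, so each item reaches exactly one worker; a threaded variant of the same test would instead need locks or `queue.Queue` to get the same guarantee.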