API Test Documentation
This document describes the comprehensive API test suite for the Aniworld Flask application.
Overview
The test suite provides complete coverage for all API endpoints in the application, including:
- Authentication and session management
- Configuration management
- Series management and search
- Download operations
- System status and monitoring
- Logging and diagnostics
- Backup operations
- Error handling and recovery
Test Structure
Unit Tests (tests/unit/web/test_api_endpoints.py)
Unit tests focus on testing individual API endpoint logic in isolation using mocks:
- TestAuthenticationEndpoints: Authentication and session management
- TestConfigurationEndpoints: Configuration CRUD operations
- TestSeriesEndpoints: Series listing, search, and scanning
- TestDownloadEndpoints: Download management
- TestProcessManagementEndpoints: Process locks and status
- TestLoggingEndpoints: Logging configuration and file management
- TestBackupEndpoints: Configuration backup and restore
- TestDiagnosticsEndpoints: System diagnostics and monitoring
- TestErrorHandling: Error handling and edge cases
Integration Tests (tests/integration/test_api_integration.py)
Integration tests make actual HTTP requests to test the complete request/response cycle:
- TestAuthenticationAPI: Full authentication flow testing
- TestConfigurationAPI: Configuration persistence testing
- TestSeriesAPI: Series data flow testing
- TestDownloadAPI: Download workflow testing
- TestStatusAPI: System status reporting testing
- TestLoggingAPI: Logging system integration testing
- TestBackupAPI: Backup system integration testing
- TestDiagnosticsAPI: Diagnostics system integration testing
API Endpoints Covered
Authentication Endpoints
- POST /api/auth/setup - Initial password setup
- POST /api/auth/login - User authentication
- POST /api/auth/logout - Session termination
- GET /api/auth/status - Authentication status check
Configuration Endpoints
- POST /api/config/directory - Update anime directory
- GET /api/scheduler/config - Get scheduler settings
- POST /api/scheduler/config - Update scheduler settings
- GET /api/config/section/advanced - Get advanced settings
- POST /api/config/section/advanced - Update advanced settings
Series Management Endpoints
- GET /api/series - List all series
- POST /api/search - Search for series online
- POST /api/rescan - Rescan series directory
Download Management Endpoints
- POST /api/download - Start download process
System Status Endpoints
- GET /api/process/locks/status - Get process lock status
- GET /api/status - Get system status
Logging Endpoints
- GET /api/logging/config - Get logging configuration
- POST /api/logging/config - Update logging configuration
- GET /api/logging/files - List log files
- POST /api/logging/test - Test logging functionality
- POST /api/logging/cleanup - Clean up old logs
- GET /api/logging/files/<filename>/tail - Get log file tail
Backup Endpoints
- POST /api/config/backup - Create configuration backup
- GET /api/config/backups - List available backups
- POST /api/config/backup/<filename>/restore - Restore backup
- GET /api/config/backup/<filename>/download - Download backup
Diagnostics Endpoints
- GET /api/diagnostics/network - Network connectivity diagnostics
- GET /api/diagnostics/errors - Get error history
- POST /api/recovery/clear-blacklist - Clear URL blacklist
- GET /api/recovery/retry-counts - Get retry statistics
- GET /api/diagnostics/system-status - Comprehensive system status
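The endpoint catalogue above lends itself to a machine-readable coverage map, similar in spirit to what run_api_tests.py could use for its coverage report. This is an illustrative sketch only — the groupings mirror the sections above, but the actual structure in run_api_tests.py may differ (the map is truncated here for brevity):

```python
# Hypothetical endpoint coverage map; groupings mirror the sections above.
# Truncated for brevity -- the real suite covers all 30 documented endpoints.
API_ENDPOINTS = {
    'auth': [
        ('POST', '/api/auth/setup'),
        ('POST', '/api/auth/login'),
        ('POST', '/api/auth/logout'),
        ('GET', '/api/auth/status'),
    ],
    'config': [
        ('POST', '/api/config/directory'),
        ('GET', '/api/scheduler/config'),
        ('POST', '/api/scheduler/config'),
        ('GET', '/api/config/section/advanced'),
        ('POST', '/api/config/section/advanced'),
    ],
    'series': [
        ('GET', '/api/series'),
        ('POST', '/api/search'),
        ('POST', '/api/rescan'),
    ],
}

def count_endpoints(endpoint_map):
    """Total number of (method, path) pairs across all groups."""
    return sum(len(group) for group in endpoint_map.values())
```

A structure like this keeps the coverage report honest: the report can diff the endpoints the tests actually hit against the declared map.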
Running the Tests
Option 1: Using the Custom Test Runner
cd tests/unit/web
python run_api_tests.py
This runs all tests and generates a comprehensive report including:
- Overall test statistics
- Per-suite breakdown
- API endpoint coverage report
- Recommendations for improvements
- Detailed JSON report file
Option 2: Using unittest
Run unit tests only:
cd tests/unit/web
python -m unittest test_api_endpoints.py -v
Run integration tests only:
cd tests/integration
python -m unittest test_api_integration.py -v
Option 3: Using pytest (if available)
# Run all API tests
pytest tests/ -k "test_api" -v
# Run only unit tests
pytest tests/unit/ -m unit -v
# Run only integration tests
pytest tests/integration/ -m integration -v
# Run only authentication tests
pytest tests/ -m auth -v
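The `-m unit`, `-m integration`, and `-m auth` selections above assume those markers are registered; unregistered marks cause pytest warnings (or errors under `--strict-markers`). A minimal, illustrative pytest.ini — marker names are taken from the commands above, descriptions are assumptions:

```ini
[pytest]
markers =
    unit: fast, isolated unit tests
    integration: tests that exercise the full HTTP request/response cycle
    auth: authentication and session management tests
```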
Test Features
Comprehensive Coverage
- Tests all 30 documented API endpoints
- Covers both success and error scenarios
- Tests authentication and authorization
- Validates JSON request/response formats
- Tests edge cases and input validation
Robust Mocking
- Mocks complex dependencies (series_app, config, session_manager)
- Isolates test cases from external dependencies
- Provides consistent test environment
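The mocking pattern can be demonstrated standalone. In the real tests the patch targets live in src.server.app; here a stand-in class shows the mechanics without importing the application:

```python
from unittest.mock import MagicMock, patch

class FakeApp:
    """Stand-in for the application object; the real tests patch
    attributes of src.server.app instead."""
    series_app = None

def load_series(app):
    # Stand-in for endpoint logic that consults the series backend.
    return app.series_app.get_series_list()

def demo_patch():
    app = FakeApp()
    # Patch the dependency for the duration of the block, exactly as the
    # tests isolate endpoint logic from the real series backend.
    with patch.object(FakeApp, 'series_app', MagicMock()) as mock_backend:
        mock_backend.get_series_list.return_value = ['Test Anime']
        result = load_series(app)
    return result
```

Because `patch.object` restores the original attribute when the `with` block exits, each test leaves the environment clean for the next.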
Detailed Reporting
- Success rate calculations
- Failure categorization
- Endpoint coverage mapping
- Performance recommendations
- JSON report generation for CI/CD
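The shape of such a report might look like the sketch below. The field names are illustrative, not the runner's actual schema:

```python
import json

def build_report(total, failures, errors):
    """Sketch of the summary a test runner could emit for CI/CD.
    Field names here are assumptions, not run_api_tests.py's real schema."""
    passed = total - failures - errors
    return {
        'total': total,
        'passed': passed,
        'failures': failures,
        'errors': errors,
        'success_rate': round(100.0 * passed / total, 1) if total else 0.0,
    }

report = build_report(total=120, failures=3, errors=1)
print(json.dumps(report, indent=2))  # machine-readable for CI/CD pipelines
```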
Error Handling Testing
- Tests API error decorator functionality
- Validates proper HTTP status codes
- Tests authentication error responses
- Tests invalid input handling
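The idea behind the API error decorator can be sketched without Flask. This stand-in is an assumption about the pattern, not the application's actual decorator — it converts exceptions into a `(payload, status_code)` pair so clients see a clean JSON error instead of a traceback:

```python
import functools

def api_error_handler(func):
    """Illustrative stand-in for an API error decorator: map exceptions
    to JSON error payloads with appropriate HTTP status codes."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs), 200
        except ValueError as exc:           # bad client input -> 400
            return {'success': False, 'error': str(exc)}, 400
        except Exception as exc:            # unexpected failure -> 500
            return {'success': False, 'error': str(exc)}, 500
    return wrapper

@api_error_handler
def parse_quality(value):
    # Hypothetical handler used only to exercise the decorator.
    if value not in ('720p', '1080p'):
        raise ValueError('unsupported quality')
    return {'success': True, 'quality': value}
```

Tests then assert on both branches: a valid input yields 200 with `success: True`, an invalid one yields 400 with an error message.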
Mock Data and Fixtures
The tests use various mock objects and fixtures:
Mock Series Data
mock_serie.folder = 'test_anime'
mock_serie.name = 'Test Anime'
mock_serie.episodeDict = {'Season 1': [1, 2, 3, 4, 5]}
Mock Configuration
mock_config.anime_directory = '/test/anime'
mock_config.has_master_password.return_value = True
Mock Session Management
mock_session_manager.sessions = {'session-id': {...}}
mock_session_manager.login.return_value = {'success': True}
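The fixture attributes above can be wrapped in small factory helpers so every test builds mocks the same way. A minimal sketch using `unittest.mock.MagicMock` (helper names are illustrative):

```python
from unittest.mock import MagicMock

def make_mock_serie(folder='test_anime', name='Test Anime'):
    """Build a series mock matching the fixture attributes shown above."""
    serie = MagicMock()
    serie.folder = folder
    serie.name = name
    serie.episodeDict = {'Season 1': [1, 2, 3, 4, 5]}
    return serie

def make_mock_config(anime_directory='/test/anime'):
    """Build a configuration mock with a master password configured."""
    config = MagicMock()
    config.anime_directory = anime_directory
    config.has_master_password.return_value = True
    return config
```

Factories keep the defaults in one place, so a schema change in the real objects means updating one helper instead of dozens of tests.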
Extending the Tests
To add tests for new API endpoints:
- Add Unit Tests: Add test methods to the appropriate test class in test_api_endpoints.py
- Add Integration Tests: Add test methods to the appropriate test class in test_api_integration.py
- Update Coverage: Add the new endpoints to the coverage report in run_api_tests.py
- Add Mock Data: Create appropriate mock objects for the new functionality
Example: Adding a New Endpoint Test
# Assumes the test module already has: import json
# and: from unittest.mock import patch
def test_new_endpoint(self):
    """Test the new API endpoint."""
    test_data = {'param': 'value'}
    with patch('src.server.app.optional_auth', lambda f: f):
        response = self.client.post(
            '/api/new/endpoint',
            data=json.dumps(test_data),
            content_type='application/json'
        )
    self.assertEqual(response.status_code, 200)
    data = json.loads(response.data)
    self.assertTrue(data['success'])
Continuous Integration
The test suite is designed to work in CI/CD environments:
- Returns proper exit codes (0 for success, 1 for failure)
- Generates machine-readable JSON reports
- Provides detailed failure information
- Handles missing dependencies gracefully
- Supports parallel test execution
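The exit-code contract can be made explicit in a small runner sketch. The discovery path and function names here are illustrative — the project drives this through run_api_tests.py:

```python
import sys
import unittest

def exit_code(result: unittest.TestResult) -> int:
    """0 on success, 1 on any failure or error -- the contract CI relies on."""
    return 0 if result.wasSuccessful() else 1

def main():
    # 'tests' as the discovery root is an assumption for illustration.
    suite = unittest.defaultTestLoader.discover('tests')
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    sys.exit(exit_code(result))
```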
Best Practices
- Always test both success and error cases
- Use proper HTTP status codes in assertions
- Validate JSON response structure
- Mock external dependencies consistently
- Add descriptive test names and docstrings
- Test authentication and authorization
- Include edge cases and input validation
- Keep tests independent and isolated
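"Validate JSON response structure" can be centralized in one helper rather than repeated per test. A minimal sketch — the `success`/`error` field names are taken from the examples in this document and should be adjusted to the app's real schema:

```python
def assert_json_success(payload):
    """Minimal structural check for the {'success': ...} response envelope.
    Field names are assumed from the examples in this document."""
    assert isinstance(payload, dict), 'response body must be a JSON object'
    assert 'success' in payload, "missing 'success' field"
    assert payload['success'] is True, payload.get('error', 'request failed')
    return payload
```

Returning the payload lets tests chain further assertions: `data = assert_json_success(json.loads(response.data))`.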
Troubleshooting
Common Issues
- Import Errors: Ensure all paths are correctly added to sys.path
- Mock Failures: Verify mock patches match the actual code structure
- Authentication Issues: Use the provided helper methods for session setup
- JSON Errors: Ensure proper Content-Type headers in requests
Debug Mode
To run tests with additional debug information:
# Add to test setup
import logging
logging.basicConfig(level=logging.DEBUG)
Test Isolation
Each test class uses setUp/tearDown methods to ensure a clean test environment:
def setUp(self):
    """Set up test fixtures."""
    # Initialize mocks and test data

def tearDown(self):
    """Clean up after each test."""
    # Stop patches and clean up resources
Performance Considerations
- Tests use mocks to avoid slow operations
- Integration tests may be slower due to actual HTTP requests
- Consider running unit tests first for faster feedback
- Use test selection markers for focused testing
Security Testing
The test suite includes security-focused tests:
- Authentication bypass attempts
- Invalid session handling
- Input validation testing
- Authorization requirement verification
- Password security validation
This comprehensive test suite ensures the API is robust, secure, and reliable for production use.