refactoring
This commit is contained in:
# API Test Documentation

This document describes the comprehensive API test suite for the Aniworld Flask application.

## Overview

The test suite provides complete coverage for all API endpoints in the application, including:

- Authentication and session management
- Configuration management
- Series management and search
- Download operations
- System status and monitoring
- Logging and diagnostics
- Backup operations
- Error handling and recovery
## Test Structure

### Unit Tests (`tests/unit/web/test_api_endpoints.py`)

Unit tests focus on testing individual API endpoint logic in isolation using mocks:

- **TestAuthenticationEndpoints**: Authentication and session management
- **TestConfigurationEndpoints**: Configuration CRUD operations
- **TestSeriesEndpoints**: Series listing, search, and scanning
- **TestDownloadEndpoints**: Download management
- **TestProcessManagementEndpoints**: Process locks and status
- **TestLoggingEndpoints**: Logging configuration and file management
- **TestBackupEndpoints**: Configuration backup and restore
- **TestDiagnosticsEndpoints**: System diagnostics and monitoring
- **TestErrorHandling**: Error handling and edge cases
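Since the real endpoints depend on Flask wiring, the unit-test pattern can be sketched against a stand-in: the `list_series_handler` function below and its `(status, body)` shape are illustrative, not part of the application, but the mocking approach mirrors what the classes above do.

```python
import json
import unittest
from unittest.mock import MagicMock

# Hypothetical handler standing in for an API view function: it asks a
# collaborator for data and returns (status, JSON body) as a view would.
def list_series_handler(series_app):
    try:
        names = [s.name for s in series_app.get_all()]
        return 200, json.dumps({'success': True, 'series': names})
    except Exception as exc:
        # Mirrors the error-decorator idea: failures become a 500 payload.
        return 500, json.dumps({'success': False, 'error': str(exc)})

class TestSeriesHandler(unittest.TestCase):
    def setUp(self):
        # The collaborator is mocked, so no file system or network is touched.
        self.series_app = MagicMock()

    def test_success(self):
        serie = MagicMock()
        serie.name = 'Test Anime'
        self.series_app.get_all.return_value = [serie]
        status, body = list_series_handler(self.series_app)
        self.assertEqual(status, 200)
        self.assertEqual(json.loads(body)['series'], ['Test Anime'])

    def test_error(self):
        self.series_app.get_all.side_effect = RuntimeError('scan failed')
        status, body = list_series_handler(self.series_app)
        self.assertEqual(status, 500)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestSeriesHandler)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Both the success and the error path are asserted, matching the suite's practice of covering both cases per endpoint.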
### Integration Tests (`tests/integration/test_api_integration.py`)

Integration tests make actual HTTP requests to exercise the complete request/response cycle:

- **TestAuthenticationAPI**: Full authentication flow testing
- **TestConfigurationAPI**: Configuration persistence testing
- **TestSeriesAPI**: Series data flow testing
- **TestDownloadAPI**: Download workflow testing
- **TestStatusAPI**: System status reporting testing
- **TestLoggingAPI**: Logging system integration testing
- **TestBackupAPI**: Backup system integration testing
- **TestDiagnosticsAPI**: Diagnostics system integration testing
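What "complete request/response cycle" means can be illustrated without the real application: the tiny WSGI app, the `request` helper, and the `/api/status` payload below are all stand-ins, but the flow — build a request environment, invoke the app, parse the JSON response — is the same one the integration tests drive.

```python
import json
from wsgiref.util import setup_testing_defaults

# A minimal WSGI app standing in for the Flask application: one endpoint
# that reports status, loosely like /api/status.
def app(environ, start_response):
    body = json.dumps({'success': True, 'path': environ['PATH_INFO']}).encode()
    start_response('200 OK', [('Content-Type', 'application/json')])
    return [body]

def request(wsgi_app, path):
    """Drive a full request/response cycle without a network socket."""
    environ = {}
    setup_testing_defaults(environ)  # fills in required WSGI keys
    environ['PATH_INFO'] = path
    captured = {}

    def start_response(status, headers):
        captured['status'] = status
        captured['headers'] = headers

    chunks = b''.join(wsgi_app(environ, start_response))
    return captured['status'], json.loads(chunks)

status, data = request(app, '/api/status')
```

Flask's `test_client()` does the same job with more convenience; the point is that the whole stack runs, not just the handler's logic.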
## API Endpoints Covered

### Authentication Endpoints
- `POST /api/auth/setup` - Initial password setup
- `POST /api/auth/login` - User authentication
- `POST /api/auth/logout` - Session termination
- `GET /api/auth/status` - Authentication status check

### Configuration Endpoints
- `POST /api/config/directory` - Update anime directory
- `GET /api/scheduler/config` - Get scheduler settings
- `POST /api/scheduler/config` - Update scheduler settings
- `GET /api/config/section/advanced` - Get advanced settings
- `POST /api/config/section/advanced` - Update advanced settings

### Series Management Endpoints
- `GET /api/series` - List all series
- `POST /api/search` - Search for series online
- `POST /api/rescan` - Rescan series directory

### Download Management Endpoints
- `POST /api/download` - Start download process

### System Status Endpoints
- `GET /api/process/locks/status` - Get process lock status
- `GET /api/status` - Get system status

### Logging Endpoints
- `GET /api/logging/config` - Get logging configuration
- `POST /api/logging/config` - Update logging configuration
- `GET /api/logging/files` - List log files
- `POST /api/logging/test` - Test logging functionality
- `POST /api/logging/cleanup` - Clean up old logs
- `GET /api/logging/files/<filename>/tail` - Get log file tail

### Backup Endpoints
- `POST /api/config/backup` - Create configuration backup
- `GET /api/config/backups` - List available backups
- `POST /api/config/backup/<filename>/restore` - Restore backup
- `GET /api/config/backup/<filename>/download` - Download backup

### Diagnostics Endpoints
- `GET /api/diagnostics/network` - Network connectivity diagnostics
- `GET /api/diagnostics/errors` - Get error history
- `POST /api/recovery/clear-blacklist` - Clear URL blacklist
- `GET /api/recovery/retry-counts` - Get retry statistics
- `GET /api/diagnostics/system-status` - Comprehensive system status
## Running the Tests

### Option 1: Using the Custom Test Runner

```bash
cd tests/unit/web
python run_api_tests.py
```

This runs all tests and generates a comprehensive report including:
- Overall test statistics
- Per-suite breakdown
- API endpoint coverage report
- Recommendations for improvements
- Detailed JSON report file
### Option 2: Using unittest

Run unit tests only:
```bash
cd tests/unit/web
python -m unittest test_api_endpoints.py -v
```

Run integration tests only:
```bash
cd tests/integration
python -m unittest test_api_integration.py -v
```
### Option 3: Using pytest (if available)

```bash
# Run all API tests
pytest tests/ -k "test_api" -v

# Run only unit tests
pytest tests/unit/ -m unit -v

# Run only integration tests
pytest tests/integration/ -m integration -v

# Run only authentication tests
pytest tests/ -m auth -v
```
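The `-m unit` / `-m integration` / `-m auth` selections assume those markers are registered, or pytest will warn about unknown marks. One sketch of how a `conftest.py` might register them (the marker descriptions are illustrative):

```python
# conftest.py (sketch): register the custom markers used with `pytest -m`.
MARKERS = (
    'unit: fast, mock-based unit tests',
    'integration: tests that issue real HTTP requests',
    'auth: authentication and session tests',
)

def pytest_configure(config):
    # pytest calls this hook once at startup with its Config object.
    for marker in MARKERS:
        config.addinivalue_line('markers', marker)

# Exercise the hook with a stand-in for pytest's Config object so the
# sketch can be checked without pytest installed.
class FakeConfig:
    def __init__(self):
        self.lines = []

    def addinivalue_line(self, key, value):
        self.lines.append((key, value))

cfg = FakeConfig()
pytest_configure(cfg)
```

The same markers can equivalently be declared under `[pytest]` / `markers =` in `pytest.ini`.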
## Test Features

### Comprehensive Coverage
- Tests all 29+ API endpoints
- Covers both success and error scenarios
- Tests authentication and authorization
- Validates JSON request/response formats
- Tests edge cases and input validation

### Robust Mocking
- Mocks complex dependencies (series_app, config, session_manager)
- Isolates test cases from external dependencies
- Provides a consistent test environment

### Detailed Reporting
- Success rate calculations
- Failure categorization
- Endpoint coverage mapping
- Performance recommendations
- JSON report generation for CI/CD

### Error Handling Testing
- Tests API error decorator functionality
- Validates proper HTTP status codes
- Tests authentication error responses
- Tests invalid input handling
## Mock Data and Fixtures

The tests use various mock objects and fixtures:

### Mock Series Data
```python
mock_serie.folder = 'test_anime'
mock_serie.name = 'Test Anime'
mock_serie.episodeDict = {'Season 1': [1, 2, 3, 4, 5]}
```

### Mock Configuration
```python
mock_config.anime_directory = '/test/anime'
mock_config.has_master_password.return_value = True
```

### Mock Session Management
```python
mock_session_manager.sessions = {'session-id': {...}}
mock_session_manager.login.return_value = {'success': True}
```
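The fragments above configure plain `MagicMock` objects; made self-contained, they behave like this:

```python
from unittest.mock import MagicMock

# Recreate the mock series object from the fragment above.
mock_serie = MagicMock()
mock_serie.folder = 'test_anime'
mock_serie.name = 'Test Anime'
mock_serie.episodeDict = {'Season 1': [1, 2, 3, 4, 5]}

# Configured attributes read back exactly as set.
episode_count = len(mock_serie.episodeDict['Season 1'])

# return_value fixes what a mocked method reports when called.
mock_config = MagicMock()
mock_config.has_master_password.return_value = True
password_set = mock_config.has_master_password()
```

Any attribute or method the test did not configure still resolves to a fresh `MagicMock`, which is why code under test does not crash on attributes the fixture did not anticipate.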
## Extending the Tests

To add tests for new API endpoints:

1. **Add Unit Tests**: Add test methods to the appropriate test class in `test_api_endpoints.py`
2. **Add Integration Tests**: Add test methods to the appropriate test class in `test_api_integration.py`
3. **Update Coverage**: Add new endpoints to the coverage report in `run_api_tests.py`
4. **Add Mock Data**: Create appropriate mock objects for the new functionality
### Example: Adding a New Endpoint Test

```python
def test_new_endpoint(self):
    """Test the new API endpoint."""
    test_data = {'param': 'value'}

    with patch('src.server.app.optional_auth', lambda f: f):
        response = self.client.post(
            '/api/new/endpoint',
            data=json.dumps(test_data),
            content_type='application/json'
        )

    self.assertEqual(response.status_code, 200)
    data = json.loads(response.data)
    self.assertTrue(data['success'])
```
## Continuous Integration

The test suite is designed to work in CI/CD environments:

- Returns proper exit codes (0 for success, 1 for failure)
- Generates machine-readable JSON reports
- Provides detailed failure information
- Handles missing dependencies gracefully
- Supports parallel test execution
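The exit-code contract in the first bullet can be reproduced with stdlib unittest alone; a minimal sketch:

```python
import unittest

class PassingTests(unittest.TestCase):
    def test_ok(self):
        self.assertTrue(True)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(PassingTests)
result = unittest.TextTestRunner(verbosity=0).run(suite)

# Mirror the runner's contract: exit code 0 on success, 1 on any
# failure or error, which is what CI systems key off.
exit_code = 0 if result.wasSuccessful() else 1
```

A runner's `main()` would finish with `sys.exit(exit_code)` so the CI job fails when any test does.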
## Best Practices

1. **Always test both success and error cases**
2. **Use proper HTTP status codes in assertions**
3. **Validate JSON response structure**
4. **Mock external dependencies consistently**
5. **Add descriptive test names and docstrings**
6. **Test authentication and authorization**
7. **Include edge cases and input validation**
8. **Keep tests independent and isolated**
## Troubleshooting

### Common Issues

1. **Import Errors**: Ensure all paths are correctly added to `sys.path`
2. **Mock Failures**: Verify mock patches match the actual code structure
3. **Authentication Issues**: Use the provided helper methods for session setup
4. **JSON Errors**: Ensure proper Content-Type headers in requests
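For the import-error case, a sketch of the usual fix is pushing the project root onto `sys.path`; the two-levels-up layout here is an assumption mirroring the repository's test tree, not a confirmed detail.

```python
import os
import sys

def add_project_root(start_dir, levels=2):
    """Insert the directory `levels` above start_dir at the front of
    sys.path so `from src.server...` style imports resolve regardless
    of where the test runner was started."""
    root = os.path.abspath(os.path.join(start_dir, *(['..'] * levels)))
    if root not in sys.path:
        sys.path.insert(0, root)
    return root

# A real test module would pass os.path.dirname(__file__) here.
root = add_project_root(os.getcwd())
```

Inserting at position 0 ensures the project's own packages shadow any same-named modules elsewhere on the path.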
### Debug Mode

To run tests with additional debug information:

```python
# Add to test setup
import logging
logging.basicConfig(level=logging.DEBUG)
```
### Test Isolation

Each test class uses setUp/tearDown methods to ensure a clean test environment:

```python
def setUp(self):
    """Set up test fixtures."""
    # Initialize mocks and test data

def tearDown(self):
    """Clean up after test."""
    # Stop patches and clean resources
```
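A common way to implement that isolation is to start patches in `setUp` and register `patcher.stop` with `addCleanup`, which runs even when a test raises. The `StatusService` class below is a stand-in for a real dependency.

```python
import unittest
from unittest.mock import patch

class StatusService:
    # Stand-in for a dependency an endpoint would call.
    def get_status(self):
        return 'real'

class TestWithIsolation(unittest.TestCase):
    def setUp(self):
        patcher = patch.object(StatusService, 'get_status', return_value='mocked')
        self.mock_status = patcher.start()
        # addCleanup guarantees the patch is undone even if the test
        # fails, so later tests see the unpatched class.
        self.addCleanup(patcher.stop)

    def test_patched(self):
        self.assertEqual(StatusService().get_status(), 'mocked')

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestWithIsolation)
result = unittest.TextTestRunner(verbosity=0).run(suite)

# After the run, the patch has been unwound.
unpatched = StatusService().get_status()
```

Compared with pairing `patcher.start()` in `setUp` with `patcher.stop()` in `tearDown`, `addCleanup` removes the risk of a forgotten or skipped stop leaking state between tests.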
## Performance Considerations

- Tests use mocks to avoid slow operations
- Integration tests may be slower due to actual HTTP requests
- Consider running unit tests first for faster feedback
- Use test selection markers for focused testing
## Security Testing

The test suite includes security-focused tests:

- Authentication bypass attempts
- Invalid session handling
- Input validation testing
- Authorization requirement verification
- Password security validation
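The input-validation angle can be sketched with a self-contained example; the `validate_login_payload` helper and its rules are illustrative, not the application's actual validator.

```python
import json
import unittest

# Hypothetical validation helper mirroring the kind of checks the
# security tests exercise: reject malformed JSON and missing fields.
def validate_login_payload(raw_body):
    try:
        payload = json.loads(raw_body)
    except (TypeError, ValueError):
        return False, 'invalid JSON'
    password = payload.get('password')
    if not isinstance(password, str) or not password:
        return False, 'password required'
    return True, None

class TestLoginValidation(unittest.TestCase):
    def test_rejects_malformed_json(self):
        ok, error = validate_login_payload('{not json')
        self.assertFalse(ok)

    def test_rejects_missing_password(self):
        ok, error = validate_login_payload(json.dumps({'user': 'x'}))
        self.assertFalse(ok)

    def test_accepts_valid_payload(self):
        ok, error = validate_login_payload(json.dumps({'password': 's3cret'}))
        self.assertTrue(ok)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestLoginValidation)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

The pattern generalizes: every rejection rule gets its own test, and the happy path is asserted last.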
This comprehensive test suite ensures the API is robust, secure, and reliable for production use.
```diff
@@ -23,7 +23,7 @@ sys.path.insert(0, os.path.join(os.path.dirname(__file__), '..', '..'))
 # Import core modules
 from src.server.core.entities.series import Serie
 from src.server.core.entities.SerieList import SerieList
-from src.server.infrastructure.file_system.SerieScanner import SerieScanner
+from src.server.core.SerieScanner import SerieScanner
 # TODO: Fix imports - these modules may not exist or may be in different locations
 # from database_manager import DatabaseManager, AnimeMetadata, EpisodeMetadata, BackupManager
 # from error_handler import ErrorRecoveryManager, RetryMechanism, NetworkHealthChecker
```