- Created QueueRepository adapter in src/server/services/queue_repository.py
- Refactored DownloadService to use repository pattern instead of JSON
- Updated application startup to initialize download service from database
- Updated all test fixtures to use MockQueueRepository
- All 1104 tests passing
Performance Testing Suite
This directory contains performance tests for the AniWorld API and download system.
Test Categories
API Load Testing (test_api_load.py)
Tests API endpoints under concurrent load to ensure acceptable performance:
- Load Testing: Concurrent requests to endpoints
- Sustained Load: Long-running load scenarios
- Concurrency Limits: Maximum connection handling
- Response Times: Performance benchmarks
Key Metrics (see the measurement sketch after this list):
- Requests per second (RPS)
- Average response time
- Success rate under load
- Graceful degradation behavior
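A minimal sketch of how these metrics can be collected, assuming httpx as the async HTTP client (the client library, URL, and concurrency values are illustrative, not fixed parts of the suite):

```python
import asyncio
import time

import httpx


async def measure_load(url: str, total_requests: int, concurrency: int) -> dict:
    """Send GET requests with bounded concurrency and collect basic load metrics."""
    semaphore = asyncio.Semaphore(concurrency)
    durations: list[float] = []
    successes = 0

    async with httpx.AsyncClient() as client:

        async def one_request() -> None:
            nonlocal successes
            async with semaphore:
                start = time.perf_counter()
                try:
                    response = await client.get(url, timeout=5.0)
                    if response.status_code == 200:
                        successes += 1
                finally:
                    durations.append(time.perf_counter() - start)

        wall_start = time.perf_counter()
        await asyncio.gather(*(one_request() for _ in range(total_requests)))
        wall_time = time.perf_counter() - wall_start

    return {
        "rps": total_requests / wall_time,
        "avg_response_time": sum(durations) / len(durations),
        "success_rate": successes / total_requests * 100,
    }
```

A test would await this helper against a running instance and assert on the returned dictionary.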
Download Stress Testing (test_download_stress.py)
Tests the download queue and management system under stress:
- Queue Operations: Concurrent add/remove operations
- Capacity Testing: Queue behavior at limits
- Memory Usage: Memory leak detection
- Concurrency: Multiple simultaneous downloads
- Error Handling: Recovery from failures
Key Metrics (see the example test after this list):
- Queue operation success rate
- Concurrent download capacity
- Memory stability
- Error recovery time
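A minimal sketch of a concurrent-addition stress test; the download_queue fixture and its add() method are illustrative assumptions, since the real queue interface in the suite may differ:

```python
import asyncio

import pytest


@pytest.mark.performance
@pytest.mark.asyncio
async def test_concurrent_queue_additions(download_queue):
    """100 simultaneous additions should succeed at a rate of at least 90%."""

    async def add_one(index: int) -> bool:
        try:
            # Hypothetical queue API; replace with the project's actual add method.
            await download_queue.add(f"episode-{index}")
            return True
        except Exception:
            return False

    results = await asyncio.gather(*(add_one(i) for i in range(100)))
    success_rate = sum(results) / len(results) * 100
    assert success_rate >= 90.0
```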
Running Performance Tests
Run all performance tests:
conda run -n AniWorld python -m pytest tests/performance/ -v -m performance
Run specific test file:
conda run -n AniWorld python -m pytest tests/performance/test_api_load.py -v
Run with detailed output:
conda run -n AniWorld python -m pytest tests/performance/ -vv -s
Run specific test class:
conda run -n AniWorld python -m pytest \
tests/performance/test_api_load.py::TestAPILoadTesting -v
Performance Benchmarks
Expected Results
Health Endpoint:
- RPS: ≥ 50 requests/second
- Avg Response Time: < 0.1s
- Success Rate: ≥ 95%
Anime List Endpoint:
- Avg Response Time: < 1.0s
- Success Rate: ≥ 90%
Search Endpoint:
- Avg Response Time: < 2.0s
- Success Rate: ≥ 85%
Download Queue:
- Concurrent Additions: Handle 100+ simultaneous adds
- Queue Capacity: Support 1000+ queued items
- Operation Success Rate: ≥ 90%
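One way to keep these expectations consistent across tests is a shared threshold table that assertions reference; the names below are an illustrative sketch, not existing module contents:

```python
# Illustrative threshold table mirroring the expected results above.
BENCHMARKS = {
    "health": {"min_rps": 50.0, "max_avg_time": 0.1, "min_success_rate": 95.0},
    "anime_list": {"max_avg_time": 1.0, "min_success_rate": 90.0},
    "search": {"max_avg_time": 2.0, "min_success_rate": 85.0},
}


def assert_meets_benchmark(metrics: dict, endpoint: str) -> None:
    """Fail the test if measured metrics fall short of the expected results."""
    expected = BENCHMARKS[endpoint]
    if "min_rps" in expected:
        assert metrics["rps"] >= expected["min_rps"]
    if "max_avg_time" in expected:
        assert metrics["avg_response_time"] <= expected["max_avg_time"]
    assert metrics["success_rate"] >= expected["min_success_rate"]
```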
Adding New Performance Tests
When adding new performance tests:
- Mark tests with the @pytest.mark.performance decorator
- Use @pytest.mark.asyncio for async tests
- Include clear performance expectations in assertions
- Document expected metrics in docstrings
- Use fixtures for setup/teardown
Example:
import pytest


@pytest.mark.performance
class TestMyFeature:

    @pytest.mark.asyncio
    async def test_under_load(self, client):
        """Test feature under load."""
        # Your test implementation
        metrics = await measure_performance(...)
        assert metrics["success_rate"] >= 95.0
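The measure_performance helper referenced above is not defined in this README; if the suite does not already provide one, a minimal version could wrap repeated calls to an async operation and report a success rate (the name, signature, and returned keys are an assumption for illustration):

```python
import time
from typing import Awaitable, Callable


async def measure_performance(operation: Callable[[], Awaitable[bool]], runs: int = 50) -> dict:
    """Run an async operation repeatedly and report success rate and average duration."""
    durations: list[float] = []
    successes = 0
    for _ in range(runs):
        start = time.perf_counter()
        if await operation():
            successes += 1
        durations.append(time.perf_counter() - start)
    return {
        "success_rate": successes / runs * 100,
        "avg_duration": sum(durations) / len(durations),
    }
```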
Continuous Performance Monitoring
These tests should be run:
- Before each release
- After significant changes to API or download system
- As part of CI/CD pipeline (if resources permit)
- Weekly as part of regression testing
Troubleshooting
Tests timeout:
- Increase timeout in pytest.ini
- Check system resources (CPU, memory)
- Verify no other heavy processes running
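As an alternative to raising the suite-wide timeout, individual long-running tests can carry their own limit if the pytest-timeout plugin is installed (the plugin is an assumption, not a documented dependency of this suite):

```python
import asyncio

import pytest


@pytest.mark.performance
@pytest.mark.timeout(300)  # per-test limit in seconds; requires the pytest-timeout plugin
@pytest.mark.asyncio
async def test_sustained_load_has_time_to_finish():
    # Placeholder body standing in for a long-running load scenario.
    await asyncio.sleep(0)
```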
Low success rates:
- Check application logs for errors
- Verify database connectivity
- Ensure sufficient system resources
- Check for rate limiting issues
Inconsistent results:
- Run tests multiple times
- Check for background processes
- Verify stable network connection
- Consider running on dedicated test hardware
Performance Optimization Tips
Based on test results, consider:
- Caching: Add caching for frequently accessed data
- Connection Pooling: Optimize database connections
- Async Processing: Use async/await for I/O operations
- Load Balancing: Distribute load across multiple workers
- Rate Limiting: Implement rate limiting to prevent overload
- Query Optimization: Optimize database queries
- Resource Limits: Set appropriate resource limits
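As one example of the caching suggestion above, frequently requested data with a bounded staleness tolerance can be memoized in process; this is a generic sketch, not code from the project:

```python
import time
from typing import Any, Callable


def ttl_cache(ttl_seconds: float) -> Callable:
    """Cache a zero-argument function's result for a limited time."""

    def decorator(func: Callable[[], Any]) -> Callable[[], Any]:
        cached: dict[str, Any] = {}

        def wrapper() -> Any:
            now = time.monotonic()
            if "value" not in cached or now - cached["at"] > ttl_seconds:
                cached["value"] = func()
                cached["at"] = now
            return cached["value"]

        return wrapper

    return decorator


@ttl_cache(ttl_seconds=30.0)
def load_anime_list() -> list[str]:
    # Stand-in for an expensive database or network call.
    return ["example-title"]
```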
Integration with CI/CD
To include these tests in a CI/CD pipeline:
# Example GitHub Actions workflow
- name: Run Performance Tests
  run: |
    conda run -n AniWorld python -m pytest \
      tests/performance/ \
      -v \
      -m performance \
      --tb=short