Performance Testing Suite

This directory contains performance tests for the AniWorld API and download system.

Test Categories

API Load Testing (test_api_load.py)

Tests API endpoints under concurrent load to ensure acceptable performance:

  • Load Testing: Concurrent requests to endpoints
  • Sustained Load: Long-running load scenarios
  • Concurrency Limits: Maximum connection handling
  • Response Times: Performance benchmarks

Key Metrics:

  • Requests per second (RPS)
  • Average response time
  • Success rate under load
  • Graceful degradation behavior
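
The sketch below shows one way these metrics can be collected with httpx and
pytest-asyncio. It is illustrative rather than copied from test_api_load.py;
the base URL and the /api/health route are assumptions:

import asyncio
import time

import httpx
import pytest


@pytest.mark.performance
@pytest.mark.asyncio
async def test_health_endpoint_load():
    """Send 100 concurrent requests and derive RPS, latency, and success rate."""
    async def fetch(client: httpx.AsyncClient):
        started = time.perf_counter()
        response = await client.get("/api/health")  # assumed route
        return response.status_code, time.perf_counter() - started

    async with httpx.AsyncClient(base_url="http://localhost:8000", timeout=10.0) as client:
        wall_start = time.perf_counter()
        results = await asyncio.gather(*(fetch(client) for _ in range(100)))
        wall_time = time.perf_counter() - wall_start

    status_codes, durations = zip(*results)
    rps = len(results) / wall_time
    avg_response_time = sum(durations) / len(durations)
    success_rate = 100.0 * sum(code == 200 for code in status_codes) / len(status_codes)

    assert rps >= 50
    assert avg_response_time < 0.1
    assert success_rate >= 95.0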

Download Stress Testing (test_download_stress.py)

Tests the download queue and management system under stress:

  • Queue Operations: Concurrent add/remove operations
  • Capacity Testing: Queue behavior at limits
  • Memory Usage: Memory leak detection
  • Concurrency: Multiple simultaneous downloads
  • Error Handling: Recovery from failures

Key Metrics:

  • Queue operation success rate
  • Concurrent download capacity
  • Memory stability
  • Error recovery time
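
A minimal sketch of the measurement pattern for concurrent queue operations,
using asyncio.Queue as a stand-in; the real download queue API in the project
will differ:

import asyncio

import pytest


@pytest.mark.performance
@pytest.mark.asyncio
async def test_concurrent_queue_additions():
    """100 concurrent adds should succeed at a rate of at least 90%."""
    queue: asyncio.Queue = asyncio.Queue(maxsize=1000)
    failures = 0

    async def add_item(item_id: int) -> None:
        nonlocal failures
        try:
            # A bounded put models a queue that rejects work at capacity.
            await asyncio.wait_for(queue.put({"episode": item_id}), timeout=1.0)
        except asyncio.TimeoutError:
            failures += 1

    await asyncio.gather(*(add_item(i) for i in range(100)))

    success_rate = 100.0 * (100 - failures) / 100
    assert success_rate >= 90.0
    assert queue.qsize() == 100 - failures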

Running Performance Tests

Run all performance tests:

conda run -n AniWorld python -m pytest tests/performance/ -v -m performance

Run specific test file:

conda run -n AniWorld python -m pytest tests/performance/test_api_load.py -v

Run with detailed output:

conda run -n AniWorld python -m pytest tests/performance/ -vv -s

Run specific test class:

conda run -n AniWorld python -m pytest \
    tests/performance/test_api_load.py::TestAPILoadTesting -v

Performance Benchmarks

Expected Results

Health Endpoint:

  • RPS: ≥ 50 requests/second
  • Avg Response Time: < 0.1s
  • Success Rate: ≥ 95%

Anime List Endpoint:

  • Avg Response Time: < 1.0s
  • Success Rate: ≥ 90%

Search Endpoint:

  • Avg Response Time: < 2.0s
  • Success Rate: ≥ 85%

Download Queue:

  • Concurrent Additions: Handle 100+ simultaneous adds
  • Queue Capacity: Support 1000+ queued items
  • Operation Success Rate: ≥ 90%
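
If you prefer to keep these numbers next to the code instead of only in this
README, they can live in a small constants module that tests import; the module
path and key names below are illustrative, not existing project API:

# tests/performance/benchmarks.py (illustrative)
BENCHMARKS = {
    "health": {"min_rps": 50, "max_avg_response": 0.1, "min_success_rate": 95.0},
    "anime_list": {"max_avg_response": 1.0, "min_success_rate": 90.0},
    "search": {"max_avg_response": 2.0, "min_success_rate": 85.0},
    "download_queue": {
        "min_concurrent_adds": 100,
        "min_queue_capacity": 1000,
        "min_success_rate": 90.0,
    },
}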

Adding New Performance Tests

When adding new performance tests:

  1. Mark tests with the @pytest.mark.performance decorator
  2. Use @pytest.mark.asyncio for async tests
  3. Include clear performance expectations in assertions
  4. Document expected metrics in docstrings
  5. Use fixtures for setup/teardown

Example:

@pytest.mark.performance
class TestMyFeature:
    @pytest.mark.asyncio
    async def test_under_load(self, client):
        \"\"\"Test feature under load.\"\"\"
        # Your test implementation
        metrics = await measure_performance(...)
        assert metrics["success_rate"] >= 95.0
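
The client fixture and measure_performance helper are assumed to exist in the
suite; if you need a self-contained starting point, a helper along these lines
computes the same kind of metrics (the signature and returned keys are
illustrative, not the project's actual API):

import asyncio
import time
from typing import Awaitable, Callable, Dict


async def measure_performance(operation: Callable[[], Awaitable[bool]],
                              concurrency: int = 50) -> Dict[str, float]:
    """Run `operation` concurrently and summarise success rate, latency, and RPS."""
    async def timed_call():
        started = time.perf_counter()
        ok = await operation()
        return ok, time.perf_counter() - started

    wall_start = time.perf_counter()
    results = await asyncio.gather(*(timed_call() for _ in range(concurrency)))
    wall_time = time.perf_counter() - wall_start

    successes = [ok for ok, _ in results]
    durations = [elapsed for _, elapsed in results]
    return {
        "success_rate": 100.0 * sum(successes) / len(successes),
        "avg_response_time": sum(durations) / len(durations),
        "requests_per_second": len(results) / wall_time,
    }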

Continuous Performance Monitoring

These tests should be run:

  • Before each release
  • After significant changes to API or download system
  • As part of the CI/CD pipeline (if resources permit)
  • Weekly as part of regression testing

Troubleshooting

Tests time out:

  • Increase the timeout in pytest.ini (see the snippet after this list)
  • Check system resources (CPU, memory)
  • Verify no other heavy processes are running
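
For the first item, the exact setting depends on how timeouts are enforced;
with the pytest-timeout plugin, for example, the per-test limit can be raised
in pytest.ini (the value below is illustrative):

[pytest]
# Requires the pytest-timeout plugin; raise the per-test limit as needed.
timeout = 300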

Low success rates:

  • Check application logs for errors
  • Verify database connectivity
  • Ensure sufficient system resources
  • Check for rate limiting issues

Inconsistent results:

  • Run tests multiple times
  • Check for background processes
  • Verify stable network connection
  • Consider running on dedicated test hardware

Performance Optimization Tips

Based on test results, consider:

  1. Caching: Add caching for frequently accessed data (see the sketch after this list)
  2. Connection Pooling: Optimize database connections
  3. Async Processing: Use async/await for I/O operations
  4. Load Balancing: Distribute load across multiple workers
  5. Rate Limiting: Implement rate limiting to prevent overload
  6. Query Optimization: Optimize database queries
  7. Resource Limits: Set appropriate resource limits
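
As a concrete illustration of tip 1, even a small TTL cache in front of an
expensive fetch changes the load profile of the anime list endpoint; the
function names and TTL below are illustrative, not part of the project:

import time
from typing import Any, Awaitable, Callable, Dict, Tuple

_cache: Dict[str, Tuple[float, Any]] = {}
_TTL_SECONDS = 60.0


async def cached_anime_list(fetch_anime_list: Callable[[], Awaitable[Any]]) -> Any:
    """Return the anime list from a small TTL cache, refetching when it expires."""
    now = time.monotonic()
    entry = _cache.get("anime_list")
    if entry is not None and now - entry[0] < _TTL_SECONDS:
        return entry[1]
    data = await fetch_anime_list()  # the expensive call being cached
    _cache["anime_list"] = (now, data)
    return data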

Integration with CI/CD

To include these tests in the CI/CD pipeline:

# Example GitHub Actions workflow
- name: Run Performance Tests
  run: |
      conda run -n AniWorld python -m pytest \
        tests/performance/ \
        -v \
        -m performance \
        --tb=short

References