Performance Testing Suite

This directory contains performance tests for the Aniworld API and download system.

Test Categories

API Load Testing (test_api_load.py)

Tests API endpoints under concurrent load to ensure acceptable performance:

  • Load Testing: Concurrent requests to endpoints
  • Sustained Load: Long-running load scenarios
  • Concurrency Limits: Maximum connection handling
  • Response Times: Performance benchmarks

Key Metrics:

  • Requests per second (RPS)
  • Average response time
  • Success rate under load
  • Graceful degradation behavior
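
A minimal sketch of such a load measurement, using asyncio and httpx against a locally running instance (the base URL, request count, and endpoint below are illustrative assumptions, not fixtures from this suite):

import asyncio
import time

import httpx
import pytest


@pytest.mark.performance
@pytest.mark.asyncio
async def test_health_endpoint_under_load():
    """Fire 200 concurrent requests at /health and check RPS and success rate."""
    async with httpx.AsyncClient(base_url="http://localhost:8000") as client:
        start = time.perf_counter()
        responses = await asyncio.gather(
            *(client.get("/health") for _ in range(200)),
            return_exceptions=True,
        )
        elapsed = time.perf_counter() - start

    successes = [
        r for r in responses
        if isinstance(r, httpx.Response) and r.status_code == 200
    ]
    rps = len(responses) / elapsed
    success_rate = len(successes) / len(responses) * 100

    assert rps >= 50, f"RPS too low: {rps:.1f}"
    assert success_rate >= 95.0, f"Success rate too low: {success_rate:.1f}%"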

Download Stress Testing (test_download_stress.py)

Tests the download queue and management system under stress:

  • Queue Operations: Concurrent add/remove operations
  • Capacity Testing: Queue behavior at limits
  • Memory Usage: Memory leak detection
  • Concurrency: Multiple simultaneous downloads
  • Error Handling: Recovery from failures

Key Metrics:

  • Queue operation success rate
  • Concurrent download capacity
  • Memory stability
  • Error recovery time
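
The real queue API lives in the application code, so the sketch below uses asyncio.Queue as a stand-in to show the concurrent-addition pattern (names and limits are illustrative only):

import asyncio

import pytest


@pytest.mark.performance
@pytest.mark.asyncio
async def test_concurrent_queue_additions():
    """100 tasks add items concurrently; the success rate should stay high."""
    queue: asyncio.Queue = asyncio.Queue(maxsize=1000)

    async def add_item(i: int) -> bool:
        try:
            await asyncio.wait_for(queue.put({"episode_id": i}), timeout=1.0)
            return True
        except asyncio.TimeoutError:
            return False

    results = await asyncio.gather(*(add_item(i) for i in range(100)))
    success_rate = sum(results) / len(results) * 100

    assert success_rate >= 90.0
    assert queue.qsize() == sum(results)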

Running Performance Tests

Run all performance tests:

conda run -n AniWorld python -m pytest tests/performance/ -v -m performance

Run specific test file:

conda run -n AniWorld python -m pytest tests/performance/test_api_load.py -v

Run with detailed output:

conda run -n AniWorld python -m pytest tests/performance/ -vv -s

Run specific test class:

conda run -n AniWorld python -m pytest \
    tests/performance/test_api_load.py::TestAPILoadTesting -v

Performance Benchmarks

Expected Results

Health Endpoint:

  • RPS: ≥ 50 requests/second
  • Avg Response Time: < 0.1s
  • Success Rate: ≥ 95%

Anime List Endpoint:

  • Avg Response Time: < 1.0s
  • Success Rate: ≥ 90%

Search Endpoint:

  • Avg Response Time: < 2.0s
  • Success Rate: ≥ 85%

Download Queue:

  • Concurrent Additions: Handle 100+ simultaneous adds
  • Queue Capacity: Support 1000+ queued items
  • Operation Success Rate: ≥ 90%
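
One way to keep individual tests aligned with this table is to centralize the expectations in a small lookup plus a shared assertion helper (a suggestion only; the metric key names below are assumptions, not an existing interface):

BENCHMARKS = {
    "health": {"min_rps": 50.0, "max_avg_time": 0.1, "min_success": 95.0},
    "anime_list": {"max_avg_time": 1.0, "min_success": 90.0},
    "search": {"max_avg_time": 2.0, "min_success": 85.0},
    "download_queue": {"min_success": 90.0},
}


def assert_meets_benchmark(name: str, metrics: dict) -> None:
    """Compare measured metrics against the documented expectations."""
    expected = BENCHMARKS[name]
    if "min_rps" in expected:
        assert metrics["rps"] >= expected["min_rps"]
    if "max_avg_time" in expected:
        assert metrics["avg_response_time"] <= expected["max_avg_time"]
    assert metrics["success_rate"] >= expected["min_success"]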

Adding New Performance Tests

When adding new performance tests:

  1. Mark tests with the @pytest.mark.performance decorator
  2. Use @pytest.mark.asyncio for async tests
  3. Include clear performance expectations in assertions
  4. Document expected metrics in docstrings
  5. Use fixtures for setup/teardown

Example:

@pytest.mark.performance
class TestMyFeature:
    @pytest.mark.asyncio
    async def test_under_load(self, client):
        \"\"\"Test feature under load.\"\"\"
        # Your test implementation
        metrics = await measure_performance(...)
        assert metrics["success_rate"] >= 95.0
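
The example assumes a measure_performance helper; this README does not define one, but a minimal version could look like the following (the signature and returned keys are an assumption; adapt them to whatever shared helper the suite actually uses):

import asyncio
import time
from typing import Awaitable, Callable


async def measure_performance(request: Callable[[], Awaitable[bool]], total: int = 100) -> dict:
    """Run `total` concurrent requests and return basic load metrics."""

    async def timed():
        t0 = time.perf_counter()
        ok = await request()
        return ok, time.perf_counter() - t0

    start = time.perf_counter()
    results = await asyncio.gather(*(timed() for _ in range(total)))
    wall = time.perf_counter() - start
    latencies = [t for _, t in results]
    return {
        "rps": total / wall,
        "avg_response_time": sum(latencies) / total,
        "success_rate": sum(1 for ok, _ in results if ok) / total * 100,
    }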

Continuous Performance Monitoring

These tests should be run:

  • Before each release
  • After significant changes to API or download system
  • As part of CI/CD pipeline (if resources permit)
  • Weekly as part of regression testing

Troubleshooting

Tests time out:

  • Increase the timeout in pytest.ini
  • Check system resources (CPU, memory)
  • Verify no other heavy processes are running

Low success rates:

  • Check application logs for errors
  • Verify database connectivity
  • Ensure sufficient system resources
  • Check for rate limiting issues

Inconsistent results:

  • Run tests multiple times
  • Check for background processes
  • Verify stable network connection
  • Consider running on dedicated test hardware

Performance Optimization Tips

Based on test results, consider:

  1. Caching: Add caching for frequently accessed data
  2. Connection Pooling: Optimize database connections
  3. Async Processing: Use async/await for I/O operations
  4. Load Balancing: Distribute load across multiple workers
  5. Rate Limiting: Implement rate limiting to prevent overload
  6. Query Optimization: Optimize database queries
  7. Resource Limits: Set appropriate resource limits
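
As a sketch of tip 1, a small in-memory TTL cache can sit in front of a slow lookup; the application may already ship its own caching layer, so treat this purely as an illustration:

import time
from typing import Any, Callable

_cache: dict = {}


def cached(key: str, ttl: float, loader: Callable[[], Any]) -> Any:
    """Return the cached value for `key`, reloading it once `ttl` seconds have passed."""
    now = time.time()
    hit = _cache.get(key)
    if hit is not None and now - hit[0] < ttl:
        return hit[1]
    value = loader()
    _cache[key] = (now, value)
    return value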

Integration with CI/CD

To include in CI/CD pipeline:

# Example step in a GitHub Actions workflow
- name: Run Performance Tests
  run: |
      conda run -n AniWorld python -m pytest \
        tests/performance/ \
        -v \
        -m performance \
        --tb=short

References