📋 General Instructions
Overview
This document lists all failed tests identified during the test run on October 19, 2025. Each test failure needs to be investigated and resolved. The failures are categorized by module/area for easier resolution.
Important Guidelines
1. Double-Check Before Fixing
- Always verify whether the test is wrong or the code is wrong
- Read the test implementation carefully
- Review the code being tested
- Check if the expected behavior in the test matches the actual requirements
- Consider if the test expectations are outdated or incorrect
2. Root Cause Analysis
- Understand why the test is failing before making changes
- Check if it's a:
- Logic error in production code
- Incorrect test expectations
- Mock/fixture setup issue
- Async/await issue
- Authentication/authorization issue
- Missing dependency or service
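As an illustration of the async/await category above, a missing await is a common cause of a test that "fails" even though the production code is correct: the assertion compares a coroutine object instead of the awaited result. The function name below is hypothetical, standing in for any async service call:

```python
import asyncio

async def fetch_status() -> str:
    """Stand-in for an async service call (hypothetical example)."""
    await asyncio.sleep(0)
    return "ok"

def broken_check() -> bool:
    # Bug: missing await -- this compares a coroutine object to "ok",
    # so the result is always False no matter what fetch_status returns.
    coro = fetch_status()
    result = coro == "ok"
    coro.close()  # avoid a "never awaited" warning in this demo
    return result

async def fixed_check() -> bool:
    return (await fetch_status()) == "ok"

print(broken_check())              # False -- looks like a code bug, is a test bug
print(asyncio.run(fixed_check()))  # True
```

A failure like this is fixed in the test, not in production code, which is exactly the distinction the root cause analysis step is meant to surface.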
3. Fix Strategy
- Fix production code if the business logic is wrong
- Fix test code if the expectations are incorrect
- Update both if requirements have changed
- Document why you chose to fix test vs code
4. Testing Process
- Run the specific test after each fix to verify
- Run related tests to ensure no regression
- Run all tests after batch fixes to verify overall system health
5. Code Quality Standards
- Follow PEP 8 and project coding standards
- Use type hints where applicable
- Write clear, self-documenting code
- Add comments for complex logic
- Update docstrings if behavior changes
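The standards above can be illustrated with a small, hypothetical helper (the function name and outcome strings are assumptions for illustration, not part of the project): type hints on the signature, a docstring stating behavior, and a comment only where the logic needs it.

```python
from typing import Iterable

def count_failures(results: Iterable[str]) -> int:
    """Count pytest outcome strings that indicate a failure.

    Both "failed" and "error" outcomes count as failures, mirroring
    how this document tallies them.
    """
    # Set membership keeps the check O(1) per outcome
    failure_states = {"failed", "error"}
    return sum(1 for outcome in results if outcome in failure_states)

print(count_failures(["passed", "failed", "error", "passed"]))  # 2
```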
🎯 Success Criteria
- All tests passing: 0 failures, 0 errors
- Warnings reduced: Aim for < 50 warnings (mostly from dependencies)
- Code quality maintained: No shortcuts or hacks
- Documentation updated: Any behavior changes documented
- Git commits: Logical, atomic commits with clear messages
📞 Escalation
If you encounter:
- Architecture issues requiring design decisions
- Tests that conflict with documented requirements
- Breaking changes needed
- Unclear requirements or expectations
Document the issue and escalate rather than guessing.
📚 Helpful Commands
# Run all tests
conda run -n AniWorld python -m pytest tests/ -v --tb=short
# Run specific test file
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py -v
# Run specific test class
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py::TestWebSocketService -v
# Run specific test
conda run -n AniWorld python -m pytest tests/unit/test_websocket_service.py::TestWebSocketService::test_broadcast_download_progress -v
# Run with extra verbosity
conda run -n AniWorld python -m pytest tests/ -vv
# Run with full traceback
conda run -n AniWorld python -m pytest tests/ -v --tb=long
# Run and stop at first failure
conda run -n AniWorld python -m pytest tests/ -v -x
# Run tests matching pattern
conda run -n AniWorld python -m pytest tests/ -v -k "auth"
# Show all print statements
conda run -n AniWorld python -m pytest tests/ -v -s
📖 Status Update
- Document Created: October 19, 2025
- Last Updated: October 22, 2025
- Test run time: ~10 seconds
- Python environment: AniWorld (conda)
- Framework: pytest with FastAPI TestClient
- Initial test failures: 200+ (many were errors)
- Current test failures: 0 ✅
- Tests Passing: 583 ✅
- Warnings: 41 (mostly deprecation warnings from datetime.utcnow())
- Overall progress: 100% - ALL TESTS PASSING! 🎉
Remember: The goal is not just to make tests pass, but to ensure the system works correctly and reliably!
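The remaining warnings noted above come mostly from the deprecated datetime.utcnow(). The migration is a one-line change, sketched here:

```python
from datetime import datetime, timezone

# Deprecated since Python 3.12, and returns a naive datetime:
#   stamp = datetime.utcnow()

# Timezone-aware replacement:
stamp = datetime.now(timezone.utc)

print(stamp.tzinfo)  # UTC
```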
🎯 Final Remaining Issues
✅ All 583 tests are now passing with no failures! 🎉
Final Fixes Applied (Phase 10):
- ✅ Fixed test_session_is_expired test
  - Issue: Timezone-aware/naive datetime comparison error
  - Solution: Added a timezone import to the test file and modified the UserSession.is_expired property to handle naive datetimes from the database
- ✅ Fixed test_added_at_auto_generated test
  - Issue: Test used the deprecated datetime.utcnow() (naive) while the model expects a timezone-aware datetime
  - Solution: Updated the test to use datetime.now(timezone.utc) for timezone-aware comparisons
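The actual UserSession model is not reproduced in this document, so the following is only a minimal sketch of the is_expired fix described above; the dataclass shape and field name are assumptions for illustration:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UserSession:
    """Hypothetical stand-in for the project's session model."""

    expires_at: datetime  # may come back naive from the database

    @property
    def is_expired(self) -> bool:
        expires = self.expires_at
        if expires.tzinfo is None:
            # Assumption: naive datetimes stored in the DB are UTC
            expires = expires.replace(tzinfo=timezone.utc)
        return datetime.now(timezone.utc) >= expires

print(UserSession(expires_at=datetime(2000, 1, 1)).is_expired)  # True
print(UserSession(expires_at=datetime(2999, 1, 1)).is_expired)  # False
```

Normalizing at the comparison site, rather than at write time, keeps the fix local and avoids touching every code path that stores a session.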
Phase 10: Final Fixes (Last 2 Tests) - COMPLETED ✅
- Fix test_session_is_expired test
- Fix test_added_at_auto_generated test
- Run and verify: pytest tests/ -v --tb=short
- Verify all 583 tests pass
- Reduce warnings (currently 41, target < 50) ✅