* feat(main.py): use asyncio.sleep for mock_timeout=True on async requests
adds unit tests to ensure the proxy does not fail if specific OpenAI requests hang (e.g., the recent o1 outage)
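A minimal sketch of the idea, assuming the parameter is spelled `mock_timeout` as in the commit subject: the hang is simulated with a non-blocking `asyncio.sleep`, so the proxy's event loop stays responsive while the request "hangs".

```python
import asyncio

import litellm


async def main() -> None:
    # mock_timeout=True simulates a hanging upstream request by awaiting
    # asyncio.sleep for the configured timeout (instead of blocking the
    # event loop), then raising a timeout error
    try:
        await litellm.acompletion(
            model="gpt-4o",
            messages=[{"role": "user", "content": "hi"}],
            mock_timeout=True,
            timeout=1,
        )
    except litellm.Timeout:
        print("timed out as expected; event loop stayed responsive")


asyncio.run(main())
```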
* fix(streaming_handler.py): return deepseek r1 reasoning content on streaming
Fixes https://github.com/BerriAI/litellm/issues/7942
* Revert "fix(streaming_handler.py): return deepseek r1 reasoning content on streaming"
This reverts commit 7a052a64e3.
* fix(deepseek-r-1): return reasoning_content as a top-level param
ensures compatibility with existing tools that use it
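A rough illustration of what "top-level" means for consumers; the model name and the exact streaming delta shape are assumptions, but the point is that `reasoning_content` sits next to `content` rather than nested somewhere else:

```python
import litellm

stream = litellm.completion(
    model="deepseek/deepseek-reasoner",
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
    stream=True,
)

for chunk in stream:
    delta = chunk.choices[0].delta
    # reasoning_content is a top-level field on the delta, alongside
    # content, so existing tools that already read it keep working
    reasoning = getattr(delta, "reasoning_content", None)
    if reasoning:
        print("reasoning:", reasoning)
    if delta.content:
        print("answer:", delta.content)
```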
* fix: fix linting error
* feat(router.py): add retry headers to response
makes it easy to add tests ensuring model-specific retries are respected
* fix(add_retry_headers.py): clarify attempted retries vs. max retries
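A hedged sketch of how a client might read these headers; the header names below are inferred from the commit subjects (attempted retries vs. max retries) and are not confirmed:

```python
import openai

client = openai.OpenAI(base_url="http://0.0.0.0:4000", api_key="sk-1234")

# with_raw_response exposes the HTTP response, including headers
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "hi"}],
)

# hypothetical header names: one for the retries actually attempted,
# one for the model's configured maximum
print(raw.headers.get("x-litellm-attempted-retries"))
print(raw.headers.get("x-litellm-max-retries"))
```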
* test(test_fallbacks.py): add test checking that the max retries set for a model is respected
* test(test_fallbacks.py): assert values for attempted retries and max retries are as expected
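A pytest-style sketch of that assertion; the endpoint, API key, model configuration, and header names are illustrative (same hypothetical names as the client sketch above), and it assumes pytest-asyncio:

```python
import httpx
import pytest


@pytest.mark.asyncio
async def test_model_max_retries_respected():
    async with httpx.AsyncClient(base_url="http://0.0.0.0:4000") as client:
        response = await client.post(
            "/chat/completions",
            json={
                "model": "bad-model",  # assumed deployment with num_retries: 2
                "messages": [{"role": "user", "content": "hi"}],
            },
            headers={"Authorization": "Bearer sk-1234"},
        )
    # attempted retries should never exceed the model's configured max
    attempted = int(response.headers["x-litellm-attempted-retries"])
    assert attempted <= int(response.headers["x-litellm-max-retries"])
```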
* fix(utils.py): return timeout in litellm proxy response headers
* test(test_fallbacks.py): add test asserting the model-specific timeout is used on a timeout error
* test: add bad model with timeout to proxy
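A sketch of how a caller might check the timeout surfaced in the headers; the header name is an assumption, since the commits only say the timeout is returned in the proxy response headers:

```python
import httpx

response = httpx.post(
    "http://0.0.0.0:4000/chat/completions",
    json={
        # assumed deployment configured with a short model-specific timeout
        "model": "bad-model-with-timeout",
        "messages": [{"role": "user", "content": "hi"}],
    },
    headers={"Authorization": "Bearer sk-1234"},
)

# hypothetical header name; should reflect the model-specific timeout
# rather than a global default when the request times out
print(response.headers.get("x-litellm-timeout"))
```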
* fix: fix linting error
* fix(router.py): fix getting the model list from a model alias
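A rough sketch of the alias scenario; `model_list` and `model_group_alias` are real Router parameters, while the `get_model_list` keyword argument shown is an assumption:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4o",
            "litellm_params": {"model": "openai/gpt-4o"},
        }
    ],
    # "my-gpt" is an alias that should resolve to the gpt-4o deployments
    model_group_alias={"my-gpt": "gpt-4o"},
)

# after the fix, querying by the alias should return the aliased
# deployments instead of an empty result
print(router.get_model_list(model_name="my-gpt"))
```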
* test: loosen test restriction to account for other events on the proxy
* docs(docker_quick_start.md): add more troubleshooting guides
* test(test_fallbacks.py): add e2e test for proxy with fallbacks + custom fallback message
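A hedged sketch of the e2e shape; the proxy config (a failing "bad-model" group with fallbacks and a configured custom message) and the assertion on the error body are both assumptions:

```python
import httpx


def test_custom_fallback_message():
    # assumes every deployment of "bad-model" fails and fallbacks are
    # exhausted, surfacing the custom message configured on the proxy
    response = httpx.post(
        "http://0.0.0.0:4000/chat/completions",
        json={
            "model": "bad-model",
            "messages": [{"role": "user", "content": "hi"}],
        },
        headers={"Authorization": "Bearer sk-1234"},
        timeout=60,
    )
    assert response.status_code >= 400
    # exact wording depends on the configured fallback message
    assert "fallback" in response.text.lower()
```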
* test(test_bedrock_completion.py): skip test now that bedrock supports this behaviour
* test(test_fireworks_ai_translation.py): mock fireworks ai test
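One way to mock such a call without hitting the API is litellm's `mock_response` parameter, shown below; the actual test may instead mock at the HTTP layer:

```python
import litellm

# mock_response short-circuits the network call and returns a canned
# ModelResponse, keeping the translation test fast and independent of
# Fireworks AI uptime
response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v3p1-8b-instruct",
    messages=[{"role": "user", "content": "hi"}],
    mock_response="mocked",
)
print(response.choices[0].message.content)  # -> "mocked"
```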