Litellm dev 01 20 2025 p3 (#7890)

* fix(router.py): pass stream timeout correctly for non-OpenAI / Azure models

Fixes https://github.com/BerriAI/litellm/issues/7870

* test(test_router_timeout.py): add test for streaming

* test(test_router_timeout.py): add unit testing for new router functions

* docs(ollama.md): link to section on calling ollama within docker container

* test: remove redundant test

* test: fix test to include timeout value

* docs(config_settings.md): document new router settings param
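
The fix concerns which timeout value the router forwards on streaming calls. As a rough sketch (not litellm's actual code; the helper name `resolve_timeout` is hypothetical, while the `timeout` / `stream_timeout` keys mirror litellm's documented `litellm_params`), the intended behavior is: prefer a stream-specific timeout when one is set and the call is streaming, otherwise fall back to the general timeout.

```python
from typing import Optional


def resolve_timeout(litellm_params: dict, stream: bool) -> Optional[float]:
    """Pick the timeout for a call: stream_timeout wins for streaming
    requests when configured, otherwise the general timeout applies."""
    if stream and litellm_params.get("stream_timeout") is not None:
        return litellm_params["stream_timeout"]
    return litellm_params.get("timeout")


params = {"timeout": 600, "stream_timeout": 60}
print(resolve_timeout(params, stream=True))   # → 60
print(resolve_timeout(params, stream=False))  # → 600
```

Per issue #7870, the bug was that non-OpenAI / Azure providers did not receive this resolved value on streaming requests; the diff below shows the corresponding test now asserting that a `timeout` argument reaches the HTTP handler.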
Krish Dholakia 2025-01-20 21:46:36 -08:00 committed by GitHub
parent 4b23420a20
commit 64e1df1f14
No known key found for this signature in database
GPG key ID: B5690EEEBB952194
6 changed files with 197 additions and 9 deletions

@@ -381,6 +381,7 @@ def test_completions_streaming_with_sync_http_handler(monkeypatch):
         },
         data=ANY,
         stream=True,
+        timeout=ANY,
     )
     actual_data = json.loads(