mirror of
https://github.com/BerriAI/litellm.git
synced 2025-04-25 18:54:30 +00:00
Litellm dev 01 20 2025 p3 (#7890)
All checks were successful
Read Version from pyproject.toml / read-version (push) Successful in 13s
* fix(router.py): pass stream timeout correctly for non openai / azure models. Fixes https://github.com/BerriAI/litellm/issues/7870
* test(test_router_timeout.py): add test for streaming
* test(test_router_timeout.py): add unit testing for new router functions
* docs(ollama.md): link to section on calling ollama within docker container
* test: remove redundant test
* test: fix test to include timeout value
* docs(config_settings.md): document new router settings param
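The router fix described above (using the stream timeout for non-OpenAI / Azure models when streaming) can be sketched roughly as follows. This is a hypothetical illustration, not litellm's actual code: `select_timeout` and its parameters are invented names for the selection logic the commit message implies.

```python
# Hypothetical sketch of the timeout-selection logic implied by the fix:
# when a call is streaming, prefer the deployment's stream_timeout;
# otherwise fall back to its regular timeout (or a default).
def select_timeout(litellm_params: dict, stream: bool, default: float = 600.0) -> float:
    if stream and litellm_params.get("stream_timeout") is not None:
        return litellm_params["stream_timeout"]
    return litellm_params.get("timeout", default)


params = {"timeout": 30, "stream_timeout": 5}
print(select_timeout(params, stream=True))   # 5
print(select_timeout(params, stream=False))  # 30
```

The bug class being fixed is a router that always passes the non-streaming timeout, so the `stream` branch above is the behavior the patch restores for providers other than OpenAI / Azure.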
parent: 4b23420a20
commit: 64e1df1f14
6 changed files with 197 additions and 9 deletions
@@ -381,6 +381,7 @@ def test_completions_streaming_with_sync_http_handler(monkeypatch):
         },
         data=ANY,
         stream=True,
+        timeout=ANY,
     )

     actual_data = json.loads(