litellm-mirror/litellm/llms/openai_like
Krish Dholakia 64e1df1f14
Litellm dev 01 20 2025 p3 (#7890)
* fix(router.py): pass stream timeout correctly for non-OpenAI/Azure models (see the sketch below)

Fixes https://github.com/BerriAI/litellm/issues/7870

* test(test_router_timeout.py): add test for streaming

* test(test_router_timeout.py): add unit testing for new router functions

* docs(ollama.md): link to section on calling ollama within docker container

* test: remove redundant test

* test: fix test to include timeout value

* docs(config_settings.md): document new router settings param
2025-01-20 21:46:36 -08:00
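For context on the stream-timeout fix above, the commit concerns the Router forwarding a per-deployment `stream_timeout` from `litellm_params` for providers other than OpenAI/Azure. Below is a minimal sketch using the public Router interface; the Ollama deployment name, `api_base`, and timeout values are illustrative assumptions, not taken from the commit.

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "local-llama",  # alias that callers use
            "litellm_params": {
                "model": "ollama/llama3",             # a non-OpenAI/Azure provider
                "api_base": "http://localhost:11434",  # illustrative local endpoint
                "timeout": 60,         # seconds, applied to non-streaming calls
                "stream_timeout": 10,  # seconds, should be honored when stream=True
            },
        }
    ]
)

# Streaming call; the fix ensures the deployment's stream_timeout
# (rather than the default timeout) is passed through for providers
# outside openai/azure.
response = router.completion(
    model="local-llama",
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
)
for chunk in response:
    print(chunk)
```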
Name              Last commit                          Last updated
chat              Litellm dev 01 20 2025 p3 (#7890)    2025-01-20 21:46:36 -08:00
embedding         Litellm dev 01 10 2025 p3 (#7682)    2025-01-10 21:56:42 -08:00
common_utils.py   Litellm dev 12 25 2024 p1 (#7411)    2024-12-25 17:36:30 -08:00