File                               Last commit                                                                                          Date
test_llm_response_utils            (fix) get_response_headers for Azure OpenAI (#6344)                                                  2024-10-21 20:41:35 +05:30
conftest.py                        [Feat] Add max_completion_tokens param (#5691)                                                       2024-09-14 14:57:01 -07:00
dog.wav                            (feat) Support audio param in responses streaming (#6312)                                            2024-10-18 19:16:14 +05:30
Readme.md                          LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) (#5731)                                      2024-09-17 08:05:52 -07:00
test_anthropic_completion.py       (fix) get_response_headers for Azure OpenAI (#6344)                                                  2024-10-21 20:41:35 +05:30
test_azure_openai.py               (fix) get_response_headers for Azure OpenAI (#6344)                                                  2024-10-21 20:41:35 +05:30
test_databricks.py                 (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039)   2024-10-03 23:31:10 +05:30
test_fireworks_ai_translation.py   LiteLLM Minor Fixes & Improvements (09/18/2024) (#5772)                                              2024-09-19 13:25:29 -07:00
test_gpt4o_audio.py                (feat) Support audio param in responses streaming (#6312)                                            2024-10-18 19:16:14 +05:30
test_max_completion_tokens.py      LiteLLM Minor Fixes & Improvements (10/15/2024) (#6242)                                              2024-10-16 07:32:06 -07:00
test_nvidia_nim.py                 (feat) add nvidia nim embeddings (#6032)                                                             2024-10-03 17:12:14 +05:30
test_openai_o1.py                  [Fix] o1-mini causes pydantic warnings on reasoning_tokens (#5754)                                   2024-09-17 20:23:14 -07:00
test_optional_params.py            LiteLLM Minor Fixes & Improvements (10/18/2024) (#6320)                                              2024-10-19 22:23:27 -07:00
test_prompt_caching.py             (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039)   2024-10-03 23:31:10 +05:30
test_supports_vision.py            [Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing (#5821)             2024-09-21 11:35:55 -07:00
test_vertex.py                     LiteLLM Minor Fixes & Improvements (10/09/2024) (#6139)                                              2024-10-10 00:42:11 -07:00