| Name | Last commit message | Last commit date |
| --- | --- | --- |
| test_llm_response_utils | LiteLLM Minor Fixes & Improvements (10/23/2024) (#6407) | 2024-10-24 19:01:41 -07:00 |
| base_llm_unit_tests.py | (fix) using Anthropic response_format={"type": "json_object"} (#6721) | 2024-11-12 19:06:00 -08:00 |
| conftest.py | [Feat] Add max_completion_tokens param (#5691) | 2024-09-14 14:57:01 -07:00 |
| dog.wav | (feat) Support audio param in responses streaming (#6312) | 2024-10-18 19:16:14 +05:30 |
| Readme.md | LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) (#5731) | 2024-09-17 08:05:52 -07:00 |
| test_anthropic_completion.py | (fix) using Anthropic response_format={"type": "json_object"} (#6721) | 2024-11-12 19:06:00 -08:00 |
| test_azure_ai.py | LiteLLM Minor Fixes & Improvements (11/01/2024) (#6551) | 2024-11-02 02:09:31 +05:30 |
| test_azure_openai.py | LiteLLM Minor Fixes & Improvements (10/28/2024) (#6475) | 2024-10-29 17:20:24 -07:00 |
| test_bedrock_completion.py | fix(pattern_match_deployments.py): default to user input if unable to… (#6632) | 2024-11-08 00:55:57 +05:30 |
| test_convert_dict_to_image.py | fix ImageObject conversion (#6584) | 2024-11-04 15:46:43 -08:00 |
| test_databricks.py | LiteLLM Minor Fixes & Improvements (11/04/2024) (#6572) | 2024-11-06 17:53:46 +05:30 |
| test_deepseek_completion.py | Litellm dev 11 08 2024 (#6658) | 2024-11-08 22:07:17 +05:30 |
| test_fireworks_ai_translation.py | LiteLLM Minor Fixes & Improvements (09/18/2024) (#5772) | 2024-09-19 13:25:29 -07:00 |
| test_gpt4o_audio.py | (feat) Support audio param in responses streaming (#6312) | 2024-10-18 19:16:14 +05:30 |
| test_max_completion_tokens.py | Litellm dev 10 22 2024 (#6384) | 2024-10-22 21:18:54 -07:00 |
| test_mistral_api.py | (fix) OpenAI's optional messages[].name does not work with Mistral API (#6701) | 2024-11-11 18:03:41 -08:00 |
| test_nvidia_nim.py | (feat) add nvidia nim embeddings (#6032) | 2024-10-03 17:12:14 +05:30 |
| test_openai_o1.py | [Fix] o1-mini causes pydantic warnings on reasoning_tokens (#5754) | 2024-09-17 20:23:14 -07:00 |
| test_openai_prediction_param.py | (feat) add Predicted Outputs for OpenAI (#6594) | 2024-11-04 21:16:57 -08:00 |
| test_optional_params.py | LiteLLM Minor Fixes & Improvements (11/04/2024) (#6572) | 2024-11-06 17:53:46 +05:30 |
| test_prompt_caching.py | (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) | 2024-10-03 23:31:10 +05:30 |
| test_prompt_factory.py | fix(pattern_match_deployments.py): default to user input if unable to… (#6632) | 2024-11-08 00:55:57 +05:30 |
| test_supports_vision.py | [Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing (#5821) | 2024-09-21 11:35:55 -07:00 |
| test_text_completion.py | (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546) | 2024-11-04 15:47:48 -08:00 |
| test_text_completion_unit_tests.py | (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546) | 2024-11-04 15:47:48 -08:00 |
| test_vertex.py | (fix) Vertex Improve Performance when using image_url (#6593) | 2024-11-04 21:55:09 -08:00 |
| test_xai.py | (feat) add XAI ChatCompletion Support (#6373) | 2024-11-01 20:37:09 +05:30 |
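Several of the commits listed above concern OpenAI-compatible parameters that these tests exercise, notably `response_format={"type": "json_object"}` for Anthropic (#6721) and the `max_completion_tokens` param (#5691). The following is a minimal sketch of how such a call might look through `litellm.completion`; the model name and the prompt are illustrative assumptions, not taken from the test files themselves, and provider credentials (e.g. an Anthropic API key) are assumed to be configured in the environment.

```python
import litellm

# Sketch only: request JSON-formatted output and cap the completion length.
# The model name below is an assumption for illustration.
response = litellm.completion(
    model="claude-3-5-sonnet-20240620",
    messages=[
        {
            "role": "user",
            "content": "Return a JSON object with keys 'city' and 'country' for Paris.",
        }
    ],
    response_format={"type": "json_object"},  # behavior covered by test_anthropic_completion.py (#6721)
    max_completion_tokens=256,                # param introduced in #5691
)

print(response.choices[0].message.content)
```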