litellm/tests/llm_translation

Latest commit 7e5085dc7b by Krish Dholakia: Litellm dev 11 21 2024 (#6837)

* Fix Vertex AI function calling invoke: use JSON format instead of protobuf text format. (#6702)

* test: test tool_call conversion when arguments is empty dict (sketch below the commit log)

Fixes https://github.com/BerriAI/litellm/issues/6833

* fix(openai_like/handler.py): return more descriptive error message

Fixes https://github.com/BerriAI/litellm/issues/6812

* test: skip overloaded model

* docs(anthropic.md): update anthropic docs to show how to route to any new model (routing sketch below the commit log)

* feat(groq/): fake stream when 'response_format' param is passed

Groq doesn't support streaming when response_format is set

* feat(groq/): add response_format support for groq (JSON-mode sketch below the commit log)

Closes https://github.com/BerriAI/litellm/issues/6845

* fix(o1_handler.py): remove fake streaming for o1

Closes https://github.com/BerriAI/litellm/issues/6801

* build(model_prices_and_context_window.json): add groq llama3.2b model pricing

Closes https://github.com/BerriAI/litellm/issues/6807

* fix(utils.py): fix handling ollama response format param (Ollama sketch below the commit log)

Fixes https://github.com/BerriAI/litellm/issues/6848#issuecomment-2491215485

* docs(sidebars.js): refactor chat endpoint placement

* fix: fix linting errors

* test: fix test

* test: fix test

* fix(openai_like/handler): handle max retries

* fix(streaming_handler.py): fix streaming check for openai-compatible providers

* test: update test

* test: correctly handle "model is overloaded" error

* test: update test

* test: fix test

* test: mark flaky test

---------

Co-authored-by: Guowang Li <Guowang@users.noreply.github.com>
Committed: 2024-11-22 01:53:52 +05:30
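The "tool_call conversion when arguments is empty dict" test relates to the Vertex AI function-calling fix above. A minimal sketch of that scenario, assuming Vertex AI credentials are configured; the Gemini model name is illustrative and not taken from this repo:

```python
import json

import litellm

# Sketch only: an assistant tool call whose arguments are an empty JSON object
# ("{}") should translate cleanly into Vertex AI's JSON function-calling
# request now that the protobuf text format is no longer used.
messages = [
    {"role": "user", "content": "What time is it?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [
            {
                "id": "call_1",
                "type": "function",
                "function": {"name": "get_current_time", "arguments": json.dumps({})},
            }
        ],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "12:00"},
]

response = litellm.completion(
    model="vertex_ai/gemini-1.5-pro",  # illustrative model name
    messages=messages,
)
print(response.choices[0].message.content)
```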
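The anthropic.md docs change concerns provider-prefix routing: a model id passed with the `anthropic/` prefix is sent to the Anthropic API whether or not it appears in litellm's model map. A hedged sketch with a placeholder model name:

```python
import litellm

# Sketch: the "anthropic/" prefix routes the request straight to the Anthropic
# API, even for a brand-new model id. Placeholder model name below; requires
# ANTHROPIC_API_KEY to be set.
response = litellm.completion(
    model="anthropic/claude-some-brand-new-model",  # placeholder, not a real model id
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```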
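The two Groq items cover JSON-mode support plus a simulated stream when `response_format` is combined with `stream=True`, since Groq cannot stream JSON-mode responses. A usage sketch, assuming GROQ_API_KEY is set; the model name is an assumption:

```python
import litellm

# Sketch: JSON mode on Groq. With stream=True, litellm returns a fake stream
# built from the non-streaming response, because Groq does not support
# streaming when response_format is set.
response = litellm.completion(
    model="groq/llama-3.1-8b-instant",  # assumed model name
    messages=[{"role": "user", "content": "Return a JSON object with a 'city' field."}],
    response_format={"type": "json_object"},
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```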
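The Ollama fix is about mapping the OpenAI-style `response_format` param onto Ollama's JSON mode. A sketch, assuming a locally running Ollama server and an already pulled model tag:

```python
import litellm

# Sketch: response_format={"type": "json_object"} should be translated into
# Ollama's JSON mode rather than passed through unchanged.
response = litellm.completion(
    model="ollama/llama3",  # assumed locally pulled model tag
    messages=[{"role": "user", "content": "Reply with a JSON object {\"ok\": true}."}],
    response_format={"type": "json_object"},
)
print(response.choices[0].message.content)
```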
| File | Last commit | Last updated |
| --- | --- | --- |
| test_llm_response_utils | LiteLLM Minor Fixes & Improvements (10/23/2024) (#6407) | 2024-10-24 19:01:41 -07:00 |
| base_llm_unit_tests.py | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| base_rerank_unit_tests.py | LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) | 2024-11-15 01:02:54 +05:30 |
| conftest.py | [Feat] Add max_completion_tokens param (#5691) | 2024-09-14 14:57:01 -07:00 |
| dog.wav | (feat) Support audio param in responses streaming (#6312) | 2024-10-18 19:16:14 +05:30 |
| Readme.md | LiteLLM Minor Fixes & Improvements (09/16/2024) (#5723) (#5731) | 2024-09-17 08:05:52 -07:00 |
| test_anthropic_completion.py | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| test_azure_ai.py | (fix) Azure AI Studio - using image_url in content with both text and image_url (#6774) | 2024-11-16 20:05:24 -08:00 |
| test_azure_openai.py | LiteLLM Minor Fixes & Improvements (10/28/2024) (#6475) | 2024-10-29 17:20:24 -07:00 |
| test_bedrock_completion.py | fix(pattern_match_deployments.py): default to user input if unable to… (#6632) | 2024-11-08 00:55:57 +05:30 |
| test_convert_dict_to_image.py | fix ImageObject conversion (#6584) | 2024-11-04 15:46:43 -08:00 |
| test_databricks.py | LiteLLM Minor Fixes & Improvements (11/04/2024) (#6572) | 2024-11-06 17:53:46 +05:30 |
| test_deepseek_completion.py | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| test_fireworks_ai_translation.py | LiteLLM Minor Fixes & Improvements (09/18/2024) (#5772) | 2024-09-19 13:25:29 -07:00 |
| test_gpt4o_audio.py | (feat) Support audio param in responses streaming (#6312) | 2024-10-18 19:16:14 +05:30 |
| test_groq.py | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| test_jina_ai.py | LiteLLM Minor Fixes & Improvement (11/14/2024) (#6730) | 2024-11-15 01:02:54 +05:30 |
| test_max_completion_tokens.py | Litellm dev 10 22 2024 (#6384) | 2024-10-22 21:18:54 -07:00 |
| test_mistral_api.py | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| test_nvidia_nim.py | (feat) add nvidia nim embeddings (#6032) | 2024-10-03 17:12:14 +05:30 |
| test_openai_o1.py | [Fix] o1-mini causes pydantic warnings on reasoning_tokens (#5754) | 2024-09-17 20:23:14 -07:00 |
| test_openai_prediction_param.py | (feat) add Predicted Outputs for OpenAI (#6594) | 2024-11-04 21:16:57 -08:00 |
| test_optional_params.py | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| test_prompt_caching.py | (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) | 2024-10-03 23:31:10 +05:30 |
| test_prompt_factory.py | fix(pattern_match_deployments.py): default to user input if unable to… (#6632) | 2024-11-08 00:55:57 +05:30 |
| test_supports_vision.py | [Feat] Allow setting supports_vision for Custom OpenAI endpoints + Added testing (#5821) | 2024-09-21 11:35:55 -07:00 |
| test_text_completion.py | (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546) | 2024-11-04 15:47:48 -08:00 |
| test_text_completion_unit_tests.py | (fix) litellm.text_completion raises a non-blocking error on simple usage (#6546) | 2024-11-04 15:47:48 -08:00 |
| test_vertex.py | Litellm dev 11 21 2024 (#6837) | 2024-11-22 01:53:52 +05:30 |
| test_xai.py | (feat) add XAI ChatCompletion Support (#6373) | 2024-11-01 20:37:09 +05:30 |

More tests under litellm/litellm/tests/*.