litellm-mirror/tests/llm_translation
Latest commit: Ishaan Jaff (b02af305de)
[Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029)
* if merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* stash changes

* working merge_reasoning_content_in_choices with bedrock

* fix litellm_params accessor

* fix streaming handler

* merge_reasoning_content_in_choices

* _optional_combine_thinking_block_in_choices

* test_bedrock_stream_thinking_content_openwebui

* merge_reasoning_content_in_choices

* fix for _optional_combine_thinking_block_in_choices

* linting error fix
2025-03-06 18:32:58 -08:00
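The commit above adds a `merge_reasoning_content_in_choices` option that folds a model's `reasoning_content` into the visible message content so UIs such as OpenWebUI can display thinking tokens. The following is a minimal stdlib-only sketch of that idea, not the actual litellm implementation; the `<think>` tag wrapping and the dict-shaped message are illustrative assumptions:

```python
def merge_reasoning_content_in_choices(message: dict) -> dict:
    """Sketch: fold a message's reasoning_content into its content.

    Wraps the reasoning text in <think> tags (an assumed convention for
    rendering thinking tokens) and prepends it to the regular content.
    """
    reasoning = message.pop("reasoning_content", None)
    if reasoning:
        existing = message.get("content") or ""
        message["content"] = f"<think>{reasoning}</think>{existing}"
    return message


msg = {
    "role": "assistant",
    "reasoning_content": "First, recall the capital of France...",
    "content": "The capital of France is Paris.",
}
merged = merge_reasoning_content_in_choices(msg)
print(merged["content"])
# → <think>First, recall the capital of France...</think>The capital of France is Paris.
```

The same transform has to be applied per-delta in the streaming handler (one of the fixes listed above), since reasoning tokens arrive before the answer tokens.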
| File | Last commit | Date |
|---|---|---|
| test_llm_response_utils | Support caching on reasoning content + other fixes (#8973) | 2025-03-04 21:12:16 -08:00 |
| base_audio_transcription_unit_tests.py | Litellm dev 12 25 2025 p2 (#7420) | 2024-12-25 18:35:34 -08:00 |
| base_embedding_unit_tests.py | Litellm dev 12 25 2025 p2 (#7420) | 2024-12-25 18:35:34 -08:00 |
| base_llm_unit_tests.py | build: merge branch | 2025-03-02 08:31:57 -08:00 |
| base_rerank_unit_tests.py | Add cost tracking for rerank via bedrock (#8691) | 2025-02-20 21:00:18 -08:00 |
| conftest.py | aiohttp_openai/ fixes - allow using aiohttp_openai/gpt-4o (#7598) | 2025-01-06 21:39:11 -08:00 |
| dog.wav | (feat) Support audio param in responses streaming (#6312) | 2024-10-18 19:16:14 +05:30 |
| gettysburg.wav | Litellm dev 12 25 2025 p2 (#7420) | 2024-12-25 18:35:34 -08:00 |
| Readme.md | LiteLLM Minor Fixes & Improvements (11/29/2024) (#6965) | 2024-12-01 05:24:11 -08:00 |
| test_aiohttp_openai.py | (proxy - RPS) - Get 2K RPS at 4 instances, minor fix aiohttp_openai/ (#7659) | 2025-01-09 17:24:18 -08:00 |
| test_anthropic_completion.py | test - remove anthropic_adapter tests. no longer used | 2025-03-06 06:47:35 -08:00 |
| test_anthropic_text_completion.py | Easier user onboarding via SSO (#8187) | 2025-02-02 23:02:33 -08:00 |
| test_aws_base_llm.py | (fix) BaseAWSLLM - cache IAM role credentials when used (#7775) | 2025-01-14 20:16:22 -08:00 |
| test_azure_ai.py | Add cost tracking for rerank via bedrock (#8691) | 2025-02-20 21:00:18 -08:00 |
| test_azure_o_series.py | (Feat) - Add /bedrock/meta.llama3-3-70b-instruct-v1:0 tool calling support + cost tracking + base llm unit test for tool calling (#8545) | 2025-02-14 14:15:25 -08:00 |
| test_azure_openai.py | Litellm dev 02 10 2025 p1 (#8438) | 2025-02-10 16:25:04 -08:00 |
| test_bedrock_completion.py | [Feat] - Display thinking tokens on OpenWebUI (Bedrock, Anthropic, Deepseek) (#9029) | 2025-03-06 18:32:58 -08:00 |
| test_bedrock_dynamic_auth_params_unit_tests.py | (Bug Fix) - Bedrock completions with aws_region_name (#8384) | 2025-02-08 16:33:17 -08:00 |
| test_bedrock_embedding.py | fix: remove aws params from bedrock embedding request body (#8618) (#8696) | 2025-02-24 10:04:58 -08:00 |
| test_bedrock_invoke_tests.py | Fix calling claude via invoke route + response_format support for claude on invoke route (#8908) | 2025-02-28 17:56:26 -08:00 |
| test_bedrock_llama.py | (Feat) - Add /bedrock/meta.llama3-3-70b-instruct-v1:0 tool calling support + cost tracking + base llm unit test for tool calling (#8545) | 2025-02-14 14:15:25 -08:00 |
| test_bedrock_nova_json.py | (Feat) - Add support for structured output on bedrock/nova models + add util litellm.supports_tool_choice (#8264) | 2025-02-04 21:47:16 -08:00 |
| test_clarifai_completion.py | (Refactor) Code Quality improvement - Use Common base handler for clarifai/ (#7125) | 2024-12-09 21:04:48 -08:00 |
| test_cloudflare.py | (Refactor) Code Quality improvement - Use Common base handler for cloudflare/ provider (#7127) | 2024-12-10 10:12:22 -08:00 |
| test_cohere.py | (Refactor) Code Quality improvement - use Common base handler for Cohere (#7117) | 2024-12-09 17:45:29 -08:00 |
| test_cohere_generate_api.py | fix(acompletion): support fallbacks on acompletion (#7184) | 2024-12-11 19:20:54 -08:00 |
| test_convert_dict_to_image.py | fix ImageObject conversion (#6584) | 2024-11-04 15:46:43 -08:00 |
| test_databricks.py | Litellm dev 01 20 2025 p3 (#7890) | 2025-01-20 21:46:36 -08:00 |
| test_deepgram.py | Litellm dev 12 28 2024 p3 (#7464) | 2024-12-28 19:18:58 -08:00 |
| test_deepseek_completion.py | [Bug]: Deepseek error on proxy after upgrading to 1.61.13-stable (#8860) | 2025-02-26 21:11:06 -08:00 |
| test_fireworks_ai_translation.py | test: mock fireworks ai test - unstable api | 2025-01-22 18:52:11 -08:00 |
| test_gemini.py | run ci/cd again | 2025-01-29 20:55:49 -08:00 |
| test_gpt4o_audio.py | Add anthropic thinking + reasoning content support (#8778) | 2025-02-24 21:54:30 -08:00 |
| test_groq.py | fix(spend_tracking_utils.py): revert api key pass through fix (#7977) | 2025-01-24 21:04:36 -08:00 |
| test_huggingface.py | Litellm dev 12 25 2024 p1 (#7411) | 2024-12-25 17:36:30 -08:00 |
| test_infinity.py | add bedrock llama vision support + cohere / infinity rerank - 'return_documents' support (#8684) | 2025-02-20 21:23:54 -08:00 |
| test_jina_ai.py | Litellm 12 02 2024 (#6994) | 2024-12-02 22:00:01 -08:00 |
| test_litellm_proxy_provider.py | (Bug Fix) Using LiteLLM Python SDK with model=litellm_proxy/ for embedding, image_generation, transcription, speech, rerank (#8815) | 2025-02-25 16:22:37 -08:00 |
| test_max_completion_tokens.py | Fix calling claude via invoke route + response_format support for claude on invoke route (#8908) | 2025-02-28 17:56:26 -08:00 |
| test_mistral_api.py | test_multilingual_requests | 2024-12-03 20:52:19 -08:00 |
| test_nvidia_nim.py | fix(nvidia_nim/embed.py): add 'dimensions' support (#8302) | 2025-02-07 16:19:32 -08:00 |
| test_openai.py | Fix deepseek 'reasoning_content' error (#8963) | 2025-03-03 14:34:10 -08:00 |
| test_openai_o1.py | feat(openai/o_series_transformation.py): support native streaming for all openai o-series models (#8552) | 2025-02-14 20:04:19 -08:00 |
| test_optional_params.py | Fix calling claude via invoke route + response_format support for claude on invoke route (#8908) | 2025-02-28 17:56:26 -08:00 |
| test_prompt_caching.py | (feat) openai prompt caching (non streaming) - add prompt_tokens_details in usage response (#6039) | 2024-10-03 23:31:10 +05:30 |
| test_prompt_factory.py | Support format param for specifying image type (#9019) | 2025-03-05 19:52:53 -08:00 |
| test_rerank.py | Add cohere v2/rerank support (#8421) (#8605) | 2025-02-22 22:25:29 -08:00 |
| test_router_llm_translation_tests.py | test_prompt_caching | 2025-02-26 09:29:15 -08:00 |
| test_text_completion.py | (Bug fix) - Using include_usage for /completions requests + unit testing (#8484) | 2025-02-11 20:29:04 -08:00 |
| test_text_completion_unit_tests.py | huggingface/mistralai/Mistral-7B-Instruct-v0.3 | 2025-01-13 18:42:36 -08:00 |
| test_together_ai.py | LiteLLM Minor Fixes & Improvements (12/05/2024) (#7037) | 2024-12-05 00:02:31 -08:00 |
| test_triton.py | [Bug fix]: Triton /infer handler incompatible with batch responses (#7337) | 2024-12-20 20:59:40 -08:00 |
| test_unit_test_bedrock_invoke.py | (Refactor) - migrate bedrock invoke to BaseLLMHTTPHandler class (#8290) | 2025-02-05 18:58:55 -08:00 |
| test_vertex.py | Support format param for specifying image type (#9019) | 2025-03-05 19:52:53 -08:00 |
| test_voyage_ai.py | (fix) unable to pass input_type parameter to Voyage AI embedding mode (#7276) | 2024-12-17 19:23:49 -08:00 |
| test_watsonx.py | Deepseek r1 support + watsonx qa improvements (#7907) | 2025-01-21 23:13:15 -08:00 |
| test_xai.py | test commit on main | 2025-01-16 20:52:55 -08:00 |

Unit tests for individual LLM providers.

Each test file is named after the LLM provider it covers, e.g. `test_openai.py` tests OpenAI.
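The naming convention can be expressed as a small helper; `provider_from_test_file` below is a hypothetical illustration, not part of the test suite:

```python
from pathlib import Path


def provider_from_test_file(path: str) -> str:
    # Hypothetical helper: maps a test filename back to the provider it
    # covers, per the test_<provider>.py convention used in this directory.
    stem = Path(path).stem  # e.g. "test_openai"
    prefix = "test_"
    if not stem.startswith(prefix):
        raise ValueError(f"{path} does not follow the {prefix}<provider>.py convention")
    return stem[len(prefix):]


print(provider_from_test_file("test_openai.py"))  # → openai
```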