litellm-mirror/tests/litellm/llms
Krish Dholakia e1f7bcb47d
Fix VertexAI Credential Caching issue (#9756)
* refactor(vertex_llm_base.py): Prevent credential misrouting for projects

Fixes https://github.com/BerriAI/litellm/issues/7904

* fix: passing unit tests

* fix(vertex_llm_base.py): common auth logic across sync + async vertex ai calls

prevents credential caching issue across both flows

* test: fix test

* fix(vertex_llm_base.py): handle project id in default case

* fix(factory.py): don't pass cache control if not set

bedrock invoke does not support this

* test: fix test

* fix(vertex_llm_base.py): add .exception message in load_auth

* fix: fix ruff error
2025-04-04 16:38:08 -07:00
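The headline change above prevents credential misrouting by keying the Vertex AI credential cache on the project as well as the credential source, and by routing sync and async calls through the same auth logic. A minimal sketch of that caching idea, with hypothetical names (this is not litellm's actual code):

```python
# Hypothetical sketch of project-aware credential caching: the cache key
# covers BOTH the credential input and the project id, so a call for
# project B can never reuse credentials that were cached for project A.
import hashlib
import json
from typing import Optional, Tuple

_credential_cache: dict = {}


def _cache_key(credentials: Optional[dict], project_id: Optional[str]) -> str:
    # Hashing only the credentials would let two projects that share a
    # service-account file collide; including project_id avoids that.
    payload = json.dumps(
        {"credentials": credentials, "project_id": project_id},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()


def load_credentials(
    credentials: Optional[dict], project_id: Optional[str]
) -> Tuple[dict, str]:
    # Shared by both sync and async flows so they hit one cache with one
    # keying scheme, per the "common auth logic" bullet above.
    key = _cache_key(credentials, project_id)
    if key not in _credential_cache:
        # Placeholder for the real auth step (e.g. google.auth.default()).
        resolved_project = (
            project_id or (credentials or {}).get("project_id") or "default-project"
        )
        _credential_cache[key] = ({"token": "..."}, resolved_project)
    return _credential_cache[key]
```

Because the key changes with the project, switching `project_id` between calls yields a cache miss instead of silently returning another project's credentials.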
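The factory.py bullet notes that `cache_control` must be omitted entirely when the caller never set it, because Bedrock invoke rejects the field. A hedged sketch of that omit-when-unset pattern, with invented helper names:

```python
# Hypothetical illustration of the "don't pass cache control if not set"
# fix: only attach the optional field when the caller provided it, rather
# than always emitting it (possibly as None) in the request payload.
from typing import Optional


def build_text_block(text: str, cache_control: Optional[dict] = None) -> dict:
    block = {"type": "text", "text": text}
    if cache_control is not None:
        # Attached only on request; providers that don't support the
        # field (e.g. Bedrock invoke, per the commit message) never see it.
        block["cache_control"] = cache_control
    return block
```

The key point is that absence of the key, not a `None` value, is what keeps the payload valid for providers that do not understand it.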
anthropic/chat | test: add unit testing | 2025-03-21 10:35:36 -07:00
azure | get_openai_client_cache_key | 2025-03-18 18:35:50 -07:00
azure_ai/chat | test: refactor testing to handle routing correctly | 2025-03-18 12:24:12 -07:00
bedrock | fix(common_utils.py): handle cris only model | 2025-03-18 23:35:43 -07:00
chat | update test | 2025-03-10 20:34:52 -07:00
cohere/chat | Add support for max_completion_tokens to the Cohere chat transformation config (#9701) | 2025-04-02 07:50:44 -07:00
custom_httpx | Add OpenAI gpt-4o-transcribe support (#9517) | 2025-03-26 23:10:25 -07:00
deepgram/audio_transcription | Add OpenAI gpt-4o-transcribe support (#9517) | 2025-03-26 23:10:25 -07:00
openai | test_openai_client_reuse | 2025-03-18 18:13:36 -07:00
openrouter/chat | fix #8425, passthrough kwargs during acompletion, and unwrap extra_body for openrouter (#9747) | 2025-04-03 22:19:40 -07:00
sagemaker | ref issue | 2025-03-31 16:05:10 -07:00
vertex_ai | Fix VertexAI Credential Caching issue (#9756) | 2025-04-04 16:38:08 -07:00