litellm-mirror/tests/litellm/llms/vertex_ai
Latest commit a7db0df043 by Krish Dholakia (2025-04-21 22:48:00 -07:00):
Gemini-2.5-flash improvements (#10198)
* fix(vertex_and_google_ai_studio_gemini.py): allow thinking budget = 0

Fixes https://github.com/BerriAI/litellm/issues/10121
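
As a rough illustration of what this first fix enables, here is a minimal usage sketch, assuming litellm's Anthropic-style `thinking` parameter is the interface that maps onto Gemini's `thinkingConfig.thinkingBudget`; the model name and parameter shape are illustrative, not taken from this listing:

```python
# Hypothetical sketch: turn Gemini 2.5 Flash "thinking" off by sending a zero budget.
# Assumes litellm forwards the Anthropic-style `thinking` parameter to Gemini's
# generationConfig.thinkingConfig.thinkingBudget; the fix above allows a budget of 0
# to pass through instead of being rejected.
import litellm

response = litellm.completion(
    model="gemini/gemini-2.5-flash-preview-04-17",  # example model name
    messages=[{"role": "user", "content": "Give me a one-line answer."}],
    thinking={"type": "enabled", "budget_tokens": 0},  # 0 = no reasoning tokens
)
print(response.choices[0].message.content)
```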

* fix(vertex_and_google_ai_studio_gemini.py): handle nuance in counting exclusive vs. inclusive tokens

Addresses https://github.com/BerriAI/litellm/pull/10141#discussion_r2052272035
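
The second fix concerns the arithmetic of Gemini's usage counts. Below is a sketch of the accounting, under the assumption that `candidatesTokenCount` excludes reasoning tokens (`thoughtsTokenCount`) while `totalTokenCount` includes them; field names follow Gemini's `usageMetadata`, and the numbers are invented:

```python
# Exclusive vs. inclusive counts: candidatesTokenCount covers only the visible
# completion text, so reasoning tokens must be added back in to get the real
# completion-token total.
usage_metadata = {
    "promptTokenCount": 17,
    "candidatesTokenCount": 120,  # visible output only (exclusive of thoughts)
    "thoughtsTokenCount": 300,    # reasoning tokens, reported separately
    "totalTokenCount": 437,       # inclusive grand total
}

prompt_tokens = usage_metadata["promptTokenCount"]
reasoning_tokens = usage_metadata.get("thoughtsTokenCount", 0)
completion_tokens = usage_metadata["candidatesTokenCount"] + reasoning_tokens

# The inclusive total should reconcile with the exclusive parts.
assert prompt_tokens + completion_tokens == usage_metadata["totalTokenCount"]
```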
Name | Last commit | Date
gemini | Gemini-2.5-flash improvements (#10198) | 2025-04-21 22:48:00 -07:00
multimodal_embeddings | Add bedrock latency optimized inference support (#9623) | 2025-03-29 00:23:09 -07:00
test_http_status_201.py | add test code | 2025-03-13 14:00:12 +09:00
test_vertex_ai_common_utils.py | Add property ordering for vertex ai schema (#9828) + Fix combining multiple tool calls (#10040) | 2025-04-15 22:29:25 -07:00
test_vertex_llm_base.py | Fix VertexAI Credential Caching issue (#9756) | 2025-04-04 16:38:08 -07:00