litellm-mirror/tests/litellm/llms/vertex_ai
Latest commit 03b5399f86 by Krish Dholakia, 2025-04-19 12:32:38 -07:00
test(utils.py): handle scenario where text tokens + reasoning tokens … (#10165)

* test(utils.py): handle scenario where text tokens + reasoning tokens set, but reasoning tokens not charged separately

  Addresses https://github.com/BerriAI/litellm/pull/10141#discussion_r2051555332

* fix(vertex_and_google_ai_studio.py): only set content if non-empty str
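The first bullet of the commit above concerns token accounting: a provider can report text tokens and reasoning tokens as separate counts while billing them as a single completion total. The sketch below is a minimal, hypothetical illustration of that bookkeeping only; the names `CompletionUsage` and `build_completion_usage` are assumptions made for this example and are not taken from the LiteLLM codebase.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CompletionUsage:
    completion_tokens: int
    reasoning_tokens: Optional[int] = None


def build_completion_usage(
    text_tokens: int, reasoning_tokens: Optional[int]
) -> CompletionUsage:
    """Fold reasoning tokens into the completion total.

    Hypothetical helper: when the provider reports reasoning tokens but does
    not charge them separately, they still count toward completion_tokens,
    while the breakdown is kept for observability.
    """
    if reasoning_tokens is None:
        # No reasoning tokens reported: completion tokens are just the text tokens.
        return CompletionUsage(completion_tokens=text_tokens)

    # Reasoning tokens reported but not billed separately: add them to the
    # completion total and record the split.
    return CompletionUsage(
        completion_tokens=text_tokens + reasoning_tokens,
        reasoning_tokens=reasoning_tokens,
    )


if __name__ == "__main__":
    # e.g. 120 visible text tokens plus 30 reasoning tokens -> 150 completion tokens
    print(build_completion_usage(text_tokens=120, reasoning_tokens=30))
```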
Name | Last commit message | Last commit date
gemini | test(utils.py): handle scenario where text tokens + reasoning tokens … (#10165) | 2025-04-19 12:32:38 -07:00
multimodal_embeddings | Add bedrock latency optimized inference support (#9623) | 2025-03-29 00:23:09 -07:00
test_http_status_201.py | add test code | 2025-03-13 14:00:12 +09:00
test_vertex_ai_common_utils.py | Add property ordering for vertex ai schema (#9828) + Fix combining multiple tool calls (#10040) | 2025-04-15 22:29:25 -07:00
test_vertex_llm_base.py | Fix VertexAI Credential Caching issue (#9756) | 2025-04-04 16:38:08 -07:00