litellm-mirror/litellm/llms/vertex_ai_and_google_ai_studio
Latest commit: 559a6ad826 by Krish Dholakia (2024-08-29 07:00:30 -07:00)
fix(google_ai_studio): working context caching (#5421)

* fix(google_ai_studio): working context caching
* feat(vertex_ai_context_caching.py): support async cache check calls
* fix(vertex_and_google_ai_studio_gemini.py): fix setting headers
* fix(vertex_ai_partner_models): fix import
* fix(vertex_and_google_ai_studio_gemini.py): fix input
* test(test_amazing_vertex_completion.py): fix test
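The latest commit wires up context caching for Google AI Studio models and adds async cache-check support. A minimal usage sketch follows, assuming the anthropic-style `cache_control` block format referenced in the commit messages; the model name, API key, and document text are placeholders, not the repository's own test code.

```python
# Sketch of Google AI Studio context caching through litellm, assuming the
# anthropic-style `cache_control` message format described in the commits.
# GEMINI_API_KEY, the model name, and LARGE_DOCUMENT are placeholders.
import asyncio
import os

import litellm

os.environ.setdefault("GEMINI_API_KEY", "your-api-key")  # placeholder credential

# cached content generally must exceed the provider's minimum token count
LARGE_DOCUMENT = "lorem ipsum " * 10_000

messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": LARGE_DOCUMENT,
                # marks this block as cacheable so it can be reused across calls
                "cache_control": {"type": "ephemeral"},
            }
        ],
    },
    {"role": "user", "content": "Summarize the document in two sentences."},
]

# synchronous call
response = litellm.completion(model="gemini/gemini-1.5-pro-001", messages=messages)
print(response.choices[0].message.content)


# async variant, relevant to the "async cache check calls" commit above
async def main() -> None:
    aresponse = await litellm.acompletion(
        model="gemini/gemini-1.5-pro-001", messages=messages
    )
    print(aresponse.choices[0].message.content)


asyncio.run(main())
```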
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| context_caching | fix(google_ai_studio): working context caching (#5421) | 2024-08-29 07:00:30 -07:00 |
| embeddings | fix(main.py): simplify to just use /batchEmbedContent (see the embedding sketch after this table) | 2024-08-27 21:46:05 -07:00 |
| gemini | fix(google_ai_studio): working context caching (#5421) | 2024-08-29 07:00:30 -07:00 |
| vertex_ai_partner_models | fix(vertex_ai_partner_models.py): fix vertex import | 2024-08-28 18:08:33 -07:00 |
| common_utils.py | feat(batch_embed_content_transformation.py): support google ai studio /batchEmbedContent endpoint | 2024-08-27 19:23:50 -07:00 |
| vertex_ai_anthropic.py | fix: initial commit | 2024-08-27 17:35:56 -07:00 |
| vertex_ai_non_gemini.py | feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format) | 2024-08-26 22:19:01 -07:00 |
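The embeddings and common_utils.py rows above reference Google AI Studio's /batchEmbedContent endpoint. A minimal sketch of reaching it through litellm's embedding API, assuming a placeholder API key and the `gemini/text-embedding-004` model string; treat the exact model name as an assumption.

```python
# Sketch of the Google AI Studio embedding path noted in the embeddings and
# common_utils.py entries, which batch list inputs via /batchEmbedContent.
# The API key and model name are placeholders.
import os

import litellm

os.environ.setdefault("GEMINI_API_KEY", "your-api-key")  # placeholder credential

# a list input is sent as a single batched embedding request
response = litellm.embedding(
    model="gemini/text-embedding-004",
    input=["first chunk of text", "second chunk of text"],
)
print(len(response.data), "embeddings returned")
```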