Krish Dholakia | 559a6ad826 | 2024-08-29 07:00:30 -07:00 | fix(google_ai_studio): working context caching (#5421)
    * fix(google_ai_studio): working context caching
    * feat(vertex_ai_context_caching.py): support async cache check calls
    * fix(vertex_and_google_ai_studio_gemini.py): fix setting headers
    * fix(vertex_ai_parter_models): fix import
    * fix(vertex_and_google_ai_studio_gemini.py): fix input
    * test(test_amazing_vertex_completion.py): fix test
Krrish Dholakia | dd9c5d10bd | 2024-08-28 18:08:33 -07:00 | fix(vertex_ai_partner_models.py): fix vertex import
Krish Dholakia | a857f4a8ee | 2024-08-28 18:05:27 -07:00 | Merge branch 'main' into litellm_main_staging
Krish Dholakia | d928220ed2 | 2024-08-28 13:46:28 -07:00 | Merge pull request #5393 from BerriAI/litellm_gemini_embedding_support
    feat(vertex_ai_and_google_ai_studio): Support Google AI Studio Embedding Endpoint
Ishaan Jaff | 58506dbade | 2024-08-28 12:52:26 -07:00 | update validate_vertex_input
Ishaan Jaff | 6d11b392f8 | 2024-08-28 12:17:53 -07:00 | add ssml input on vertex tts
Krrish Dholakia | e1db58b8e5 | 2024-08-27 21:46:05 -07:00 | fix(main.py): simplify to just use /batchEmbedContent
Krrish Dholakia | a6ce27ca29 | 2024-08-27 19:23:50 -07:00 | feat(batch_embed_content_transformation.py): support google ai studio /batchEmbedContent endpoint
    Allows for multiple strings to be given for embedding
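The two commits above wire up Google AI Studio's /batchEmbedContent endpoint, which embeds several input strings in a single request. A minimal sketch of the request transformation, assuming the payload shape documented for the Gemini API (the helper name and structure here are illustrative, not litellm's actual code):

```python
def transform_batch_embed_request(model: str, inputs: list[str]) -> dict:
    """Build a /batchEmbedContent payload: one embed request per input string.

    Hypothetical helper illustrating the documented Gemini API request shape;
    not the actual litellm implementation.
    """
    return {
        "requests": [
            {
                "model": f"models/{model}",
                "content": {"parts": [{"text": text}]},
            }
            for text in inputs
        ]
    }

# Multiple strings in one call -- the point of batching vs. /embedContent.
payload = transform_batch_embed_request("text-embedding-004", ["hello", "world"])
```

Each entry in `requests` mirrors a single /embedContent call, which is why the later `fix(main.py)` commit could simplify to always using the batch endpoint.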
Krrish Dholakia | bb42146ffe | 2024-08-27 18:31:57 -07:00 | feat(embeddings_handler.py): support async gemini embeddings
Ishaan Jaff | 647504b462 | 2024-08-27 18:25:51 -07:00 | add test for rerank on custom api base
Krrish Dholakia | 5b29ddd2a6 | 2024-08-27 18:14:56 -07:00 | fix(embeddings_handler.py): initial working commit for google ai studio text embeddings /embedContent endpoint
Krrish Dholakia | 77e6da78a1 | 2024-08-27 17:35:56 -07:00 | fix: initial commit
Ishaan Jaff | 06529f19df | 2024-08-27 17:29:37 -07:00 | Merge pull request #5392 from BerriAI/litellm_add_native_cohere_rerank
    [Feat] Add cohere rerank and together ai rerank
Ishaan Jaff | 37ed201c50 | 2024-08-27 17:09:16 -07:00 | fix install on 3.8
Krrish Dholakia | 5b06ea136c | 2024-08-27 17:06:25 -07:00 | fix(openai.py): fix error re-raising
Ishaan Jaff | b3892b871d | 2024-08-27 17:02:48 -07:00 | add async support for rerank
Krrish Dholakia | d29a7087f1 | 2024-08-27 16:53:11 -07:00 | feat(vertex_ai_and_google_ai_studio): Support Google AI Studio Embeddings endpoint
    Closes https://github.com/BerriAI/litellm/issues/5385
Ishaan Jaff | f33dfe0b95 | 2024-08-27 16:45:39 -07:00 | add rerank params
Ishaan Jaff | dc42ad0021 | 2024-08-27 16:25:54 -07:00 | add tg ai rerank support
Krrish Dholakia | 6431af0678 | 2024-08-27 16:08:54 -07:00 | fix(bedrock_httpx.py): support 'Auth' header as extra_header
    Fixes https://github.com/BerriAI/litellm/issues/5389#issuecomment-2313677977
Krrish Dholakia | 1b2f73c220 | 2024-08-27 15:52:55 -07:00 | fix(azure_text.py): fix streaming parsing
Ishaan Jaff | 6ab8cbc105 | 2024-08-27 15:06:26 -07:00 | Merge pull request #5391 from BerriAI/litellm_add_ai21_support
    [Feat] Add Vertex AI21 support
Ishaan Jaff | 33a3a01949 | 2024-08-27 14:42:13 -07:00 | add mock test for ai21
Krrish Dholakia | b91e5d3887 | 2024-08-27 14:26:06 -07:00 | fix(openai.py): fix post call error logging for aembedding calls
Krrish Dholakia | d43441ae5d | 2024-08-27 13:57:03 -07:00 | fix(anthropic.py): support setting cache control headers, automatically
    Don't require user to manually pass in 'extra_headers' for anthropic cache control usage
Krrish Dholakia | 63adb3f940 | 2024-08-27 13:44:38 -07:00 | fix(azure.py): fix raw response dump
Ishaan Jaff | 11c175a215 | 2024-08-27 13:35:22 -07:00 | refactor partner models to include ai21
Krrish Dholakia | 18731cf42b | 2024-08-27 12:14:23 -07:00 | fix: fix linting errors
Krish Dholakia | 415abc86c6 | 2024-08-27 11:50:14 -07:00 | Merge pull request #5358 from BerriAI/litellm_fix_retry_after
    fix retry after - cooldown individual models based on their specific 'retry-after' header
Krrish Dholakia | 18b67a455e | 2024-08-27 10:46:57 -07:00 | test: fix test
Krrish Dholakia | bf81b484c6 | 2024-08-27 08:10:47 -07:00 | fix(sagemaker.py): fix streaming logic
Krrish Dholakia | 2cf149fbad | 2024-08-27 07:37:06 -07:00 | perf(sagemaker.py): asyncify hf prompt template check
    leads to 189% improvement in RPS @ 100 users
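The perf commit above moves a blocking Hugging Face prompt-template check off the event loop, so concurrent requests are no longer serialized behind it (the commit reports a 189% RPS improvement at 100 users). A minimal sketch of the pattern with `asyncio.to_thread`; the function names are hypothetical stand-ins, not the actual litellm code:

```python
import asyncio

def check_hf_prompt_template(model: str) -> bool:
    # Stand-in for a blocking check (e.g. reading a tokenizer/chat-template
    # config). Anything CPU- or IO-bound here would stall the event loop
    # if called directly from async request handlers.
    return model.startswith("huggingface/")

async def async_check_hf_prompt_template(model: str) -> bool:
    # Run the blocking check in a worker thread so the event loop stays
    # free to make progress on other in-flight requests.
    return await asyncio.to_thread(check_hf_prompt_template, model)

result = asyncio.run(async_check_hf_prompt_template("huggingface/gpt2"))
```

`asyncio.to_thread` (Python 3.9+) is the simplest way to "asyncify" an existing synchronous helper without rewriting it; the throughput win comes purely from not blocking the loop, not from making the check itself faster.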
miraclebakelaser | 97f714d2b0 | 2024-08-27 19:38:37 +09:00 | fix(factory.py): handle missing 'content' in cohere assistant messages
    Update cohere_messages_pt_v2 function to check for 'content' existence
Krish Dholakia | 08bd4788dc | 2024-08-26 22:22:17 -07:00 | Merge branch 'main' into litellm_gemini_context_caching
Krrish Dholakia | 5aad9d2db7 | 2024-08-26 22:19:01 -07:00 | fix: fix imports
Krrish Dholakia | 4868a6cf55 | 2024-08-26 22:19:01 -07:00 | fix: fix unbound var
Krrish Dholakia | 0eea01dae9 | 2024-08-26 22:19:01 -07:00 | feat(vertex_ai_context_caching.py): check gemini cache, if key already exists
Krrish Dholakia | b0cc1df2d6 | 2024-08-26 22:19:01 -07:00 | feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format)
    Closes https://github.com/BerriAI/litellm/issues/5213
Krish Dholakia | c503ff435e | 2024-08-26 22:11:42 -07:00 | Merge pull request #5368 from BerriAI/litellm_vertex_function_support
    feat(vertex_httpx.py): support 'functions' param for gemini google ai studio + vertex ai
Krish Dholakia | 3a6412c9c3 | 2024-08-26 21:36:10 -07:00 | Merge pull request #5376 from BerriAI/litellm_sagemaker_streaming_fix
    fix(sagemaker.py): support streaming for messages api
Krrish Dholakia | 75bb9ff7fe | 2024-08-26 21:36:04 -07:00 | fix: fix imports
Krrish Dholakia | 592d8e933d | 2024-08-26 20:49:08 -07:00 | fix: fix unbound var
Krrish Dholakia | 5d68f27a57 | 2024-08-26 20:28:18 -07:00 | feat(vertex_ai_context_caching.py): check gemini cache, if key already exists
Krrish Dholakia | c83aa801f2 | 2024-08-26 18:47:45 -07:00 | feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format)
    Closes https://github.com/BerriAI/litellm/issues/5213
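The vertex_ai_context_caching.py commits above accept Anthropic's `cache_control` message format in a normal chat completion call and translate it into a Gemini cached-content request. A minimal sketch of the message split that approach implies, assuming the anthropic block format; the helper is hypothetical, not litellm's actual implementation:

```python
def split_cached_messages(messages: list[dict]) -> tuple[list[dict], list[dict]]:
    """Separate messages flagged for caching from those sent on every request.

    Hypothetical helper: in the anthropic caching format, a message's content
    is a list of blocks, and blocks to cache carry a 'cache_control' key.
    """
    cached, regular = [], []
    for msg in messages:
        content = msg.get("content")
        is_cached = isinstance(content, list) and any(
            isinstance(block, dict) and "cache_control" in block
            for block in content
        )
        (cached if is_cached else regular).append(msg)
    return cached, regular

messages = [
    {"role": "system", "content": [
        {"type": "text", "text": "Very long system prompt...",
         "cache_control": {"type": "ephemeral"}},
    ]},
    {"role": "user", "content": "What does the document say?"},
]
cached, regular = split_cached_messages(messages)
```

The "check gemini cache, if key already exists" commits fit the same flow: the cached portion is hashed into a key, an existing cache entry is reused when found, and only the regular messages ride along on each completion call.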
Ishaan Jaff | c02dc761e9 | 2024-08-26 17:54:32 -07:00 | Merge pull request #5374 from BerriAI/litellm_refactor_cohere
    [Refactor] Refactor cohere provider to be in a folder
Ishaan Jaff | 4f8026f44d | 2024-08-26 16:33:04 -07:00 | fix refactor cohere
Krrish Dholakia | 8e9acd117b | 2024-08-26 15:08:08 -07:00 | fix(sagemaker.py): support streaming for messages api
    Fixes https://github.com/BerriAI/litellm/issues/5372
Ishaan Jaff | da63775371 | 2024-08-26 14:28:50 -07:00 | use common folder for cohere
Ishaan Jaff | f9ea0d8fa9 | 2024-08-26 14:16:25 -07:00 | refactor cohere to be in a folder
Ishaan Jaff | 3d11b21726 | 2024-08-26 13:10:04 -07:00 | add fine tuned vertex model support