559a6ad826 | Krish Dholakia | 2024-08-29 07:00:30 -07:00
fix(google_ai_studio): working context caching (#5421)
* fix(google_ai_studio): working context caching
* feat(vertex_ai_context_caching.py): support async cache check calls
* fix(vertex_and_google_ai_studio_gemini.py): fix setting headers
* fix(vertex_ai_parter_models): fix import
* fix(vertex_and_google_ai_studio_gemini.py): fix input
* test(test_amazing_vertex_completion.py): fix test

a857f4a8ee | Krish Dholakia | 2024-08-28 18:05:27 -07:00
Merge branch 'main' into litellm_main_staging
f0fb8bdf45 | Krrish Dholakia | 2024-08-28 16:38:42 -07:00
fix(router.py): fix cooldown check

a6ce27ca29 | Krrish Dholakia | 2024-08-27 19:23:50 -07:00
feat(batch_embed_content_transformation.py): support google ai studio /batchEmbedContent endpoint
Allows for multiple strings to be given for embedding
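The /batchEmbedContent change above bundles several input strings into a single request. A minimal sketch of that request shape — field names assumed from the public Gemini batchEmbedContents REST API, and the helper name here is hypothetical, not LiteLLM's actual transformation function:

```python
def build_batch_embed_request(model: str, texts: list[str]) -> dict:
    # One embed request per input string, wrapped into a single batch body.
    return {
        "requests": [
            {"model": f"models/{model}", "content": {"parts": [{"text": t}]}}
            for t in texts
        ]
    }

body = build_batch_embed_request("text-embedding-004", ["hello", "world"])
```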
5b29ddd2a6 | Krrish Dholakia | 2024-08-27 18:14:56 -07:00
fix(embeddings_handler.py): initial working commit for google ai studio text embeddings /embedContent endpoint

415abc86c6 | Krish Dholakia | 2024-08-27 11:50:14 -07:00
Merge pull request #5358 from BerriAI/litellm_fix_retry_after
fix retry after - cooldown individual models based on their specific 'retry-after' header
b0cc1df2d6 | Krrish Dholakia | 2024-08-26 22:19:01 -07:00
feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format)
Closes https://github.com/BerriAI/litellm/issues/5213
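The "anthropic caching format" referenced above marks content blocks with a `cache_control` field, per Anthropic's prompt-caching API. A rough sketch of what such a request looks like (the `cached_blocks` helper is illustrative, not part of LiteLLM):

```python
# A system message marked cacheable in the Anthropic-style format:
# content blocks carrying a `cache_control` field.
messages = [
    {
        "role": "system",
        "content": [
            {
                "type": "text",
                "text": "You are a helpful assistant. <large corpus here>",
                "cache_control": {"type": "ephemeral"},
            }
        ],
    },
    {"role": "user", "content": "What does the corpus say about X?"},
]

def cached_blocks(msgs):
    # Collect every content block that requests caching.
    return [
        block
        for m in msgs
        if isinstance(m.get("content"), list)
        for block in m["content"]
        if block.get("cache_control")
    ]
```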
c503ff435e | Krish Dholakia | 2024-08-26 22:11:42 -07:00
Merge pull request #5368 from BerriAI/litellm_vertex_function_support
feat(vertex_httpx.py): support 'functions' param for gemini google ai studio + vertex ai

1a97362b6e | Krrish Dholakia | 2024-08-26 22:09:02 -07:00
fix(types/utils.py): map finish reason to openai compatible

8e9acd117b | Krrish Dholakia | 2024-08-26 15:08:08 -07:00
fix(sagemaker.py): support streaming for messages api
Fixes https://github.com/BerriAI/litellm/issues/5372

d13d2e8a62 | Krrish Dholakia | 2024-08-26 10:59:01 -07:00
feat(vertex_httpx.py): support functions param for gemini google ai studio + vertex ai
Closes https://github.com/BerriAI/litellm/issues/5344
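Supporting the OpenAI-style `functions` param for Gemini above amounts to translating each function spec into a Gemini function declaration inside a `tools` entry. A hedged sketch of that mapping — field names assumed from Gemini's public API, and this is not LiteLLM's actual transformation code:

```python
def openai_functions_to_gemini_tools(functions: list[dict]) -> list[dict]:
    # OpenAI-style `functions` entries map onto Gemini function
    # declarations: same name / description / JSON-schema parameters.
    return [{
        "function_declarations": [
            {
                "name": f["name"],
                "description": f.get("description", ""),
                "parameters": f.get("parameters", {}),
            }
            for f in functions
        ]
    }]
```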
068aafdff9 | Krrish Dholakia | 2024-08-24 12:30:30 -07:00
fix(utils.py): correctly re-raise the headers from an exception, if present
Fixes issue where retry after on router was not using azure / openai numbers
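Re-raising provider headers matters because the router's cooldown can then honor the provider's own `Retry-After` value instead of a fixed default. An illustrative sketch of that idea (the helper name and fallback value are assumptions, not LiteLLM's implementation; it handles only the delta-seconds form of the header, not HTTP-dates):

```python
def cooldown_seconds(headers: dict, default: float = 60.0) -> float:
    # Prefer the provider's own Retry-After value; fall back to a
    # fixed cooldown when the header is absent or unparsable.
    value = headers.get("retry-after") or headers.get("Retry-After")
    try:
        return float(value)
    except (TypeError, ValueError):
        return default
```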
228252b92d | Ishaan Jaff | 2024-08-22 18:21:24 -07:00
Merge branch 'main' into litellm_allow_using_azure_ad_token_auth

7d55047ab9 | Ishaan Jaff | 2024-08-22 16:09:55 -07:00
add bedrock guardrails support

14a6ce367d | Ishaan Jaff | 2024-08-22 15:40:58 -07:00
add types for BedrockMessage

08fa3f346a | Ishaan Jaff | 2024-08-22 11:37:30 -07:00
add new litellm params for client_id, tenant_id etc

70bf8bd4f4 | Krrish Dholakia | 2024-08-22 11:03:33 -07:00
feat(factory.py): enable 'user_continue_message' for interweaving user/assistant messages when provider requires it
allows bedrock to be used with autogen
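The 'user_continue_message' feature above addresses providers (like Bedrock) that reject consecutive same-role turns. A minimal sketch of the interweaving idea — the function and default message text are hypothetical stand-ins, not LiteLLM's actual factory code:

```python
DEFAULT_USER_CONTINUE = {"role": "user", "content": "Please continue."}

def interweave(messages: list[dict], continue_msg: dict = DEFAULT_USER_CONTINUE) -> list[dict]:
    # Insert a user turn between consecutive assistant turns so the
    # conversation strictly alternates, as some providers require.
    out: list[dict] = []
    for m in messages:
        if out and out[-1]["role"] == "assistant" and m["role"] == "assistant":
            out.append(continue_msg)
        out.append(m)
    return out
```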
2a3bc8c190 | Ishaan Jaff | 2024-08-22 10:59:18 -07:00
add azure_ad_token_provider as all litellm params

f36e7e0754 | Krrish Dholakia | 2024-08-22 10:00:03 -07:00
fix(ollama_chat.py): fix passing assistant message with tool call param
Fixes https://github.com/BerriAI/litellm/issues/5319

dd524a4f50 | Ishaan Jaff | 2024-08-21 17:06:43 -07:00
Merge pull request #5326 from BerriAI/litellm_Add_vertex_multimodal_embedding
[Feat] add vertex multimodal embedding support

8a05ce77e9 | Krrish Dholakia | 2024-08-21 16:58:07 -07:00
feat(litellm_logging.py): add 'saved_cache_cost' to standard logging payload (s3)

dd00cf2a97 | Ishaan Jaff | 2024-08-21 14:25:47 -07:00
add VertexMultimodalEmbeddingRequest type

8d2c529e55 | Ishaan Jaff | 2024-08-20 17:19:24 -07:00
support lakera ai category thresholds

8cd1963c11 | Ishaan Jaff | 2024-08-19 18:24:20 -07:00
feat - guardrails v2

178139f18d | Krrish Dholakia | 2024-08-16 16:21:34 -07:00
feat(litellm_logging.py): support logging model price information to s3 logs

740d1fdb1c | Ishaan Jaff | 2024-08-16 11:38:22 -07:00
add provider_specific_fields to GenericStreamingChunk

6c3f37f8b4 | Krish Dholakia | 2024-08-15 23:00:18 -07:00
Merge pull request #5235 from BerriAI/litellm_fix_s3_logs
fix(s3.py): fix s3 logging payload to have valid json values

df4ea8fba6 | Ishaan Jaff | 2024-08-15 18:18:02 -07:00
refactor sagemaker to be async

f6dba82882 | Krrish Dholakia | 2024-08-15 17:53:25 -07:00
feat(litellm_logging.py): cleanup payload + add response cost to logged payload

3ddeb3297d | Krrish Dholakia | 2024-08-15 17:33:40 -07:00
fix(litellm_logging.py): fix standard payload

cda50e5d47 | Krrish Dholakia | 2024-08-15 17:09:02 -07:00
fix(s3.py): fix s3 logging payload to have valid json values
Previously pydantic objects were being stringified, making them unparsable
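The s3 fix above (cda50e5d47) replaces stringified pydantic objects with real JSON structures, so the logged payload stays machine-parsable. An illustrative sketch of the general technique — this is not the actual s3.py code, and the `_FakeModel` class is a stand-in for a pydantic model:

```python
import json

def to_json_safe(obj):
    # Recursively unwrap model objects into plain dicts/lists so the
    # logged payload is valid, parsable JSON (rather than str(obj)).
    if hasattr(obj, "model_dump"):  # pydantic v2-style objects
        return to_json_safe(obj.model_dump())
    if isinstance(obj, dict):
        return {k: to_json_safe(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return [to_json_safe(v) for v in obj]
    return obj

class _FakeModel:  # stand-in for a pydantic usage object
    def model_dump(self):
        return {"prompt_tokens": 10, "completion_tokens": 3}

payload = to_json_safe({"usage": _FakeModel(), "model": "gpt-4o"})
```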
78a2013e51 | Ishaan Jaff | 2024-08-14 17:03:10 -07:00
add test for large context in system message for anthropic

b0651bd481 | Ishaan Jaff | 2024-08-14 14:56:49 -07:00
add anthropic cache controls

19bb95f781 | Krrish Dholakia | 2024-08-10 14:15:12 -07:00
build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map
Closes https://github.com/BerriAI/litellm/issues/4881

1553f7fa48 | Krrish Dholakia | 2024-08-10 09:23:03 -07:00
fix(types/utils.py): handle null completion tokens
Fixes https://github.com/BerriAI/litellm/issues/5096

6180c52cfe | Krrish Dholakia | 2024-08-09 12:24:48 -07:00
fix(router.py): fix types

7b6db63d30 | Krrish Dholakia | 2024-08-09 12:16:49 -07:00
fix(router.py): fallback on 400-status code requests

4919cc4d25 | Krrish Dholakia | 2024-08-07 09:24:11 -07:00
fix(anthropic.py): handle scenario where anthropic returns invalid json string for tool call while streaming
Fixes https://github.com/BerriAI/litellm/issues/5063
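The streaming fix above (4919cc4d25) deals with the fact that tool-call arguments arrive as JSON fragments: any single chunk is usually not valid JSON on its own. A sketch of the usual workaround — accumulate fragments and parse only the joined string (the helper is illustrative, not anthropic.py's code):

```python
import json

def parse_streamed_arguments(fragments: list[str]) -> dict:
    # Each streamed chunk carries only a fragment of the tool-call
    # arguments; parse only after all fragments are joined, and fall
    # back to an empty dict if the stream was cut off mid-object.
    raw = "".join(fragments)
    try:
        return json.loads(raw) if raw else {}
    except json.JSONDecodeError:
        return {}

args = parse_streamed_arguments(['{"location"', ': "Par', 'is"}'])
```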
c82fc0cac2 | Krish Dholakia | 2024-08-06 22:47:13 -07:00
Merge branch 'main' into litellm_support_lakera_config_thresholds

834b437eb4 | Krrish Dholakia | 2024-08-06 12:23:22 -07:00
fix(utils.py): fix types

3c4c78a71f | Krrish Dholakia | 2024-08-05 11:18:59 -07:00
feat(caching.py): enable caching on provider-specific optional params
Closes https://github.com/BerriAI/litellm/issues/5049

cd94c3adc1 | Krrish Dholakia | 2024-08-05 09:58:44 -07:00
fix(types/router.py): remove model_info pydantic field
Fixes https://github.com/BerriAI/litellm/issues/5042

ac6c39c283 | Krrish Dholakia | 2024-08-03 20:16:19 -07:00
feat(anthropic_adapter.py): support streaming requests for /v1/messages endpoint
Fixes https://github.com/BerriAI/litellm/issues/5011

5add6687cc | Krrish Dholakia | 2024-08-03 11:48:33 -07:00
fix(types/utils.py): fix linting errors

c982ec88d8 | Krrish Dholakia | 2024-08-03 09:46:49 -07:00
fix(bedrock.py): fix response format for bedrock image generation response
Fixes https://github.com/BerriAI/litellm/issues/5010

4917aaefab | Ishaan Jaff | 2024-08-03 08:40:35 -07:00
fix vertex credentials

9dffe23108 | Ishaan Jaff | 2024-08-03 08:29:11 -07:00
Merge pull request #5030 from BerriAI/litellm_add_vertex_ft_proxy
[Feat] Add support for Vertex AI Fine tuning on LiteLLM Proxy

f840a5f6b4 | Ishaan Jaff | 2024-08-03 08:22:55 -07:00
Merge pull request #5028 from BerriAI/litellm_create_ft_job_gemini
[Feat] Add support for Vertex AI fine tuning endpoints

4fc27e87c5 | Ishaan Jaff | 2024-08-02 18:26:36 -07:00
add vertex ai ft on proxy

ac6224c2b1 | Ishaan Jaff | 2024-08-02 18:02:24 -07:00
translate response from vertex to openai