Ishaan Jaff | 3a72197e77 | 2024-08-30 17:03:04 -07:00
    Merge pull request #5455 from BerriAI/litellm_vtx_add_input_type_mapping
    [Feat] Vertex embeddings - map `input_type` to `text_type`

Ishaan Jaff | 518aa639fa | 2024-08-30 12:09:07 -07:00
    fix map input_type to task_type for vertex ai

Ishaan Jaff | 570a5a2825 | 2024-08-30 11:44:23 -07:00
    fix dir structure for tts

Ishaan Jaff | 1bd2b2fc92 | 2024-08-30 10:21:42 -07:00
    Merge pull request #5449 from BerriAI/litellm_Fix_vertex_multimodal
    [Fix-Proxy] Allow running /health checks on vertex multimodal embedding requests

Ishaan Jaff | a6273a29fe | 2024-08-30 09:19:48 -07:00
    add test for test_vertexai_multimodal_embedding_text_input

Krish Dholakia | dd7b008161 | 2024-08-29 22:40:25 -07:00
    fix: Minor LiteLLM Fixes + Improvements (29/08/2024) (#5436)
    * fix(model_checks.py): support returning wildcard models on `/v1/models`
      Fixes https://github.com/BerriAI/litellm/issues/4903
    * fix(bedrock_httpx.py): support calling bedrock via api_base
      Closes https://github.com/BerriAI/litellm/pull/4587
    * fix(litellm_logging.py): only leave last 4 char of gemini key unmasked
      Fixes https://github.com/BerriAI/litellm/issues/5433
    * feat(router.py): support setting 'weight' param for models on router
      Closes https://github.com/BerriAI/litellm/issues/5410
    * test(test_bedrock_completion.py): add unit test for custom api base
    * fix(model_checks.py): handle no "/" in model
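
The key-masking item above (leaving only the last 4 characters of the Gemini key unmasked in logs) follows a common redaction pattern. A minimal sketch with a hypothetical `mask_key` helper, not litellm's actual `litellm_logging.py` code:

```python
def mask_key(key: str, visible: int = 4) -> str:
    """Replace all but the last `visible` characters of a secret with '*'.

    Hypothetical helper illustrating the pattern; keys no longer than
    `visible` are fully masked so nothing leaks for short values.
    """
    if len(key) <= visible:
        return "*" * len(key)
    return "*" * (len(key) - visible) + key[-visible:]

print(mask_key("AIzaSyExampleKey1234"))  # only "1234" remains readable
```

Keeping a short visible suffix lets operators match a log line to a key in their dashboard without exposing the credential itself.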
Ishaan Jaff | 5851a8f901 | 2024-08-29 14:25:05 -07:00
    Merge pull request #5431 from BerriAI/litellm_Add_fireworks_ai_health_check
    [Fix-Proxy] /health check for provider wildcard models (fireworks/*)

Ishaan Jaff | b576233e66 | 2024-08-29 09:29:16 -07:00
    add support for fireworks ai health check

Krish Dholakia | d928220ed2 | 2024-08-28 13:46:28 -07:00
    Merge pull request #5393 from BerriAI/litellm_gemini_embedding_support
    feat(vertex_ai_and_google_ai_studio): Support Google AI Studio Embedding Endpoint

Krrish Dholakia | 3cec00939e | 2024-08-28 07:51:00 -07:00
    test(test_embeddings.py): fix test

Krrish Dholakia | e1db58b8e5 | 2024-08-27 21:46:05 -07:00
    fix(main.py): simplify to just use /batchEmbedContent

Krrish Dholakia | a6ce27ca29 | 2024-08-27 19:23:50 -07:00
    feat(batch_embed_content_transformation.py): support google ai studio /batchEmbedContent endpoint
    Allows for multiple strings to be given for embedding

Krrish Dholakia | 5b29ddd2a6 | 2024-08-27 18:14:56 -07:00
    fix(embeddings_handler.py): initial working commit for google ai studio text embeddings /embedContent endpoint

Krrish Dholakia | 77e6da78a1 | 2024-08-27 17:35:56 -07:00
    fix: initial commit

Krrish Dholakia | d29a7087f1 | 2024-08-27 16:53:11 -07:00
    feat(vertex_ai_and_google_ai_studio): Support Google AI Studio Embeddings endpoint
    Closes https://github.com/BerriAI/litellm/issues/5385

Ishaan Jaff | 11c175a215 | 2024-08-27 13:35:22 -07:00
    refactor partner models to include ai21

Krish Dholakia | 415abc86c6 | 2024-08-27 11:50:14 -07:00
    Merge pull request #5358 from BerriAI/litellm_fix_retry_after
    fix retry after - cooldown individual models based on their specific 'retry-after' header
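
Cooling down a model based on its 'retry-after' header, as in PR #5358 above, requires handling both forms RFC 7231 allows: delta-seconds and an HTTP-date. A standalone sketch of that parsing, assuming a hypothetical helper rather than litellm's actual router code:

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime

def retry_after_seconds(value: str) -> float:
    """Parse a Retry-After header value into cooldown seconds.

    RFC 7231 allows either delta-seconds ("120") or an HTTP-date
    ("Wed, 21 Oct 2015 07:28:00 GMT"); dates in the past clamp to 0
    so the deployment is not cooled down at all.
    """
    try:
        return max(0.0, float(value))
    except ValueError:
        dt = parsedate_to_datetime(value)
        return max(0.0, (dt - datetime.now(timezone.utc)).total_seconds())

print(retry_after_seconds("120"))  # 120.0
```

Honoring the per-model value means one rate-limited deployment cools down for exactly as long as the provider asked, instead of a fixed global cooldown.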
Krrish Dholakia | b0cc1df2d6 | 2024-08-26 22:19:01 -07:00
    feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format)
    Closes https://github.com/BerriAI/litellm/issues/5213

Krish Dholakia | c503ff435e | 2024-08-26 22:11:42 -07:00
    Merge pull request #5368 from BerriAI/litellm_vertex_function_support
    feat(vertex_httpx.py): support 'functions' param for gemini google ai studio + vertex ai

Krish Dholakia | 3a6412c9c3 | 2024-08-26 21:36:10 -07:00
    Merge pull request #5376 from BerriAI/litellm_sagemaker_streaming_fix
    fix(sagemaker.py): support streaming for messages api

Ishaan Jaff | 95455c8849 | 2024-08-26 20:32:23 -07:00
    fix entrypoint

Krrish Dholakia | 8e9acd117b | 2024-08-26 15:08:08 -07:00
    fix(sagemaker.py): support streaming for messages api
    Fixes https://github.com/BerriAI/litellm/issues/5372

Ishaan Jaff | da63775371 | 2024-08-26 14:28:50 -07:00
    use common folder for cohere

Ishaan Jaff | f9ea0d8fa9 | 2024-08-26 14:16:25 -07:00
    refactor cohere to be in a folder

Krrish Dholakia | 8695cf186d | 2024-08-26 11:44:37 -07:00
    fix(main.py): fix linting errors

Krish Dholakia | f27abe0462 | 2024-08-24 18:24:19 -07:00
    Merge branch 'main' into litellm_vertex_migration

Krrish Dholakia | 87549a2391 | 2024-08-24 13:25:17 -07:00
    fix(main.py): cover openai /v1/completions endpoint

Krrish Dholakia | 5a2c9d5121 | 2024-08-24 10:08:14 -07:00
    test(test_router.py): add test to ensure error is correctly re-raised

Krish Dholakia | cd61ddc610 | 2024-08-23 21:00:00 -07:00
    Merge pull request #5343 from BerriAI/litellm_sagemaker_chat
    feat(sagemaker.py): add sagemaker messages api support

Ishaan Jaff | 80e95b4ccf | 2024-08-23 18:18:37 -07:00
    add mock testing for vertex tts

Ishaan Jaff | 8fada93fff | 2024-08-23 17:57:49 -07:00
    docs on using vertex tts

Ishaan Jaff | 755a0514f6 | 2024-08-23 16:05:31 -07:00
    fix linting

Ishaan Jaff | c3987745fe | 2024-08-23 15:44:31 -07:00
    fix linting errors

Krrish Dholakia | 3f116b25a9 | 2024-08-23 10:31:35 -07:00
    feat(sagemaker.py): add sagemaker messages api support
    Closes https://github.com/BerriAI/litellm/issues/2641
    Closes https://github.com/BerriAI/litellm/pull/5178

Krish Dholakia | 76b3db334b | 2024-08-22 19:07:54 -07:00
    Merge branch 'main' into litellm_azure_batch_apis

Ishaan Jaff | 228252b92d | 2024-08-22 18:21:24 -07:00
    Merge branch 'main' into litellm_allow_using_azure_ad_token_auth

Krrish Dholakia | d7d3eee349 | 2024-08-22 16:11:14 -07:00
    feat(azure.py): support health checking azure deployments
    Fixes https://github.com/BerriAI/litellm/issues/5279

Ishaan Jaff | 08fa3f346a | 2024-08-22 11:37:30 -07:00
    add new litellm params for client_id, tenant_id etc

Ishaan Jaff | 8f657b40f5 | 2024-08-22 11:03:49 -07:00
    use azure_ad_token_provider to init clients

Krrish Dholakia | 70bf8bd4f4 | 2024-08-22 11:03:33 -07:00
    feat(factory.py): enable 'user_continue_message' for interweaving user/assistant messages when provider requires it
    allows bedrock to be used with autogen
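
The 'user_continue_message' commit above addresses providers like Bedrock that require strictly alternating user/assistant turns, which breaks agent frameworks (e.g. autogen) that can emit consecutive assistant messages. A minimal sketch of the interweaving idea, with hypothetical names rather than litellm's actual factory.py code:

```python
def interweave(messages, user_continue_message=None):
    """Insert a filler user turn between consecutive assistant turns.

    Hypothetical helper: real providers differ on whether they reject
    repeated user turns, repeated assistant turns, or both.
    """
    filler = user_continue_message or {"role": "user", "content": "Please continue."}
    out = []
    for m in messages:
        if out and out[-1]["role"] == m["role"] == "assistant":
            out.append(dict(filler))  # copy so callers can't mutate the template
        out.append(m)
    return out

msgs = [
    {"role": "user", "content": "Plan a trip."},
    {"role": "assistant", "content": "Step 1 ..."},
    {"role": "assistant", "content": "Step 2 ..."},
]
print([m["role"] for m in interweave(msgs)])
# ['user', 'assistant', 'user', 'assistant']
```

Making the filler message configurable matters because a hardcoded "Please continue." can subtly steer some models; callers can pass a neutral token instead.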
Krrish Dholakia | 11bfc1dca7 | 2024-08-22 10:17:36 -07:00
    fix(cohere_chat.py): support passing 'extra_headers'
    Fixes https://github.com/BerriAI/litellm/issues/4709

Krrish Dholakia | f36e7e0754 | 2024-08-22 10:00:03 -07:00
    fix(ollama_chat.py): fix passing assistant message with tool call param
    Fixes https://github.com/BerriAI/litellm/issues/5319

Ishaan Jaff | 35781ab8d5 | 2024-08-21 15:05:59 -07:00
    add multi modal vtx embedding

Ishaan Jaff | 7e3dc83c0d | 2024-08-21 14:29:05 -07:00
    add initial support for multimodal_embedding vertex

Krish Dholakia | 409306b266 | 2024-08-20 11:40:53 -07:00
    Merge branch 'main' into litellm_fix_azure_api_version

Krrish Dholakia | 89791d9285 | 2024-08-20 08:14:14 -07:00
    fix(main.py): response_format typing for acompletion
    Fixes https://github.com/BerriAI/litellm/issues/5239

Krrish Dholakia | 49416e121c | 2024-08-19 12:17:43 -07:00
    feat(azure.py): support dynamic api versions
    Closes https://github.com/BerriAI/litellm/issues/5228

Krish Dholakia | a8dd2b6910 | 2024-08-16 19:16:20 -07:00
    Merge pull request #5244 from BerriAI/litellm_better_error_logging_sentry
    refactor: replace .error() with .exception() logging for better debugging on sentry

Krrish Dholakia | 7fce6b0163 | 2024-08-16 17:24:29 -07:00
    fix(health_check.py): return 'missing mode' error message, if error with health check, and mode is missing

Krrish Dholakia | 61f4b71ef7 | 2024-08-16 09:22:47 -07:00
    refactor: replace .error() with .exception() logging for better debugging on sentry
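
The `.error()` to `.exception()` refactor in the last commit matters because `Logger.exception` records the active exception's traceback along with the message, which Sentry and similar tools use to group and display errors. A self-contained illustration (the logger name is illustrative, not litellm's):

```python
import io
import logging

# Capture log output in a buffer so the difference is visible.
buf = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(buf))

try:
    raise ValueError("upstream call failed")
except ValueError:
    # logger.error("request failed") would log only the message;
    # logger.exception additionally appends the full traceback of
    # the exception currently being handled.
    logger.exception("request failed")

print("Traceback" in buf.getvalue())  # True
```

`.exception()` must be called from inside an `except` block; outside one, there is no active exception and the traceback line reads `NoneType: None`.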