Krish Dholakia
be3c7b401e
LiteLLM Minor fixes + improvements (08/03/2024) (#5488)
* fix(internal_user_endpoints.py): set budget_reset_at for /user/update
* fix(vertex_and_google_ai_studio_gemini.py): handle accumulated json
Fixes https://github.com/BerriAI/litellm/issues/5479
* fix(vertex_ai_and_gemini.py): fix assistant message function call when content is not None
Fixes https://github.com/BerriAI/litellm/issues/5490
* fix(proxy_server.py): generic state uuid for okta sso
* fix(lago.py): improve debug logs
Debugging for https://github.com/BerriAI/litellm/issues/5477
* docs(bedrock.md): add bedrock cross-region inferencing to docs
* fix(azure.py): return azure response headers on aembedding call
* feat(azure.py): return azure response headers for `/audio/transcription`
* fix(types/utils.py): standardize deepseek / anthropic prompt caching usage information
Closes https://github.com/BerriAI/litellm/issues/5285
* docs(usage.md): add docs on litellm usage object
* test(test_completion.py): mark flaky test
2024-09-03 21:21:34 -07:00
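The "handle accumulated json" fix in the entry above can be sketched as follows. This is a hypothetical illustration (the function name and shape are mine, not litellm's): streamed tool-call arguments may arrive as partial JSON fragments, so the handler keeps accumulating until the buffer parses.

```python
import json

def accumulate_tool_args(chunks):
    """Accumulate streamed JSON fragments until the buffer becomes valid JSON.

    Illustrative sketch only: providers may split a tool call's JSON
    arguments across stream chunks, so parsing each chunk alone fails.
    """
    buffer = ""
    for chunk in chunks:
        buffer += chunk
        try:
            return json.loads(buffer)
        except json.JSONDecodeError:
            continue  # not parseable yet; keep accumulating
    return None  # stream ended without producing valid JSON
```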
Krish Dholakia
6ccff1b13e
fix(router.py): fix inherited type (#5485)
2024-09-02 22:03:21 -07:00
Ishaan Jaff
c1adb0b7f2
Merge branch 'main' into litellm_track_imagen_spend_logs
2024-09-02 21:21:15 -07:00
Ishaan Jaff
4a0fdc40f1
add cost tracking for pass through imagen
2024-09-02 18:10:46 -07:00
Krish Dholakia
f9e6507cd1
LiteLLM Minor Fixes + Improvements (#5474)
* feat(proxy/_types.py): add lago billing to callbacks ui
Closes https://github.com/BerriAI/litellm/issues/5472
* fix(anthropic.py): return anthropic prompt caching information
Fixes https://github.com/BerriAI/litellm/issues/5364
* feat(bedrock/chat.py): support 'json_schema' for bedrock models
Closes https://github.com/BerriAI/litellm/issues/5434
* fix(bedrock/embed/embeddings.py): support async embeddings for amazon titan models
* fix: linting fixes
* fix: handle key errors
* fix(bedrock/chat.py): fix bedrock ai21 streaming object
* feat(bedrock/embed): support bedrock embedding optional params
* fix(databricks.py): fix usage chunk
* fix(internal_user_endpoints.py): apply internal user defaults, if user role updated
Fixes issue where user update wouldn't apply defaults
* feat(slack_alerting.py): provide multiple slack channels for a given alert type
Multiple channels might be interested in receiving alerts of a given type
* docs(alerting.md): add multiple channel alerting to docs
2024-09-02 14:29:57 -07:00
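The multiple-slack-channels feature in the entry above can be sketched roughly like this (an assumed config shape, not litellm's actual one): an alert type maps to either a single webhook URL or a list of them, and the alert fans out to every channel.

```python
def get_webhooks(alert_type, alert_to_webhook_url):
    """Fan-out sketch: an alert type may map to one webhook URL (older
    single-channel configs) or to a list; always return a list so every
    interested channel gets notified."""
    urls = alert_to_webhook_url.get(alert_type, [])
    return urls if isinstance(urls, list) else [urls]
```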
Krish Dholakia
e0d81434ed
LiteLLM minor fixes + improvements (31/08/2024) (#5464)
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits
Closes https://github.com/BerriAI/litellm/issues/4096
2024-09-01 13:31:42 -07:00
Krish Dholakia
37f9705d6e
Bedrock Embeddings refactor + model support (#5462)
* refactor(bedrock): initial commit to refactor bedrock to a folder
Improve code readability + maintainability
* refactor: more refactor work
* fix: fix imports
* feat(bedrock/embeddings.py): support translating embedding into amazon embedding formats
* fix: fix linting errors
* test: skip test on end of life model
* fix(cohere/embed.py): fix linting error
* fix(cohere/embed.py): fix typing
* fix(cohere/embed.py): fix post-call logging for cohere embedding call
* test(test_embeddings.py): fix error message assertion in test
2024-09-01 13:29:58 -07:00
Krish Dholakia
6fb82aaf75
Minor LiteLLM Fixes and Improvements (#5456)
* fix(utils.py): support 'drop_params' for embedding requests
Fixes https://github.com/BerriAI/litellm/issues/5444
* feat(vertex_ai_non_gemini.py): support function param in messages
* test: skip test - model end of life
* fix(vertex_ai_non_gemini.py): fix gemini history parsing
2024-08-31 17:58:10 -07:00
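The "drop_params for embedding requests" fix above amounts to filtering unsupported kwargs before the provider call. A minimal sketch (the supported-param set here is hypothetical; litellm derives it per provider):

```python
# Hypothetical per-provider allow-list for illustration only.
SUPPORTED_EMBEDDING_PARAMS = {"model", "input", "user"}

def prepare_embedding_kwargs(kwargs, drop_params=False):
    """With drop_params=True, silently strip params the provider can't
    accept; otherwise surface them as an error."""
    unsupported = set(kwargs) - SUPPORTED_EMBEDDING_PARAMS
    if unsupported and not drop_params:
        raise ValueError(f"Unsupported params: {sorted(unsupported)}")
    return {k: v for k, v in kwargs.items() if k in SUPPORTED_EMBEDDING_PARAMS}
```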
Krish Dholakia
9c8f1d7815
anthropic prompt caching cost tracking (#5453)
* fix(utils.py): support 'drop_params' for embedding requests
Fixes https://github.com/BerriAI/litellm/issues/5444
* feat(anthropic/cost_calculation.py): Support calculating cost for prompt caching on anthropic
* feat(types/utils.py): allows us to migrate to openai's equivalent, once that comes out
* fix: fix linting errors
* test: mark flaky test
2024-08-31 14:09:35 -07:00
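The prompt-caching cost calculation above can be sketched as follows. The multipliers are assumptions based on Anthropic's published pricing (cache writes bill at 1.25x the base input rate, cache reads at 0.1x); the function name and usage-dict keys are illustrative.

```python
def prompt_caching_cost(usage, input_cost_per_token):
    """Sketch of prompt-caching cost math (assumed multipliers: cache
    writes at 1.25x the base input rate, cache reads at 0.1x)."""
    base = usage.get("input_tokens", 0) * input_cost_per_token
    write = usage.get("cache_creation_input_tokens", 0) * input_cost_per_token * 1.25
    read = usage.get("cache_read_input_tokens", 0) * input_cost_per_token * 0.1
    return base + write + read
```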
Krish Dholakia
dd7b008161
fix: Minor LiteLLM Fixes + Improvements (29/08/2024) (#5436)
* fix(model_checks.py): support returning wildcard models on `/v1/models`
Fixes https://github.com/BerriAI/litellm/issues/4903
* fix(bedrock_httpx.py): support calling bedrock via api_base
Closes https://github.com/BerriAI/litellm/pull/4587
* fix(litellm_logging.py): only leave last 4 char of gemini key unmasked
Fixes https://github.com/BerriAI/litellm/issues/5433
* feat(router.py): support setting 'weight' param for models on router
Closes https://github.com/BerriAI/litellm/issues/5410
* test(test_bedrock_completion.py): add unit test for custom api base
* fix(model_checks.py): handle no "/" in model
2024-08-29 22:40:25 -07:00
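The router "weight" param in the entry above biases deployment selection. A minimal sketch of weighted selection (illustrative; not litellm's actual routing code):

```python
import random

def pick_weighted(deployments, rng=random):
    """Pick a deployment at random, biased by each deployment's 'weight'
    (default 1). Sketch of the idea behind the router 'weight' param."""
    weights = [d.get("weight", 1) for d in deployments]
    return rng.choices(deployments, weights=weights, k=1)[0]
```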
Krish Dholakia
559a6ad826
fix(google_ai_studio): working context caching (#5421)
* fix(google_ai_studio): working context caching
* feat(vertex_ai_context_caching.py): support async cache check calls
* fix(vertex_and_google_ai_studio_gemini.py): fix setting headers
* fix(vertex_ai_parter_models): fix import
* fix(vertex_and_google_ai_studio_gemini.py): fix input
* test(test_amazing_vertex_completion.py): fix test
2024-08-29 07:00:30 -07:00
Krish Dholakia
a857f4a8ee
Merge branch 'main' into litellm_main_staging
2024-08-28 18:05:27 -07:00
Krrish Dholakia
f0fb8bdf45
fix(router.py): fix cooldown check
2024-08-28 16:38:42 -07:00
Krrish Dholakia
a6ce27ca29
feat(batch_embed_content_transformation.py): support google ai studio /batchEmbedContent endpoint
Allows for multiple strings to be given for embedding
2024-08-27 19:23:50 -07:00
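The /batchEmbedContent support above maps a list of input strings to one embed request each. A sketch of the assumed request shape for Google AI Studio's batch embedding endpoint (field names are my reading of the REST API, not copied from litellm):

```python
def build_batch_embed_request(model, texts):
    """Build a batchEmbedContents-style payload: one request entry per
    input string (assumed shape for illustration)."""
    return {
        "requests": [
            {"model": f"models/{model}", "content": {"parts": [{"text": t}]}}
            for t in texts
        ]
    }
```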
Krrish Dholakia
5b29ddd2a6
fix(embeddings_handler.py): initial working commit for google ai studio text embeddings /embedContent endpoint
2024-08-27 18:14:56 -07:00
Krish Dholakia
415abc86c6
Merge pull request #5358 from BerriAI/litellm_fix_retry_after
fix retry after - cooldown individual models based on their specific 'retry-after' header
2024-08-27 11:50:14 -07:00
Krrish Dholakia
b0cc1df2d6
feat(vertex_ai_context_caching.py): support making context caching calls to vertex ai in a normal chat completion call (anthropic caching format)
Closes https://github.com/BerriAI/litellm/issues/5213
2024-08-26 22:19:01 -07:00
Krish Dholakia
c503ff435e
Merge pull request #5368 from BerriAI/litellm_vertex_function_support
feat(vertex_httpx.py): support 'functions' param for gemini google ai studio + vertex ai
2024-08-26 22:11:42 -07:00
Krrish Dholakia
1a97362b6e
fix(types/utils.py): map finish reason to openai compatible
2024-08-26 22:09:02 -07:00
Krrish Dholakia
8e9acd117b
fix(sagemaker.py): support streaming for messages api
Fixes https://github.com/BerriAI/litellm/issues/5372
2024-08-26 15:08:08 -07:00
Krrish Dholakia
d13d2e8a62
feat(vertex_httpx.py): support functions param for gemini google ai studio + vertex ai
Closes https://github.com/BerriAI/litellm/issues/5344
2024-08-26 10:59:01 -07:00
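The "functions param" support above translates OpenAI-style function definitions into Gemini's tool format. A sketch under the assumption that each function entry becomes a function declaration under a single tool (field names reflect my reading of the Gemini API, not litellm's code):

```python
def functions_to_gemini_tools(functions):
    """Translate OpenAI-style 'functions' entries into Gemini-style
    'function_declarations' under one tool (assumed mapping)."""
    return [{
        "function_declarations": [
            {
                "name": f["name"],
                "description": f.get("description", ""),
                "parameters": f.get("parameters", {}),
            }
            for f in functions
        ]
    }]
```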
Krrish Dholakia
068aafdff9
fix(utils.py): correctly re-raise the headers from an exception, if present
Fixes an issue where the router's retry-after handling was not using the Azure / OpenAI-provided values
2024-08-24 12:30:30 -07:00
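The fix above surfaces response headers on re-raised exceptions so the router can honor the provider's cooldown. A sketch of reading a 'Retry-After' value from such headers (function name and fallback behavior are illustrative):

```python
def get_retry_after(headers, default=None):
    """Read the provider's Retry-After header (seconds) from re-raised
    exception headers; fall back to a default when absent or malformed."""
    value = headers.get("retry-after", headers.get("Retry-After"))
    try:
        return float(value)
    except (TypeError, ValueError):
        return default
```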
Ishaan Jaff
228252b92d
Merge branch 'main' into litellm_allow_using_azure_ad_token_auth
2024-08-22 18:21:24 -07:00
Ishaan Jaff
7d55047ab9
add bedrock guardrails support
2024-08-22 16:09:55 -07:00
Ishaan Jaff
14a6ce367d
add types for BedrockMessage
2024-08-22 15:40:58 -07:00
Ishaan Jaff
08fa3f346a
add new litellm params for client_id, tenant_id etc
2024-08-22 11:37:30 -07:00
Krrish Dholakia
70bf8bd4f4
feat(factory.py): enable 'user_continue_message' for interweaving user/assistant messages when provider requires it
Allows Bedrock to be used with AutoGen
2024-08-22 11:03:33 -07:00
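The 'user_continue_message' feature above inserts a filler user turn when a provider requires strictly alternating roles. A minimal sketch (the default message content is an assumption for illustration):

```python
# Assumed default filler message, for illustration only.
DEFAULT_USER_CONTINUE_MESSAGE = {"role": "user", "content": "Please continue."}

def interweave_user_messages(messages, continue_message=DEFAULT_USER_CONTINUE_MESSAGE):
    """Insert a user turn between consecutive assistant messages, for
    providers (e.g. Bedrock) that require alternating user/assistant roles."""
    out = []
    for m in messages:
        if out and out[-1]["role"] == "assistant" and m["role"] == "assistant":
            out.append(continue_message)
        out.append(m)
    return out
```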
Ishaan Jaff
2a3bc8c190
add azure_ad_token_provider as all litellm params
2024-08-22 10:59:18 -07:00
Krrish Dholakia
f36e7e0754
fix(ollama_chat.py): fix passing assistant message with tool call param
Fixes https://github.com/BerriAI/litellm/issues/5319
2024-08-22 10:00:03 -07:00
Ishaan Jaff
dd524a4f50
Merge pull request #5326 from BerriAI/litellm_Add_vertex_multimodal_embedding
[Feat] add vertex multimodal embedding support
2024-08-21 17:06:43 -07:00
Krrish Dholakia
8a05ce77e9
feat(litellm_logging.py): add 'saved_cache_cost' to standard logging payload (s3)
2024-08-21 16:58:07 -07:00
Ishaan Jaff
dd00cf2a97
add VertexMultimodalEmbeddingRequest type
2024-08-21 14:25:47 -07:00
Ishaan Jaff
8d2c529e55
support lakera ai category thresholds
2024-08-20 17:19:24 -07:00
Ishaan Jaff
8cd1963c11
feat - guardrails v2
2024-08-19 18:24:20 -07:00
Krrish Dholakia
178139f18d
feat(litellm_logging.py): support logging model price information to s3 logs
2024-08-16 16:21:34 -07:00
Ishaan Jaff
740d1fdb1c
add provider_specific_fields to GenericStreamingChunk
2024-08-16 11:38:22 -07:00
Krish Dholakia
6c3f37f8b4
Merge pull request #5235 from BerriAI/litellm_fix_s3_logs
fix(s3.py): fix s3 logging payload to have valid json values
2024-08-15 23:00:18 -07:00
Ishaan Jaff
df4ea8fba6
refactor sagemaker to be async
2024-08-15 18:18:02 -07:00
Krrish Dholakia
f6dba82882
feat(litellm_logging.py): cleanup payload + add response cost to logged payload
2024-08-15 17:53:25 -07:00
Krrish Dholakia
3ddeb3297d
fix(litellm_logging.py): fix standard payload
2024-08-15 17:33:40 -07:00
Krrish Dholakia
cda50e5d47
fix(s3.py): fix s3 logging payload to have valid json values
Previously, Pydantic objects were being stringified, making them unparsable
2024-08-15 17:09:02 -07:00
Ishaan Jaff
78a2013e51
add test for large context in system message for anthropic
2024-08-14 17:03:10 -07:00
Ishaan Jaff
b0651bd481
add anthropic cache controls
2024-08-14 14:56:49 -07:00
Krrish Dholakia
19bb95f781
build(model_prices_and_context_window.json): add 'supports_assistant_prefill' to model info map
Closes https://github.com/BerriAI/litellm/issues/4881
2024-08-10 14:15:12 -07:00
Krrish Dholakia
1553f7fa48
fix(types/utils.py): handle null completion tokens
Fixes https://github.com/BerriAI/litellm/issues/5096
2024-08-10 09:23:03 -07:00
Krrish Dholakia
6180c52cfe
fix(router.py): fix types
2024-08-09 12:24:48 -07:00
Krrish Dholakia
7b6db63d30
fix(router.py): fallback on 400-status code requests
2024-08-09 12:16:49 -07:00
Krrish Dholakia
4919cc4d25
fix(anthropic.py): handle scenario where anthropic returns invalid json string for tool call while streaming
Fixes https://github.com/BerriAI/litellm/issues/5063
2024-08-07 09:24:11 -07:00
Krish Dholakia
c82fc0cac2
Merge branch 'main' into litellm_support_lakera_config_thresholds
2024-08-06 22:47:13 -07:00
Krrish Dholakia
834b437eb4
fix(utils.py): fix types
2024-08-06 12:23:22 -07:00