Commit graph

2067 commits

Author SHA1 Message Date
Krrish Dholakia
bbc56f7202 fix(utils.py): parse out aws specific params from openai call
Fixes https://github.com/BerriAI/litellm/issues/5009
2024-08-03 12:04:44 -07:00
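A minimal sketch of the idea behind this fix (helper and param names are illustrative, not the actual utils.py code): AWS-specific credentials have to be split out of the kwargs before they reach the OpenAI client.

```python
# Illustrative sketch, not the actual utils.py implementation:
# AWS-specific params must not be forwarded to the OpenAI API.
AWS_PARAMS = {"aws_access_key_id", "aws_secret_access_key", "aws_region_name"}

def split_aws_params(optional_params: dict) -> tuple[dict, dict]:
    """Separate AWS credentials/config from OpenAI-compatible params."""
    aws_params = {k: v for k, v in optional_params.items() if k in AWS_PARAMS}
    openai_params = {k: v for k, v in optional_params.items() if k not in AWS_PARAMS}
    return openai_params, aws_params
```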
Krrish Dholakia
acbc2917b8 feat(utils.py): Add github as a provider
Closes https://github.com/BerriAI/litellm/issues/4922#issuecomment-2266564469
2024-08-03 09:11:22 -07:00
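Assumed usage of the new provider; the model id and env var name are best guesses from the linked issue, not verified against the code:

```python
import os
import litellm

os.environ["GITHUB_API_KEY"] = "ghp_..."  # assumed env var name

response = litellm.completion(
    model="github/gpt-4o-mini",  # assumed model id under the new "github/" prefix
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```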
Krish Dholakia
95175b0b34 Merge pull request #5029 from BerriAI/litellm_azure_ui_fix
fix(utils.py): Fix adding azure models on ui
2024-08-02 22:12:19 -07:00
Krrish Dholakia
e6bc7e938a fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"
Fixes https://github.com/BerriAI/litellm/issues/4912
2024-08-02 17:48:53 -07:00
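A quick check of the fixed behavior, assuming litellm.get_llm_provider is the resolution entry point involved here:

```python
import litellm

# With the fix, a wildcard Azure model string resolves to the
# "azure" provider instead of erroring.
model, provider, _, _ = litellm.get_llm_provider(
    model="azure/*", custom_llm_provider="azure"
)
assert provider == "azure"
```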
Ishaan Jaff
954dd95bdb Merge pull request #5026 from BerriAI/litellm_fix_whisper_caching
[Fix] Whisper Caching - Use correct cache keys when checking for a request in the cache
2024-08-02 17:26:28 -07:00
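A sketch of the cached transcription path this fixes, using the default in-memory cache (the caching=True kwarg is an assumption carried over from litellm's completion caching):

```python
import litellm
from litellm.caching import Cache

litellm.cache = Cache()  # default in-memory cache

with open("speech.mp3", "rb") as audio_file:
    first = litellm.transcription(model="whisper-1", file=audio_file, caching=True)

with open("speech.mp3", "rb") as audio_file:
    # With correct cache keys, this identical request is now a cache hit.
    second = litellm.transcription(model="whisper-1", file=audio_file, caching=True)
```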
Ishaan Jaff
8074d0d3f8 return cache_hit=True on cache hits 2024-08-02 15:07:05 -07:00
Ishaan Jaff
f5ec25248a log correct file name on langfuse 2024-08-02 14:49:25 -07:00
Krrish Dholakia
c1513bfe42 fix(types/utils.py): support passing prompt cache usage stats in usage object
Passes deepseek prompt caching values through to end user
2024-08-02 09:30:50 -07:00
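Reading the passed-through stats; the field names follow DeepSeek's API and should be treated as an assumption:

```python
import litellm

response = litellm.completion(
    model="deepseek/deepseek-chat",
    messages=[{"role": "user", "content": "Hello!"}],
)

usage = response.usage
# DeepSeek-style prompt-cache fields, passed through to the end user.
print(getattr(usage, "prompt_cache_hit_tokens", None))
print(getattr(usage, "prompt_cache_miss_tokens", None))
```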
Haadi Rakhangi
1adf7bc909 Merge branch 'BerriAI:main' into main 2024-08-02 21:08:48 +05:30
Haadi Rakhangi
a047df3825 qdrant semantic caching added 2024-08-02 21:07:19 +05:30
Krrish Dholakia
8204037975 fix(utils.py): fix codestral streaming 2024-08-02 07:38:06 -07:00
Krrish Dholakia
70afbafd94 fix(bedrock_httpx.py): fix ai21 streaming 2024-08-01 22:03:24 -07:00
Krish Dholakia
0fc50a69ee Merge branch 'main' into litellm_fix_streaming_usage_calc 2024-08-01 21:29:04 -07:00
Krish Dholakia
375a4049aa Merge branch 'main' into litellm_response_cost_logging 2024-08-01 21:28:22 -07:00
Krrish Dholakia
cb9b19e887 feat(vertex_ai_partner.py): add vertex ai codestral FIM support
Closes https://github.com/BerriAI/litellm/issues/4984
2024-08-01 17:10:27 -07:00
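Assumed fill-in-the-middle usage via text_completion (the Vertex AI model id is a guess): the model completes the gap between prompt and suffix.

```python
from litellm import text_completion

response = text_completion(
    model="vertex_ai/codestral@2405",  # assumed Vertex AI model id
    prompt="def is_odd(n):\n    return ",
    suffix="\n\nprint(is_odd(3))",
)
print(response.choices[0].text)
```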
Krrish Dholakia
71aada78d6 fix(utils.py): fix togetherai streaming cost calculation 2024-08-01 15:03:08 -07:00
Krrish Dholakia
a502914f13 fix(utils.py): fix anthropic streaming usage calculation
Fixes https://github.com/BerriAI/litellm/issues/4965
2024-08-01 14:45:54 -07:00
Krrish Dholakia
08541d056c fix(litellm_logging.py): use 1 cost calc function across response headers + logging integrations
Ensures consistent cost calculation when azure base models are used
2024-08-01 10:26:59 -07:00
Krrish Dholakia
ac6bca2320 fix(utils.py): fix special keys list for provider-specific items in response object 2024-07-31 18:30:49 -07:00
Krrish Dholakia
1206b7626a fix(utils.py): return additional kwargs from openai-like response body
Closes https://github.com/BerriAI/litellm/issues/4981
2024-07-31 15:37:03 -07:00
Krrish Dholakia
b95feb16f1 fix(utils.py): map cohere timeout error 2024-07-31 15:15:18 -07:00
Krrish Dholakia
dc58b9f33e fix(utils.py): fix linting errors 2024-07-30 18:38:10 -07:00
Krrish Dholakia
0bcfdafc58 fix(utils.py): fix model registration to model cost map
Fixes https://github.com/BerriAI/litellm/issues/4972
2024-07-30 18:15:00 -07:00
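For reference, litellm.register_model adds entries to the runtime cost map; a minimal sketch with a hypothetical model name (keys mirror model_prices_and_context_window.json):

```python
import litellm

litellm.register_model({
    "my-finetuned-model": {  # hypothetical model name
        "max_tokens": 8192,
        "input_cost_per_token": 1.5e-06,
        "output_cost_per_token": 2e-06,
        "litellm_provider": "openai",
        "mode": "chat",
    },
})
```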
Krrish Dholakia
802e39b606 fix(utils.py): fix cost tracking for vertex ai partner models 2024-07-30 14:20:52 -07:00
Krish Dholakia
14c2aabf63 Merge pull request #4948 from dleen/response
fixes: #4947 Bedrock context exception does not have a response
2024-07-29 15:03:40 -07:00
David Leen
55cc3adbec fixes: #4947 Bedrock context exception does not have a response 2024-07-29 14:23:56 -07:00
Krrish Dholakia
00dde68001 fix(utils.py): fix trim_messages to handle tool calling
Fixes https://github.com/BerriAI/litellm/issues/4931
2024-07-29 13:04:41 -07:00
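The failing shape was presumably a history containing assistant tool_calls (no string content) plus tool results; a sketch of the now-handled input:

```python
from litellm.utils import trim_messages

messages = [
    {"role": "user", "content": "What's the weather in SF?"},
    {
        "role": "assistant",
        "content": None,
        "tool_calls": [{
            "id": "call_1",
            "type": "function",
            "function": {"name": "get_weather", "arguments": '{"city": "SF"}'},
        }],
    },
    {"role": "tool", "tool_call_id": "call_1", "content": "72F and sunny"},
]

# With the fix, trimming no longer chokes on messages without string content.
trimmed = trim_messages(messages, model="gpt-3.5-turbo")
```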
Krrish Dholakia
708b427a04 fix(utils.py): correctly re-raise azure api connection error
2024-07-29 12:28:25 -07:00
Krrish Dholakia
2a705dbb49 fix(utils.py): check if tools is iterable before indexing into it
Fixes https://github.com/BerriAI/litellm/issues/4933
2024-07-29 09:01:32 -07:00
Ravi N
5cf0667d38 Allow zero temperature for Sagemaker models based on config
Since SageMaker can host any kind of model, some models allow a temperature of zero. However, this is not enabled by default; it is only allowed when enabled via config.
2024-07-28 21:55:53 -04:00
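A usage sketch; the endpoint name is hypothetical and the opt-in flag name is an assumption taken from the commit's description, not verified against the code:

```python
import litellm

response = litellm.completion(
    model="sagemaker/my-endpoint",  # hypothetical endpoint name
    messages=[{"role": "user", "content": "Hello!"}],
    temperature=0,
    aws_sagemaker_allow_zero_temp=True,  # assumed config flag enabling temperature=0
)
```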
Krrish Dholakia
dc7df00581 fix(utils.py): fix supported OpenAI params
2024-07-27 22:03:40 -07:00
Krish Dholakia
1c50339580 Merge pull request #4925 from BerriAI/litellm_vertex_mistral
feat(vertex_ai_partner.py): Vertex AI Mistral Support
2024-07-27 21:51:26 -07:00
Krish Dholakia
0525fb75f3 Merge branch 'main' into litellm_vertex_migration 2024-07-27 20:25:12 -07:00
Krrish Dholakia
fcac9bd2fa fix(utils.py): support fireworks ai finetuned models
Fixes https://github.com/BerriAI/litellm/issues/4923
2024-07-27 15:38:27 -07:00
Krrish Dholakia
70b281c0aa fix(utils.py): support fireworks ai finetuned models
Fixes https://github.com/BerriAI/litellm/issues/4923
2024-07-27 15:37:28 -07:00
Krrish Dholakia
56ba0c62f3 feat(utils.py): fix openai-like streaming 2024-07-27 15:32:57 -07:00
Krrish Dholakia
089539e21e fix(utils.py): add exception mapping for databricks errors 2024-07-27 13:13:31 -07:00
Krrish Dholakia
ce7257ec5e feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral
Closes https://github.com/BerriAI/litellm/issues/4874
2024-07-27 12:54:14 -07:00
Krrish Dholakia
3a1eedfbf3 feat(ollama_chat.py): support ollama tool calling
Closes https://github.com/BerriAI/litellm/issues/4812
2024-07-26 21:51:54 -07:00
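Assumed usage with a local Ollama model (the model name is illustrative):

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="ollama_chat/llama3.1",  # assumed locally pulled model
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```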
Krrish Dholakia
1562cba823 fix(utils.py): fix cache hits for streaming
Fixes https://github.com/BerriAI/litellm/issues/4109
2024-07-26 19:04:08 -07:00
Krrish Dholakia
d3ff21181c fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking 2024-07-25 22:12:07 -07:00
Ishaan Jaff
1103c614a0 Merge branch 'main' into litellm_proxy_support_all_providers 2024-07-25 20:15:37 -07:00
Krrish Dholakia
e7744177cb fix(utils.py): don't raise error on openai content filter during streaming - return as is
Fixes an issue where we would raise an error, whereas OpenAI returns the chunk with finish_reason 'content_filter'
2024-07-25 19:50:52 -07:00
Krish Dholakia
a5cea7929d Merge branch 'main' into bedrock-llama3.1-405b 2024-07-25 19:29:10 -07:00
Ishaan Jaff
422b4d7e0f support using */* 2024-07-25 18:48:56 -07:00
Krrish Dholakia
9b1c7066b7 feat(utils.py): support async streaming for custom llm provider 2024-07-25 17:11:57 -07:00
Krrish Dholakia
bf23aac11d feat(utils.py): support sync streaming for custom llm provider 2024-07-25 16:47:32 -07:00
Krrish Dholakia
54e1ca29b7 feat(custom_llm.py): initial working commit for writing your own custom LLM handler
Fixes https://github.com/BerriAI/litellm/issues/4675

Also addresses https://github.com/BerriAI/litellm/discussions/4677
2024-07-25 15:33:05 -07:00
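A sketch of the handler interface introduced here, following the linked issue and discussion; the class shape and registration via custom_provider_map are assumptions, and mock_response keeps the example self-contained:

```python
import litellm
from litellm import CustomLLM

class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # Delegate anywhere; here a mocked response stands in for a real backend.
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "Hello!"}],
            mock_response="Hi from my custom handler!",
        )

# Register the handler under a custom provider prefix.
litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]

resp = litellm.completion(
    model="my-custom-llm/any-model",
    messages=[{"role": "user", "content": "hi"}],
)
print(resp.choices[0].message.content)  # "Hi from my custom handler!"
```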
David Manouchehri
5a7be22038 Check for converse support first. 2024-07-25 21:16:23 +00:00
Krrish Dholakia
5945da4a66 fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions
Fixes https://github.com/BerriAI/litellm/issues/749
2024-07-25 09:57:19 -07:00