Commit graph

15182 commits

Author  SHA1  Message  Date

Krish Dholakia  c2086300b7  Merge branch 'main' into litellm_redis_team_object  2024-07-25 19:31:52 -07:00
Krish Dholakia  80e2facf3d  Merge pull request #4883 from Manouchehri/bedrock-llama3.1-405b  2024-07-25 19:29:18 -07:00
    Add Llama 3.1 405b & Tool Calling for Amazon Bedrock
Krish Dholakia  b6ca4406b6  Merge branch 'main' into bedrock-llama3.1-405b  2024-07-25 19:29:10 -07:00
Ishaan Jaff  a0655b4192  Merge pull request #4884 from Manouchehri/add-mistral-large-2407-bedrock-1  2024-07-25 19:22:46 -07:00
    Add mistral.mistral-large-2407-v1:0 on Amazon Bedrock
Krish Dholakia  c0c934d9cf  Merge pull request #4879 from fracapuano/main  2024-07-25 19:10:57 -07:00
    Add Single-Token predictions support for Replicate
Krish Dholakia  a306b83b2d  Merge pull request #4887 from BerriAI/litellm_custom_llm  2024-07-25 19:05:29 -07:00
    feat(custom_llm.py): Support Custom LLM Handlers
Krrish Dholakia  41abd51240  fix(custom_llm.py): pass input params to custom llm  2024-07-25 19:03:52 -07:00
Ishaan Jaff  26a9f694e1  Merge pull request #4890 from BerriAI/docs_set_routing_strategies  2024-07-25 18:55:51 -07:00
    docs - add info about routing strategy on load balancing docs
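The load-balancing docs referenced above cover routing strategies for choosing between deployments. A toy sketch of two common strategies, with made-up deployment dicts (litellm's Router has its own strategy names and selection logic):

```python
import random

# Illustrative load-balancer strategies: random shuffle vs. least-busy.
def pick_deployment(deployments: list, strategy: str = "simple-shuffle") -> dict:
    if strategy == "simple-shuffle":
        # Random choice; a real router might weight this by RPM/TPM limits.
        return random.choice(deployments)
    if strategy == "least-busy":
        # Route to the deployment with the fewest in-flight requests.
        return min(deployments, key=lambda d: d["active_requests"])
    raise ValueError(f"unknown strategy: {strategy}")

deployments = [
    {"name": "azure-gpt4-eu", "active_requests": 7},
    {"name": "azure-gpt4-us", "active_requests": 2},
]
print(pick_deployment(deployments, "least-busy")["name"])  # azure-gpt4-us
```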
Ishaan Jaff  b497dbf0dd  Merge pull request #4889 from BerriAI/litellm_Fix_whisper_health_check  2024-07-25 18:53:39 -07:00
    [Fix] OpenAI STT, TTS Health Checks on LiteLLM Proxy
Krrish Dholakia  bd7af04a72  feat(proxy_server.py): support custom llm handler on proxy  2024-07-25 17:56:34 -07:00
Krrish Dholakia  a2d07cfe64  docs(custom_llm_server.md): add calling custom llm server to docs  2024-07-25 17:41:19 -07:00
Ishaan Jaff  3814170ae1  docs - add info about routing strategy on load balancing docs  2024-07-25 17:41:16 -07:00
Ishaan Jaff  f2443996d8  feat - support audio health checks for azure  2024-07-25 17:30:15 -07:00
Ishaan Jaff  3573b47098  docs - add example on using text to speech models  2024-07-25 17:29:28 -07:00
Ishaan Jaff  2432c90515  feat - support health check audio_speech  2024-07-25 17:26:14 -07:00
Ishaan Jaff  e3142b4294  fix whisper health check with litellm  2024-07-25 17:22:57 -07:00
Krrish Dholakia  060249c7e0  feat(utils.py): support async streaming for custom llm provider  2024-07-25 17:11:57 -07:00
Krrish Dholakia  b4e3a77ad0  feat(utils.py): support sync streaming for custom llm provider  2024-07-25 16:47:32 -07:00
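The two streaming commits above add sync and async streaming for custom providers. The sync case can be sketched as a generator yielding OpenAI-style delta chunks (chunking and field names are illustrative):

```python
# Toy sketch of sync streaming for a custom provider: yield OpenAI-style
# delta chunks instead of returning one final response.
def stream_completion(text: str, chunk_size: int = 5):
    for i in range(0, len(text), chunk_size):
        yield {"choices": [{"delta": {"content": text[i:i + chunk_size]}}]}

# The consumer reassembles the deltas in order.
pieces = [c["choices"][0]["delta"]["content"] for c in stream_completion("hello world")]
print("".join(pieces))  # hello world
```

The async variant is the same shape with `async def` and `yield` inside an async generator.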
Krrish Dholakia  9f97436308  fix(custom_llm.py): support async completion calls  2024-07-25 15:51:39 -07:00
Krrish Dholakia  6bf1b9353b  feat(custom_llm.py): initial working commit for writing your own custom LLM handler  2024-07-25 15:33:05 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/4675
    Also addresses https://github.com/BerriAI/litellm/discussions/4677
Krrish Dholakia  711496e260  fix(router.py): add support for diskcache to router  2024-07-25 14:30:46 -07:00
Krrish Dholakia  bfdda089c8  fix(proxy_server.py): check if input list > 0 before indexing into it  2024-07-25 14:23:07 -07:00
    Resolves 'list index out of range' error
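The proxy_server.py fix above guards list indexing so an empty input produces a clear error instead of an IndexError. The defensive pattern, as a minimal sketch (function name is hypothetical):

```python
# Check length before indexing: an empty request body should raise a clear
# ValueError rather than surfacing 'list index out of range'.
def first_input(data: list):
    if not data:
        raise ValueError("input list is empty; expected at least one item")
    return data[0]

print(first_input(["embed me"]))  # embed me
```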
David Manouchehri  64adae6e7f  Check for converse support first.  2024-07-25 21:16:23 +00:00
David Manouchehri  22c66991ed  Support tool calling for Llama 3.1 on Amazon Bedrock.  2024-07-25 20:36:25 +00:00
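Tool calling, as added above for Llama 3.1 on Bedrock, is driven by an OpenAI-style function schema passed alongside the messages; the tool name and parameters below are made up for illustration:

```python
# OpenAI-style tool definition (illustrative tool; not from the source commit).
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]
# A completion call would pass messages plus tools=tools; Bedrock's Converse
# API represents the same thing as toolSpec entries, which is why the sibling
# commit checks for converse support first.
```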
David Manouchehri  5c4ee3ef3c  Add mistral.mistral-large-2407-v1:0 on Amazon Bedrock.  2024-07-25 20:04:03 +00:00
David Manouchehri  3293ad7458  Add Llama 3.1 405b for Bedrock  2024-07-25 19:30:13 +00:00
Krrish Dholakia  397451570e  docs(enterprise.md): cleanup docs  2024-07-25 10:09:02 -07:00
Krrish Dholakia  d91b01a24b  docs(enterprise.md): cleanup docs  2024-07-25 10:08:40 -07:00
fracapuano  5553f84d51  fix: now supports single tokens prediction  2024-07-25 19:06:07 +02:00
Krrish Dholakia  80800b9ec8  docs(caching.md): update caching docs to include ttl info  2024-07-25 10:01:47 -07:00
Krrish Dholakia  4e51f712f3  fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions  2024-07-25 09:57:19 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/749
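The main.py fix above concerns routing instruct models to the legacy text-completion endpoint rather than the chat endpoint. The distinction can be sketched as (a simplified heuristic; the real routing keys off litellm's model metadata, not just the name):

```python
# Instruct-style models use the legacy /completions (text) endpoint;
# chat models use /chat/completions. Suffix matching is a simplification.
def pick_endpoint(model: str) -> str:
    if model.endswith("-instruct"):
        return "/v1/completions"
    return "/v1/chat/completions"

print(pick_endpoint("gpt-3.5-turbo-instruct"))  # /v1/completions
print(pick_endpoint("gpt-3.5-turbo"))           # /v1/chat/completions
```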
Krrish Dholakia  b376ee71b0  fix(internal_user_endpoints.py): support updating budgets for /user/update  2024-07-24 21:51:46 -07:00
Ishaan Jaff  5cb58fc3c9  Merge pull request #4873 from BerriAI/litellm_add_mistral-large-2407  2024-07-24 21:37:08 -07:00
    [Feat] Add `mistral/mistral-large 2`
Ishaan Jaff  a92a2ca382  docs - add mistral api large 2  2024-07-24 21:35:34 -07:00
Ishaan Jaff  c77abaa07f  feat - add mistral large 2  2024-07-24 21:31:41 -07:00
Ishaan Jaff  d5a7c654f1  bump: version 1.42.0 → 1.42.1  2024-07-24 21:25:31 -07:00
Ishaan Jaff  7a97bf3aa9  Merge pull request #4871 from BerriAI/litellm_addllama-3.1  2024-07-24 20:54:59 -07:00
    [Feat] Add Groq/llama3.1
Ishaan Jaff  c08d4ca9ec  docs - groq models  2024-07-24 20:49:28 -07:00
Ishaan Jaff  4cd96976b3  feat - add groq/llama-3.1  2024-07-24 20:46:56 -07:00
Krrish Dholakia  3cd3491920  test: cleanup testing  2024-07-24 19:47:50 -07:00
Krish Dholakia  0ac7736b1f  Merge pull request #4638 from friendliai/feat/friendli-dedicated-endpoint  2024-07-24 19:23:15 -07:00
    feat: add support for friendliai dedicated endpoint
wslee  40bb165108  support dynamic api base  2024-07-25 11:14:38 +09:00
wslee  dd10da4d46  add support for friendli dedicated endpoint  2024-07-25 11:14:35 +09:00
Krrish Dholakia  f35af3bf1c  test(test_completion.py): update azure extra headers  2024-07-24 18:42:50 -07:00
Krrish Dholakia  6ab2527fdc  feat(auth_check.py): support using redis cache for team objects  2024-07-24 18:14:49 -07:00
    Allows team update / check logic to work across instances instantly
Ishaan Jaff  b93b2636a9  Update README.md  2024-07-24 16:51:40 -07:00
Krrish Dholakia  b5c5ed2209  fix(key_management_endpoints.py): if budget duration set, set budget_reset_at  2024-07-24 15:02:22 -07:00
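The key_management_endpoints.py fix above derives `budget_reset_at` whenever a budget duration is set on a key. A sketch, assuming duration strings like "30d" or "1h" (the unit handling here is illustrative, not litellm's exact parser):

```python
from datetime import datetime, timedelta

# Map single-letter duration suffixes to timedelta keyword arguments.
_UNITS = {"s": "seconds", "m": "minutes", "h": "hours", "d": "days"}

def budget_reset_at(duration: str, now: datetime) -> datetime:
    """Compute when a budget window like '30d' next resets, starting from now."""
    value, unit = int(duration[:-1]), duration[-1]
    return now + timedelta(**{_UNITS[unit]: value})

start = datetime(2024, 7, 24, 15, 0, 0)
print(budget_reset_at("30d", start))  # 2024-08-23 15:00:00
```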
Ishaan Jaff  dc3b39ca71  Merge pull request #4864 from BerriAI/docs_add_using_groq_with_proxy  2024-07-24 14:34:12 -07:00
    doc - example using litellm proxy with groq
Ishaan Jaff  fe0b0ddaaa  doc - example using litellm proxy with groq  2024-07-24 14:33:49 -07:00
Ishaan Jaff  53dd47c5cb  Merge pull request #4862 from BerriAI/litellm_fix_unsupported_params_Error  2024-07-24 14:26:25 -07:00
    [Fix-litellm python] Raise correct error for UnsupportedParams Error