Commit graph

15194 commits

SHA1       | Author            | Date                       | Message
-----------|-------------------|----------------------------|--------
50bf488b58 | Ishaan Jaff       | 2024-07-25 20:10:02 -07:00 | read me link to using litellm
9247fc3c64 | Ishaan Jaff       | 2024-07-25 20:09:49 -07:00 | deploy link to using litellm
c2e309baf3 | Ishaan Jaff       | 2024-07-25 20:05:28 -07:00 | docs using litellm proxy
87cebdefd3 | Ishaan Jaff       | 2024-07-25 20:02:26 -07:00 | Merge pull request #4896 from BerriAI/docs_add_example_usage_with_mistral_python (Docs Proxy - add example usage with mistral SDK with Proxy)
826bb125e8 | Krrish Dholakia   | 2024-07-25 19:54:40 -07:00 | test(test_router.py): handle azure api instability
646b2d50f9 | Ishaan Jaff       | 2024-07-25 19:52:53 -07:00 | docs - quick start
a2fd8459fc | Krrish Dholakia   | 2024-07-25 19:50:52 -07:00 | fix(utils.py): don't raise error on openai content filter during streaming - return as is (fixes issue where we would raise an error vs. openai who return the chunk with finish reason as 'content_filter')
68e94f0976 | Ishaan Jaff       | 2024-07-25 19:48:54 -07:00 | example mistral sdk
bb6f72b315 | Ishaan Jaff       | 2024-07-25 19:47:54 -07:00 | add mistral sdk usage
5bec2bf513 | Ishaan Jaff       | 2024-07-25 19:34:35 -07:00 | Merge pull request #4894 from BerriAI/litellm_logfire_dotenv (fix logfire - don't load_dotenv)
53752100f6 | Krish Dholakia    | 2024-07-25 19:32:04 -07:00 | Merge pull request #4870 from BerriAI/litellm_redis_team_object (feat(auth_check.py): support using redis cache for team objects)
c2086300b7 | Krish Dholakia    | 2024-07-25 19:31:52 -07:00 | Merge branch 'main' into litellm_redis_team_object
80e2facf3d | Krish Dholakia    | 2024-07-25 19:29:18 -07:00 | Merge pull request #4883 from Manouchehri/bedrock-llama3.1-405b (Add Llama 3.1 405b & Tool Calling for Amazon Bedrock)
b6ca4406b6 | Krish Dholakia    | 2024-07-25 19:29:10 -07:00 | Merge branch 'main' into bedrock-llama3.1-405b
a0655b4192 | Ishaan Jaff       | 2024-07-25 19:22:46 -07:00 | Merge pull request #4884 from Manouchehri/add-mistral-large-2407-bedrock-1 (Add mistral.mistral-large-2407-v1:0 on Amazon Bedrock)
fcd834b277 | Ishaan Jaff       | 2024-07-25 19:22:26 -07:00 | fix logfire - don't load_dotenv
c0c934d9cf | Krish Dholakia    | 2024-07-25 19:10:57 -07:00 | Merge pull request #4879 from fracapuano/main (Add Single-Token predictions support for Replicate)
a306b83b2d | Krish Dholakia    | 2024-07-25 19:05:29 -07:00 | Merge pull request #4887 from BerriAI/litellm_custom_llm (feat(custom_llm.py): Support Custom LLM Handlers)
41abd51240 | Krrish Dholakia   | 2024-07-25 19:03:52 -07:00 | fix(custom_llm.py): pass input params to custom llm
26a9f694e1 | Ishaan Jaff       | 2024-07-25 18:55:51 -07:00 | Merge pull request #4890 from BerriAI/docs_set_routing_strategies (docs - add info about routing strategy on load balancing docs)
b497dbf0dd | Ishaan Jaff       | 2024-07-25 18:53:39 -07:00 | Merge pull request #4889 from BerriAI/litellm_Fix_whisper_health_check ([Fix] OpenAI STT, TTS Health Checks on LiteLLM Proxy)
bd7af04a72 | Krrish Dholakia   | 2024-07-25 17:56:34 -07:00 | feat(proxy_server.py): support custom llm handler on proxy
a2d07cfe64 | Krrish Dholakia   | 2024-07-25 17:41:19 -07:00 | docs(custom_llm_server.md): add calling custom llm server to docs
3814170ae1 | Ishaan Jaff       | 2024-07-25 17:41:16 -07:00 | docs - add info about routing strategy on load balancing docs
f2443996d8 | Ishaan Jaff       | 2024-07-25 17:30:15 -07:00 | feat support audio health checks for azure
3573b47098 | Ishaan Jaff       | 2024-07-25 17:29:28 -07:00 | docs add example on using text to speech models
2432c90515 | Ishaan Jaff       | 2024-07-25 17:26:14 -07:00 | feat - support health check audio_speech
e3142b4294 | Ishaan Jaff       | 2024-07-25 17:22:57 -07:00 | fix whisper health check with litellm
060249c7e0 | Krrish Dholakia   | 2024-07-25 17:11:57 -07:00 | feat(utils.py): support async streaming for custom llm provider
b4e3a77ad0 | Krrish Dholakia   | 2024-07-25 16:47:32 -07:00 | feat(utils.py): support sync streaming for custom llm provider
9f97436308 | Krrish Dholakia   | 2024-07-25 15:51:39 -07:00 | fix(custom_llm.py): support async completion calls
6bf1b9353b | Krrish Dholakia   | 2024-07-25 15:33:05 -07:00 | feat(custom_llm.py): initial working commit for writing your own custom LLM handler (Fixes https://github.com/BerriAI/litellm/issues/4675; also addresses https://github.com/BerriAI/litellm/discussions/4677)
711496e260 | Krrish Dholakia   | 2024-07-25 14:30:46 -07:00 | fix(router.py): add support for diskcache to router
bfdda089c8 | Krrish Dholakia   | 2024-07-25 14:23:07 -07:00 | fix(proxy_server.py): check if input list > 0 before indexing into it (resolves 'list index out of range' error)
64adae6e7f | David Manouchehri | 2024-07-25 21:16:23 +00:00 | Check for converse support first.
22c66991ed | David Manouchehri | 2024-07-25 20:36:25 +00:00 | Support tool calling for Llama 3.1 on Amazon bedrock.
5c4ee3ef3c | David Manouchehri | 2024-07-25 20:04:03 +00:00 | Add mistral.mistral-large-2407-v1:0 on Amazon Bedrock.
3293ad7458 | David Manouchehri | 2024-07-25 19:30:13 +00:00 | Add Llama 3.1 405b for Bedrock
397451570e | Krrish Dholakia   | 2024-07-25 10:09:02 -07:00 | docs(enterprise.md): cleanup docs
d91b01a24b | Krrish Dholakia   | 2024-07-25 10:08:40 -07:00 | docs(enterprise.md): cleanup docs
5553f84d51 | fracapuano        | 2024-07-25 19:06:07 +02:00 | fix: now supports single tokens prediction
80800b9ec8 | Krrish Dholakia   | 2024-07-25 10:01:47 -07:00 | docs(caching.md): update caching docs to include ttl info
4e51f712f3 | Krrish Dholakia   | 2024-07-25 09:57:19 -07:00 | fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions (Fixes https://github.com/BerriAI/litellm/issues/749)
b376ee71b0 | Krrish Dholakia   | 2024-07-24 21:51:46 -07:00 | fix(internal_user_endpoints.py): support updating budgets for /user/update
5cb58fc3c9 | Ishaan Jaff       | 2024-07-24 21:37:08 -07:00 | Merge pull request #4873 from BerriAI/litellm_add_mistral-large-2407 ([Feat] - Add `mistral/mistral-large 2`)
a92a2ca382 | Ishaan Jaff       | 2024-07-24 21:35:34 -07:00 | docs add mistral api large 2
c77abaa07f | Ishaan Jaff       | 2024-07-24 21:31:41 -07:00 | feat - add mistral large 2
d5a7c654f1 | Ishaan Jaff       | 2024-07-24 21:25:31 -07:00 | bump: version 1.42.0 → 1.42.1
7a97bf3aa9 | Ishaan Jaff       | 2024-07-24 20:54:59 -07:00 | Merge pull request #4871 from BerriAI/litellm_addllama-3.1 ([Feat] Add Groq/llama3.1)
c08d4ca9ec | Ishaan Jaff       | 2024-07-24 20:49:28 -07:00 | docs groq models