66dbd938e8 | Krrish Dholakia | 2024-07-29 12:01:54 -07:00
fix(exceptions.py): use correct status code for content policy exceptions
Fixes https://github.com/BerriAI/litellm/issues/4941#issuecomment-2256578732

e3a94ac013 | Krish Dholakia | 2024-07-27 21:51:26 -07:00
Merge pull request #4925 from BerriAI/litellm_vertex_mistral
feat(vertex_ai_partner.py): Vertex AI Mistral Support

0627468455 | Ishaan Jaff | 2024-07-27 20:21:39 -07:00
fix checking mode on health checks

c85ed01756 | Krrish Dholakia | 2024-07-27 15:32:57 -07:00
feat(utils.py): fix openai-like streaming

5b71421a7b | Krrish Dholakia | 2024-07-27 12:54:14 -07:00
feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral
Closes https://github.com/BerriAI/litellm/issues/4874
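
The Vertex AI Mistral commits above (e3a94ac013, 5b71421a7b) route Mistral models through litellm's `vertex_ai/` provider prefix. A minimal sketch of what such a call might look like; the model string, project, and location are illustrative assumptions, not taken from the log:

```python
import litellm

# Assumptions: GCP credentials are configured locally and the Mistral model
# is enabled in the given project/region; the model string is illustrative.
response = litellm.completion(
    model="vertex_ai/mistral-large@2407",
    messages=[{"role": "user", "content": "Summarize the litellm project in one line."}],
    vertex_project="my-gcp-project",
    vertex_location="us-central1",
)
print(response.choices[0].message.content)
```
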
41abd51240 | Krrish Dholakia | 2024-07-25 19:03:52 -07:00
fix(custom_llm.py): pass input params to custom llm

b4e3a77ad0 | Krrish Dholakia | 2024-07-25 16:47:32 -07:00
feat(utils.py): support sync streaming for custom llm provider

9f97436308 | Krrish Dholakia | 2024-07-25 15:51:39 -07:00
fix(custom_llm.py): support async completion calls

6bf1b9353b | Krrish Dholakia | 2024-07-25 15:33:05 -07:00
feat(custom_llm.py): initial working commit for writing your own custom LLM handler
Fixes https://github.com/BerriAI/litellm/issues/4675
Also addresses https://github.com/BerriAI/litellm/discussions/4677
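
The custom LLM handler commits above (6bf1b9353b through 41abd51240) introduce a `CustomLLM` base class and a provider map for registering your own handler. A rough sketch of how that interface can be wired up, following the shape described in these commit messages; the provider name, model string, and mock response are placeholders:

```python
import litellm
from litellm import CustomLLM


class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs) -> litellm.ModelResponse:
        # Return a canned response here; a real handler would call your own backend.
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hello"}],
            mock_response="Hi from my custom handler!",
        )


# Register the handler under a provider prefix, then call it like any other model.
litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]

resp = litellm.completion(
    model="my-custom-llm/my-model",
    messages=[{"role": "user", "content": "Hello world!"}],
)
print(resp.choices[0].message.content)
```
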
4e51f712f3 | Krrish Dholakia | 2024-07-25 09:57:19 -07:00
fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions
Fixes https://github.com/BerriAI/litellm/issues/749

83ef52e180 | Krrish Dholakia | 2024-07-23 17:07:30 -07:00
feat(vertex_ai_llama.py): vertex ai llama3.1 api support
Initial working commit for vertex ai llama 3.1 api support

1355932bf4 | Ishaan Jaff | 2024-07-23 10:30:26 -07:00
Merge pull request #3905 from giritatavarty-8451/litellm_triton_chatcompletion_support
Litellm triton chatcompletion support - Resubmit of #3895

0b9e93d863 | Krrish Dholakia | 2024-07-22 20:36:35 -07:00
fix(main.py): check if anthropic api base ends with required url
Fixes https://github.com/BerriAI/litellm/issues/4803

a27454b8e3 | Krrish Dholakia | 2024-07-20 15:23:42 -07:00
fix(openai.py): support completion, streaming, async_streaming

576cccaade | Krrish Dholakia | 2024-07-20 14:38:31 -07:00
fix(main.py): check for ANTHROPIC_BASE_URL in environment
Fixes https://github.com/BerriAI/litellm/issues/4803
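
Commits 0b9e93d863 and 576cccaade both deal with resolving the Anthropic API base from the environment. A minimal sketch of the usage pattern they support, assuming a self-hosted gateway that mirrors the Anthropic API; the URL and key are placeholders:

```python
import os
import litellm

# Placeholders: point these at your own Anthropic-compatible gateway and key.
os.environ["ANTHROPIC_BASE_URL"] = "https://my-anthropic-gateway.example.com"
os.environ["ANTHROPIC_API_KEY"] = "sk-ant-..."

# litellm picks up the base URL from the environment instead of requiring api_base.
response = litellm.completion(
    model="claude-3-sonnet-20240229",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```
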
43e5890f77 | Ishaan Jaff | 2024-07-19 15:56:35 -07:00
fix health check

d779253949 | Sophia Loris | 2024-07-19 09:45:53 -05:00
resolve merge conflicts

d5c65c6be2 | Sophia Loris | 2024-07-19 09:35:27 -05:00
Add support for Triton streaming & triton async completions

f04397e19a | Ishaan Jaff | 2024-07-18 22:19:18 -07:00
Merge pull request #4789 from BerriAI/litellm_router_refactor
[Feat-Router] - Tag based routing

071091fd8c | Ishaan Jaff | 2024-07-18 19:34:45 -07:00
fix use tags as a litellm param
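
The tag-based routing work in #4789 and 071091fd8c lets deployments carry `tags` as a litellm param so requests can be matched to tagged deployments. A heavily hedged sketch of how this might be configured; the `enable_tag_filtering` flag and the `metadata={"tags": [...]}` request shape are assumptions inferred from the commit titles, not confirmed by the log:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {
                "model": "openai/gpt-4o",
                "api_key": "sk-...",   # placeholder
                "tags": ["teamA"],     # assumption: tags live under litellm_params
            },
        },
    ],
    enable_tag_filtering=True,         # assumption: name of the router flag
)

# Assumption: request tags are passed via metadata and matched against deployment tags.
response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "hi"}],
    metadata={"tags": ["teamA"]},
)
```
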
4d963ab789 | Krrish Dholakia | 2024-07-18 16:57:38 -07:00
feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls
allows passing response_schema for anthropic calls. supports schema validation.
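
Commit 4d963ab789 adds response_schema support for Anthropic models on Vertex AI. A sketch of what passing a schema could look like, assuming the schema rides on litellm's `response_format` argument; the model string and the `enforce_validation` flag name are assumptions:

```python
import litellm

response_schema = {
    "type": "object",
    "properties": {"answer": {"type": "string"}},
    "required": ["answer"],
}

response = litellm.completion(
    model="vertex_ai/claude-3-5-sonnet@20240620",  # assumption: illustrative model string
    messages=[{"role": "user", "content": "Answer in JSON with a single 'answer' key."}],
    response_format={
        "type": "json_object",
        "response_schema": response_schema,
        "enforce_validation": True,                # assumption: schema-validation flag
    },
)
print(response.choices[0].message.content)
```
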
bcc89a2c3a | Ishaan Jaff | 2024-07-13 11:10:13 -07:00
fix testing exception mapping

389a51e05d | Krrish Dholakia | 2024-07-11 13:36:55 -07:00
fix: fix linting errors

dd1048cb35 | Krrish Dholakia | 2024-07-11 12:11:50 -07:00
fix(main.py): fix linting errors

31829855c0 | Krrish Dholakia | 2024-07-10 18:53:54 -07:00
feat(proxy_server.py): working /v1/messages with config.yaml
Adds async router support for adapter_completion call

2f8dbbeb97 | Krrish Dholakia | 2024-07-10 18:15:38 -07:00
feat(proxy_server.py): working /v1/messages endpoint
Works with claude engineer
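
Commits 31829855c0 and 2f8dbbeb97 bring up an Anthropic-style /v1/messages endpoint on the proxy. A small sketch of calling it, assuming the proxy is running locally on port 4000 with a virtual key; the URL, key, and model name are placeholders:

```python
import httpx

response = httpx.post(
    "http://0.0.0.0:4000/v1/messages",        # placeholder proxy URL
    headers={
        "Authorization": "Bearer sk-1234",    # placeholder virtual key
        "Content-Type": "application/json",
    },
    json={
        "model": "claude-3-sonnet-20240229",  # placeholder model from config.yaml
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(response.json())
```
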
5d6e172d5c | Krrish Dholakia | 2024-07-10 00:32:28 -07:00
feat(anthropic_adapter.py): support for translating anthropic params to openai format

a1986fab60 | Krrish Dholakia | 2024-07-09 13:33:54 -07:00
fix(vertex_httpx.py): add sync vertex image gen support
Fixes https://github.com/BerriAI/litellm/issues/4623
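
Commit a1986fab60 adds a synchronous path for Vertex AI image generation. A minimal sketch via litellm's `image_generation` entry point, assuming GCP credentials are configured; the model string, project, and location are illustrative:

```python
import litellm

image_response = litellm.image_generation(
    model="vertex_ai/imagegeneration@006",  # assumption: illustrative Imagen model string
    prompt="An armchair in the shape of an avocado",
    vertex_project="my-gcp-project",        # placeholder
    vertex_location="us-central1",          # placeholder
)
print(image_response)
```
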
010f651268 | Ishaan Jaff | 2024-07-08 12:56:54 -07:00
fix params on acompletion

298505c47c | Krrish Dholakia | 2024-07-08 09:10:40 -07:00
fix(whisper): handle openai/azure vtt response format
Fixes https://github.com/BerriAI/litellm/issues/4595
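
Commit 298505c47c fixes handling of the vtt response format on the transcription path. A short sketch of requesting a vtt transcript through litellm; the audio file name is a placeholder:

```python
import litellm

# Placeholder file; any audio format the provider accepts will do.
with open("speech.mp3", "rb") as audio_file:
    transcript = litellm.transcription(
        model="whisper-1",
        file=audio_file,
        response_format="vtt",  # the format whose handling this commit fixes
    )
print(transcript)
```
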
bb905d7243 | Krrish Dholakia | 2024-07-08 07:36:41 -07:00
fix(utils.py): support 'drop_params' for 'parallel_tool_calls'
Closes https://github.com/BerriAI/litellm/issues/4584
OpenAI-only param
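
Commit bb905d7243 lets `drop_params` strip the OpenAI-only `parallel_tool_calls` argument for providers that don't support it. A sketch of the calling pattern; the model and tool definition are placeholders:

```python
import litellm

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# With drop_params=True, litellm drops parallel_tool_calls for providers
# that don't accept it instead of raising an error.
response = litellm.completion(
    model="claude-3-sonnet-20240229",  # placeholder non-OpenAI model
    messages=[{"role": "user", "content": "What's the weather in SF?"}],
    tools=tools,
    parallel_tool_calls=False,
    drop_params=True,
)
```
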
d54d4b6734 | Simon S. Viloria | 2024-07-07 18:00:11 +02:00
Merge branch 'BerriAI:main' into main

06e6f52358 | Simon Sanchez Viloria | 2024-07-07 17:59:37 +02:00
(fix - watsonx) Fixed issues with watsonx embedding/async endpoints

8661da1980 | Krish Dholakia | 2024-07-06 19:12:06 -07:00
Merge branch 'main' into litellm_fix_httpx_transport

27e9f96380 | Krrish Dholakia | 2024-07-06 14:52:59 -07:00
fix(main.py): fix stream_chunk_builder usage calc
Closes https://github.com/BerriAI/litellm/issues/4496
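
Commit 27e9f96380 corrects the usage accounting in `stream_chunk_builder`, which reassembles streamed chunks into a single response. A small sketch of the helper in use; the model is a placeholder:

```python
import litellm

messages = [{"role": "user", "content": "Write a haiku about the sea."}]
stream = litellm.completion(model="gpt-3.5-turbo", messages=messages, stream=True)

chunks = list(stream)

# Rebuild one complete response from the chunks; passing the original messages
# lets litellm fill in the prompt-token side of the usage block.
full_response = litellm.stream_chunk_builder(chunks, messages=messages)
print(full_response.usage)
```
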
c425cba93a | Krrish Dholakia | 2024-07-04 21:17:52 -07:00
fix(vertex_httpx.py): fix supported vertex params

1835032824 | Ishaan Jaff | 2024-07-04 16:43:40 -07:00
add groq whisper large

fd25117b67 | Krrish Dholakia | 2024-07-04 11:46:14 -07:00
fix(main.py): fix azure ai cohere tool calling

19c982d0f9 | Krrish Dholakia | 2024-07-03 21:55:00 -07:00
fix: linting fixes

5e47970eed | Krish Dholakia | 2024-07-03 20:43:51 -07:00
Merge branch 'main' into litellm_anthropic_tool_calling_streaming_fix

2e5a81f280 | Krrish Dholakia | 2024-07-03 20:40:46 -07:00
fix(utils.py): stream_options working across all providers
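
Commit 2e5a81f280 extends `stream_options` support beyond OpenAI. A brief sketch of asking for a trailing usage chunk on a streamed call; the model is a placeholder:

```python
import litellm

stream = litellm.completion(
    model="claude-3-sonnet-20240229",          # placeholder non-OpenAI model
    messages=[{"role": "user", "content": "Hello"}],
    stream=True,
    stream_options={"include_usage": True},    # request a final chunk carrying usage
)

for chunk in stream:
    print(chunk)
```
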
ed5fc3d1f9 | Krrish Dholakia | 2024-07-03 18:43:46 -07:00
fix(utils.py): fix vertex anthropic streaming

6c5c8bbb28 | Krish Dholakia | 2024-07-03 17:55:37 -07:00
Revert "fix(vertex_anthropic.py): Vertex Anthropic tool calling - native params"

7007ace6c2 | Krrish Dholakia | 2024-07-03 15:28:31 -07:00
fix(vertex_anthropic.py): Updates the vertex anthropic endpoint to do tool calling with the anthropic api params

d38f01e956 | Krish Dholakia | 2024-07-02 17:17:43 -07:00
Merge branch 'main' into litellm_fix_httpx_transport

90a0db5618 | Ishaan Jaff | 2024-07-02 16:42:22 -07:00
Merge pull request #4519 from BerriAI/litellm_re_use_openai_azure_clients_whisper
[Fix+Test] /audio/transcriptions - use initialized OpenAI / Azure OpenAI clients

2b5f3c6105 | Ishaan Jaff | 2024-07-02 12:33:31 -07:00
fix use router level client for OpenAI / Azure transcription calls

79670ab82e | Krrish Dholakia | 2024-07-02 09:24:07 -07:00
fix(main.py): get the region name from boto3 client if dynamic var not set

ea74e01813 | Krrish Dholakia | 2024-07-01 15:03:10 -07:00
fix(router.py): disable cooldowns
allow admin to disable model cooldowns
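
Commit ea74e01813 lets an admin turn off model cooldowns on the router. A hedged sketch of the setting; the `disable_cooldowns` flag name is inferred from the commit message and the model entry is a placeholder:

```python
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",
            "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},  # placeholder
        }
    ],
    disable_cooldowns=True,  # assumption: flag exposed by this change
)

response = router.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
```
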
be3c98bc16 | Krrish Dholakia | 2024-07-01 08:14:46 -07:00
fix(main.py): copy messages - prevent modifying user input
Fixes https://github.com/BerriAI/litellm/discussions/4489