a27454b8e3 | Krrish Dholakia | 2024-07-20 15:23:42 -07:00
    fix(openai.py): support completion, streaming, async_streaming

576cccaade | Krrish Dholakia | 2024-07-20 14:38:31 -07:00
    fix(main.py): check for ANTHROPIC_BASE_URL in environment
    Fixes https://github.com/BerriAI/litellm/issues/4803

43e5890f77 | Ishaan Jaff | 2024-07-19 15:56:35 -07:00
    fix health check

d779253949 | Sophia Loris | 2024-07-19 09:45:53 -05:00
    resolve merge conflicts

d5c65c6be2 | Sophia Loris | 2024-07-19 09:35:27 -05:00
    Add support for Triton streaming & triton async completions

f04397e19a | Ishaan Jaff | 2024-07-18 22:19:18 -07:00
    Merge pull request #4789 from BerriAI/litellm_router_refactor
    [Feat-Router] - Tag based routing

071091fd8c | Ishaan Jaff | 2024-07-18 19:34:45 -07:00
    fix use tags as a litellm param

4d963ab789 | Krrish Dholakia | 2024-07-18 16:57:38 -07:00
    feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls
    allows passing response_schema for anthropic calls. supports schema validation.

bcc89a2c3a | Ishaan Jaff | 2024-07-13 11:10:13 -07:00
    fix testing exception mapping

389a51e05d | Krrish Dholakia | 2024-07-11 13:36:55 -07:00
    fix: fix linting errors

dd1048cb35 | Krrish Dholakia | 2024-07-11 12:11:50 -07:00
    fix(main.py): fix linting errors

31829855c0 | Krrish Dholakia | 2024-07-10 18:53:54 -07:00
    feat(proxy_server.py): working /v1/messages with config.yaml
    Adds async router support for adapter_completion call

2f8dbbeb97 | Krrish Dholakia | 2024-07-10 18:15:38 -07:00
    feat(proxy_server.py): working /v1/messages endpoint
    Works with claude engineer

5d6e172d5c | Krrish Dholakia | 2024-07-10 00:32:28 -07:00
    feat(anthropic_adapter.py): support for translating anthropic params to openai format

a1986fab60 | Krrish Dholakia | 2024-07-09 13:33:54 -07:00
    fix(vertex_httpx.py): add sync vertex image gen support
    Fixes https://github.com/BerriAI/litellm/issues/4623

010f651268 | Ishaan Jaff | 2024-07-08 12:56:54 -07:00
    fix params on acompletion

298505c47c | Krrish Dholakia | 2024-07-08 09:10:40 -07:00
    fix(whisper---handle-openai/azure-vtt-response-format): Fixes https://github.com/BerriAI/litellm/issues/4595

bb905d7243 | Krrish Dholakia | 2024-07-08 07:36:41 -07:00
    fix(utils.py): support 'drop_params' for 'parallel_tool_calls'
    Closes https://github.com/BerriAI/litellm/issues/4584
    OpenAI-only param

d54d4b6734 | Simon S. Viloria | 2024-07-07 18:00:11 +02:00
    Merge branch 'BerriAI:main' into main

06e6f52358 | Simon Sanchez Viloria | 2024-07-07 17:59:37 +02:00
    (fix - watsonx) Fixed issues with watsonx embedding/async endpoints

86596c53e9 | Krrish Dholakia | 2024-07-06 20:08:52 -07:00
    refactor(main.py): migrate vertex gemini calls to vertex_httpx
    Completes migration to vertex_httpx

8661da1980 | Krish Dholakia | 2024-07-06 19:12:06 -07:00
    Merge branch 'main' into litellm_fix_httpx_transport

27e9f96380 | Krrish Dholakia | 2024-07-06 14:52:59 -07:00
    fix(main.py): fix stream_chunk_builder usage calc
    Closes https://github.com/BerriAI/litellm/issues/4496

c425cba93a | Krrish Dholakia | 2024-07-04 21:17:52 -07:00
    fix(vertex_httpx.py): fix supported vertex params

1835032824 | Ishaan Jaff | 2024-07-04 16:43:40 -07:00
    add groq whisper large

fd25117b67 | Krrish Dholakia | 2024-07-04 11:46:14 -07:00
    fix(main.py): fix azure ai cohere tool calling

19c982d0f9 | Krrish Dholakia | 2024-07-03 21:55:00 -07:00
    fix: linting fixes

5e47970eed | Krish Dholakia | 2024-07-03 20:43:51 -07:00
    Merge branch 'main' into litellm_anthropic_tool_calling_streaming_fix

2e5a81f280 | Krrish Dholakia | 2024-07-03 20:40:46 -07:00
    fix(utils.py): stream_options working across all providers

ed5fc3d1f9 | Krrish Dholakia | 2024-07-03 18:43:46 -07:00
    fix(utils.py): fix vertex anthropic streaming

6c5c8bbb28 | Krish Dholakia | 2024-07-03 17:55:37 -07:00
    Revert "fix(vertex_anthropic.py): Vertex Anthropic tool calling - native params "

7007ace6c2 | Krrish Dholakia | 2024-07-03 15:28:31 -07:00
    fix(vertex_anthropic.py): Updates the vertex anthropic endpoint to do tool calling with the anthropic api params

d38f01e956 | Krish Dholakia | 2024-07-02 17:17:43 -07:00
    Merge branch 'main' into litellm_fix_httpx_transport

90a0db5618 | Ishaan Jaff | 2024-07-02 16:42:22 -07:00
    Merge pull request #4519 from BerriAI/litellm_re_use_openai_azure_clients_whisper
    [Fix+Test] /audio/transcriptions - use initialized OpenAI / Azure OpenAI clients

2b5f3c6105 | Ishaan Jaff | 2024-07-02 12:33:31 -07:00
    fix use router level client for OpenAI / Azure transcription calls

79670ab82e | Krrish Dholakia | 2024-07-02 09:24:07 -07:00
    fix(main.py): get the region name from boto3 client if dynamic var not set

ea74e01813 | Krrish Dholakia | 2024-07-01 15:03:10 -07:00
    fix(router.py): disable cooldowns
    allow admin to disable model cooldowns

be3c98bc16 | Krrish Dholakia | 2024-07-01 08:14:46 -07:00
    fix(main.py): copy messages - prevent modifying user input
    Fixes https://github.com/BerriAI/litellm/discussions/4489

f9ba3cf668 | Ishaan Jaff | 2024-06-29 18:46:06 -07:00
    fix bedrock claude test

632b7ce17d | Brian Schultheiss | 2024-06-29 15:53:02 -07:00
    Resolve merge conflicts

d10912beeb | Krrish Dholakia | 2024-06-28 21:26:42 -07:00
    fix(main.py): pass in openrouter as custom provider for openai client call
    Fixes https://github.com/BerriAI/litellm/issues/4414

1223b2b111 | Krish Dholakia | 2024-06-27 21:33:38 -07:00
    Merge pull request #4449 from BerriAI/litellm_azure_tts
    feat(azure.py): azure tts support

a012f231b6 | Ishaan Jaff | 2024-06-27 17:37:02 -07:00
    azure - fix custom logger on post call

c14cc35e52 | Krrish Dholakia | 2024-06-27 16:59:25 -07:00
    feat(azure.py): azure tts support
    Closes https://github.com/BerriAI/litellm/issues/4002

98daedaf60 | Krrish Dholakia | 2024-06-26 17:22:04 -07:00
    fix(router.py): fix setting httpx mounts

d213f81b4c | Ishaan Jaff | 2024-06-26 16:53:44 -07:00
    add initial support for volcengine

eeedfceee4 | Brian Schultheiss | 2024-06-26 08:11:34 -07:00
    Merge branch 'main' of https://github.com/BerriAI/litellm into litellm_ftr_bedrock_aws_session_token

d98e00d1e0 | Krrish Dholakia | 2024-06-25 16:51:55 -07:00
    fix(router.py): set cooldown_time: per model

2bd993039b | Ishaan Jaff | 2024-06-25 11:20:30 -07:00
    Merge pull request #4405 from BerriAI/litellm_update_mock_completion
    [Fix] - use `n` in mock completion responses

ccf1bbc5d7 | Ishaan Jaff | 2024-06-25 11:14:40 -07:00
    fix using mock completion