Author | Commit | Message | Date
Krish Dholakia | 25ac9c2d75 | Merge branch 'main' into litellm_fix_streaming_usage_calc | 2024-08-01 21:29:04 -07:00
Krrish Dholakia | 6e163d3c8a | fix(vertex_ai_partner.py): add /chat/completion codestral support (Closes https://github.com/BerriAI/litellm/issues/4984) | 2024-08-01 18:06:40 -07:00
Krrish Dholakia | c6eabe0253 | fix(main.py): fix linting error | 2024-08-01 17:33:29 -07:00
Krrish Dholakia | 010d5ed81d | feat(vertex_ai_partner.py): add vertex ai codestral FIM support (Closes https://github.com/BerriAI/litellm/issues/4984) | 2024-08-01 17:10:27 -07:00
Krrish Dholakia | 246b3227a9 | fix(vertex_ai_partner.py): add /chat/completion codestral support (Closes https://github.com/BerriAI/litellm/issues/4984) | 2024-08-01 16:12:05 -07:00
Krrish Dholakia | ca0a0bed46 | fix(utils.py): fix anthropic streaming usage calculation (Fixes https://github.com/BerriAI/litellm/issues/4965) | 2024-08-01 14:45:54 -07:00
Krish Dholakia | 653aefde40 | Merge branch 'main' into litellm_async_cohere_calls | 2024-07-30 15:35:20 -07:00
Krrish Dholakia | 9b2eb1702b | fix(cohere.py): support async cohere embedding calls | 2024-07-30 14:49:07 -07:00
Krrish Dholakia | 99dc7d2e97 | fix(main.py): fix linting error | 2024-07-30 13:55:04 -07:00
Krrish Dholakia | 69afbc6091 | feat(huggingface_restapi.py): support multiple hf embedding types + async hf embeddings (Closes https://github.com/BerriAI/litellm/issues/3261) | 2024-07-30 13:32:03 -07:00
Krrish Dholakia | 7f0daafc56 | docs(main.py): update acompletion_with_retries docstring (Closes https://github.com/BerriAI/litellm/issues/4908) | 2024-07-29 15:50:12 -07:00
Krish Dholakia | 63531a9824 | Merge pull request #4943 from dleen/logs (Fix: #4942. Remove verbose logging when exception can be handled) | 2024-07-29 12:12:28 -07:00
David Leen | 452441ae03 | Fix: #4942. Remove verbose logging when exception can be handled | 2024-07-29 12:05:10 -07:00
Krrish Dholakia | 66dbd938e8 | fix(exceptions.py): use correct status code for content policy exceptions (Fixes https://github.com/BerriAI/litellm/issues/4941#issuecomment-2256578732) | 2024-07-29 12:01:54 -07:00
Krish Dholakia | e3a94ac013 | Merge pull request #4925 from BerriAI/litellm_vertex_mistral (feat(vertex_ai_partner.py): Vertex AI Mistral support) | 2024-07-27 21:51:26 -07:00
Krish Dholakia | b854d2100c | Merge branch 'main' into litellm_vertex_migration | 2024-07-27 20:25:12 -07:00
Ishaan Jaff | 0627468455 | fix checking mode on health checks | 2024-07-27 20:21:39 -07:00
Krrish Dholakia | c85ed01756 | feat(utils.py): fix openai-like streaming | 2024-07-27 15:32:57 -07:00
Krrish Dholakia | 5b71421a7b | feat(vertex_ai_partner.py): initial working commit for calling vertex ai mistral (Closes https://github.com/BerriAI/litellm/issues/4874) | 2024-07-27 12:54:14 -07:00
Krrish Dholakia | 41abd51240 | fix(custom_llm.py): pass input params to custom llm | 2024-07-25 19:03:52 -07:00
Krrish Dholakia | b4e3a77ad0 | feat(utils.py): support sync streaming for custom llm provider | 2024-07-25 16:47:32 -07:00
Krrish Dholakia | 9f97436308 | fix(custom_llm.py): support async completion calls | 2024-07-25 15:51:39 -07:00
Krrish Dholakia | 6bf1b9353b | feat(custom_llm.py): initial working commit for writing your own custom LLM handler (Fixes https://github.com/BerriAI/litellm/issues/4675; also addresses https://github.com/BerriAI/litellm/discussions/4677) | 2024-07-25 15:33:05 -07:00
Krrish Dholakia | 4e51f712f3 | fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions (Fixes https://github.com/BerriAI/litellm/issues/749) | 2024-07-25 09:57:19 -07:00
Krrish Dholakia | 83ef52e180 | feat(vertex_ai_llama.py): vertex ai llama3.1 api support (initial working commit) | 2024-07-23 17:07:30 -07:00
Ishaan Jaff | 1355932bf4 | Merge pull request #3905 from giritatavarty-8451/litellm_triton_chatcompletion_support (Litellm Triton chat completion support; resubmit of #3895) | 2024-07-23 10:30:26 -07:00
Krrish Dholakia | 0b9e93d863 | fix(main.py): check if anthropic api base ends with required url (Fixes https://github.com/BerriAI/litellm/issues/4803) | 2024-07-22 20:36:35 -07:00
Krrish Dholakia | a27454b8e3 | fix(openai.py): support completion, streaming, async_streaming | 2024-07-20 15:23:42 -07:00
Krrish Dholakia | 576cccaade | fix(main.py): check for ANTHROPIC_BASE_URL in environment (Fixes https://github.com/BerriAI/litellm/issues/4803) | 2024-07-20 14:38:31 -07:00
Ishaan Jaff | 43e5890f77 | fix health check | 2024-07-19 15:56:35 -07:00
Sophia Loris | d779253949 | resolve merge conflicts | 2024-07-19 09:45:53 -05:00
Sophia Loris | d5c65c6be2 | Add support for Triton streaming & Triton async completions | 2024-07-19 09:35:27 -05:00
Ishaan Jaff | f04397e19a | Merge pull request #4789 from BerriAI/litellm_router_refactor ([Feat-Router] Tag based routing) | 2024-07-18 22:19:18 -07:00
Ishaan Jaff | 071091fd8c | fix: use tags as a litellm param | 2024-07-18 19:34:45 -07:00
Krrish Dholakia | 4d963ab789 | feat(vertex_ai_anthropic.py): support response_schema for vertex ai anthropic calls (allows passing response_schema for anthropic calls; supports schema validation) | 2024-07-18 16:57:38 -07:00
Ishaan Jaff | bcc89a2c3a | fix testing exception mapping | 2024-07-13 11:10:13 -07:00
Krrish Dholakia | 389a51e05d | fix: fix linting errors | 2024-07-11 13:36:55 -07:00
Krrish Dholakia | dd1048cb35 | fix(main.py): fix linting errors | 2024-07-11 12:11:50 -07:00
Krrish Dholakia | 31829855c0 | feat(proxy_server.py): working /v1/messages with config.yaml (adds async router support for adapter_completion call) | 2024-07-10 18:53:54 -07:00
Krrish Dholakia | 2f8dbbeb97 | feat(proxy_server.py): working /v1/messages endpoint (works with Claude Engineer) | 2024-07-10 18:15:38 -07:00
Krrish Dholakia | 5d6e172d5c | feat(anthropic_adapter.py): support for translating anthropic params to openai format | 2024-07-10 00:32:28 -07:00
Krrish Dholakia | a1986fab60 | fix(vertex_httpx.py): add sync vertex image gen support (Fixes https://github.com/BerriAI/litellm/issues/4623) | 2024-07-09 13:33:54 -07:00
Ishaan Jaff | 010f651268 | fix params on acompletion | 2024-07-08 12:56:54 -07:00
Krrish Dholakia | 298505c47c | fix(whisper): handle openai/azure vtt response format (Fixes https://github.com/BerriAI/litellm/issues/4595) | 2024-07-08 09:10:40 -07:00
Krrish Dholakia | bb905d7243 | fix(utils.py): support 'drop_params' for 'parallel_tool_calls' (Closes https://github.com/BerriAI/litellm/issues/4584; OpenAI-only param) | 2024-07-08 07:36:41 -07:00
Simon S. Viloria | d54d4b6734 | Merge branch 'BerriAI:main' into main | 2024-07-07 18:00:11 +02:00
Simon Sanchez Viloria | 06e6f52358 | fix(watsonx): fixed issues with watsonx embedding/async endpoints | 2024-07-07 17:59:37 +02:00
Krrish Dholakia | 86596c53e9 | refactor(main.py): migrate vertex gemini calls to vertex_httpx (completes migration to vertex_httpx) | 2024-07-06 20:08:52 -07:00
Krish Dholakia | 8661da1980 | Merge branch 'main' into litellm_fix_httpx_transport | 2024-07-06 19:12:06 -07:00
Krrish Dholakia | 27e9f96380 | fix(main.py): fix stream_chunk_builder usage calc (Closes https://github.com/BerriAI/litellm/issues/4496) | 2024-07-06 14:52:59 -07:00