Commit graph

1058 commits

Author SHA1 Message Date
Ishaan Jaff
15591d0978 Merge pull request #2474 from BerriAI/litellm_support_command_r
[New-Model] Cohere/command-r
2024-03-12 11:11:56 -07:00
Krrish Dholakia
4dd28e9646 fix(main.py): trigger new build 2024-03-12 11:07:14 -07:00
Krrish Dholakia
e94c4f818c fix(openai.py): return model name with custom llm provider for openai compatible endpoints 2024-03-12 10:30:10 -07:00
ishaan-jaff
f398d6e48f (feat) cohere_chat provider 2024-03-12 10:29:26 -07:00
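
The commit above adds a `cohere_chat` provider. A minimal sketch (not from the repo) of calling Command R through it, assuming `COHERE_API_KEY` is exported in the environment:

```python
from litellm import completion

# Assumes COHERE_API_KEY is set in the environment.
response = completion(
    model="cohere_chat/command-r",
    messages=[{"role": "user", "content": "Summarize what a model router does."}],
)
print(response.choices[0].message.content)
```
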
Krrish Dholakia
1c6438c267 fix(anthropic.py): support streaming with function calling 2024-03-12 09:52:11 -07:00
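
For the streaming + function-calling fix in anthropic.py, here is a hedged sketch of what a streamed tool-call request looks like through litellm. The `get_weather` tool and the model choice are illustrative, and `ANTHROPIC_API_KEY` is assumed to be set:

```python
from litellm import completion

# Illustrative tool definition; assumes ANTHROPIC_API_KEY is set.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

stream = completion(
    model="claude-3-opus-20240229",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if delta.content:
        print(delta.content, end="")
    if getattr(delta, "tool_calls", None):
        print(delta.tool_calls)
```
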
ishaan-jaff
c5ebbd1868 (feat) support azure/gpt-instruct models 2024-03-12 09:30:15 -07:00
Krrish Dholakia
4586ba554c refactor(main.py): trigger new build 2024-03-11 13:57:40 -07:00
Krrish Dholakia
d4dc14a5d4 fix(main.py): trigger new build 2024-03-10 09:48:06 -07:00
Krish Dholakia
f461352908 Merge branch 'main' into litellm_load_balancing_transcription_endpoints 2024-03-08 23:08:47 -08:00
Krish Dholakia
75bc854294 Merge pull request #2401 from BerriAI/litellm_transcription_endpoints
feat(main.py): support openai transcription endpoints
2024-03-08 23:07:48 -08:00
Krrish Dholakia
93615682fe feat(proxy_server.py): working /audio/transcription endpoint 2024-03-08 18:20:27 -08:00
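
Once the proxy's transcription route is up, it can be exercised with the standard OpenAI client pointed at the proxy. This is a sketch only: the proxy URL, the key, and the `whisper` model-group name are placeholders and assume a matching entry in the proxy's model list.

```python
from openai import OpenAI

# Placeholders: proxy URL/key and a "whisper" model group configured on the proxy.
client = OpenAI(api_key="sk-1234", base_url="http://0.0.0.0:4000")

with open("sample.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper", file=audio_file)
print(transcript.text)
```
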
ishaan-jaff
e749521c0b (feat) use no-log as a litellm param 2024-03-08 16:46:38 -08:00
ishaan-jaff
feefdd631c (feat) disable logging per request 2024-03-08 16:25:54 -08:00
Krrish Dholakia
93e9781d37 feat(router.py): add load balancing for async transcription calls 2024-03-08 13:58:15 -08:00
Krrish Dholakia
e084b877f5 feat(azure.py): add support for calling whisper endpoints on azure 2024-03-08 13:48:38 -08:00
Krrish Dholakia
bdf8e2d3c7 feat(main.py): support openai transcription endpoints
enable user to load balance between openai + azure transcription endpoints
2024-03-08 10:25:19 -08:00
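
The transcription commits above describe load balancing Whisper traffic across OpenAI and Azure. A hedged sketch with the litellm Router, assuming it exposes `atranscription` as these commits describe; the Azure deployment name, API version, and audio file path are placeholders:

```python
import asyncio
import os

from litellm import Router

router = Router(model_list=[
    {
        "model_name": "whisper",
        "litellm_params": {"model": "whisper-1"},  # OpenAI; uses OPENAI_API_KEY
    },
    {
        "model_name": "whisper",
        "litellm_params": {
            "model": "azure/azure-whisper",          # hypothetical deployment name
            "api_key": os.environ["AZURE_API_KEY"],
            "api_base": os.environ["AZURE_API_BASE"],
            "api_version": "2024-02-15-preview",     # placeholder API version
        },
    },
])

async def main():
    with open("sample.wav", "rb") as audio_file:  # placeholder file
        response = await router.atranscription(model="whisper", file=audio_file)
    print(response.text)

asyncio.run(main())
```
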
Krrish Dholakia
0210a3e5ba refactor(main.py): trigger new build 2024-03-08 08:12:22 -08:00
Krrish Dholakia
4185a262ed test: increase time before checking budget reset - avoid deadlocking 2024-03-06 22:16:59 -08:00
Krrish Dholakia
7f4dd734c1 fix(vertex_ai.py): correctly parse optional params and pass vertex ai project 2024-03-06 14:00:50 -08:00
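
Several commits in this log touch how the Vertex AI project and location are passed through (including the older `vertex_ai_project` spelling kept for backwards compatibility further down). A sketch of setting them explicitly on a call; the project and region values are placeholders and application-default credentials are assumed:

```python
from litellm import completion

# Placeholders: GCP project and region; assumes gcloud application-default credentials.
response = completion(
    model="vertex_ai/gemini-pro",
    messages=[{"role": "user", "content": "Hello from Vertex AI"}],
    vertex_project="my-gcp-project",
    vertex_location="us-central1",
)
print(response.choices[0].message.content)
```
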
Krrish Dholakia
b5861fb661 fix(main.py): trigger new build 2024-03-05 15:50:40 -08:00
Krrish Dholakia
ef42b2056a refactor(main.py): trigger new build 2024-03-05 07:40:41 -08:00
ishaan-jaff
963313412d (feat) maintain anthropic text completion 2024-03-04 11:16:34 -08:00
Krrish Dholakia
f830e2ba68 refactor(main.py): trigger new build 2024-03-04 09:33:44 -08:00
ishaan-jaff
26eea94404 (feat) - add claude 3 2024-03-04 07:13:08 -08:00
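
The Claude 3 addition is used like any other litellm model string; a minimal sketch, assuming `ANTHROPIC_API_KEY` is set:

```python
from litellm import completion

# Assumes ANTHROPIC_API_KEY is set in the environment.
response = completion(
    model="claude-3-sonnet-20240229",
    messages=[{"role": "user", "content": "Write a haiku about merge commits."}],
)
print(response.choices[0].message.content)
```
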
Krrish Dholakia
e2f6845234 fix(main.py): trigger new build 2024-03-02 21:22:05 -08:00
Krrish Dholakia
8fc1f12fe8 refactor(main.py): trigger new build 2024-03-02 20:09:10 -08:00
Krrish Dholakia
7b9c8280d4 refactor(main.py): trigger new build 2024-03-01 20:51:07 -08:00
Krrish Dholakia
d9879bd8af refactor(main.py): trigger new build 2024-02-28 20:59:52 -08:00
ishaan-jaff
1ed4707c73 (fix) maintain backwards compat with vertex_ai_project 2024-02-28 11:35:29 -08:00
ishaan-jaff
355993f260 (fix) vertex ai project/location 2024-02-28 08:13:13 -08:00
Krrish Dholakia
d9b34034f7 refactor(main.py): trigger new build 2024-02-26 21:35:30 -08:00
Krish Dholakia
b0f96411f5 Merge pull request #2203 from BerriAI/litellm_streaming_caching_fix
fix(utils.py): support returning caching streaming response for function calling streaming calls
2024-02-26 19:58:00 -08:00
Krrish Dholakia
4ba18f9932 fix(utils.py): fix streaming logic 2024-02-26 14:26:58 -08:00
Krrish Dholakia
45f85f75e0 fix(main.py): refactor 2024-02-26 10:47:01 -08:00
Krrish Dholakia
2bb9cd0ca6 build(main.py): trigger new build 2024-02-26 10:44:24 -08:00
Krrish Dholakia
adb4443ea6 test(test_amazing_vertex_completion.py): fix test 2024-02-26 10:42:05 -08:00
Krrish Dholakia
c88d5c10ff refactor(main.py): trigger new build with bundled ui 2024-02-25 02:15:05 -08:00
ishaan-jaff
1ff241e4b3 (fix) use api_base in health checks 2024-02-24 18:39:20 -08:00
ishaan-jaff
24fb50ff29 (feat) add groq ai 2024-02-23 10:42:51 -08:00
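
The Groq addition follows the usual provider-prefix pattern. A sketch assuming `GROQ_API_KEY` is set; the model name reflects what Groq hosted around that time and may have changed since:

```python
from litellm import completion

# Assumes GROQ_API_KEY is set; the hosted model name may have changed since.
response = completion(
    model="groq/mixtral-8x7b-32768",
    messages=[{"role": "user", "content": "Why does inference latency matter?"}],
)
print(response.choices[0].message.content)
```
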
ishaan-jaff
fd2ab2fb00 (docs) setting extra_headers 2024-02-23 08:56:09 -08:00
ishaan-jaff
de8283dac4 (feat) support extra_headers 2024-02-23 08:48:21 -08:00
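
The `extra_headers` support lets callers forward custom HTTP headers to the underlying provider request. A sketch only; the header name and value are placeholders and `OPENAI_API_KEY` is assumed:

```python
from litellm import completion

# Placeholder header; assumes OPENAI_API_KEY is set.
response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "ping"}],
    extra_headers={"X-My-Trace-Id": "abc-123"},
)
print(response.choices[0].message.content)
```
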
Krrish Dholakia
b54dae9754 refactor(main.py): trigger new build 2024-02-22 22:08:05 -08:00
Krrish Dholakia
2fe09094cf refactor(main.py): trigger new build 2024-02-21 22:08:44 -08:00
Krrish Dholakia
d5d26a3872 refactor(main.py): trigger new build 2024-02-20 20:36:33 -08:00
Krish Dholakia
a9c3aeb9fa Merge pull request #2090 from BerriAI/litellm_gemini_streaming_fixes
fix(gemini.py): fix async streaming + add native async completions
2024-02-20 19:07:58 -08:00
Krrish Dholakia
d61ba8023f refactor(main.py): trigger new build 2024-02-19 22:53:38 -08:00
Krrish Dholakia
11c12e7381 fix(gemini.py): fix async streaming + add native async completions 2024-02-19 22:41:36 -08:00
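
For the Gemini async-streaming fix, a sketch of a native async, streamed call; `GEMINI_API_KEY` is assumed to be set:

```python
import asyncio

from litellm import acompletion

async def main():
    # Assumes GEMINI_API_KEY is set in the environment.
    stream = await acompletion(
        model="gemini/gemini-pro",
        messages=[{"role": "user", "content": "Stream a two-line poem."}],
        stream=True,
    )
    async for chunk in stream:
        content = chunk.choices[0].delta.content
        if content:
            print(content, end="")

asyncio.run(main())
```
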
Krrish Dholakia
833e523ec0 refactor(main.py): trigger new build 2024-02-17 08:25:58 -08:00
Krrish Dholakia
2a5a14d612 fix(utils.py): support image gen logging to langfuse 2024-02-16 16:12:52 -08:00
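
The Langfuse fix above covers image-generation calls. A sketch of wiring the callback and making one such call, assuming Langfuse and OpenAI credentials are set in the environment:

```python
import litellm
from litellm import image_generation

# Assumes LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY, and OPENAI_API_KEY are set.
litellm.success_callback = ["langfuse"]

response = image_generation(
    model="dall-e-3",
    prompt="a watercolor painting of a mountain lake",
)
print(response.data[0].url)
```
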
Krrish Dholakia
a1aeb7b404 fix(main.py): map list input to ollama prompt input format 2024-02-16 09:57:51 -08:00