Author | Hash | Message | Date
Krrish Dholakia | 9909f44015 | feat(utils.py): add native fireworks ai support (addresses https://github.com/BerriAI/litellm/issues/777, https://github.com/BerriAI/litellm/issues/2486) | 2024-03-15 09:09:59 -07:00
Krrish Dholakia | 0b6cf3d5cf | refactor(main.py): trigger new build | 2024-03-14 13:01:18 -07:00
Krrish Dholakia | bdd2004691 | refactor(main.py): trigger new build | 2024-03-14 12:10:39 -07:00
Krrish Dholakia | 7876aa2d75 | fix(parallel_request_limiter.py): handle metadata being none | 2024-03-14 10:02:41 -07:00
Krrish Dholakia | 16e3aaced5 | docs(enterprise.md): add prompt injection detection to docs | 2024-03-13 12:37:32 -07:00
Krish Dholakia | 9f2d540ebf | Merge pull request #2472 from BerriAI/litellm_anthropic_streaming_tool_calling (fix(anthropic.py): support claude-3 streaming with function calling) | 2024-03-12 21:36:01 -07:00
Dmitry Supranovich | 57ebb9582e | Fixed azure ad token not being processed properly in embedding models | 2024-03-12 21:29:24 -04:00
Krish Dholakia | 0d18f3c0ca | Merge pull request #2473 from BerriAI/litellm_fix_compatible_provider_model_name (fix(openai.py): return model name with custom llm provider for openai-compatible endpoints (e.g. mistral, together ai, etc.)) | 2024-03-12 12:58:29 -07:00
Ishaan Jaff | 5172fb1de9 | Merge pull request #2474 from BerriAI/litellm_support_command_r ([New-Model] Cohere/command-r) | 2024-03-12 11:11:56 -07:00
Krrish Dholakia | d2286fb93c | fix(main.py): trigger new build | 2024-03-12 11:07:14 -07:00
Krrish Dholakia | 0033613b9e | fix(openai.py): return model name with custom llm provider for openai compatible endpoints | 2024-03-12 10:30:10 -07:00
ishaan-jaff | 7635c764cf | (feat) cohere_chat provider | 2024-03-12 10:29:26 -07:00
Krrish Dholakia | 86ed0aaba8 | fix(anthropic.py): support streaming with function calling | 2024-03-12 09:52:11 -07:00
ishaan-jaff | b193b01f40 | (feat) support azure/gpt-instruct models | 2024-03-12 09:30:15 -07:00
Krrish Dholakia | e07174736f | refactor(main.py): trigger new build | 2024-03-11 13:57:40 -07:00
Krrish Dholakia | 942b5e4145 | fix(main.py): trigger new build | 2024-03-10 09:48:06 -07:00
Krish Dholakia | caa99f43bf | Merge branch 'main' into litellm_load_balancing_transcription_endpoints | 2024-03-08 23:08:47 -08:00
Krish Dholakia | e245b1c98a | Merge pull request #2401 from BerriAI/litellm_transcription_endpoints (feat(main.py): support openai transcription endpoints) | 2024-03-08 23:07:48 -08:00
Krrish Dholakia | 0fb7afe820 | feat(proxy_server.py): working /audio/transcription endpoint | 2024-03-08 18:20:27 -08:00
ishaan-jaff | ddd231a8c2 | (feat) use no-log as a litellm param | 2024-03-08 16:46:38 -08:00
ishaan-jaff | 986a526790 | (feat) disable logging per request | 2024-03-08 16:25:54 -08:00
Krrish Dholakia | ae54b398d2 | feat(router.py): add load balancing for async transcription calls | 2024-03-08 13:58:15 -08:00
Krrish Dholakia | 6b1049217e | feat(azure.py): add support for calling whisper endpoints on azure | 2024-03-08 13:48:38 -08:00
Krrish Dholakia | 696eb54455 | feat(main.py): support openai transcription endpoints (enable user to load balance between openai + azure transcription endpoints) | 2024-03-08 10:25:19 -08:00
Krrish Dholakia | 2f9a39f30c | refactor(main.py): trigger new build | 2024-03-08 08:12:22 -08:00
Krrish Dholakia | b9854a99d2 | test: increase time before checking budget reset - avoid deadlocking | 2024-03-06 22:16:59 -08:00
Krrish Dholakia | cdb960eb34 | fix(vertex_ai.py): correctly parse optional params and pass vertex ai project | 2024-03-06 14:00:50 -08:00
Krrish Dholakia | 387864662e | fix(main.py): trigger new build | 2024-03-05 15:50:40 -08:00
Krrish Dholakia | 072500e314 | refactor(main.py): trigger new build | 2024-03-05 07:40:41 -08:00
ishaan-jaff | 1183e5f2e5 | (feat) maintain anthropic text completion | 2024-03-04 11:16:34 -08:00
Krrish Dholakia | a52169e04d | refactor(main.py): trigger new build | 2024-03-04 09:33:44 -08:00
ishaan-jaff | 19eb9063fb | (feat) - add claude 3 | 2024-03-04 07:13:08 -08:00
Krrish Dholakia | f17c0230c8 | fix(main.py): trigger new build | 2024-03-02 21:22:05 -08:00
Krrish Dholakia | b85a6304c4 | refactor(main.py): trigger new build | 2024-03-02 20:09:10 -08:00
Krrish Dholakia | c1ba512ae8 | refactor(main.py): trigger new build | 2024-03-01 20:51:07 -08:00
Krrish Dholakia | f6275f1d9b | refactor(main.py): trigger new build | 2024-02-28 20:59:52 -08:00
ishaan-jaff | 192c970be6 | (fix) maintain backwards compat with vertex_ai_project | 2024-02-28 11:35:29 -08:00
ishaan-jaff | 9078707672 | (fix) vertex ai project/location | 2024-02-28 08:13:13 -08:00
Krrish Dholakia | 86982d0045 | refactor(main.py): trigger new build | 2024-02-26 21:35:30 -08:00
Krish Dholakia | 95b5b7f1fc | Merge pull request #2203 from BerriAI/litellm_streaming_caching_fix (fix(utils.py): support returning caching streaming response for function calling streaming calls) | 2024-02-26 19:58:00 -08:00
Krrish Dholakia | 788e24bd83 | fix(utils.py): fix streaming logic | 2024-02-26 14:26:58 -08:00
Krrish Dholakia | 6cce9213d8 | fix(main.py): refactor | 2024-02-26 10:47:01 -08:00
Krrish Dholakia | a1c6e6d52b | build(main.py): trigger new build | 2024-02-26 10:44:24 -08:00
Krrish Dholakia | 28f4b5809c | test(test_amazing_vertex_completion.py): fix test | 2024-02-26 10:42:05 -08:00
Krrish Dholakia | cad2523b04 | refactor(main.py): trigger new build with bundled ui | 2024-02-25 02:15:05 -08:00
ishaan-jaff | c315c18695 | (fix) use api_base in health checks | 2024-02-24 18:39:20 -08:00
ishaan-jaff | 30aa5eaa34 | (feat) add groq ai | 2024-02-23 10:42:51 -08:00
ishaan-jaff | aa9164e2ce | (docs) setting extra_headers | 2024-02-23 08:56:09 -08:00
ishaan-jaff | 3c8b58bd80 | (feat) support extra_headers | 2024-02-23 08:48:21 -08:00
Krrish Dholakia | 1de9dda278 | refactor(main.py): trigger new build | 2024-02-22 22:08:05 -08:00