Ishaan Jaff | f65828db26 | 2024-04-05 15:13:35 -07:00
Merge pull request #2861 from BerriAI/litellm_add_azure_command_r_plust
[FEAT] add azure command-r-plus
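A minimal usage sketch for this change, assuming litellm's "azure/" routing for Azure-hosted Command R+; the endpoint URL, API key, and deployment name below are placeholders, not values from this commit.

```python
from litellm import completion

# Sketch: call an Azure-hosted Command R+ endpoint through litellm.
response = completion(
    model="azure/command-r-plus",                            # assumed deployment/model name
    api_base="https://my-endpoint.inference.ai.azure.com",   # hypothetical endpoint
    api_key="my-azure-api-key",                              # placeholder credential
    messages=[{"role": "user", "content": "Summarize this changelog."}],
)
print(response.choices[0].message.content)
```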
Ishaan Jaff | c74d43fcbc | 2024-04-05 09:32:39 -07:00
fix support azure/mistral models

Krrish Dholakia | a8d0aace83 | 2024-04-05 09:27:48 -07:00
fix(vertex_ai_anthropic.py): support streaming, async completion, async streaming for vertex ai anthropic

Krish Dholakia | 5ea9946925 | 2024-04-05 07:06:04 -07:00
Merge pull request #2665 from BerriAI/litellm_claude_vertex_ai
[WIP] feat(vertex_ai_anthropic.py): Add support for claude 3 on vertex ai

Krrish Dholakia | bd7040969b | 2024-04-02 22:07:39 -07:00
feat(vertex_ai_anthropic.py): add claude 3 on vertex ai support - working .completions call
.completions() call works
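A sketch of what the Claude-3-on-Vertex-AI support above looks like from the caller's side, assuming litellm's "vertex_ai/" route; the project id, region, and model id are assumptions for illustration.

```python
import litellm
from litellm import completion

# Sketch: Claude 3 served from Vertex AI, called through litellm.
litellm.vertex_project = "my-gcp-project"   # hypothetical GCP project id
litellm.vertex_location = "us-central1"     # hypothetical region

response = completion(
    model="vertex_ai/claude-3-sonnet@20240229",  # assumed Vertex model id
    messages=[{"role": "user", "content": "Hello from Vertex AI"}],
)
print(response.choices[0].message.content)
```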
RaGe | d7f4d38414 | 2024-04-02 20:02:46 -04:00
(fix) add vertex_language_models to model_list

RaGe | a0ad1267d5 | 2024-04-02 20:02:28 -04:00
(fix) restore missing comma

Krrish Dholakia | aa5f88ec45 | 2024-04-01 19:13:23 -07:00
fix(proxy_server.py): allow user to set in-memory + redis ttl
addresses - https://github.com/BerriAI/litellm/issues/2700
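A loose sketch of the caching knobs this fix is about, using litellm's client-side Cache object; the commit itself targets the proxy's cache settings, and the ttl parameter names below are assumptions, so check the caching docs for the exact keys.

```python
import os
import litellm
from litellm.caching import Cache

# Sketch: redis + in-memory response caching with separate TTLs.
litellm.cache = Cache(
    type="redis",
    host=os.environ.get("REDIS_HOST", "localhost"),
    port=os.environ.get("REDIS_PORT", "6379"),
    password=os.environ.get("REDIS_PASSWORD", ""),
    default_in_memory_ttl=60,    # assumed parameter name: in-memory TTL in seconds
    default_in_redis_ttl=600,    # assumed parameter name: redis TTL in seconds
)
```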
Krrish Dholakia | 49e2624240 | 2024-03-29 17:14:40 -07:00
fix(proxy_server.py): enforce end user budgets with 'litellm.max_end_user_budget' param
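A minimal sketch of the setting named in this commit; the budget is enforced by the proxy, the dollar amount and user id here are placeholders, and the model is just an example.

```python
import litellm
from litellm import completion

# Sketch: per-end-user budget enforcement (proxy-side) via the named param.
litellm.max_end_user_budget = 0.10  # max spend (USD) per end user

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    user="end-user-123",  # the end user the budget is tracked against
)
```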
Krrish Dholakia | f3a56c5af2 | 2024-03-26 17:47:20 -07:00
fix(llm_guard.py): working llm-guard 'key-specific' mode

Krrish Dholakia | 7bc76ddbc3 | 2024-03-26 17:21:51 -07:00
feat(llm_guard.py): enable key-specific llm guard check

Krrish Dholakia | c667e437b9 | 2024-03-25 14:28:46 -07:00
fix(utils.py): allow user to disable streaming logging
fixes event loop issue for litellm.disable_streaming_logging
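A sketch of the flag this fix is for, as named in the commit message; model and prompt are placeholders.

```python
import litellm
from litellm import completion

# Sketch: turn off logging callbacks for streamed responses.
litellm.disable_streaming_logging = True

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "stream me a haiku"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")
```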
Krrish Dholakia | bad2327b88 | 2024-03-16 16:47:50 -07:00
fix(utils.py): fix aws secret manager + support key_management_settings
fixes the aws secret manager implementation and allows the user to set which keys they want to check through it

Krrish Dholakia | 0783a3f247 | 2024-03-15 09:09:59 -07:00
feat(utils.py): add native fireworks ai support
addresses - https://github.com/BerriAI/litellm/issues/777, https://github.com/BerriAI/litellm/issues/2486
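A sketch of the native Fireworks AI route added above; the model path and API key value are placeholders, and the env var name is an assumption to verify against the provider docs.

```python
import os
from litellm import completion

# Sketch: Fireworks AI via litellm's "fireworks_ai/" route.
os.environ["FIREWORKS_AI_API_KEY"] = "my-fireworks-key"  # placeholder credential

response = completion(
    model="fireworks_ai/accounts/fireworks/models/llama-v2-7b-chat",  # assumed model path
    messages=[{"role": "user", "content": "Hello Fireworks"}],
)
print(response.choices[0].message.content)
```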
Krrish Dholakia | 1e1190745f | 2024-03-14 12:17:56 -07:00
fix(utils.py): move to using litellm.modify_params to enable max output token trimming fix

Ishaan Jaff | 15591d0978 | 2024-03-12 11:11:56 -07:00
Merge pull request #2474 from BerriAI/litellm_support_command_r
[New-Model] Cohere/command-r

ishaan-jaff | 5e5f9f1a3e | 2024-03-12 10:53:26 -07:00
(feat) use model json to get cohere_models

ishaan-jaff | 98864b7ed9 | 2024-03-12 10:26:58 -07:00
(feat) v0 support command-r
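A minimal sketch of the Cohere command-r support introduced in the commits above; the API key value and prompt are placeholders.

```python
import os
from litellm import completion

# Sketch: Cohere command-r through litellm.
os.environ["COHERE_API_KEY"] = "my-cohere-key"  # placeholder credential

response = completion(
    model="command-r",
    messages=[{"role": "user", "content": "Give me a one-line summary of RAG."}],
)
print(response.choices[0].message.content)
```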
ishaan-jaff | c5ebbd1868 | 2024-03-12 09:30:15 -07:00
(feat) support azure/gpt-instruct models

Krrish Dholakia | 744fe7232d | 2024-03-08 23:06:40 -08:00
fix(utils.py): add support for anthropic params in get_supported_openai_params

Krrish Dholakia | 69ca9cf0fa | 2024-03-08 08:01:10 -08:00
fix(utils.py): return function name for ollama_chat function calls

Krish Dholakia | d53257b7f7 | 2024-03-05 07:10:45 -08:00
Merge branch 'main' into litellm_claude_3_bedrock_access

Krrish Dholakia | b042a28b91 | 2024-03-04 21:22:09 -08:00
fix(init.py): expose 'get_model_params' function

Krrish Dholakia | 6c486c2461 | 2024-03-04 17:15:47 -08:00
fix(bedrock.py): add claude 3 support
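A sketch of the Bedrock Claude 3 support added above, assuming litellm's "bedrock/" route; the credentials, region env var, and model id are placeholders/assumptions to confirm against your AWS setup.

```python
import os
from litellm import completion

# Sketch: Claude 3 Sonnet on Amazon Bedrock via litellm.
os.environ["AWS_ACCESS_KEY_ID"] = "..."        # placeholder credential
os.environ["AWS_SECRET_ACCESS_KEY"] = "..."    # placeholder credential
os.environ["AWS_REGION_NAME"] = "us-east-1"    # assumed region env var

response = completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",  # assumed Bedrock model id
    messages=[{"role": "user", "content": "Hello Claude on Bedrock"}],
)
print(response.choices[0].message.content)
```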
ishaan-jaff | 91ce6d25b1 | 2024-03-04 11:13:14 -08:00
(feat) maintain support to Anthropic text completion

Ishaan Jaff | fc34999168 | 2024-03-04 08:42:47 -08:00
Merge pull request #2290 from ti3x/bedrock_mistral
Add support for Bedrock Mistral models

ishaan-jaff | ad496aac10 | 2024-03-02 13:05:00 -08:00
(docs) setting soft budgets

Tim Xia | 78a93e40ed | 2024-03-01 23:14:00 -05:00
Add AmazonMistralConfig
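A sketch of calling a Bedrock-hosted Mistral model after the AmazonMistralConfig work above, using the same "bedrock/" route; the model id is an assumption, and AWS credentials are read from the environment as in the Claude example earlier.

```python
from litellm import completion

# Sketch: Mistral 7B Instruct on Amazon Bedrock via litellm.
response = completion(
    model="bedrock/mistral.mistral-7b-instruct-v0:2",  # assumed Bedrock model id
    messages=[{"role": "user", "content": "Bonjour, Bedrock"}],
)
print(response.choices[0].message.content)
```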
ishaan-jaff | 86d1aabf22 | 2024-02-28 17:36:15 -08:00
(test) supports_function_calling
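A small sketch of the helper exercised by the test above; the model names are examples and the expected return values are assumptions about their capabilities.

```python
import litellm

# Sketch: query whether a model supports OpenAI-style function calling.
print(litellm.supports_function_calling(model="gpt-3.5-turbo"))  # expected True
print(litellm.supports_function_calling(model="ollama/llama2"))  # expected False
```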
Krrish Dholakia | 7fac7c47c6 | 2024-02-23 16:39:50 -08:00
fix(proxy_server.py): enable default new user params

ishaan-jaff | c6bfcabccd | 2024-02-23 10:40:46 -08:00
(feat) add groq ai
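A sketch of the Groq support added above, assuming litellm's "groq/" route; the API key value and model name are placeholders from around this period.

```python
import os
from litellm import completion

# Sketch: Groq via litellm.
os.environ["GROQ_API_KEY"] = "my-groq-key"  # placeholder credential

response = completion(
    model="groq/llama2-70b-4096",  # assumed model name
    messages=[{"role": "user", "content": "Why is inference fast on Groq?"}],
)
print(response.choices[0].message.content)
```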
Krrish Dholakia | d00773c2b1 | 2024-02-22 18:30:42 -08:00
feat(proxy_server.py): enable admin to set banned keywords on proxy

Krrish Dholakia | d1dd8854c2 | 2024-02-22 17:51:31 -08:00
feat(proxy_server.py): add support for blocked user lists (enterprise-only)

Krrish Dholakia | aa93b02562 | 2024-02-20 15:19:31 -08:00
fix(presidio_pii_masking.py): enable user to pass their own ad hoc recognizers to presidio

Krish Dholakia | f485e778cb | 2024-02-17 22:10:26 -08:00
Merge branch 'main' into litellm_google_text_moderation

Krrish Dholakia | ea2632d9f3 | 2024-02-17 18:36:29 -08:00
feat(google_text_moderation.py): allow user to use google text moderation for content mod on proxy

Krrish Dholakia | f52b3c5f84 | 2024-02-17 17:42:47 -08:00
feat(llama_guard.py): allow user to define custom unsafe content categories

Krrish Dholakia | 67cd9b1c63 | 2024-02-16 18:45:25 -08:00
feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint

ishaan-jaff | 4234b9fd13 | 2024-02-15 13:50:01 -08:00
(feat) support headers for generic API logger

Krrish Dholakia | 5d21da021f | 2024-02-14 11:42:13 -08:00
fix(vertex_ai.py): map finish reason

Krrish Dholakia | 9936427669 | 2024-02-13 21:36:57 -08:00
feat(presidio_pii_masking.py): enable output parsing for pii masking

ishaan-jaff | ef20536aa0 | 2024-02-06 15:13:59 -08:00
(Feat) support max_user_budget
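A minimal sketch of the setting named in the max_user_budget commit above; enforcement happens on the proxy, the amount is a placeholder, and the attribute name mirrors the commit title, so treat it as an assumption.

```python
import litellm

# Sketch: default spend limit (USD) applied to internal proxy users.
litellm.max_user_budget = 100.0  # assumed attribute, per the commit title
```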
ishaan-jaff | 0ca4f962d9 | 2024-02-05 22:38:47 -08:00
(feat) upperbound_key_generate_params

Krrish Dholakia | 85a3515d83 | 2024-02-03 19:22:48 -08:00
fix(utils.py): route together ai calls to openai client
together ai is now openai-compatible
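A sketch of a Together AI call after the rerouting fix above; the call shape is unchanged for users, the "together_ai/" prefix is litellm's route, and the API key value and model path are placeholders.

```python
import os
from litellm import completion

# Sketch: Together AI via litellm (now served through the OpenAI-compatible client).
os.environ["TOGETHERAI_API_KEY"] = "my-together-key"  # placeholder credential

response = completion(
    model="together_ai/mistralai/Mixtral-8x7B-Instruct-v0.1",  # assumed model path
    messages=[{"role": "user", "content": "Hello Together AI"}],
)
print(response.choices[0].message.content)
```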
Krish Dholakia | dbaad8ae56 | 2024-02-03 12:59:00 -08:00
Merge pull request #1799 from BerriAI/litellm_bedrock_stable_diffusion_support
feat(bedrock.py): add stable diffusion image generation support

Krrish Dholakia | 5994c1e7ef | 2024-02-03 12:08:38 -08:00
feat(bedrock.py): add stable diffusion image generation support
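A sketch of the Bedrock Stable Diffusion support above using litellm.image_generation; the Bedrock model id is an assumption, and AWS credentials/region come from the environment.

```python
from litellm import image_generation

# Sketch: Stable Diffusion on Amazon Bedrock via litellm.
response = image_generation(
    prompt="a watercolor painting of a lighthouse at dusk",
    model="bedrock/stability.stable-diffusion-xl-v1",  # assumed Bedrock model id
)
print(response.data)  # generated image payload(s)
```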
Krrish Dholakia | 3f23b18dad | 2024-02-03 09:48:29 -08:00
feat(vertex_ai.py): vertex ai gecko text embedding support
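A sketch of the Vertex AI gecko embedding support above; project and region values are placeholders, and the model id is an assumption based on Vertex's textembedding-gecko naming.

```python
import litellm
from litellm import embedding

# Sketch: Vertex AI text embeddings via litellm.
litellm.vertex_project = "my-gcp-project"   # hypothetical GCP project id
litellm.vertex_location = "us-central1"     # hypothetical region

response = embedding(
    model="vertex_ai/textembedding-gecko",  # assumed model id
    input=["embed this sentence"],
)
print(len(response.data))  # one embedding per input string
```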
Krish Dholakia | f01dce02d4 | 2024-02-01 21:40:22 -08:00
Merge branch 'main' into litellm_team_id_support

ishaan-jaff | d884fd50a3 | 2024-02-01 21:11:05 -08:00
(fix) bug with LITELLM_LOCAL_MODEL_COST_MAP

Krrish Dholakia | 99678147df | 2024-02-01 21:08:24 -08:00
feat(utils.py): support dynamic langfuse params and team settings on proxy