ishaan-jaff | 9094be7fbd | 2024-03-04 11:13:14 -08:00
    (feat) maintain support to Anthropic text completion

Ishaan Jaff | 84415ef7b5 | 2024-03-04 08:42:47 -08:00
    Merge pull request #2290 from ti3x/bedrock_mistral
    Add support for Bedrock Mistral models

ishaan-jaff | fd9f8b7010 | 2024-03-02 13:05:00 -08:00
    (docs) setting soft budgets

Tim Xia | b4dc7f0f17 | 2024-03-01 23:14:00 -05:00
    Add AmazonMistralConfig

ishaan-jaff | 56d20cd073 | 2024-02-28 17:36:15 -08:00
    (test) supports_function_calling

Krrish Dholakia | 0806aa8da1 | 2024-02-23 16:39:50 -08:00
    fix(proxy_server.py): enable default new user params

ishaan-jaff | e80fcc2762 | 2024-02-23 10:40:46 -08:00
    (feat) add groq ai

Krrish Dholakia | acae98fd50 | 2024-02-22 18:30:42 -08:00
    feat(proxy_server.py): enable admin to set banned keywords on proxy

Krrish Dholakia | 028f455ad0 | 2024-02-22 17:51:31 -08:00
    feat(proxy_server.py): add support for blocked user lists (enterprise-only)

Krrish Dholakia | 72bcd5a4af | 2024-02-20 15:19:31 -08:00
    fix(presidio_pii_masking.py): enable user to pass their own ad hoc recognizers to presidio

Krish Dholakia | b41cdf598b | 2024-02-17 22:10:26 -08:00
    Merge branch 'main' into litellm_google_text_moderation

Krrish Dholakia | ddf0911c46 | 2024-02-17 18:36:29 -08:00
    feat(google_text_moderation.py): allow user to use google text moderation for content mod on proxy

Krrish Dholakia | 074d93cc97 | 2024-02-17 17:42:47 -08:00
    feat(llama_guard.py): allow user to define custom unsafe content categories

Krrish Dholakia | 2a4a6995ac | 2024-02-16 18:45:25 -08:00
    feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint

ishaan-jaff | 3e90acb750 | 2024-02-15 13:50:01 -08:00
    (feat) support headers for generic API logger

Krrish Dholakia | fe1fe70c64 | 2024-02-14 11:42:13 -08:00
    fix(vertex_ai.py): map finish reason

Krrish Dholakia | f68b656040 | 2024-02-13 21:36:57 -08:00
    feat(presidio_pii_masking.py): enable output parsing for pii masking

ishaan-jaff | 047b2f9b1a | 2024-02-06 15:13:59 -08:00
    (Feat) support max_user_budget

ishaan-jaff | a712596d46 | 2024-02-05 22:38:47 -08:00
    (feat) upperbound_key_generate_params

Krrish Dholakia | c49c88c8e5 | 2024-02-03 19:22:48 -08:00
    fix(utils.py): route together ai calls to openai client
    together ai is now openai-compatible

Krish Dholakia | 6408af11b6 | 2024-02-03 12:59:00 -08:00
    Merge pull request #1799 from BerriAI/litellm_bedrock_stable_diffusion_support
    feat(bedrock.py): add stable diffusion image generation support

Krrish Dholakia | 36416360c4 | 2024-02-03 12:08:38 -08:00
    feat(bedrock.py): add stable diffusion image generation support

Krrish Dholakia | d9ba8668f4 | 2024-02-03 09:48:29 -08:00
    feat(vertex_ai.py): vertex ai gecko text embedding support

Krish Dholakia | 9a3fa243b8 | 2024-02-01 21:40:22 -08:00
    Merge branch 'main' into litellm_team_id_support

ishaan-jaff | 87709ef77f | 2024-02-01 21:11:05 -08:00
    (fix) bug with LITELLM_LOCAL_MODEL_COST_MAP

Krrish Dholakia | a301d8aa4b | 2024-02-01 21:08:24 -08:00
    feat(utils.py): support dynamic langfuse params and team settings on proxy

ishaan-jaff | f8e8c1f900 | 2024-02-01 20:25:16 -08:00
    (fix) import verbose_logger

Krrish Dholakia | ec427ae322 | 2024-02-01 18:00:30 -08:00
    fix(__init__.py): allow model_cost_map to be loaded locally

ishaan-jaff | 72b9e539c8 | 2024-01-29 14:29:54 -08:00
    (feat) proxy set default_key_generate_params

Krrish Dholakia | 159e54d8be | 2024-01-24 14:27:13 -08:00
    feat(proxy_server.py): support global budget and resets

Krrish Dholakia | fd4d65adcd | 2024-01-23 07:32:30 -08:00
    fix(__init__.py): enable logging.debug to true if set verbose is true

ishaan-jaff | 0b20ab7d2b | 2024-01-11 09:57:47 +05:30
    (feat) proxy - support s3_callback_params

Krrish Dholakia | 3080f27b54 | 2024-01-10 23:31:51 +05:30
    fix(utils.py): raise correct error for azure content blocked error

Ishaan Jaff | 4cfa010dbd | 2024-01-09 17:18:29 +05:30
    Merge pull request #1381 from BerriAI/litellm_content_policy_violation_exception
    [Feat] Add litellm.ContentPolicyViolationError

ishaan-jaff | 248e5f3d92 | 2024-01-09 17:13:06 +05:30
    (chore) remove deprecated completion_with_config() tests

ishaan-jaff | 09874cc83f | 2024-01-09 16:33:03 +05:30
    (v0) add ContentPolicyViolationError

ishaan-jaff | f681f0f2b2 | 2024-01-05 13:11:23 +05:30
    (feat) completion_cost - embeddings + raise Exception

ishaan-jaff | 790dcff5e0 | 2024-01-02 15:32:26 +05:30
    (feat) add xinference as an embedding provider

fatih | 6566ebd815 | 2024-01-01 13:03:08 +03:00
    update azure turbo namings

ishaan-jaff | 037dcbbe10 | 2023-12-29 15:37:46 +05:30
    (fix) use openai token counter for azure llms

ishaan-jaff | 367e9913dc | 2023-12-29 09:32:29 +05:30
    (feat) v0 adding cloudflare

ishaan-jaff | 95e6d2fbba | 2023-12-28 17:10:15 +05:30
    (feat) add voyage ai embeddings

Krrish Dholakia | e516cfe9f5 | 2023-12-27 17:24:16 +05:30
    fix(utils.py): allow text completion input to be either model or engine

Krrish Dholakia | 9ba520cc8b | 2023-12-27 13:19:33 +05:30
    fix(google_kms.py): support enums for key management system

Krrish Dholakia | 2070a785a4 | 2023-12-26 15:39:40 +05:30
    feat(utils.py): support google kms for secret management
    https://github.com/BerriAI/litellm/issues/1235

ishaan-jaff | 3e97a766a6 | 2023-12-25 23:04:17 +05:30
    (feat) add ollama_chat as a provider

Krrish Dholakia | 4905929de3 | 2023-12-25 14:11:20 +05:30
    refactor: add black formatting

Krrish Dholakia | 1262d89ab3 | 2023-12-24 09:42:58 +05:30
    feat(gemini.py): add support for completion calls for gemini-pro (google ai studio)

Krrish Dholakia | 13d088b72e | 2023-12-16 21:07:29 -08:00
    feat(main.py): add support for image generation endpoint

ishaan-jaff | 0bf29a14e8 | 2023-12-16 18:37:00 +05:30
    init vertex_vision_models