| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Krrish Dholakia | aa93b02562 | fix(presidio_pii_masking.py): enable user to pass their own ad hoc recognizers to presidio | 2024-02-20 15:19:31 -08:00 |
| Krish Dholakia | f485e778cb | Merge branch 'main' into litellm_google_text_moderation | 2024-02-17 22:10:26 -08:00 |
| Krrish Dholakia | ea2632d9f3 | feat(google_text_moderation.py): allow user to use google text moderation for content mod on proxy | 2024-02-17 18:36:29 -08:00 |
| Krrish Dholakia | f52b3c5f84 | feat(llama_guard.py): allow user to define custom unsafe content categories | 2024-02-17 17:42:47 -08:00 |
| Krrish Dholakia | 67cd9b1c63 | feat(llama_guard.py): add llama guard support for content moderation + new async_moderation_hook endpoint | 2024-02-16 18:45:25 -08:00 |
| ishaan-jaff | 4234b9fd13 | (feat) support headers for generic API logger | 2024-02-15 13:50:01 -08:00 |
| Krrish Dholakia | 5d21da021f | fix(vertex_ai.py): map finish reason | 2024-02-14 11:42:13 -08:00 |
| Krrish Dholakia | 9936427669 | feat(presidio_pii_masking.py): enable output parsing for pii masking | 2024-02-13 21:36:57 -08:00 |
| ishaan-jaff | ef20536aa0 | (Feat) support max_user_budget | 2024-02-06 15:13:59 -08:00 |
| ishaan-jaff | 0ca4f962d9 | (feat) upperbound_key_generate_params | 2024-02-05 22:38:47 -08:00 |
| Krrish Dholakia | 85a3515d83 | fix(utils.py): route together ai calls to openai client; together ai is now openai-compatible | 2024-02-03 19:22:48 -08:00 |
| Krish Dholakia | dbaad8ae56 | Merge pull request #1799 from BerriAI/litellm_bedrock_stable_diffusion_support; feat(bedrock.py): add stable diffusion image generation support | 2024-02-03 12:59:00 -08:00 |
| Krrish Dholakia | 5994c1e7ef | feat(bedrock.py): add stable diffusion image generation support | 2024-02-03 12:08:38 -08:00 |
| Krrish Dholakia | 3f23b18dad | feat(vertex_ai.py): vertex ai gecko text embedding support | 2024-02-03 09:48:29 -08:00 |
| Krish Dholakia | f01dce02d4 | Merge branch 'main' into litellm_team_id_support | 2024-02-01 21:40:22 -08:00 |
| ishaan-jaff | d884fd50a3 | (fix) bug with LITELLM_LOCAL_MODEL_COST_MAP | 2024-02-01 21:11:05 -08:00 |
| Krrish Dholakia | 99678147df | feat(utils.py): support dynamic langfuse params and team settings on proxy | 2024-02-01 21:08:24 -08:00 |
| ishaan-jaff | 1cd2bcf576 | (fix) import verbose_logger | 2024-02-01 20:25:16 -08:00 |
| Krrish Dholakia | d8c1fcb61a | fix(__init__.py): allow model_cost_map to be loaded locally | 2024-02-01 18:00:30 -08:00 |
| ishaan-jaff | 2702141434 | (feat) proxy set default_key_generate_params | 2024-01-29 14:29:54 -08:00 |
| Krrish Dholakia | f21e003f5b | feat(proxy_server.py): support global budget and resets | 2024-01-24 14:27:13 -08:00 |
| Krrish Dholakia | 3636558e31 | fix(__init__.py): enable logging.debug to true if set verbose is true | 2024-01-23 07:32:30 -08:00 |
| ishaan-jaff | 4a47b17ba2 | (feat) proxy - support s3_callback_params | 2024-01-11 09:57:47 +05:30 |
| Krrish Dholakia | 3ed296e2dd | fix(utils.py): raise correct error for azure content blocked error | 2024-01-10 23:31:51 +05:30 |
| Ishaan Jaff | b09f38e835 | Merge pull request #1381 from BerriAI/litellm_content_policy_violation_exception; [Feat] Add litellm.ContentPolicyViolationError | 2024-01-09 17:18:29 +05:30 |
| ishaan-jaff | 650a6a8640 | (chore) remove deprecated completion_with_config() tests | 2024-01-09 17:13:06 +05:30 |
| ishaan-jaff | 66b23ecbb5 | (v0) add ContentPolicyViolationError | 2024-01-09 16:33:03 +05:30 |
| ishaan-jaff | 9313bda4c8 | (feat) completion_cost - embeddings + raise Exception | 2024-01-05 13:11:23 +05:30 |
| ishaan-jaff | 0e8809abf2 | (feat) add xinference as an embedding provider | 2024-01-02 15:32:26 +05:30 |
| fatih | 783f5378f4 | update azure turbo namings | 2024-01-01 13:03:08 +03:00 |
| ishaan-jaff | 806551ff99 | (fix) use openai token counter for azure llms | 2023-12-29 15:37:46 +05:30 |
| ishaan-jaff | 796e735881 | (feat) v0 adding cloudflare | 2023-12-29 09:32:29 +05:30 |
| ishaan-jaff | 2a147579ec | (feat) add voyage ai embeddings | 2023-12-28 17:10:15 +05:30 |
| Krrish Dholakia | 606de01ac0 | fix(utils.py): allow text completion input to be either model or engine | 2023-12-27 17:24:16 +05:30 |
| Krrish Dholakia | 85549c3d66 | fix(google_kms.py): support enums for key management system | 2023-12-27 13:19:33 +05:30 |
| Krrish Dholakia | 6f695838e5 | feat(utils.py): support google kms for secret management (https://github.com/BerriAI/litellm/issues/1235) | 2023-12-26 15:39:40 +05:30 |
| ishaan-jaff | c3aff30464 | (feat) add ollama_chat as a provider | 2023-12-25 23:04:17 +05:30 |
| Krrish Dholakia | 79978c44ba | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| Krrish Dholakia | 70f4dabff6 | feat(gemini.py): add support for completion calls for gemini-pro (google ai studio) | 2023-12-24 09:42:58 +05:30 |
| Krrish Dholakia | 51cb16a015 | feat(main.py): add support for image generation endpoint | 2023-12-16 21:07:29 -08:00 |
| ishaan-jaff | 3fd00393be | init vertex_vision_models | 2023-12-16 18:37:00 +05:30 |
| ishaan-jaff | a5540bf24d | (feat) proxy logs: dynamodb - set table name | 2023-12-15 21:38:44 +05:30 |
| ishaan-jaff | 0530d16595 | (feat) add openai.NotFoundError | 2023-12-15 10:18:02 +05:30 |
| Krrish Dholakia | bb5b883316 | fix(main.py): support async streaming for text completions endpoint | 2023-12-14 13:56:32 -08:00 |
| ishaan-jaff | 072fdac48c | (feat) caching - add supported call types | 2023-12-14 22:27:14 +05:30 |
| ishaan-jaff | 95454e5176 | (feat) mistral - add exception mapping | 2023-12-14 18:57:39 +05:30 |
| ishaan-jaff | 303d9aa286 | (feat) add mistral api | 2023-12-14 18:17:48 +05:30 |
| Krrish Dholakia | 853508e8c0 | fix(utils.py): support caching for embedding + log cache hits | 2023-12-13 18:37:30 -08:00 |
| Krrish Dholakia | 72b9d4c5e8 | test(test_amazing_vertex_completion.py): fix testing | 2023-12-13 16:41:26 -08:00 |
| Krrish Dholakia | 43b160d70d | feat(vertex_ai.py): adds support for gemini-pro on vertex ai | 2023-12-13 10:26:30 -08:00 |