| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| ishaan-jaff | 047b2f9b1a | (Feat) support max_user_budget | 2024-02-06 15:13:59 -08:00 |
| ishaan-jaff | a712596d46 | (feat) upperbound_key_generate_params | 2024-02-05 22:38:47 -08:00 |
| Krrish Dholakia | c49c88c8e5 | fix(utils.py): route together ai calls to openai client - together ai is now openai-compatible | 2024-02-03 19:22:48 -08:00 |
| Krish Dholakia | 6408af11b6 | Merge pull request #1799 from BerriAI/litellm_bedrock_stable_diffusion_support - feat(bedrock.py): add stable diffusion image generation support | 2024-02-03 12:59:00 -08:00 |
| Krrish Dholakia | 36416360c4 | feat(bedrock.py): add stable diffusion image generation support | 2024-02-03 12:08:38 -08:00 |
| Krrish Dholakia | d9ba8668f4 | feat(vertex_ai.py): vertex ai gecko text embedding support | 2024-02-03 09:48:29 -08:00 |
| Krish Dholakia | 9a3fa243b8 | Merge branch 'main' into litellm_team_id_support | 2024-02-01 21:40:22 -08:00 |
| ishaan-jaff | 87709ef77f | (fix) bug with LITELLM_LOCAL_MODEL_COST_MAP | 2024-02-01 21:11:05 -08:00 |
| Krrish Dholakia | a301d8aa4b | feat(utils.py): support dynamic langfuse params and team settings on proxy | 2024-02-01 21:08:24 -08:00 |
| ishaan-jaff | f8e8c1f900 | (fix) import verbose_logger | 2024-02-01 20:25:16 -08:00 |
| Krrish Dholakia | ec427ae322 | fix(__init__.py): allow model_cost_map to be loaded locally | 2024-02-01 18:00:30 -08:00 |
| ishaan-jaff | 72b9e539c8 | (feat) proxy set default_key_generate_params | 2024-01-29 14:29:54 -08:00 |
| Krrish Dholakia | 159e54d8be | feat(proxy_server.py): support global budget and resets | 2024-01-24 14:27:13 -08:00 |
| Krrish Dholakia | fd4d65adcd | fix(__init__.py): enable logging.debug to true if set verbose is true | 2024-01-23 07:32:30 -08:00 |
| ishaan-jaff | 0b20ab7d2b | (feat) proxy - support s3_callback_params | 2024-01-11 09:57:47 +05:30 |
| Krrish Dholakia | 3080f27b54 | fix(utils.py): raise correct error for azure content blocked error | 2024-01-10 23:31:51 +05:30 |
| Ishaan Jaff | 4cfa010dbd | Merge pull request #1381 from BerriAI/litellm_content_policy_violation_exception - [Feat] Add litellm.ContentPolicyViolationError | 2024-01-09 17:18:29 +05:30 |
| ishaan-jaff | 248e5f3d92 | (chore) remove deprecated completion_with_config() tests | 2024-01-09 17:13:06 +05:30 |
| ishaan-jaff | 09874cc83f | (v0) add ContentPolicyViolationError | 2024-01-09 16:33:03 +05:30 |
| ishaan-jaff | f681f0f2b2 | (feat) completion_cost - embeddings + raise Exception | 2024-01-05 13:11:23 +05:30 |
| ishaan-jaff | 790dcff5e0 | (feat) add xinference as an embedding provider | 2024-01-02 15:32:26 +05:30 |
| fatih | 6566ebd815 | update azure turbo namings | 2024-01-01 13:03:08 +03:00 |
| ishaan-jaff | 037dcbbe10 | (fix) use openai token counter for azure llms | 2023-12-29 15:37:46 +05:30 |
| ishaan-jaff | 367e9913dc | (feat) v0 adding cloudflare | 2023-12-29 09:32:29 +05:30 |
| ishaan-jaff | 95e6d2fbba | (feat) add voyage ai embeddings | 2023-12-28 17:10:15 +05:30 |
| Krrish Dholakia | e516cfe9f5 | fix(utils.py): allow text completion input to be either model or engine | 2023-12-27 17:24:16 +05:30 |
| Krrish Dholakia | 9ba520cc8b | fix(google_kms.py): support enums for key management system | 2023-12-27 13:19:33 +05:30 |
| Krrish Dholakia | 2070a785a4 | feat(utils.py): support google kms for secret management (https://github.com/BerriAI/litellm/issues/1235) | 2023-12-26 15:39:40 +05:30 |
| ishaan-jaff | 3e97a766a6 | (feat) add ollama_chat as a provider | 2023-12-25 23:04:17 +05:30 |
| Krrish Dholakia | 4905929de3 | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| Krrish Dholakia | 1262d89ab3 | feat(gemini.py): add support for completion calls for gemini-pro (google ai studio) | 2023-12-24 09:42:58 +05:30 |
| Krrish Dholakia | 13d088b72e | feat(main.py): add support for image generation endpoint | 2023-12-16 21:07:29 -08:00 |
| ishaan-jaff | 0bf29a14e8 | init vertex_vision_models | 2023-12-16 18:37:00 +05:30 |
| ishaan-jaff | 9bdd6e73bb | (feat) proxy logs: dynamodb - set table name | 2023-12-15 21:38:44 +05:30 |
| ishaan-jaff | 353433e5ce | (feat) add openai.NotFoundError | 2023-12-15 10:18:02 +05:30 |
| Krrish Dholakia | 1608dd7e0b | fix(main.py): support async streaming for text completions endpoint | 2023-12-14 13:56:32 -08:00 |
| ishaan-jaff | 9ee16bc962 | (feat) caching - add supported call types | 2023-12-14 22:27:14 +05:30 |
| ishaan-jaff | c0cc78b943 | (feat) mistral - add exception mapping | 2023-12-14 18:57:39 +05:30 |
| ishaan-jaff | 7945664e61 | (feat) add mistral api | 2023-12-14 18:17:48 +05:30 |
| Krrish Dholakia | 8d688b6217 | fix(utils.py): support caching for embedding + log cache hits | 2023-12-13 18:37:30 -08:00 |
| Krrish Dholakia | 0f29cda8d9 | test(test_amazing_vertex_completion.py): fix testing | 2023-12-13 16:41:26 -08:00 |
| Krrish Dholakia | ef7a6e3ae1 | feat(vertex_ai.py): adds support for gemini-pro on vertex ai | 2023-12-13 10:26:30 -08:00 |
| Krrish Dholakia | 4bf875d3ed | fix(router.py): fix least-busy routing | 2023-12-08 20:29:49 -08:00 |
| ishaan-jaff | ee70c4e822 | (feat) router - add model_group_alias_map | 2023-12-06 20:13:33 -08:00 |
| ishaan-jaff | b3f039627e | (feat) litellm - add _async_failure_callback | 2023-12-06 14:43:47 -08:00 |
| Krrish Dholakia | d962d5d4c0 | fix(bedrock.py): adding support for cohere embeddings | 2023-12-06 13:25:18 -08:00 |
| Frank Colson | 95e5331090 | Use litellm logging convention | 2023-12-05 22:28:23 -07:00 |
| Krrish Dholakia | e0ccb281d8 | feat(utils.py): add async success callbacks for custom functions | 2023-12-04 16:42:40 -08:00 |
| Krrish Dholakia | eae5b3ce50 | fix(__init__.py): fix linting error | 2023-12-01 20:08:08 -08:00 |
| Krrish Dholakia | 328113a28e | fix(proxy_server.py): fix linting errors | 2023-12-01 19:45:09 -08:00 |