Author | Commit | Message | Date
Ishaan Jaff | 4038b3dcea | router - use verbose logger when using litellm.Router | 2024-07-20 17:36:25 -07:00
Ishaan Jaff | 08adda7091 | control using enable_tag_filtering | 2024-07-18 19:39:04 -07:00
Ishaan Jaff | 4d0fbfea83 | router - refactor to tag based routing | 2024-07-18 19:22:09 -07:00
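The tag-based routing refactor above (4d0fbfea83, gated by `enable_tag_filtering` in 08adda7091) narrows the candidate deployments for a request to those carrying the request's tags. A minimal self-contained sketch of the idea, using hypothetical names rather than litellm's actual internals:

```python
# Sketch of tag-based routing: keep only deployments whose tags cover
# every tag on the incoming request. Function and field names here are
# illustrative assumptions, not litellm's real implementation.

def filter_deployments_by_tags(deployments, request_tags,
                               enable_tag_filtering=True):
    """Return deployments whose tags include all requested tags."""
    if not enable_tag_filtering or not request_tags:
        return deployments  # filtering off, or untagged request: no-op
    return [
        d for d in deployments
        if set(request_tags).issubset(d.get("tags", []))
    ]

deployments = [
    {"model_name": "gpt-4", "tags": ["paid"]},
    {"model_name": "gpt-3.5-turbo", "tags": ["free"]},
]
print(filter_deployments_by_tags(deployments, ["free"]))
# → [{'model_name': 'gpt-3.5-turbo', 'tags': ['free']}]
```

When filtering is disabled or the request carries no tags, every deployment stays eligible, which matches the opt-in flag the second commit adds.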
Ishaan Jaff | 4b96cd46b2 | Merge pull request #4786 from BerriAI/litellm_use_model_tier_keys ([Feat-Enterprise] Use free/paid tiers for Virtual Keys) | 2024-07-18 18:07:09 -07:00
Krrish Dholakia | b23a633cf1 | fix(utils.py): fix status code in exception mapping | 2024-07-18 18:04:59 -07:00
Ishaan Jaff | 64e38562d9 | router - use free paid tier routing | 2024-07-18 17:09:42 -07:00
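The free/paid tier commits above (PR #4786 and 64e38562d9) route a virtual key's requests to deployments matching that key's tier. One way the selection could look, as a hedged sketch (the `tier` metadata field and helper name are assumptions, not litellm's API):

```python
# Sketch of free/paid tier routing: prefer deployments whose tier
# matches the calling key's tier; fall back to untiered deployments.
# The "tier" field and function name are hypothetical.

def deployments_for_tier(deployments, key_tier):
    """Pick candidate deployments for a key's tier ('free' or 'paid')."""
    matching = [d for d in deployments if d.get("tier") == key_tier]
    if matching:
        return matching
    # no tier-specific deployment configured: fall back to untiered ones
    return [d for d in deployments if "tier" not in d]

deployments = [
    {"model_name": "gpt-4", "tier": "paid"},
    {"model_name": "gpt-3.5-turbo", "tier": "free"},
    {"model_name": "fallback-model"},
]
print([d["model_name"] for d in deployments_for_tier(deployments, "paid")])
# → ['gpt-4']
```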
Krrish Dholakia | 0a94953896 | fix(router.py): check for request_timeout in acompletion (support 'request_timeout' param in router acompletion) | 2024-07-17 17:19:06 -07:00
Ishaan Jaff | e65daef572 | router return get_deployment_by_model_group_name | 2024-07-15 19:27:12 -07:00
Krish Dholakia | dacce3d78b | Merge pull request #4635 from BerriAI/litellm_anthropic_adapter (Anthropic `/v1/messages` endpoint support) | 2024-07-10 22:41:53 -07:00
Krrish Dholakia | 31829855c0 | feat(proxy_server.py): working /v1/messages with config.yaml (Adds async router support for adapter_completion call) | 2024-07-10 18:53:54 -07:00
Ishaan Jaff | 62f475919b | feat - add DELETE assistants endpoint | 2024-07-10 11:37:37 -07:00
Ishaan Jaff | f5eb862635 | router - add acreate_assistants | 2024-07-09 09:46:28 -07:00
Krish Dholakia | 8661da1980 | Merge branch 'main' into litellm_fix_httpx_transport | 2024-07-06 19:12:06 -07:00
Ishaan Jaff | 2609de43d0 | use helper for init client + check if we should init sync clients | 2024-07-06 12:52:41 -07:00
Krrish Dholakia | 86632f6da0 | fix(types/router.py): add custom pricing info to 'model_info' (Fixes https://github.com/BerriAI/litellm/issues/4542) | 2024-07-04 16:07:58 -07:00
Krrish Dholakia | 3d61a316cb | fix(router.py): bump azure default api version (Allows 'tool_choice' to be passed to azure) | 2024-07-03 12:00:00 -07:00
Krrish Dholakia | 892ba62730 | fix(router.py): fix mounting logic | 2024-07-02 17:54:32 -07:00
Krish Dholakia | 21d3a28e51 | Merge branch 'main' into litellm_support_dynamic_rpm_limiting | 2024-07-02 17:51:18 -07:00
Krrish Dholakia | 0647278a69 | refactor: remove custom transport logic (Not needed after azure dall-e-2 refactor) | 2024-07-02 17:35:27 -07:00
Krish Dholakia | d38f01e956 | Merge branch 'main' into litellm_fix_httpx_transport | 2024-07-02 17:17:43 -07:00
Krrish Dholakia | f23b17091d | fix(dynamic_rate_limiter.py): support dynamic rate limiting on rpm | 2024-07-01 17:45:10 -07:00
Krrish Dholakia | ea74e01813 | fix(router.py): disable cooldowns (allow admin to disable model cooldowns) | 2024-07-01 15:03:10 -07:00
Krrish Dholakia | c9a424d28d | fix(router.py): fix get_router_model_info for azure models | 2024-06-28 22:13:29 -07:00
Ishaan Jaff | d172a3ef6b | fix python3.8 install | 2024-06-28 16:58:57 -07:00
Krrish Dholakia | aa6f7665c4 | fix(router.py): only return 'max_tokens', 'input_cost_per_token', etc. in 'get_router_model_info' if base_model is set | 2024-06-28 10:45:31 -07:00
Krrish Dholakia | 98daedaf60 | fix(router.py): fix setting httpx mounts | 2024-06-26 17:22:04 -07:00
Krrish Dholakia | d98e00d1e0 | fix(router.py): set cooldown_time per model | 2024-06-25 16:51:55 -07:00
Krrish Dholakia | cccc55213b | fix(router.py): improve error message returned for fallbacks | 2024-06-25 11:27:20 -07:00
Krrish Dholakia | 0396d484fb | feat(router.py): support mock testing content policy + context window fallbacks | 2024-06-25 10:58:19 -07:00
Krrish Dholakia | a4bea47a2d | fix(router.py): log rejected router requests to langfuse (Fixes issue where rejected requests weren't being logged) | 2024-06-24 17:52:01 -07:00
Krrish Dholakia | f5fbdf0fee | fix(router.py): use user-defined model_input_tokens for pre-call filter checks | 2024-06-24 17:25:26 -07:00
Krish Dholakia | 0454c0781a | Merge branch 'main' into litellm_azure_content_filter_fallbacks | 2024-06-22 21:28:29 -07:00
Krish Dholakia | 961e7ac95d | Merge branch 'main' into litellm_dynamic_tpm_limits | 2024-06-22 19:14:59 -07:00
Krrish Dholakia | 2c7a80d08d | fix(router.py): check if azure returns 'content_filter' response + fallback available -> fallback (Exception maps azure content filter response exceptions) | 2024-06-22 19:10:15 -07:00
Krrish Dholakia | 068e8dff5b | feat(dynamic_rate_limiter.py): passing base case | 2024-06-21 22:46:46 -07:00
Steven Osborn | 0ab6a18516 | Print content window fallbacks on startup to help verify configuration | 2024-06-21 19:43:26 -07:00
Krrish Dholakia | 2545da777b | feat(dynamic_rate_limiter.py): initial commit for dynamic rate limiting (Closes https://github.com/BerriAI/litellm/issues/4124) | 2024-06-21 18:41:31 -07:00
Krrish Dholakia | 06b297a6e8 | fix(router.py): fix set_client init to check if custom_llm_provider is azure, not if in model name (fixes issue where 'azure_ai/' was being init as azureopenai client) | 2024-06-21 17:09:20 -07:00
Krish Dholakia | f86290584a | Merge pull request #4290 from BerriAI/litellm_specific_deployment (feat(router.py): allow user to call specific deployment via id) | 2024-06-20 20:36:13 -07:00
Krrish Dholakia | 5729eb5168 | fix(user_api_key_auth.py): ensure user has access to fallback models (for client side fallbacks, checks if user has access to fallback models) | 2024-06-20 16:02:19 -07:00
Ishaan Jaff | cdc1e952ac | router - add doc string | 2024-06-20 14:36:51 -07:00
Ishaan Jaff | b6066d1ece | feat - set custom routing strategy | 2024-06-20 13:49:44 -07:00
Krrish Dholakia | 16889b8478 | feat(router.py): allow user to call specific deployment via id (Allows easier health checks for specific deployments by just passing in model id) | 2024-06-19 13:02:46 -07:00
Krrish Dholakia | 14b66c3daa | fix(router.py): support multiple orgs in 1 model definition (Closes https://github.com/BerriAI/litellm/issues/3949) | 2024-06-18 19:36:58 -07:00
Krrish Dholakia | 3d9ef689e7 | fix(vertex_httpx.py): check if model supports system messages before sending separately | 2024-06-17 17:32:38 -07:00
Krish Dholakia | 28a52fe5fb | Merge pull request #4207 from BerriAI/litellm_content_policy_fallbacks (feat(router.py): support content policy fallbacks) | 2024-06-14 18:55:11 -07:00
Krrish Dholakia | 6f715b4782 | feat(router.py): support content policy fallbacks (Closes https://github.com/BerriAI/litellm/issues/2632) | 2024-06-14 17:15:44 -07:00
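The content-policy-fallback work above (PR #4207, 6f715b4782) retries a request on a configured fallback model when the primary model raises a content-policy error. A self-contained sketch of that control flow, with an illustrative exception class and call signature rather than litellm's own:

```python
# Sketch of content-policy fallbacks: on a content-policy rejection,
# walk the configured fallback chain; re-raise if every model rejects.
# The exception class and call_fn signature are illustrative only.

class ContentPolicyViolationError(Exception):
    pass

def completion_with_content_policy_fallbacks(call_fn, model, fallbacks):
    """Try `model`, then each entry in fallbacks[model], in order."""
    last_err = None
    for m in [model] + fallbacks.get(model, []):
        try:
            return call_fn(m)
        except ContentPolicyViolationError as err:
            last_err = err  # rejected: move on to the next fallback
    raise last_err

def fake_call(model):
    # stand-in for a real completion call; rejects the primary model
    if model == "azure/gpt-4":
        raise ContentPolicyViolationError("content_filter")
    return f"ok from {model}"

result = completion_with_content_policy_fallbacks(
    fake_call, "azure/gpt-4", {"azure/gpt-4": ["claude-3-haiku"]}
)
print(result)  # → ok from claude-3-haiku
```

The related 2c7a80d08d commit handles the azure-specific side: mapping an azure 'content_filter' finish reason into an exception so this fallback path can trigger.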
Ishaan Jaff | bd5d1be1f6 | feat - send llm exception alert on acompletion, aembedding etc | 2024-06-14 10:11:24 -07:00
Ishaan Jaff | a0ecc6f414 | fix - send alert on router level exceptions | 2024-06-14 08:41:12 -07:00
Ishaan Jaff | 490f5265ac | fix model hub not loading | 2024-06-12 19:38:31 -07:00