| Author | Commit | Message | Date |
|---|---|---|---|
| Ishaan Jaff | 2501b4eccd | feat link to model cost map on swagger | 2024-07-26 21:34:42 -07:00 |
| Ishaan Jaff | 548adea8cf | add litellm_header_name endpoint | 2024-07-26 21:04:31 -07:00 |
| Ishaan Jaff | a7f964b869 | Merge pull request #4913 from BerriAI/litellm_fix_error_limit: [Proxy-Fix] - raise more descriptive errors when crossing tpm / rpm limits on keys, user, global limits | 2024-07-26 20:25:28 -07:00 |
| Ishaan Jaff | 3c463ccbe6 | Merge pull request #4914 from BerriAI/litellm_fix_batches: [Proxy-Fix + Test] - /batches endpoint | 2024-07-26 20:12:03 -07:00 |
| Krrish Dholakia | fe0b55f2ca | fix(utils.py): fix cache hits for streaming (Fixes https://github.com/BerriAI/litellm/issues/4109) | 2024-07-26 19:04:08 -07:00 |
| Ishaan Jaff | f627fa9b40 | fix for GET /v1/batches{batch_id:path} | 2024-07-26 18:23:15 -07:00 |
| Ishaan Jaff | 56ce7e892d | fix batches inserting metadata | 2024-07-26 18:08:54 -07:00 |
| Ishaan Jaff | 159a880dcc | fix /v1/batches POST | 2024-07-26 18:06:00 -07:00 |
| Ishaan Jaff | c4e4b4675c | fix raise better error when crossing tpm / rpm limits | 2024-07-26 17:35:08 -07:00 |
| Krrish Dholakia | 9943c6d607 | fix(proxy_server.py): fix get secret for environment_variables | 2024-07-26 13:33:02 -07:00 |
| Krrish Dholakia | 84482703b8 | docs(config.md): update wildcard docs | 2024-07-26 08:59:53 -07:00 |
| Krrish Dholakia | 1d6c39a607 | feat(proxy_server.py): handle pydantic mockselvar error (Fixes https://github.com/BerriAI/litellm/issues/4898#issuecomment-2252105485) | 2024-07-26 08:38:51 -07:00 |
| Krrish Dholakia | 2f773d9cb6 | fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking | 2024-07-25 22:12:07 -07:00 |
| Ishaan Jaff | 079a41fbe1 | Merge branch 'main' into litellm_proxy_support_all_providers | 2024-07-25 20:15:37 -07:00 |
| Ishaan Jaff | 68e94f0976 | example mistral sdk | 2024-07-25 19:48:54 -07:00 |
| Ishaan Jaff | 693bcfac39 | fix using pass_through_all_models | 2024-07-25 19:32:49 -07:00 |
| Krish Dholakia | c2086300b7 | Merge branch 'main' into litellm_redis_team_object | 2024-07-25 19:31:52 -07:00 |
| Krish Dholakia | a306b83b2d | Merge pull request #4887 from BerriAI/litellm_custom_llm: feat(custom_llm.py): Support Custom LLM Handlers | 2024-07-25 19:05:29 -07:00 |
| Ishaan Jaff | 9863520376 | support using `*/*` | 2024-07-25 18:48:56 -07:00 |
| Ishaan Jaff | 8f4c5437b8 | router support setting pass_through_all_models | 2024-07-25 18:34:12 -07:00 |
| Krrish Dholakia | bd7af04a72 | feat(proxy_server.py): support custom llm handler on proxy | 2024-07-25 17:56:34 -07:00 |
| Ishaan Jaff | 2432c90515 | feat - support health check audio_speech | 2024-07-25 17:26:14 -07:00 |
| Krrish Dholakia | bfdda089c8 | fix(proxy_server.py): check if input list > 0 before indexing into it (resolves 'list index out of range' error) | 2024-07-25 14:23:07 -07:00 |
| Krrish Dholakia | 4e51f712f3 | fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions (Fixes https://github.com/BerriAI/litellm/issues/749) | 2024-07-25 09:57:19 -07:00 |
| Krrish Dholakia | b376ee71b0 | fix(internal_user_endpoints.py): support updating budgets for /user/update | 2024-07-24 21:51:46 -07:00 |
| Krrish Dholakia | 6ab2527fdc | feat(auth_check.py): support using redis cache for team objects (allows team update / check logic to work across instances instantly) | 2024-07-24 18:14:49 -07:00 |
| Krrish Dholakia | b5c5ed2209 | fix(key_management_endpoints.py): if budget duration set, set budget_reset_at | 2024-07-24 15:02:22 -07:00 |
| Krrish Dholakia | 99d8b0ad68 | fix(bedrock_httpx.py): fix async client check | 2024-07-24 12:43:52 -07:00 |
| Marc Abramowitz | 642f1a7bcc | Check existence of multiple views in 1 query instead of multiple queries (more efficient to check for all views in a single query) | 2024-07-23 21:55:02 -07:00 |
| Krrish Dholakia | fb0a13c8bb | fix(anthropic.py): support openai system message being a list | 2024-07-23 21:45:56 -07:00 |
| Ishaan Jaff | 92708a2737 | Merge pull request #4841 from BerriAI/litellm_anthropic_cost_tracking: [Feat] - /v1/messages support usage tracking on spendLogs | 2024-07-23 16:37:42 -07:00 |
| Ishaan Jaff | 78eb5164df | fix DB accept null values for api_base, user, etc | 2024-07-23 16:33:04 -07:00 |
| Ishaan Jaff | a71b60d005 | Pass litellm proxy specific metadata | 2024-07-23 15:31:30 -07:00 |
| Ishaan Jaff | 4c1ee1e282 | fix add better debugging _PROXY_track_cost_callback | 2024-07-23 15:25:46 -07:00 |
| Krrish Dholakia | f64a3309d1 | fix(utils.py): support raw response headers for streaming requests | 2024-07-23 11:58:58 -07:00 |
| Krrish Dholakia | dcb974dd1e | feat(utils.py): support passing openai response headers to client, if enabled (allows openai/openai-compatible provider response headers to be sent to client, if 'return_response_headers' is enabled) | 2024-07-23 11:30:52 -07:00 |
| Ishaan Jaff | b137207ae6 | doc alert_to_webhook_url | 2024-07-23 11:04:15 -07:00 |
| Ishaan Jaff | 0ec2e9aa27 | feat alert_to_webhook_url | 2024-07-23 10:09:24 -07:00 |
| Ishaan Jaff | d116ff280e | feat - set alert_to_webhook_url | 2024-07-23 10:08:21 -07:00 |
| Ishaan Jaff | c1593c0cd1 | update alert_to_webhook_url | 2024-07-23 10:07:08 -07:00 |
| Ishaan Jaff | 69091f31df | feat - add success_callback per request | 2024-07-23 08:43:01 -07:00 |
| Ishaan Jaff | 24ae0119d1 | add debug logging for team callback settings | 2024-07-23 08:41:05 -07:00 |
| Ishaan Jaff | 1a33c40713 | add endpoint to disable logging for a team | 2024-07-23 08:38:23 -07:00 |
| Krish Dholakia | e770ac48a2 | Merge branch 'main' into litellm_braintrust_integration | 2024-07-22 22:40:39 -07:00 |
| Krish Dholakia | c4db6aa15e | Merge pull request #4810 from BerriAI/litellm_team_modify_guardrails: feat(auth_checks.py): Allow admin to disable team from turning on/off guardrails | 2024-07-22 22:32:24 -07:00 |
| Krrish Dholakia | 1a83935aa4 | fix(proxy/utils.py): add stronger typing for litellm params in failure call logging | 2024-07-22 21:31:39 -07:00 |
| Ishaan Jaff | 290628d66b | Merge pull request #4831 from BerriAI/litellm_api_control_team_logging: [Feat] - API Endpoints to control logging callbacks per Team | 2024-07-22 21:22:14 -07:00 |
| Krrish Dholakia | 548e4f53f8 | feat(redact_messages.py): allow removing sensitive key information before passing to logging integration | 2024-07-22 20:58:02 -07:00 |
| Ishaan Jaff | 2e5da5ea18 | GET endpoint to get team callbacks | 2024-07-22 20:46:20 -07:00 |
| Ishaan Jaff | 447bab4d46 | only allow unique callbacks for team callbacks | 2024-07-22 20:43:42 -07:00 |