Krrish Dholakia
97d57b8449
fix(proxy_server.py): respect internal_user_budget_duration for sso user
2024-08-08 17:28:28 -07:00
Krrish Dholakia
4ac0afef2c
fix(anthropic.py): fix translation from /v1/messages format to openai format
2024-08-08 17:19:12 -07:00
Krrish Dholakia
ac601a69fc
feat: set max_internal_budget for user w/ sso
2024-08-08 12:44:07 -07:00
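For context on the two SSO-budget commits above, a minimal sketch of the settings involved, assuming litellm's module-level proxy settings; the exact names, and whether they are set in yaml config or code, may differ:

```python
# Hedged sketch: cap spend for SSO/internal users and give the budget a
# reset window. Setting names follow litellm's proxy docs but are
# assumptions here; the values are placeholders.
import litellm

litellm.max_internal_user_budget = 10.0        # USD cap per internal/SSO user
litellm.internal_user_budget_duration = "30d"  # budget resets every 30 days
```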
Ishaan Jaff
218ba0f470
support provider wildcard routing
2024-08-07 14:20:22 -07:00
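A minimal sketch of what provider wildcard routing enables, assuming the Router accepts a `*`-suffixed model in `model_list`; config keys and credentials are placeholders:

```python
# Hedged sketch: one wildcard deployment routes any "anthropic/<model>"
# request, instead of listing every model individually. Assumes the
# Router supports "provider/*" entries; the api_key is a placeholder.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "anthropic/*",  # matches any anthropic model
            "litellm_params": {"model": "anthropic/*", "api_key": "sk-..."},
        }
    ]
)

response = router.completion(
    model="anthropic/claude-3-haiku-20240307",  # resolved via the wildcard
    messages=[{"role": "user", "content": "hello"}],
)
```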
Ishaan Jaff
7173cef822
clean up unused func
2024-08-06 20:58:51 -07:00
Ishaan Jaff
a1c1ad60d2
fix forward ui requests when base url set
2024-08-06 18:10:34 -07:00
Ishaan Jaff
d08f59ae09
add redirect_ui_middleware
2024-08-06 18:01:23 -07:00
Ishaan Jaff
237c3a0244
Merge pull request #5077 from BerriAI/litellm_fix_ui_emails
...
[Fix] Fix testing emails through Admin UI
2024-08-06 16:41:01 -07:00
Ishaan Jaff
e5e5628fc8
update alerting settings on ui
2024-08-06 16:05:33 -07:00
Ishaan Jaff
7024fcacec
add debug statements on docker
2024-08-06 15:29:49 -07:00
Krrish Dholakia
2abd4d5363
feat(proxy_server.py): allow restricting allowed email domains for the UI
2024-08-06 12:53:03 -07:00
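The gist of such a domain restriction, as a hedged sketch; the proxy's actual setting name and enforcement point may differ:

```python
# Hedged sketch of an allowed-email-domain check like the feature above.
# ALLOWED_EMAIL_DOMAINS is a hypothetical stand-in for the configured value.
ALLOWED_EMAIL_DOMAINS = {"berri.ai"}

def email_domain_allowed(email: str) -> bool:
    """True if the login email's domain is on the allowlist."""
    domain = email.rsplit("@", 1)[-1].lower()
    return domain in ALLOWED_EMAIL_DOMAINS
```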
Ishaan Jaff
8ec184b93f
add debugging utils to print when connecting to prisma
2024-08-05 19:19:28 -07:00
Ishaan Jaff
091449e81b
build ui on custom path
2024-08-05 16:34:37 -07:00
Ishaan Jaff
900e89c0ad
set PROXY_BASE_URL when server root path set
2024-08-05 15:48:44 -07:00
Krish Dholakia
bca71019ad
Merge branch 'main' into litellm_anthropic_api_streaming
2024-08-03 21:16:50 -07:00
Krrish Dholakia
ac6c39c283
feat(anthropic_adapter.py): support streaming requests for /v1/messages
endpoint
...
Fixes https://github.com/BerriAI/litellm/issues/5011
2024-08-03 20:16:19 -07:00
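A hedged sketch of exercising that streaming endpoint; URL, key, and model are placeholders, and the payload shape follows Anthropic's Messages API:

```python
# Hedged sketch: stream server-sent events from the proxy's /v1/messages
# endpoint with "stream": true. URL, key, and model are placeholders.
import json
import httpx

with httpx.stream(
    "POST",
    "http://localhost:4000/v1/messages",
    headers={"x-api-key": "sk-...", "anthropic-version": "2023-06-01"},
    json={
        "model": "claude-3-haiku-20240307",
        "max_tokens": 256,
        "stream": True,
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=60,
) as resp:
    for line in resp.iter_lines():
        if line.startswith("data: "):  # SSE data frames carry JSON events
            print(json.loads(line[len("data: "):]))
```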
Ishaan Jaff
af04bbecca
set native vertex endpoints
2024-08-03 16:52:03 -07:00
Krrish Dholakia
5d96ff6694
fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"
...
Fixes https://github.com/BerriAI/litellm/issues/4912
2024-08-02 17:48:53 -07:00
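The scenario that fix covers, roughly: the model string is a wildcard while the provider is passed explicitly, and the explicit provider must win. An illustrative resolution sketch, not litellm's actual logic:

```python
# Hedged sketch of provider resolution for the case above: an explicit
# custom_llm_provider takes precedence over whatever prefixes the model
# string, so model="azure/*" with custom_llm_provider="azure" resolves
# cleanly. Not litellm's real get_llm_provider implementation.
from typing import Optional

def resolve_provider(model: str, custom_llm_provider: Optional[str] = None) -> str:
    if custom_llm_provider:
        return custom_llm_provider     # explicit provider wins
    if "/" in model:
        return model.split("/", 1)[0]  # else infer from "provider/model"
    raise ValueError(f"cannot infer provider from {model!r}")
```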
Krrish Dholakia
d8a8cd2961
feat(ui): add ability to enable traceloop + langsmith via ui
2024-07-31 21:40:29 -07:00
Krrish Dholakia
d914aa558d
feat(ui): add braintrust logging to ui
2024-07-31 21:15:19 -07:00
Ishaan Jaff
d833c69acb
Merge pull request #4987 from BerriAI/litellm_add_ft_endpoints
...
[Feat-Proxy] Add List fine-tuning jobs
2024-07-31 16:49:59 -07:00
Ishaan Jaff
cd3a01c03f
fix fine tuning endpoint position on swagger
2024-07-31 16:46:58 -07:00
Ishaan Jaff
424a70c331
fix route ordering
2024-07-31 16:36:50 -07:00
Ishaan Jaff
2371df9ded
allow setting files config
2024-07-31 15:28:41 -07:00
Krrish Dholakia
6202f9bbb0
fix(http_handler.py): correctly re-raise timeout exception
2024-07-31 14:51:28 -07:00
Ishaan Jaff
bd7b485d09
read ft config
2024-07-31 13:31:00 -07:00
Ishaan Jaff
9d90f174a7
fix endpoint to create fine tuning jobs
2024-07-31 12:44:01 -07:00
Ishaan Jaff
66211b42db
fix linting errors
2024-07-30 12:51:39 -07:00
Ishaan Jaff
563d59a305
test batches endpoint on proxy
2024-07-30 09:46:30 -07:00
Ishaan Jaff
1a34756159
Merge pull request #4916 from BerriAI/litellm_fix_ui_login
...
Feat UI - allow using custom header for litellm api key
2024-07-29 17:08:53 -07:00
Ishaan Jaff
0c25aaf9df
check litellm header during ui login
2024-07-29 17:03:04 -07:00
Ishaan Jaff
9b69e500e5
Merge pull request #4927 from BerriAI/litellm_set_max_request_response_size_ui
...
Feat Enterprise - set max request / response size UI
2024-07-27 20:06:09 -07:00
Ishaan Jaff
10e70f842d
Merge pull request #4928 from BerriAI/litellm_check_response_size
...
[Feat Enterprise] - check max response size
2024-07-27 17:03:56 -07:00
Ishaan Jaff
3511aadf99
allow setting max request / response size on admin UI
2024-07-27 17:00:39 -07:00
Ishaan Jaff
b2f72338f6
feat: add check_response_size_is_safe
2024-07-27 16:53:39 -07:00
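The helper name suggests a size guard on outgoing responses; a minimal sketch of the idea, though the commit's actual implementation and settings may differ:

```python
# Hedged sketch of a response-size guard in the spirit of
# check_response_size_is_safe. The cap here is a hypothetical stand-in
# for the value set on the admin UI in the previous commit.
import json

MAX_RESPONSE_SIZE_MB = 1  # hypothetical configured cap

def check_response_size_is_safe(response: dict) -> bool:
    """True if the serialized response fits under the configured cap."""
    size_bytes = len(json.dumps(response).encode("utf-8"))
    return size_bytes <= MAX_RESPONSE_SIZE_MB * 1024 * 1024
```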
Ishaan Jaff
19fb5cc11c
use common helpers for writing to otel
2024-07-27 11:40:39 -07:00
Ishaan Jaff
61c10e60a4
feat - use log_to_opentelemetry for _PROXY_track_cost_callback
2024-07-27 11:08:22 -07:00
Ishaan Jaff
1adf71b9b7
feat - clearly show litellm enterprise version
2024-07-27 09:50:03 -07:00
Krish Dholakia
9bdcef238b
Merge pull request #4907 from BerriAI/litellm_proxy_get_secret
...
fix(proxy_server.py): fix get secret for environment_variables
2024-07-26 22:17:11 -07:00
Ishaan Jaff
2501b4eccd
feat link to model cost map on swagger
2024-07-26 21:34:42 -07:00
Ishaan Jaff
f627fa9b40
fix for GET /v1/batches{batch_id:path}
2024-07-26 18:23:15 -07:00
Ishaan Jaff
159a880dcc
fix /v1/batches POST
2024-07-26 18:06:00 -07:00
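For reference on the two batch-route fixes above, `{batch_id:path}` is a Starlette/FastAPI path converter that lets the parameter contain slashes; a hedged sketch of the route shape, with an illustrative handler body:

```python
# Hedged sketch: the ":path" converter accepts slashes inside batch_id,
# which a plain {batch_id} segment would reject. The handler body is
# illustrative, not the proxy's implementation.
from fastapi import FastAPI

app = FastAPI()

@app.get("/v1/batches/{batch_id:path}")
async def retrieve_batch(batch_id: str):
    return {"id": batch_id, "object": "batch"}
```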
Krrish Dholakia
9943c6d607
fix(proxy_server.py): fix get secret for environment_variables
2024-07-26 13:33:02 -07:00
Krrish Dholakia
1d6c39a607
feat(proxy_server.py): handle pydantic mockselvar error
...
Fixes https://github.com/BerriAI/litellm/issues/4898#issuecomment-2252105485
2024-07-26 08:38:51 -07:00
Ishaan Jaff
079a41fbe1
Merge branch 'main' into litellm_proxy_support_all_providers
2024-07-25 20:15:37 -07:00
Ishaan Jaff
693bcfac39
fix using pass_through_all_models
2024-07-25 19:32:49 -07:00
Ishaan Jaff
8f4c5437b8
router support setting pass_through_all_models
2024-07-25 18:34:12 -07:00
Krrish Dholakia
bd7af04a72
feat(proxy_server.py): support custom llm handler on proxy
2024-07-25 17:56:34 -07:00
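litellm documents a CustomLLM base class plus a custom_provider_map for this; a minimal sketch of registering a handler, where the provider name and mock response are placeholders and the proxy-side wiring in this commit may differ:

```python
# Hedged sketch: register a custom handler, then call it via its provider
# prefix. Follows litellm's documented CustomLLM/custom_provider_map
# pattern; "my-custom-llm" and the mock response are placeholders.
import litellm
from litellm import CustomLLM, completion

class MyCustomLLM(CustomLLM):
    def completion(self, *args, **kwargs):
        # Delegate to a mocked litellm call so the sketch runs without keys.
        return litellm.completion(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": "hello"}],
            mock_response="Hi from the custom handler!",
        )

litellm.custom_provider_map = [
    {"provider": "my-custom-llm", "custom_handler": MyCustomLLM()}
]

resp = completion(
    model="my-custom-llm/anything",
    messages=[{"role": "user", "content": "hello"}],
)
```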
Krrish Dholakia
bfdda089c8
fix(proxy_server.py): check if input list > 0 before indexing into it
...
resolves 'list index out of range' error
2024-07-25 14:23:07 -07:00
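The guard pattern behind that fix, as a short sketch:

```python
# Hedged sketch: check the list is non-empty before indexing, instead of
# letting data[0] raise the 'list index out of range' the commit mentions.
def first_or_none(data: list):
    return data[0] if len(data) > 0 else None
```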
Marc Abramowitz
6faaa8aa50
Allow not displaying feedback box
...
by setting an env var called `LITELLM_DONT_SHOW_FEEDBACK_BOX` to `"true"`.
I liked the feedback box when I first started using LiteLLM, because it showed
me that the authors care about customers. But now that I've seen it a bunch of
times, I don't need to see it every time I start the server and I'd rather have
less output on startup.
2024-07-24 16:50:10 -07:00
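For example, assuming the flag is read from the environment at startup; the launch details are illustrative:

```python
# Hedged sketch: set the env var from the commit message before the
# server process reads it, suppressing the startup feedback box.
import os

os.environ["LITELLM_DONT_SHOW_FEEDBACK_BOX"] = "true"
```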