Commit graph

1844 commits

Author SHA1 Message Date
Krish Dholakia
ff6ff133ee
Merge pull request #5260 from BerriAI/google_ai_studio_pass_through
Pass-through endpoints for Gemini - Google AI Studio
2024-08-17 13:51:51 -07:00
Ishaan Jaff
feb8c3c5b4
Merge pull request #5259 from BerriAI/litellm_return_remaining_tokens_in_header
[Feat] return `x-litellm-key-remaining-requests-{model}: 1`, `x-litellm-key-remaining-tokens-{model}: None` in response headers
2024-08-17 12:41:16 -07:00
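The per-model rate-limit headers named in the commit above can be read back from a proxy response. A minimal sketch of a client-side helper (the function name is hypothetical; the only assumptions taken from the commit message are the header key pattern and that `None` marks an unconfigured limit):

```python
def parse_remaining_limits(headers, model):
    """Extract the per-model remaining-requests/-tokens headers that the
    proxy attaches to responses. Values arrive as strings; the literal
    string "None" (or a missing header) means no limit is configured."""
    def read(kind):
        raw = headers.get(f"x-litellm-key-remaining-{kind}-{model}")
        return None if raw in (None, "None") else int(raw)
    return {"requests": read("requests"), "tokens": read("tokens")}

# Example mirroring the header values shown in the commit message:
example = {
    "x-litellm-key-remaining-requests-gpt-4": "1",
    "x-litellm-key-remaining-tokens-gpt-4": "None",
}
print(parse_remaining_limits(example, "gpt-4"))
# → {'requests': 1, 'tokens': None}
```

A client could poll these headers after each call to back off before the key's per-model limit is exhausted.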
Ishaan Jaff
ee0f772b5c feat return remaining tokens for model for api key 2024-08-17 12:35:10 -07:00
Krrish Dholakia
bc0023a409 feat(google_ai_studio_endpoints.py): support pass-through endpoint for all google ai studio requests
New Feature
2024-08-17 10:46:59 -07:00
Ishaan Jaff
5985c7e933 feat - use common helper for getting model group 2024-08-17 10:46:04 -07:00
Krrish Dholakia
61f4b71ef7 refactor: replace .error() with .exception() logging for better debugging on sentry 2024-08-16 09:22:47 -07:00
Krish Dholakia
b3d15ace89
Merge branch 'main' into litellm_pass_through_endpoints_api 2024-08-15 22:39:19 -07:00
Krrish Dholakia
6fc6df134f feat(ui): for adding pass-through endpoints 2024-08-15 21:58:11 -07:00
Krrish Dholakia
589da45c24 feat(pass_through_endpoints.py): initial working CRUD endpoints for /pass_through_endpoints 2024-08-15 21:23:26 -07:00
Ishaan Jaff
58828403ea refactor use 1 util for llm routing 2024-08-15 08:42:20 -07:00
Ishaan Jaff
fdd6664420 use route_request for making llm call 2024-08-15 08:16:44 -07:00
Krrish Dholakia
28faafadb1 feat(pass_through_endpoints.py): initial commit of crud endpoints for pass through endpoints 2024-08-14 21:36:07 -07:00
Krrish Dholakia
a020563149 feat(proxy_server.py): support returning available fields for pass_through_endpoints via `/config/field/list` 2024-08-14 19:07:10 -07:00
Ishaan Jaff
4e7b0ce76e feat read config from s3 2024-08-13 20:18:59 -07:00
Krrish Dholakia
b3c7d63ec1 fix(proxy_server.py): add info log when spend logs is skipped because disable_spend_logs=True. 2024-08-12 11:20:30 -07:00
Ishaan Jaff
aef13a37f4 feat add cron job for sending stats from prometheus 2024-08-10 16:56:03 -07:00
Krrish Dholakia
4b40563be3 fix(proxy_server.py): ensure internal_user params only apply to internal_user role 2024-08-08 17:50:29 -07:00
Krrish Dholakia
97d57b8449 fix(proxy_server.py): respect internal_user_budget_duration for sso user 2024-08-08 17:28:28 -07:00
Krrish Dholakia
4ac0afef2c fix(anthropic.py): fix translation from /v1/messages format to openai format 2024-08-08 17:19:12 -07:00
Krrish Dholakia
ac601a69fc feat: set max_internal_budget for user w/ sso 2024-08-08 12:44:07 -07:00
Ishaan Jaff
218ba0f470 support provider wildcard routing 2024-08-07 14:20:22 -07:00
Ishaan Jaff
7173cef822 clean up unused func 2024-08-06 20:58:51 -07:00
Ishaan Jaff
a1c1ad60d2 fix forward ui requests when base url set 2024-08-06 18:10:34 -07:00
Ishaan Jaff
d08f59ae09 add redirect_ui_middleware 2024-08-06 18:01:23 -07:00
Ishaan Jaff
237c3a0244
Merge pull request #5077 from BerriAI/litellm_fix_ui_emails
[Fix] Fix testing emails through Admin UI
2024-08-06 16:41:01 -07:00
Ishaan Jaff
e5e5628fc8 update alerting settings on ui 2024-08-06 16:05:33 -07:00
Ishaan Jaff
7024fcacec add debug statements on docker 2024-08-06 15:29:49 -07:00
Krrish Dholakia
2abd4d5363 feat(proxy_server.py): allow restricting allowed email domains for the UI 2024-08-06 12:53:03 -07:00
Ishaan Jaff
8ec184b93f add debugging utils to print when connecting to prisma 2024-08-05 19:19:28 -07:00
Ishaan Jaff
091449e81b build ui on custom path 2024-08-05 16:34:37 -07:00
Ishaan Jaff
900e89c0ad set PROXY_BASE_URL when server root path set 2024-08-05 15:48:44 -07:00
Krish Dholakia
bca71019ad
Merge branch 'main' into litellm_anthropic_api_streaming 2024-08-03 21:16:50 -07:00
Krrish Dholakia
ac6c39c283 feat(anthropic_adapter.py): support streaming requests for /v1/messages endpoint
Fixes https://github.com/BerriAI/litellm/issues/5011
2024-08-03 20:16:19 -07:00
Ishaan Jaff
af04bbecca set native vertex endpoints 2024-08-03 16:52:03 -07:00
Krrish Dholakia
5d96ff6694 fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure"
Fixes https://github.com/BerriAI/litellm/issues/4912
2024-08-02 17:48:53 -07:00
Krrish Dholakia
d8a8cd2961 feat(ui): add ability to enable traceloop + langsmith via ui 2024-07-31 21:40:29 -07:00
Krrish Dholakia
d914aa558d feat(ui): add braintrust logging to ui 2024-07-31 21:15:19 -07:00
Ishaan Jaff
d833c69acb
Merge pull request #4987 from BerriAI/litellm_add_ft_endpoints
[Feat-Proxy] Add List fine-tuning jobs
2024-07-31 16:49:59 -07:00
Ishaan Jaff
cd3a01c03f fix fine tuning endpoint position on swagger 2024-07-31 16:46:58 -07:00
Ishaan Jaff
424a70c331 fix routes order 2024-07-31 16:36:50 -07:00
Ishaan Jaff
2371df9ded allow setting files config 2024-07-31 15:28:41 -07:00
Krrish Dholakia
6202f9bbb0 fix(http_handler.py): correctly re-raise timeout exception 2024-07-31 14:51:28 -07:00
Ishaan Jaff
bd7b485d09 read ft config 2024-07-31 13:31:00 -07:00
Ishaan Jaff
9d90f174a7 fix endpoint to create fine tuning jobs 2024-07-31 12:44:01 -07:00
Ishaan Jaff
66211b42db fix linting errors 2024-07-30 12:51:39 -07:00
Ishaan Jaff
563d59a305 test batches endpoint on proxy 2024-07-30 09:46:30 -07:00
Ishaan Jaff
1a34756159
Merge pull request #4916 from BerriAI/litellm_fix_ui_login
Feat UI - allow using custom header for litellm api key
2024-07-29 17:08:53 -07:00
Ishaan Jaff
0c25aaf9df check litellm header in login on ui 2024-07-29 17:03:04 -07:00
Ishaan Jaff
9b69e500e5
Merge pull request #4927 from BerriAI/litellm_set_max_request_response_size_ui
Feat Enterprise - set max request / response size UI
2024-07-27 20:06:09 -07:00
Ishaan Jaff
10e70f842d
Merge pull request #4928 from BerriAI/litellm_check_response_size
[Feat Enterprise] - check max response size
2024-07-27 17:03:56 -07:00