Author | Commit | Message | Date
Ishaan Jaff | 2be984ebee | add docstring for /embeddings and /completions | 2024-08-22 09:30:47 -07:00
Ishaan Jaff | f6e80b0031 | add doc string for /chat/completions swagger | 2024-08-22 09:27:40 -07:00
Ishaan Jaff | a174cbdd72 | Merge branch 'main' into litellm_pass_through_vtx_multi_modal | 2024-08-21 17:23:22 -07:00
Ishaan Jaff | e9537c6560 | proxy - print embedding request when recieved | 2024-08-21 17:00:18 -07:00
Krish Dholakia | 72169fd5c4 | Merge branch 'main' into litellm_disable_storing_master_key_hash_in_db | 2024-08-21 15:37:25 -07:00
Krrish Dholakia | e2d7539690 | feat(caching.py): redis cluster support (Closes https://github.com/BerriAI/litellm/issues/4358) | 2024-08-21 15:01:52 -07:00
Ishaan Jaff | d6493b0e7f | docs semantic caching qdrant | 2024-08-21 13:03:41 -07:00
Krrish Dholakia | 89014dfc07 | feat(proxy_server.py): support disabling storing master key hash in db, for spend tracking | 2024-08-21 12:35:37 -07:00
Krrish Dholakia | 6f8840daa1 | fix(proxy_server.py): fix invalid login message to not show passed in pwd (Closes https://github.com/BerriAI/litellm/issues/5290) | 2024-08-20 08:56:57 -07:00
Ishaan Jaff | 9ef6ae2f7c | Merge pull request #4868 from msabramo/allow-not-displaying-feedback-box (Allow not displaying feedback box) | 2024-08-20 08:53:45 -07:00
Ishaan Jaff | c7b3978655 | Merge pull request #5288 from BerriAI/litellm_aporia_refactor ([Feat] V2 aporia guardrails litellm) | 2024-08-19 20:41:45 -07:00
Ishaan Jaff | 8cd1963c11 | feat - guardrails v2 | 2024-08-19 18:24:20 -07:00
Krrish Dholakia | 1701c48ad5 | feat(langfuse_endpoints.py): support langfuse pass through endpoints by default | 2024-08-19 17:28:34 -07:00
Ishaan Jaff | 613bd1babd | feat - return applied guardrails in response headers | 2024-08-19 11:56:20 -07:00
Ishaan Jaff | 4685b9909a | feat - allow accessing data post success call | 2024-08-19 11:35:33 -07:00
Krish Dholakia | ff6ff133ee | Merge pull request #5260 from BerriAI/google_ai_studio_pass_through (Pass-through endpoints for Gemini - Google AI Studio) | 2024-08-17 13:51:51 -07:00
Ishaan Jaff | feb8c3c5b4 | Merge pull request #5259 from BerriAI/litellm_return_remaining_tokens_in_header ([Feat] return `x-litellm-key-remaining-requests-{model}`: 1, `x-litellm-key-remaining-tokens-{model}: None` in response headers) | 2024-08-17 12:41:16 -07:00
Ishaan Jaff | ee0f772b5c | feat return rmng tokens for model for api key | 2024-08-17 12:35:10 -07:00
Krrish Dholakia | bc0023a409 | feat(google_ai_studio_endpoints.py): support pass-through endpoint for all google ai studio requests (New Feature) | 2024-08-17 10:46:59 -07:00
Ishaan Jaff | 5985c7e933 | feat - use commong helper for getting model group | 2024-08-17 10:46:04 -07:00
Krrish Dholakia | 61f4b71ef7 | refactor: replace .error() with .exception() logging for better debugging on sentry | 2024-08-16 09:22:47 -07:00
Krish Dholakia | b3d15ace89 | Merge branch 'main' into litellm_pass_through_endpoints_api | 2024-08-15 22:39:19 -07:00
Krrish Dholakia | 6fc6df134f | feat(ui): for adding pass-through endpoints | 2024-08-15 21:58:11 -07:00
Krrish Dholakia | 589da45c24 | feat(pass_through_endpoints.py): initial working CRUD endpoints for /pass_through_endoints | 2024-08-15 21:23:26 -07:00
Ishaan Jaff | 58828403ea | refactor use 1 util for llm routing | 2024-08-15 08:42:20 -07:00
Ishaan Jaff | fdd6664420 | use route_request for making llm call | 2024-08-15 08:16:44 -07:00
Krrish Dholakia | 28faafadb1 | feat(pass_through_endpoints.py): initial commit of crud endpoints for pass through endpoints | 2024-08-14 21:36:07 -07:00
Krrish Dholakia | a020563149 | feat(proxy_server.py): support returning available fields for pass_through_endpoints via `/config/field/list | 2024-08-14 19:07:10 -07:00
Ishaan Jaff | 4e7b0ce76e | feat read config from s3 | 2024-08-13 20:18:59 -07:00
Krrish Dholakia | b3c7d63ec1 | fix(proxy_server.py): add info log when spend logs is skipped because disable_spend_logs=True | 2024-08-12 11:20:30 -07:00
Ishaan Jaff | aef13a37f4 | feat add cron job for sending stats from prometheus | 2024-08-10 16:56:03 -07:00
Krrish Dholakia | 4b40563be3 | fix(proxy_server.py): ensure internal_user params only apply to internal_user role | 2024-08-08 17:50:29 -07:00
Krrish Dholakia | 97d57b8449 | fix(proxy_server.py): respect internal_user_budget_duration for sso user | 2024-08-08 17:28:28 -07:00
Krrish Dholakia | 4ac0afef2c | fix(anthropic.py): fix translation from /v1/messages format to openai format | 2024-08-08 17:19:12 -07:00
Krrish Dholakia | ac601a69fc | feat: set max_internal_budget for user w/ sso | 2024-08-08 12:44:07 -07:00
Ishaan Jaff | 218ba0f470 | support provider wildcard routing | 2024-08-07 14:20:22 -07:00
Ishaan Jaff | 7173cef822 | clean up unused func | 2024-08-06 20:58:51 -07:00
Ishaan Jaff | a1c1ad60d2 | fix forward ui requests when base url set | 2024-08-06 18:10:34 -07:00
Ishaan Jaff | d08f59ae09 | add redirect_ui_middleware | 2024-08-06 18:01:23 -07:00
Ishaan Jaff | 237c3a0244 | Merge pull request #5077 from BerriAI/litellm_fix_ui_emails ([Fix] Fix testing emails through Admin UI) | 2024-08-06 16:41:01 -07:00
Ishaan Jaff | e5e5628fc8 | update alerting settings on ui | 2024-08-06 16:05:33 -07:00
Ishaan Jaff | 7024fcacec | add debug statements on docker | 2024-08-06 15:29:49 -07:00
Krrish Dholakia | 2abd4d5363 | feat(proxy_server.py): allow restricting allowed email domains for the UI | 2024-08-06 12:53:03 -07:00
Ishaan Jaff | 8ec184b93f | add debugging utils to print when connecting to prisma | 2024-08-05 19:19:28 -07:00
Ishaan Jaff | 091449e81b | build ui on custom path | 2024-08-05 16:34:37 -07:00
Ishaan Jaff | 900e89c0ad | set PROXY_BASE_URL when server root path set | 2024-08-05 15:48:44 -07:00
Krish Dholakia | bca71019ad | Merge branch 'main' into litellm_anthropic_api_streaming | 2024-08-03 21:16:50 -07:00
Krrish Dholakia | ac6c39c283 | feat(anthropic_adapter.py): support streaming requests for /v1/messages endpoint (Fixes https://github.com/BerriAI/litellm/issues/5011) | 2024-08-03 20:16:19 -07:00
Ishaan Jaff | af04bbecca | set native vertex endpoints | 2024-08-03 16:52:03 -07:00
Krrish Dholakia | 5d96ff6694 | fix(utils.py): handle scenario where model="azure/*" and custom_llm_provider="azure" (Fixes https://github.com/BerriAI/litellm/issues/4912) | 2024-08-02 17:48:53 -07:00