Commit graph

1844 commits

Ishaan Jaff
3511aadf99 allow setting max request / response size on admin UI 2024-07-27 17:00:39 -07:00
Ishaan Jaff
b2f72338f6 feat - add check_response_size_is_safe 2024-07-27 16:53:39 -07:00
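
The two commits above pair an admin-UI limit with a request-time guard. A minimal sketch of such a guard, assuming only the helper name from the commit; the module-level limit and JSON-size heuristic are illustrative, not the proxy's actual implementation:

```python
import json

# Illustrative limit; the real proxy reads this value from its own config / admin UI.
MAX_RESPONSE_SIZE_MB = 10.0

def check_response_size_is_safe(response_payload: dict) -> bool:
    """Return True if the serialized response stays under the configured size limit."""
    size_mb = len(json.dumps(response_payload).encode("utf-8")) / (1024 * 1024)
    return size_mb <= MAX_RESPONSE_SIZE_MB
```
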
Ishaan Jaff
19fb5cc11c use common helpers for writing to otel 2024-07-27 11:40:39 -07:00
Ishaan Jaff
61c10e60a4 feat - use log_to_opentelemetry for _PROXY_track_cost_callback 2024-07-27 11:08:22 -07:00
Ishaan Jaff
1adf71b9b7 feat - clearly show LiteLLM Enterprise version 2024-07-27 09:50:03 -07:00
Krish Dholakia
9bdcef238b
Merge pull request #4907 from BerriAI/litellm_proxy_get_secret
fix(proxy_server.py): fix get secret for environment_variables
2024-07-26 22:17:11 -07:00
Ishaan Jaff
2501b4eccd feat - link to model cost map on Swagger 2024-07-26 21:34:42 -07:00
Ishaan Jaff
f627fa9b40 fix for GET /v1/batches{batch_id:path} 2024-07-26 18:23:15 -07:00
Ishaan Jaff
159a880dcc fix /v1/batches POST 2024-07-26 18:06:00 -07:00
Krrish Dholakia
9943c6d607 fix(proxy_server.py): fix get secret for environment_variables 2024-07-26 13:33:02 -07:00
Krrish Dholakia
1d6c39a607 feat(proxy_server.py): handle pydantic mockselvar error
Fixes https://github.com/BerriAI/litellm/issues/4898#issuecomment-2252105485
2024-07-26 08:38:51 -07:00
Ishaan Jaff
079a41fbe1
Merge branch 'main' into litellm_proxy_support_all_providers 2024-07-25 20:15:37 -07:00
Ishaan Jaff
693bcfac39 fix usage of pass_through_all_models 2024-07-25 19:32:49 -07:00
Ishaan Jaff
8f4c5437b8 router support setting pass_through_all_models 2024-07-25 18:34:12 -07:00
Krrish Dholakia
bd7af04a72 feat(proxy_server.py): support custom llm handler on proxy 2024-07-25 17:56:34 -07:00
Krrish Dholakia
bfdda089c8 fix(proxy_server.py): check if input list > 0 before indexing into it
resolves 'list index out of range' error
2024-07-25 14:23:07 -07:00
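
The fix above is the standard guard-before-index pattern. A minimal sketch, with illustrative names rather than the proxy's actual variables:

```python
def first_input_or_none(data: dict):
    """Safely read data['input'][0] instead of raising 'list index out of range'."""
    input_list = data.get("input") or []
    if len(input_list) > 0:
        return input_list[0]
    return None
```
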
Marc Abramowitz
6faaa8aa50 Allow hiding the feedback box
by setting the env var `LITELLM_DONT_SHOW_FEEDBACK_BOX` to `"true"`.

I liked the feedback box when I first started using LiteLLM, because it showed
me that the authors care about customers. But now that I've seen it a bunch of
times, I don't need to see it every time I start the server and I'd rather have
less output on startup.
2024-07-24 16:50:10 -07:00
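
A minimal sketch of how a startup banner can honor that opt-out; only the env var name comes from the commit, while the check and banner text are illustrative:

```python
import os

def maybe_show_feedback_box() -> None:
    # Respect the opt-out flag regardless of casing.
    if os.getenv("LITELLM_DONT_SHOW_FEEDBACK_BOX", "").lower() == "true":
        return
    print("Enjoying LiteLLM? We'd love your feedback!")  # placeholder banner text
```
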
Ishaan Jaff
4c1ee1e282 fix - add better debugging for _PROXY_track_cost_callback 2024-07-23 15:25:46 -07:00
Krrish Dholakia
f64a3309d1 fix(utils.py): support raw response headers for streaming requests 2024-07-23 11:58:58 -07:00
Ishaan Jaff
d116ff280e feat - set alert_to_webhook_url 2024-07-23 10:08:21 -07:00
Ishaan Jaff
c34c123fe3 feat - add endpoint to set team callbacks 2024-07-22 18:18:09 -07:00
Ishaan Jaff
df1ac92222 fix - update spend logs 2024-07-19 12:49:23 -07:00
Ishaan Jaff
ae316d2d9a fix ui - make default session 24 hours 2024-07-19 10:17:45 -07:00
Ishaan Jaff
51525254e8 fix ui - make UI session last 24 hours 2024-07-18 18:22:40 -07:00
Ishaan Jaff
eedacf5193
Merge branch 'main' into litellm_run_moderation_check_on_embedding 2024-07-18 12:44:30 -07:00
Florian Greinacher
f8bec3a86c
feat(proxy): support hiding health check details 2024-07-18 17:21:12 +02:00
Ishaan Jaff
9753c3676a fix - run moderation check on embedding 2024-07-17 17:59:20 -07:00
Ishaan Jaff
254ac37f65
Merge pull request #4724 from BerriAI/litellm_Set_max_file_size_transc
[Feat] - set max file size on /audio/transcriptions
2024-07-15 20:42:24 -07:00
Ishaan Jaff
979b5d8eea
Merge pull request #4719 from BerriAI/litellm_fix_audio_transcript
[Fix] /audio/transcription - don't write to the local file system
2024-07-15 20:05:42 -07:00
Ishaan Jaff
b5a2090720 use check_file_size_under_limit helper 2024-07-15 19:40:05 -07:00
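
A sketch of what an upload-size guard like this can look like; the helper name is from the commit, while the limit parameter, the 413 status, and the FastAPI types are assumptions for illustration:

```python
from fastapi import HTTPException, UploadFile

async def check_file_size_under_limit(file: UploadFile, max_file_size_mb: float) -> None:
    """Raise 413 if the uploaded audio file exceeds the configured limit."""
    contents = await file.read()
    await file.seek(0)  # rewind so the downstream handler can re-read the upload
    size_mb = len(contents) / (1024 * 1024)
    if size_mb > max_file_size_mb:
        raise HTTPException(
            status_code=413,
            detail=f"File is {size_mb:.1f} MB; the limit is {max_file_size_mb} MB",
        )
```
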
Krrish Dholakia
9cc2daeec9 fix(utils.py): update get_model_info docstring
Fixes https://github.com/BerriAI/litellm/issues/4711
2024-07-15 18:18:50 -07:00
Ishaan Jaff
a900f352b5 fix - don't write file.filename 2024-07-15 14:56:01 -07:00
Krrish Dholakia
de8230ed41 fix(proxy_server.py): fix returning response headers on exception 2024-07-13 19:11:30 -07:00
Krrish Dholakia
fde434be66 feat(proxy_server.py): return 'retry-after' param for rate limited requests
Closes https://github.com/BerriAI/litellm/issues/4695
2024-07-13 17:15:20 -07:00
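
On the client side, that hint can be honored with a simple backoff loop. A sketch using `requests`; the URL, key, and payload are placeholders, and the header value is assumed to be a delay in seconds:

```python
import time
import requests

def post_with_retry(url: str, payload: dict, api_key: str, max_attempts: int = 3):
    """POST to the proxy, sleeping for the server-suggested delay on HTTP 429."""
    resp = None
    for attempt in range(max_attempts):
        resp = requests.post(
            url, json=payload, headers={"Authorization": f"Bearer {api_key}"}
        )
        if resp.status_code != 429:
            return resp
        # Fall back to exponential backoff if no retry-after header is present.
        delay = float(resp.headers.get("retry-after", 2 ** attempt))
        time.sleep(delay)
    return resp
```
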
Krrish Dholakia
cff66d6151 fix(proxy_server.py): fix linting errors 2024-07-11 22:12:33 -07:00
Krish Dholakia
d72bcdbce3
Merge pull request #4669 from BerriAI/litellm_logging_only_masking
Flag for PII masking on Logging only
2024-07-11 22:03:37 -07:00
Krish Dholakia
72f1c9181d
Merge branch 'main' into litellm_call_id_in_response 2024-07-11 21:54:49 -07:00
Krish Dholakia
79d6b69d1c
Merge pull request #4651 from msabramo/docs-logging-cleanup
Docs: Miscellaneous cleanup of `docs/my-website/docs/proxy/logging.md`
2024-07-11 21:52:20 -07:00
Krrish Dholakia
9deb9b4e3f feat(guardrails): Flag for PII Masking on Logging
Fixes https://github.com/BerriAI/litellm/issues/4580
2024-07-11 16:09:34 -07:00
Ishaan Jaff
28cfca87c1
Merge pull request #4647 from msabramo/msabramo/remove-unnecessary-imports
Remove unnecessary imports
2024-07-11 15:07:30 -07:00
Krrish Dholakia
070ab9f469 docs(model_management.md): update docs to clarify calling /model/info 2024-07-11 09:47:50 -07:00
Krish Dholakia
dacce3d78b
Merge pull request #4635 from BerriAI/litellm_anthropic_adapter
Anthropic `/v1/messages` endpoint support
2024-07-10 22:41:53 -07:00
Krrish Dholakia
31829855c0 feat(proxy_server.py): working /v1/messages with config.yaml
Adds async router support for adapter_completion call
2024-07-10 18:53:54 -07:00
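
With the endpoint live, a client can POST Anthropic-format requests straight to the proxy. A sketch; the base URL, key, and model name are placeholders:

```python
import requests

resp = requests.post(
    "http://localhost:4000/v1/messages",  # assumed local proxy address
    headers={"Authorization": "Bearer sk-1234"},  # placeholder proxy key
    json={
        "model": "claude-3-haiku-20240307",  # placeholder model name
        "max_tokens": 256,
        "messages": [{"role": "user", "content": "Hello!"}],
    },
)
print(resp.json())
```
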
Krrish Dholakia
2f8dbbeb97 feat(proxy_server.py): working /v1/messages endpoint
Works with claude engineer
2024-07-10 18:15:38 -07:00
Marc Abramowitz
dd0c07d2a1 Move JSX stuff so first line of file is heading
This prevents VS Code from displaying a warning about the file not starting with
a heading.
2024-07-10 17:02:56 -07:00
Ishaan Jaff
265ec00d0f fix test routes on litellm proxy 2024-07-10 16:51:47 -07:00
Ishaan Jaff
a313174ecb
Merge pull request #4648 from BerriAI/litellm_add_remaining_file_endpoints
[Feat] Add LIST, DELETE, GET `/files`
2024-07-10 16:42:05 -07:00
Marc Abramowitz
3a2cb151aa Proxy: Add x-litellm-call-id response header
This exposes the value of `logging_obj.litellm_call_id`; one particular use is to
correlate the HTTP response for a request with a trace in an LLM logging tool
like Langfuse or Langsmith.

For example, if a user in my environment (w/ Langfuse) gets back this in the
response headers:

```
x-litellm-call-id: ffcb49e7-bd6e-4e56-9c08-a7243802b26e
```

then they know that they can see the trace for this request in Langfuse by
visiting https://langfuse.domain.com/trace/ffcb49e7-bd6e-4e56-9c08-a7243802b26e

They can also use this ID to submit scores for this request to the Langfuse
scoring API.
2024-07-10 16:05:37 -07:00
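
A sketch of pulling the header off a response and building the trace link described above; the proxy address, key, and model are placeholders, while the header name and Langfuse URL pattern come from the commit message:

```python
import requests

resp = requests.post(
    "http://localhost:4000/v1/chat/completions",  # assumed local proxy address
    headers={"Authorization": "Bearer sk-1234"},  # placeholder proxy key
    json={"model": "gpt-3.5-turbo", "messages": [{"role": "user", "content": "hi"}]},
)
call_id = resp.headers.get("x-litellm-call-id")
if call_id:
    # Correlate this response with its trace in the logging tool.
    print(f"https://langfuse.domain.com/trace/{call_id}")
```
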
Marc Abramowitz
2db9c23bce Remove unnecessary imports
from `litellm/proxy/proxy_server.py`
2024-07-10 15:06:47 -07:00
Ishaan Jaff
393ce7df14 add /files endpoints 2024-07-10 14:55:10 -07:00