Krrish Dholakia
6b78e39600
feat(guardrails.py): allow setting logging_only in guardrails_config for presidio pii masking integration
2024-07-13 12:22:17 -07:00
Ishaan Jaff
670bf1b98d
correctly flush cache when updating user
2024-07-13 12:05:09 -07:00
Krish Dholakia
66cedccd6b
Merge pull request #4686 from BerriAI/litellm_custom_chat_endpoints
...
docs(pass_through.md): Creating custom chat endpoints on proxy
2024-07-13 09:45:17 -07:00
Ishaan Jaff
70b96d12e9
Merge pull request #4685 from BerriAI/litellm_return_type_expired_key
...
[Fix] Proxy Return type=expire_key on expired Key errors
2024-07-12 18:52:51 -07:00
Krrish Dholakia
667fd2b376
docs(pass_through.md): add doc on creating custom chat endpoints on proxy
...
Allows developers to call proxy with anthropic sdk/boto3/etc.
2024-07-12 18:48:40 -07:00
Ishaan Jaff
57ced1d25e
raise ProxyErrorTypes.expired_key on expired key
2024-07-12 18:41:39 -07:00
Ishaan Jaff
34ff0a7e57
raise expired_key error
2024-07-12 18:39:00 -07:00
Ishaan Jaff
92bf98b30f
Merge pull request #4684 from BerriAI/litellm_safe_memory_mode
...
[Feat] Allow safe memory mode
2024-07-12 18:32:16 -07:00
Ishaan Jaff
24918c5041
Merge pull request #4682 from BerriAI/litellm_mem_leak_debug
...
show stack trace of 10 files taking up memory
2024-07-12 18:31:41 -07:00
Ishaan Jaff
cf5f11cc84
Merge pull request #4681 from BerriAI/litellm_mem_usage
...
[Fix] Reduce Mem Usage - only set ttl for requests to 2 mins
2024-07-12 18:31:19 -07:00
Ishaan Jaff
08efef5316
feat add safe_memory_mode
2024-07-12 18:18:39 -07:00
Krrish Dholakia
fd743aaefd
feat(opentelemetry.py): support logging call metadata to otel
2024-07-12 15:41:34 -07:00
Ishaan Jaff
1a8fce8edb
show stack trace of 10 files taking up memory
2024-07-12 15:33:03 -07:00
Ishaan Jaff
8c8dcdbdb1
reduce ttl for update_request_status
2024-07-12 15:14:54 -07:00
Krrish Dholakia
cff66d6151
fix(proxy_server.py): fix linting errors
2024-07-11 22:12:33 -07:00
Krish Dholakia
d72bcdbce3
Merge pull request #4669 from BerriAI/litellm_logging_only_masking
...
Flag for PII masking on Logging only
2024-07-11 22:03:37 -07:00
Krish Dholakia
72f1c9181d
Merge branch 'main' into litellm_call_id_in_response
2024-07-11 21:54:49 -07:00
Krish Dholakia
79d6b69d1c
Merge pull request #4651 from msabramo/docs-logging-cleanup
...
Docs: Miscellaneous cleanup of `docs/my-website/docs/proxy/logging.md`
2024-07-11 21:52:20 -07:00
Ishaan Jaff
aec468c0e9
ui new build
2024-07-11 19:13:08 -07:00
Krrish Dholakia
9d918d2ac7
fix(presidio_pii_masking.py): support logging_only pii masking
2024-07-11 18:04:12 -07:00
Krrish Dholakia
9deb9b4e3f
feat(guardrails): Flag for PII Masking on Logging
...
Fixes https://github.com/BerriAI/litellm/issues/4580
2024-07-11 16:09:34 -07:00
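The logging-only PII masking flag introduced above would be driven from the proxy's guardrails config. A hypothetical sketch of such a config (key names follow the litellm guardrails convention but are illustrative, not verified against this commit):

```yaml
litellm_settings:
  guardrails:
    - pii_masking:               # illustrative guardrail name
        callbacks: [presidio]    # presidio integration referenced in the commit
        default_on: true
        logging_only: true       # mask PII only in logged payloads, not in the LLM call itself
```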
Ishaan Jaff
28cfca87c1
Merge pull request #4647 from msabramo/msabramo/remove-unnecessary-imports
...
Remove unnecessary imports
2024-07-11 15:07:30 -07:00
Ishaan Jaff
8bf50ac5db
Merge pull request #4661 from BerriAI/litellm_fix_mh
...
[Fix] Model Hub - Show supports vision correctly
2024-07-11 15:03:37 -07:00
Ishaan Jaff
341f88d191
fix supports vision
2024-07-11 12:59:42 -07:00
Krrish Dholakia
2163434ff3
fix(llm_cost_calc/google.py): fix google embedding cost calculation
...
Fixes https://github.com/BerriAI/litellm/issues/4630
2024-07-11 11:55:48 -07:00
Ishaan Jaff
e3470d8e91
Merge pull request #4658 from BerriAI/litellm_check_otel_spans
...
[Test-Proxy] Otel Traces
2024-07-11 10:41:51 -07:00
Ishaan Jaff
b4f8c7304f
Merge pull request #4652 from msabramo/shorter-success_callbacks-in-health-readiness-response
...
Shorter success callbacks from `/health/readiness`
2024-07-11 09:57:52 -07:00
Ishaan Jaff
79f409b21e
Merge pull request #4656 from BerriAI/litellm_otel_fix
...
[Proxy - OTEL] Fix logging DB, Redis Cache Reads
2024-07-11 09:55:51 -07:00
Ishaan Jaff
15d35cd62f
test otel
2024-07-11 09:54:23 -07:00
Krrish Dholakia
070ab9f469
docs(model_management.md): update docs to clarify calling /model/info
2024-07-11 09:47:50 -07:00
Ishaan Jaff
e2c3b2b694
fix add master key in requests
2024-07-11 09:05:08 -07:00
Ishaan Jaff
02ab3cb73d
test- otel span recording
2024-07-11 08:47:16 -07:00
Ishaan Jaff
19d993e120
add otel in callbacks
2024-07-11 07:24:48 -07:00
Ishaan Jaff
498d7d4228
fix - otel log db / redis calls
2024-07-11 07:22:45 -07:00
Krish Dholakia
dacce3d78b
Merge pull request #4635 from BerriAI/litellm_anthropic_adapter
...
Anthropic `/v1/messages` endpoint support
2024-07-10 22:41:53 -07:00
Krrish Dholakia
31829855c0
feat(proxy_server.py): working /v1/messages with config.yaml
...
Adds async router support for adapter_completion call
2024-07-10 18:53:54 -07:00
Marc Abramowitz
3d86c4f515
Shorter success callbacks from /health/readiness
...
Before:
```shell
$ curl -sSL http://0.0.0.0:4000/health/readiness | jq '.success_callbacks'
[
"langfuse",
"<function _PROXY_track_cost_callback at 0x12fc14b80>",
"<bound method SlackAlerting.response_taking_too_long_callback of <litellm.integrations.slack_alerting.SlackAlerting object at 0x12cedb740>>",
"<litellm.proxy.hooks.parallel_request_limiter._PROXY_MaxParallelRequestsHandler object at 0x12cedb8f0>",
"<litellm.proxy.hooks.max_budget_limiter._PROXY_MaxBudgetLimiter object at 0x12cedb830>",
"<litellm.proxy.hooks.cache_control_check._PROXY_CacheControlCheck object at 0x12ca101d0>",
"<litellm._service_logger.ServiceLogging object at 0x13a6d8c50>"
]
```
After:
```shell
$ curl -sSL http://0.0.0.0:4000/health/readiness | jq '.success_callbacks'
[
"langfuse",
"_PROXY_track_cost_callback",
"response_taking_too_long_callback",
"_PROXY_MaxParallelRequestsHandler",
"_PROXY_MaxBudgetLimiter",
"_PROXY_CacheControlCheck",
"ServiceLogging"
]
```
2024-07-10 18:45:42 -07:00
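The shortening shown in the before/after output above can be sketched as a small helper. This is not the actual litellm implementation, just a minimal illustration of the idea: strings pass through, functions and bound methods report their `__name__`, and other callback objects report their class name.

```python
def shorten_callback_name(cb):
    """Reduce a success-callback entry to a short, readable name.

    Strings (e.g. "langfuse") pass through unchanged; functions and
    bound methods yield their __name__; any other object yields its
    class name instead of its full repr with a memory address.
    """
    if isinstance(cb, str):
        return cb
    if hasattr(cb, "__name__"):
        return cb.__name__
    return type(cb).__name__
```

Applied over the callback list, this turns `"<litellm._service_logger.ServiceLogging object at 0x13a6d8c50>"`-style entries into `"ServiceLogging"`.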
Krrish Dholakia
2f8dbbeb97
feat(proxy_server.py): working /v1/messages endpoint
...
Works with claude engineer
2024-07-10 18:15:38 -07:00
Marc Abramowitz
dd0c07d2a1
Move JSX stuff so first line of file is heading
...
This prevents VS Code from displaying a warning about the file not starting with
a heading.
2024-07-10 17:02:56 -07:00
Ishaan Jaff
7b8670f883
add /files/{file_id}/content as openai route
2024-07-10 16:55:09 -07:00
Ishaan Jaff
265ec00d0f
fix test routes on litellm proxy
2024-07-10 16:51:47 -07:00
Ishaan Jaff
a313174ecb
Merge pull request #4648 from BerriAI/litellm_add_remaining_file_endpoints
...
[Feat] Add LIST, DELETE, GET `/files`
2024-07-10 16:42:05 -07:00
Ishaan Jaff
a5f12aba81
add file content
2024-07-10 16:15:43 -07:00
Ishaan Jaff
cb300d30a9
add file delete path
2024-07-10 16:08:58 -07:00
Marc Abramowitz
3a2cb151aa
Proxy: Add x-litellm-call-id response header
...
This gives the value of `logging_obj.litellm_call_id` and one particular use of
this is to correlate the HTTP response from a request with a trace in an LLM
logging tool like Langfuse, Langsmith, etc.
For example, if a user in my environment (w/ Langfuse) gets back this in the
response headers:
```
x-litellm-call-id: ffcb49e7-bd6e-4e56-9c08-a7243802b26e
```
then they know that they can see the trace for this request in Langfuse by
visiting https://langfuse.domain.com/trace/ffcb49e7-bd6e-4e56-9c08-a7243802b26e
They can also use this ID to submit scores for this request to the Langfuse
scoring API.
2024-07-10 16:05:37 -07:00
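The trace lookup described in the commit above amounts to joining the `x-litellm-call-id` header value onto the logging tool's trace URL. A minimal sketch (the Langfuse host is illustrative, taken from the commit's own example):

```python
def langfuse_trace_url(base_url: str, call_id: str) -> str:
    """Build the Langfuse trace URL for a given x-litellm-call-id value."""
    return f"{base_url.rstrip('/')}/trace/{call_id}"
```

A client would read the header from the proxy's HTTP response (e.g. `response.headers["x-litellm-call-id"]`) and pass it in as `call_id`.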
Ishaan Jaff
0b39ec6a8e
add /v1/files LIST
2024-07-10 16:04:07 -07:00
Marc Abramowitz
2db9c23bce
Remove unnecessary imports
...
from `litellm/proxy/proxy_server.py`
2024-07-10 15:06:47 -07:00
Ishaan Jaff
efca9daf5d
add "/v1/files/{file_id}" as openai route
2024-07-10 14:56:53 -07:00
Ishaan Jaff
393ce7df14
add /files endpoints
2024-07-10 14:55:10 -07:00
Krrish Dholakia
aace0b22a3
fix(proxy_server.py): fix proxy_server.py premium user check for encrypted license key
2024-07-10 12:25:31 -07:00