Ishaan Jaff | eedacf5193 | Merge branch 'main' into litellm_run_moderation_check_on_embedding | 2024-07-18 12:44:30 -07:00
Florian Greinacher | f8bec3a86c | feat(proxy): support hiding health check details | 2024-07-18 17:21:12 +02:00
Krish Dholakia | 57f6923ab6 | Merge pull request #4729 from vingiarrusso/vgiarrusso/guardrails: Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook | 2024-07-17 22:24:35 -07:00
Krish Dholakia | 77656d9f11 | Merge branch 'main' into litellm_aporio_integration | 2024-07-17 22:14:29 -07:00
Ishaan Jaff | 3dfeee03d0 | fix pre call utils on embedding | 2024-07-17 18:29:34 -07:00
Ishaan Jaff | 9753c3676a | fix run moderation check on embedding | 2024-07-17 17:59:20 -07:00
Krrish Dholakia | 0a94953896 | fix(router.py): check for request_timeout in acompletion (support 'request_timeout' param in router acompletion) | 2024-07-17 17:19:06 -07:00
Ishaan Jaff | 8cb228bf16 | Merge pull request #4754 from BerriAI/litellm_fix_langsmith_api_key_logged: [Fix] Langsmith - Don't Log Provider API Keys | 2024-07-17 16:40:32 -07:00
Krrish Dholakia | 07d90f6739 | feat(aporio_ai.py): support aporio ai prompt injection for chat completion requests (Closes https://github.com/BerriAI/litellm/issues/2950) | 2024-07-17 16:38:47 -07:00
Krrish Dholakia | a176feeacc | fix(utils.py): return optional params from groq | 2024-07-17 12:09:08 -07:00
Ishaan Jaff | 0890299f65 | test langmsith logging | 2024-07-17 10:08:29 -07:00
Krrish Dholakia | e7f8ee2aba | fix(test_key_generate_prisma.py): pass user_api_key_dict to test call | 2024-07-17 08:29:21 -07:00
Krrish Dholakia | 3630896fde | fix(team_endpoints.py): fix check | 2024-07-16 22:05:48 -07:00
Krrish Dholakia | 94af72a857 | fix(internal_user_endpoints.py): delete associated invitation links before deleting user in /user/delete (Fixes https://github.com/BerriAI/litellm/issues/4740) | 2024-07-16 21:43:17 -07:00
Ishaan Jaff | 29050d7c06 | fix check if user passed custom header | 2024-07-16 21:43:17 -07:00
Ishaan Jaff | d6c5b2e02c | add example on how to use litellm_key_header_name | 2024-07-16 21:43:17 -07:00
Ishaan Jaff | 7338ce3d1d | feat - use custom api key name | 2024-07-16 21:43:17 -07:00
Krrish Dholakia | 92307c9224 | fix(team_endpoints.py): check if key belongs to team before returning /team/info | 2024-07-16 21:43:17 -07:00
Ishaan Jaff | 3736152e7d | fix calculate correct alerting threshold | 2024-07-16 21:43:17 -07:00
Ishaan Jaff | 6c918f2373 | fix tracking hanging requests | 2024-07-16 21:43:16 -07:00
Ishaan Jaff | 36be9967d1 | fix storing request status in mem | 2024-07-16 21:43:16 -07:00
Ishaan Jaff | 86b311eeca | fix set default value for max_file_size_mb | 2024-07-16 21:43:16 -07:00
Ishaan Jaff | ac7849ee47 | ui new build | 2024-07-16 20:04:36 -07:00
Krrish Dholakia | ec03e675c9 | fix(proxy/utils.py): fix failure logging for rejected requests. + unit tests | 2024-07-16 17:15:20 -07:00
Vinnie Giarrusso | 6ff863ee00 | Add enabled_roles to Guardrails configuration, Update Lakera guardrail moderation hook | 2024-07-16 01:52:08 -07:00
Ishaan Jaff | 254ac37f65 | Merge pull request #4724 from BerriAI/litellm_Set_max_file_size_transc: [Feat] - set max file size on /audio/transcriptions | 2024-07-15 20:42:24 -07:00
Ishaan Jaff | af19a2aff3 | ui new build | 2024-07-15 20:09:17 -07:00
Ishaan Jaff | 979b5d8eea | Merge pull request #4719 from BerriAI/litellm_fix_audio_transcript: [Fix] /audio/transcription - don't write to the local file system | 2024-07-15 20:05:42 -07:00
Ishaan Jaff | bac6685bfc | fix linting | 2024-07-15 20:02:41 -07:00
Ishaan Jaff | 38cef1c58d | fix error from max file size | 2024-07-15 19:57:33 -07:00
Ishaan Jaff | 48d28e37a4 | fix set max_file_size | 2024-07-15 19:41:38 -07:00
Ishaan Jaff | b5a2090720 | use helper to check check_file_size_under_limit | 2024-07-15 19:40:05 -07:00
Ishaan Jaff | 6c060b1fdc | check_file_size_under_limit | 2024-07-15 19:38:08 -07:00
Krrish Dholakia | 959c627dd3 | fix(litellm_logging.py): log response_cost=0 for failed calls (Fixes https://github.com/BerriAI/litellm/issues/4604) | 2024-07-15 19:25:56 -07:00
Krrish Dholakia | 9cc2daeec9 | fix(utils.py): update get_model_info docstring (Fixes https://github.com/BerriAI/litellm/issues/4711) | 2024-07-15 18:18:50 -07:00
Ishaan Jaff | a900f352b5 | fix - don't write file.filename | 2024-07-15 14:56:01 -07:00
Krrish Dholakia | e8e31c4029 | docs(enterprise.md): cleanup docs | 2024-07-15 14:52:08 -07:00
Ishaan Jaff | 3dc2ec8119 | fix show debugging utils on in mem usage | 2024-07-15 10:05:57 -07:00
Krish Dholakia | 6bf60d773e | Merge pull request #4696 from BerriAI/litellm_guardrail_logging_only: Allow setting `logging_only` in guardrails config | 2024-07-13 21:50:43 -07:00
Krish Dholakia | 7bc9a189e7 | Merge branch 'main' into litellm_add_azure_ai_pricing | 2024-07-13 21:50:26 -07:00
Krish Dholakia | d0fb685c56 | Merge pull request #4706 from BerriAI/litellm_retry_after: Return `retry-after` header for rate limited requests | 2024-07-13 21:37:41 -07:00
Krrish Dholakia | de8230ed41 | fix(proxy_server.py): fix returning response headers on exception | 2024-07-13 19:11:30 -07:00
Ishaan Jaff | 4d7d6504b6 | Merge pull request #4704 from BerriAI/litellm_debug_mem: [Debug-Utils] Add some useful memory usage debugging utils | 2024-07-13 18:44:40 -07:00
Ishaan Jaff | ed5114c680 | Merge pull request #4703 from BerriAI/litellm_only_use_internal_use_cache: [Fix Memory Usage] - only use per request tracking if slack alerting is being used | 2024-07-13 18:40:22 -07:00
Ishaan Jaff | 31783196c0 | feat - return size of in memory cache | 2024-07-13 18:22:44 -07:00
Ishaan Jaff | 759e02bdaa | debug mem issues show growth | 2024-07-13 18:05:19 -07:00
Ishaan Jaff | 69f74c1e6c | fix only use per request tracking if slack alerting is being used | 2024-07-13 18:01:53 -07:00
Krrish Dholakia | fde434be66 | feat(proxy_server.py): return 'retry-after' param for rate limited requests (Closes https://github.com/BerriAI/litellm/issues/4695) | 2024-07-13 17:15:20 -07:00
Krrish Dholakia | bc9fe23ebf | fix: cleanup | 2024-07-13 16:36:04 -07:00
Krrish Dholakia | b1be355d42 | build(model_prices_and_context_window.json): add azure ai jamba instruct pricing + token details (Adds jamba instruct, mistral, llama3 pricing + token info for azure_ai) | 2024-07-13 16:34:31 -07:00