Commit graph

3089 commits

Author           SHA1        Date                        Message
Ishaan Jaff      74c4e3def8  2024-07-30 12:35:46 -07:00  return ProxyException code as str
Ishaan Jaff      563d59a305  2024-07-30 09:46:30 -07:00  test batches endpoint on proxy
Ishaan Jaff      6f34998cab  2024-07-29 21:20:27 -07:00  ui new build
Ishaan Jaff      1a34756159  2024-07-29 17:08:53 -07:00  Merge pull request #4916 from BerriAI/litellm_fix_ui_login (Feat UI - allow using custom header for litellm api key)
Ishaan Jaff      0c25aaf9df  2024-07-29 17:03:04 -07:00  check litellm header in login on ui
Ishaan Jaff      f25ed92ee2  2024-07-29 16:59:15 -07:00  better debugging for custom headers
Krrish Dholakia  66dbd938e8  2024-07-29 12:01:54 -07:00  fix(exceptions.py): use correct status code for content policy exceptions (Fixes https://github.com/BerriAI/litellm/issues/4941#issuecomment-2256578732)
Ishaan Jaff      a2939c2f08  2024-07-29 08:58:40 -07:00  Merge pull request #4939 from BerriAI/litellm_log_transcription_resp_langfuse ([Feat-Proxy] - Langfuse log /audio/transcription on langfuse)
Ishaan Jaff      95f063f978  2024-07-29 08:03:08 -07:00  fix default input/output values for /audio/trancription logging
Ishaan Jaff      b2fcf65653  2024-07-29 08:00:28 -07:00  log file_size_in_mb in metadata
Krrish Dholakia  92b539b42a  2024-07-29 07:51:44 -07:00  fix(auth_checks.py): handle writing team object to redis caching correctly
Ishaan Jaff      9b69e500e5  2024-07-27 20:06:09 -07:00  Merge pull request #4927 from BerriAI/litellm_set_max_request_response_size_ui (Feat Enterprise - set max request / response size UI)
Ishaan Jaff      10e70f842d  2024-07-27 17:03:56 -07:00  Merge pull request #4928 from BerriAI/litellm_check_response_size ([Feat Enterprise] - check max response size)
Ishaan Jaff      b2f745f0e2  2024-07-27 17:02:12 -07:00  Merge pull request #4926 from BerriAI/litellm_check_max_request_size (Proxy Enterprise - security - check max request size)
Ishaan Jaff      3511aadf99  2024-07-27 17:00:39 -07:00  allow setting max request / response size on admin UI
Ishaan Jaff      f633f7d92d  2024-07-27 16:54:31 -07:00  set max_response_size_mb
Ishaan Jaff      b2f72338f6  2024-07-27 16:53:39 -07:00  feat check check_response_size_is_safe
Ishaan Jaff      41ca6fd52a  2024-07-27 16:53:00 -07:00  feat - check max response size
Ishaan Jaff      4ab8d2229d  2024-07-27 16:08:41 -07:00  security - check max request size
Ishaan Jaff      2e9fb5ca1f  2024-07-27 16:07:56 -07:00  Merge pull request #4924 from BerriAI/litellm_log_writing_spend_to_db_otel ([Feat] - log writing BatchSpendUpdate events on OTEL)
Ishaan Jaff      19fb5cc11c  2024-07-27 11:40:39 -07:00  use common helpers for writing to otel
Ishaan Jaff      d5d9ed73af  2024-07-27 11:14:06 -07:00  use _get_parent_otel_span_from_kwargs
Ishaan Jaff      61c10e60a4  2024-07-27 11:08:22 -07:00  feat - use log_to_opentelemetry for _PROXY_track_cost_callback
Krrish Dholakia  2719860c46  2024-07-27 10:37:18 -07:00  build(model_prices_and_context_window.json): add mistral-large on vertex ai pricing
Ishaan Jaff      1adf71b9b7  2024-07-27 09:50:03 -07:00  feat - clearly show version litellm enterprise
Ishaan Jaff      6f428a16fa  2024-07-27 09:45:58 -07:00  fix update public key
Krish Dholakia   9bdcef238b  2024-07-26 22:17:11 -07:00  Merge pull request #4907 from BerriAI/litellm_proxy_get_secret (fix(proxy_server.py): fix get secret for environment_variables)
Krish Dholakia   f9c2fec1a6  2024-07-26 22:16:58 -07:00  Merge pull request #4918 from BerriAI/litellm_ollama_tool_calling (feat(ollama_chat.py): support ollama tool calling)
Krrish Dholakia  77fe8f57cf  2024-07-26 22:12:52 -07:00  docs(ollama.md): add ollama tool calling to docs
Krrish Dholakia  b25d4a8cb3  2024-07-26 21:51:54 -07:00  feat(ollama_chat.py): support ollama tool calling (Closes https://github.com/BerriAI/litellm/issues/4812)
Ishaan Jaff      2501b4eccd  2024-07-26 21:34:42 -07:00  feat link to model cost map on swagger
Ishaan Jaff      548adea8cf  2024-07-26 21:04:31 -07:00  add litellm_header_name endpoint
Ishaan Jaff      a7f964b869  2024-07-26 20:25:28 -07:00  Merge pull request #4913 from BerriAI/litellm_fix_error_limit ([Proxy-Fix] - raise more descriptive errors when crossing tpm / rpm limits on keys, user, global limits)
Ishaan Jaff      3c463ccbe6  2024-07-26 20:12:03 -07:00  Merge pull request #4914 from BerriAI/litellm_fix_batches ([Proxy-Fix + Test] - /batches endpoint)
Krrish Dholakia  fe0b55f2ca  2024-07-26 19:04:08 -07:00  fix(utils.py): fix cache hits for streaming (Fixes https://github.com/BerriAI/litellm/issues/4109)
Ishaan Jaff      f627fa9b40  2024-07-26 18:23:15 -07:00  fix for GET /v1/batches{batch_id:path}
Ishaan Jaff      56ce7e892d  2024-07-26 18:08:54 -07:00  fix batches inserting metadata
Ishaan Jaff      159a880dcc  2024-07-26 18:06:00 -07:00  fix /v1/batches POST
Ishaan Jaff      c4e4b4675c  2024-07-26 17:35:08 -07:00  fix raise better error when crossing tpm / rpm limits
Krrish Dholakia  9943c6d607  2024-07-26 13:33:02 -07:00  fix(proxy_server.py): fix get secret for environment_variables
Krrish Dholakia  84482703b8  2024-07-26 08:59:53 -07:00  docs(config.md): update wildcard docs
Krrish Dholakia  1d6c39a607  2024-07-26 08:38:51 -07:00  feat(proxy_server.py): handle pydantic mockselvar error (Fixes https://github.com/BerriAI/litellm/issues/4898#issuecomment-2252105485)
Krrish Dholakia  2f773d9cb6  2024-07-25 22:12:07 -07:00  fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking
Ishaan Jaff      079a41fbe1  2024-07-25 20:15:37 -07:00  Merge branch 'main' into litellm_proxy_support_all_providers
Ishaan Jaff      68e94f0976  2024-07-25 19:48:54 -07:00  example mistral sdk
Ishaan Jaff      693bcfac39  2024-07-25 19:32:49 -07:00  fix using pass_through_all_models
Krish Dholakia   c2086300b7  2024-07-25 19:31:52 -07:00  Merge branch 'main' into litellm_redis_team_object
Krish Dholakia   a306b83b2d  2024-07-25 19:05:29 -07:00  Merge pull request #4887 from BerriAI/litellm_custom_llm (feat(custom_llm.py): Support Custom LLM Handlers)
Ishaan Jaff      9863520376  2024-07-25 18:48:56 -07:00  support using */*
Ishaan Jaff      8f4c5437b8  2024-07-25 18:34:12 -07:00  router support setting pass_through_all_models