Ishaan Jaff | 1985d6ce0e | Merge pull request #4939 from BerriAI/litellm_log_transcription_resp_langfuse ("[Feat-Proxy] - Langfuse log /audio/transcription on langfuse") | 2024-07-29 08:58:40 -07:00
Ishaan Jaff | 4c427a3793 | fix default input/output values for /audio/trancription logging | 2024-07-29 08:03:08 -07:00
Ishaan Jaff | cc0e790863 | log file_size_in_mb in metadata | 2024-07-29 08:00:28 -07:00
Krrish Dholakia | 80c3759719 | fix(auth_checks.py): handle writing team object to redis caching correctly | 2024-07-29 07:51:44 -07:00
Ishaan Jaff | 096844c258 | Merge pull request #4927 from BerriAI/litellm_set_max_request_response_size_ui ("Feat Enterprise - set max request / response size UI") | 2024-07-27 20:06:09 -07:00
Ishaan Jaff | 64bc224d63 | Merge pull request #4928 from BerriAI/litellm_check_response_size ("[Feat Enterprise] - check max response size") | 2024-07-27 17:03:56 -07:00
Ishaan Jaff | 003108a074 | Merge pull request #4926 from BerriAI/litellm_check_max_request_size ("Proxy Enterprise - security - check max request size") | 2024-07-27 17:02:12 -07:00
Ishaan Jaff | b5451eaf21 | allow setting max request / response size on admin UI | 2024-07-27 17:00:39 -07:00
Ishaan Jaff | 5cc97f3c5d | set max_response_size_mb | 2024-07-27 16:54:31 -07:00
Ishaan Jaff | 805d04f7f3 | feat check check_response_size_is_safe | 2024-07-27 16:53:39 -07:00
Ishaan Jaff | 5f07afa268 | feat - check max response size | 2024-07-27 16:53:00 -07:00
Ishaan Jaff | a18f5bd5c8 | security - check max request size | 2024-07-27 16:08:41 -07:00
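The max request/response size commits above amount to comparing a payload's byte count against a configured megabyte limit before processing it. A minimal sketch of that idea, with hypothetical names (`check_request_size_is_safe`, `max_request_size_mb` are illustrative, not litellm's actual helpers):

```python
def check_request_size_is_safe(content_length_bytes: int, max_request_size_mb: float) -> bool:
    """Return True when the request body fits under the configured limit.

    Hypothetical sketch of the size guard described in these commits;
    the real litellm helper names and signatures may differ.
    """
    max_bytes = max_request_size_mb * 1024 * 1024
    return content_length_bytes <= max_bytes
```

In practice a proxy would read the `Content-Length` header (or the buffered body size) and reject oversized requests with a 4xx error before forwarding them upstream.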
Ishaan Jaff | ee11aff6e2 | Merge pull request #4924 from BerriAI/litellm_log_writing_spend_to_db_otel ("[Feat] - log writing BatchSpendUpdate events on OTEL") | 2024-07-27 16:07:56 -07:00
Ishaan Jaff | aade38760d | use common helpers for writing to otel | 2024-07-27 11:40:39 -07:00
Ishaan Jaff | bb7fc3e426 | use _get_parent_otel_span_from_kwargs | 2024-07-27 11:14:06 -07:00
Ishaan Jaff | cde46a4a09 | feat - use log_to_opentelemetry for _PROXY_track_cost_callback | 2024-07-27 11:08:22 -07:00
Krrish Dholakia | 2c76524a19 | build(model_prices_and_context_window.json): add mistral-large on vertex ai pricing | 2024-07-27 10:37:18 -07:00
Ishaan Jaff | e3a66f2c62 | feat - clearly show version litellm enterprise | 2024-07-27 09:50:03 -07:00
Ishaan Jaff | a9561a1451 | fix update public key | 2024-07-27 09:45:58 -07:00
Krish Dholakia | fb80839e8c | Merge pull request #4907 from BerriAI/litellm_proxy_get_secret ("fix(proxy_server.py): fix get secret for environment_variables") | 2024-07-26 22:17:11 -07:00
Krish Dholakia | f011f48195 | Merge pull request #4918 from BerriAI/litellm_ollama_tool_calling ("feat(ollama_chat.py): support ollama tool calling") | 2024-07-26 22:16:58 -07:00
Krrish Dholakia | ce2cd73801 | docs(ollama.md): add ollama tool calling to docs | 2024-07-26 22:12:52 -07:00
Krrish Dholakia | 3a1eedfbf3 | feat(ollama_chat.py): support ollama tool calling (Closes https://github.com/BerriAI/litellm/issues/4812) | 2024-07-26 21:51:54 -07:00
Ishaan Jaff | 56cf8e2798 | feat link to model cost map on swagger | 2024-07-26 21:34:42 -07:00
Ishaan Jaff | d98dd53755 | add litellm_header_name endpoint | 2024-07-26 21:04:31 -07:00
Ishaan Jaff | 5ca8aa89e8 | Merge pull request #4913 from BerriAI/litellm_fix_error_limit ("[Proxy-Fix] - raise more descriptive errors when crossing tpm / rpm limits on keys, user, global limits") | 2024-07-26 20:25:28 -07:00
Ishaan Jaff | 9a1b454ccc | Merge pull request #4914 from BerriAI/litellm_fix_batches ("[Proxy-Fix + Test] - /batches endpoint") | 2024-07-26 20:12:03 -07:00
Krrish Dholakia | 1562cba823 | fix(utils.py): fix cache hits for streaming (Fixes https://github.com/BerriAI/litellm/issues/4109) | 2024-07-26 19:04:08 -07:00
Ishaan Jaff | 864f803ccf | fix for GET /v1/batches{batch_id:path} | 2024-07-26 18:23:15 -07:00
Ishaan Jaff | 46a441cfd1 | fix batches inserting metadata | 2024-07-26 18:08:54 -07:00
Ishaan Jaff | 2b889b83b3 | fix /v1/batches POST | 2024-07-26 18:06:00 -07:00
Ishaan Jaff | bda2ac1af5 | fix raise better error when crossing tpm / rpm limits | 2024-07-26 17:35:08 -07:00
Krrish Dholakia | 1a172b7636 | fix(proxy_server.py): fix get secret for environment_variables | 2024-07-26 13:33:02 -07:00
Krrish Dholakia | e39ff46222 | docs(config.md): update wildcard docs | 2024-07-26 08:59:53 -07:00
Krrish Dholakia | 9d87767639 | feat(proxy_server.py): handle pydantic mockselvar error (Fixes https://github.com/BerriAI/litellm/issues/4898#issuecomment-2252105485) | 2024-07-26 08:38:51 -07:00
Krrish Dholakia | d3ff21181c | fix(litellm_cost_calc/google.py): support meta llama vertex ai cost tracking | 2024-07-25 22:12:07 -07:00
Ishaan Jaff | 1103c614a0 | Merge branch 'main' into litellm_proxy_support_all_providers | 2024-07-25 20:15:37 -07:00
Ishaan Jaff | f452a6e053 | example mistral sdk | 2024-07-25 19:48:54 -07:00
Krrish Dholakia | 5f67958231 | feat(proxy_server.py): support custom llm handler on proxy | 2024-07-25 19:35:52 -07:00
Ishaan Jaff | e327c1a01f | feat - support health check audio_speech | 2024-07-25 19:35:48 -07:00
Krrish Dholakia | ca179789de | fix(proxy_server.py): check if input list > 0 before indexing into it (resolves 'list index out of range' error) | 2024-07-25 19:35:48 -07:00
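The `ca179789de` fix above is a length guard before indexing into a possibly empty input list. A minimal illustration of the pattern (the function name `first_input_or_default` is hypothetical, not from the litellm codebase):

```python
def first_input_or_default(input_list: list, default: str = "") -> str:
    """Return the first element, or a default for an empty list.

    Guarding on len() before indexing avoids the
    'list index out of range' IndexError the commit resolves.
    """
    if len(input_list) > 0:
        return input_list[0]
    return default
```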
Krrish Dholakia | dee2d7cea9 | fix(main.py): fix calling openai gpt-3.5-turbo-instruct via /completions (Fixes https://github.com/BerriAI/litellm/issues/749) | 2024-07-25 19:35:40 -07:00
Krrish Dholakia | 4071c52925 | fix(internal_user_endpoints.py): support updating budgets for /user/update | 2024-07-25 19:35:29 -07:00
Krrish Dholakia | 72387320af | feat(auth_check.py): support using redis cache for team objects (allows team update / check logic to work across instances instantly) | 2024-07-25 19:35:29 -07:00
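The `72387320af` commit keeps team objects in a shared redis cache so that an update made through one proxy instance is visible to auth checks on every other instance. A minimal write-through sketch of that idea, with an in-memory dict standing in for the redis client (class and key names are illustrative, not litellm's actual implementation):

```python
import json


class TeamObjectCache:
    """Write-through cache sketch for shared team objects.

    `store` plays the role of a shared redis client; because all
    instances read and write the same store, an update from one
    instance is immediately visible to the others.
    """

    def __init__(self, store):
        self.store = store  # dict-like stand-in for redis get/set

    def set_team(self, team_id: str, team_obj: dict) -> None:
        # Serialize so the store only holds strings, as redis would.
        self.store[f"team:{team_id}"] = json.dumps(team_obj)

    def get_team(self, team_id: str):
        raw = self.store.get(f"team:{team_id}")
        return json.loads(raw) if raw is not None else None
```

With a real redis client the same `get`/`set` calls apply, typically with a TTL so stale team objects expire.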
Ishaan Jaff | d589d8e4ac | fix using pass_through_all_models | 2024-07-25 19:32:49 -07:00
Krish Dholakia | 473308a6dd | Merge branch 'main' into litellm_redis_team_object | 2024-07-25 19:31:52 -07:00
Krish Dholakia | 9a42d312b5 | Merge pull request #4887 from BerriAI/litellm_custom_llm ("feat(custom_llm.py): Support Custom LLM Handlers") | 2024-07-25 19:05:29 -07:00
Ishaan Jaff | 422b4d7e0f | support using */* | 2024-07-25 18:48:56 -07:00
Ishaan Jaff | a46c463dee | router support setting pass_through_all_models | 2024-07-25 18:34:12 -07:00
Krrish Dholakia | 84ef8c11ff | feat(proxy_server.py): support custom llm handler on proxy | 2024-07-25 17:56:34 -07:00