a2da2a8f16 | Krrish Dholakia | 2024-06-06 15:32:51 -07:00
feat(aws_secret_manager.py): allows user to keep a hash of the proxy master key in their env
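
The idea behind this change: instead of keeping the plaintext master key in the environment, store only its hash and compare hashes at auth time. A minimal sketch of that pattern, with a hypothetical env var name (not necessarily litellm's actual setting):

```python
import hashlib
import os
import secrets

# Illustrative only: the env var name here is hypothetical, not litellm's.
# It would hold e.g. hashlib.sha256(b"sk-1234").hexdigest()
stored_hash = os.environ["MASTER_KEY_SHA256"]

def is_valid_master_key(presented_key: str) -> bool:
    presented_hash = hashlib.sha256(presented_key.encode()).hexdigest()
    # constant-time comparison to avoid timing side channels
    return secrets.compare_digest(presented_hash, stored_hash)
```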

6cca5612d2 | Krrish Dholakia | 2024-06-06 13:47:43 -07:00
refactor: replace 'traceback.print_exc()' with logging library
allows error logs to be emitted in JSON format for OTEL logging
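
For context, `logger.exception()` captures the same traceback as `traceback.print_exc()` but routes it through logging handlers and formatters, which is what makes JSON output possible. A minimal sketch:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def risky_call():
    raise ValueError("boom")

try:
    risky_call()
except Exception:
    # logger.exception records the traceback through the logging pipeline,
    # so a JSON formatter or OTEL exporter can serialize it;
    # traceback.print_exc() writes straight to stderr and bypasses all of that.
    logger.exception("risky_call failed")
```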

1e8429bb20 | Ishaan Jaff | 2024-06-06 10:38:15 -07:00
feat - redact messages from slack alerting

38b44c301a | Raymond1415926 | 2024-06-06 10:12:20 -07:00
Merge branch 'BerriAI:main' into main

a7dcf25722 | Krrish Dholakia | 2024-06-06 09:46:51 -07:00
feat(router.py): enable setting 'order' for a deployment in model list
Allows the user to control which model gets called first in a model group
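
A sketch of how a deployment-level `order` might be set; the placement inside `litellm_params` is an assumption here:

```python
from litellm import Router

# Two deployments in one model group; 'order' (assumed to live in
# litellm_params) makes the router try the order=1 deployment first.
router = Router(
    model_list=[
        {
            "model_name": "gpt-4",
            "litellm_params": {"model": "openai/gpt-4", "order": 1},
        },
        {
            "model_name": "gpt-4",
            "litellm_params": {"model": "azure/gpt-4-deployment", "order": 2},
        },
    ]
)
```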

f4c49755a0 | Raymond Huang | 2024-06-05 23:40:55 -07:00
fix token counter bug
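
For reference, the counter in question is exposed as `litellm.token_counter`; a typical call looks roughly like this:

```python
import litellm

# Estimate the token count for a model/messages pair
n_tokens = litellm.token_counter(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
)
print(n_tokens)
```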

ea67803747 | Sha Ahammed Roze | 2024-06-06 10:02:15 +05:30
Merge branch 'BerriAI:main' into main

a76a9b7d11 | Krrish Dholakia | 2024-06-05 21:20:36 -07:00
feat(bedrock_httpx.py): add support for bedrock converse api
closes https://github.com/BerriAI/litellm/issues/4000
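
Calling a Bedrock model through litellm looks roughly like this; the model id is illustrative, and AWS credentials are assumed to be configured:

```python
import litellm

# Sketch: a Bedrock chat model routed through litellm's bedrock support
response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Hello from the Converse API"}],
)
print(response.choices[0].message.content)
```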

4d2337ec72 | Ishaan Jaff | 2024-06-05 13:35:31 -07:00
Merge branch 'main' into patch-1

0a4abfdd1d | Sha Ahammed Roze | 2024-06-05 21:56:41 +05:30
Merge branch 'BerriAI:main' into main

b360ab4c89 | Krrish Dholakia | 2024-06-05 09:03:10 -07:00
fix(azure.py): support dynamic drop params

162f9400d2 | Krrish Dholakia | 2024-06-05 08:44:04 -07:00
feat(utils.py): support dynamically setting 'drop_params'
Allows the user to turn this on/off for individual calls by passing it in as a completion arg
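
A sketch of the per-call form: the global `litellm.drop_params` stays off, and only this request drops params the provider does not support (model and params below are illustrative):

```python
import litellm

response = litellm.completion(
    model="cohere/command-r",
    messages=[{"role": "user", "content": "hi"}],
    response_format={"type": "json_object"},  # dropped if the provider lacks it
    drop_params=True,  # applies to this call only
)
```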

faa4dfe03e | sha-ahammed | 2024-06-05 16:48:38 +05:30
feat: Add Ollama as a provider in the proxy UI

6b57352400 | Ishaan Jaff | 2024-06-04 22:11:11 -07:00
fix VertexAIException APIError

d6ba50319c | Ishaan Jaff | 2024-06-04 21:34:49 -07:00
fix langfuse log metadata

c544ba3654 | Krish Dholakia | 2024-06-04 21:00:22 -07:00
Merge pull request #4009 from BerriAI/litellm_fix_streaming_cost_cal
fix(utils.py): fix cost calculation for openai-compatible streaming object

d6f4233441 | Krish Dholakia | 2024-06-04 20:59:39 -07:00
Merge pull request #4015 from BerriAI/litellm_stream_options_fix_2
feat(utils.py): Support `stream_options` param across all providers

1a95660495 | Krrish Dholakia | 2024-06-04 20:06:23 -07:00
fix(test_completion.py): fix predibase test to be mock + fix optional param mapping for predibase

43af5575c8 | Krrish Dholakia | 2024-06-04 19:41:20 -07:00
fix(utils.py): fix

54dacfdf61 | Krrish Dholakia | 2024-06-04 19:03:26 -07:00
feat(utils.py): support 'stream_options' param across all providers
Closes https://github.com/BerriAI/litellm/issues/3553
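
`stream_options` follows the OpenAI spec; `include_usage` requests a final chunk carrying token usage for the whole stream. Roughly:

```python
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    stream=True,
    # the last chunk of the stream will carry a usage object
    stream_options={"include_usage": True},
)
for chunk in response:
    print(chunk)
```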

34f31a1994 | Krrish Dholakia | 2024-06-04 18:27:03 -07:00
fix(utils.py): add coverage for text openai and databricks

9aa29854de | Krrish Dholakia | 2024-06-04 18:17:45 -07:00
fix(utils.py): fix stream options to return consistent response object

3b823c7587 | Ishaan Jaff | 2024-06-04 16:30:25 -07:00
fix - by default log raw curl command on langfuse

52a2f5150c | Krrish Dholakia | 2024-06-04 10:36:25 -07:00
fix(utils.py): fix cost calculation for openai-compatible streaming object
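
For reference, cost calculation is exposed via `litellm.completion_cost`; the fix above makes response objects assembled from streams feed the same path. A non-streaming sketch:

```python
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
)
# Maps the response's model + token usage to a USD figure
cost = litellm.completion_cost(completion_response=response)
print(f"${cost:.6f}")
```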

7b474ec267 | Krrish Dholakia | 2024-06-04 08:29:30 -07:00
fix(utils.py): add coverage for azure img gen content policy violation error

1de5235ba0 | Krrish Dholakia | 2024-06-03 14:19:53 -07:00
fix(router.py): use litellm.request_timeout as default for router clients
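
Presumably this means the module-level default is picked up whenever no per-deployment timeout is set, something like:

```python
import litellm
from litellm import Router

# Router clients fall back to this when a deployment sets no timeout
litellm.request_timeout = 120  # seconds

router = Router(
    model_list=[
        {"model_name": "gpt-4", "litellm_params": {"model": "openai/gpt-4"}},
    ]
)
```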

dd7d0a2895 | Ishaan Jaff | 2024-06-03 13:42:06 -07:00
Merge pull request #3983 from BerriAI/litellm_log_request_boddy_langfuse
[Feat] Log Raw Request from LiteLLM on Langfuse - when `"log_raw_request": true`
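
The flag name comes from the PR title; where exactly it is passed (call metadata vs. a litellm/proxy setting) is an assumption in this sketch:

```python
import litellm

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hi"}],
    # Assumed placement: ask the Langfuse integration to log the raw request
    metadata={"log_raw_request": True},
)
```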

857ceb40bc | Ishaan Jaff | 2024-06-03 07:53:52 -07:00
feat - log raw_request to langfuse / other logging providers

ea30359b38 | Krrish Dholakia | 2024-06-03 07:45:44 -07:00
fix(utils.py): handle else block for get optional params

9ef83126d7 | Krrish Dholakia | 2024-06-01 19:31:52 -07:00
fix(utils.py): correctly instrument passing through api version in optional param check

7efac4d36c | Krrish Dholakia | 2024-06-01 18:44:50 -07:00
fix(azure.py): support dropping 'tool_choice=required' for older azure API versions
Closes https://github.com/BerriAI/litellm/issues/3876
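
A sketch of the failure mode this targets: `tool_choice="required"` sent to an older Azure api_version. With `drop_params` on, litellm strips the param instead of erroring. Deployment name and api_version below are illustrative:

```python
import litellm

response = litellm.completion(
    model="azure/my-gpt4-deployment",        # hypothetical deployment name
    api_version="2023-07-01-preview",        # older version, no tool_choice=required
    messages=[{"role": "user", "content": "What's the weather?"}],
    tools=[{
        "type": "function",
        "function": {"name": "get_weather",
                     "parameters": {"type": "object", "properties": {}}},
    }],
    tool_choice="required",  # dropped rather than rejected
    drop_params=True,
)
```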

e7ff3adc26 | Krish Dholakia | 2024-05-31 21:42:37 -07:00
Merge pull request #3944 from BerriAI/litellm_fix_parallel_streaming
fix: fix streaming with httpx client

7523f803d2 | Krrish Dholakia | 2024-05-31 21:37:51 -07:00
fix(utils.py): support get_max_tokens() call with same model_name as completion
Closes https://github.com/BerriAI/litellm/issues/3921
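
That is, the same model string used for `completion()` now resolves in `get_max_tokens()`:

```python
import litellm

# Same model name as a completion() call would use
print(litellm.get_max_tokens("gpt-3.5-turbo"))  # e.g. 4096
```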

6a004b9211 | Yulong Liu | 2024-05-31 18:55:22 -07:00
add documentation

93c3635b64 | Krrish Dholakia | 2024-05-31 10:55:18 -07:00
fix: fix streaming with httpx client
prevent overwriting streams in parallel streaming calls

27ed72405b | lj | 2024-05-31 11:35:42 +08:00
Merge branch 'main' into fix-pydantic-warnings-again

d3a247bf20 | Krish Dholakia | 2024-05-30 17:30:42 -07:00
Merge pull request #3928 from BerriAI/litellm_audio_speech_endpoint
feat(main.py): support openai tts endpoint
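
The new endpoint mirrors OpenAI's audio.speech API; a rough sketch, assuming the helper is exposed as `litellm.speech`:

```python
import litellm

response = litellm.speech(
    model="openai/tts-1",
    voice="alloy",
    input="Hello from the litellm speech endpoint",
)
# Write the returned audio bytes to disk
response.stream_to_file("speech.mp3")
```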

d65b7fe01b | Krrish Dholakia | 2024-05-30 16:57:11 -07:00
fix(main.py): add logging to audio_transcription calls

d3921a3d28 | KX | 2024-05-31 01:47:56 +08:00
fix: add missing seed parameter to ollama input
The current Ollama interfacing does not allow for seed, which is supported per https://github.com/ollama/ollama/blob/main/docs/api.md#parameters and https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values. This resolves that by adding handling of the seed parameter.
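
With that in place, a seed can be forwarded to a local Ollama model; the model name and default local server are assumptions:

```python
import litellm

response = litellm.completion(
    model="ollama/llama3",  # assumes a local Ollama server on the default port
    messages=[{"role": "user", "content": "Tell me a joke"}],
    seed=42,  # same seed + same prompt => reproducible output
)
```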

b8d97c688c | Nir Gazit | 2024-05-30 04:06:45 +03:00
Revert "Revert "fix: Log errors in Traceloop Integration (reverts previous revert)""

77cc9cded9 | Krish Dholakia | 2024-05-29 16:30:09 -07:00
Revert "fix: Log errors in Traceloop Integration (reverts previous revert)"

c76deb8f76 | Krish Dholakia | 2024-05-29 08:54:01 -07:00
Merge pull request #3846 from nirga/revert-3831-revert-3780-traceloop-failures
fix: Log errors in Traceloop Integration (reverts previous revert)

75222d7d4b | Ishaan Jaff | 2024-05-27 09:27:56 -07:00
Merge branch 'main' into litellm_show_openai_params_model_hub

f0f853b941 | Krrish Dholakia | 2024-05-27 09:16:56 -07:00
fix(utils.py): support deepinfra optional params
Fixes https://github.com/BerriAI/litellm/issues/3855

245990597e | Ishaan Jaff | 2024-05-27 09:00:12 -07:00
fix - return supported_openai_params from get_model_info
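
So a lookup along these lines now includes the supported params list:

```python
import litellm

info = litellm.get_model_info(model="gpt-4")
# Per this change, the returned info carries the OpenAI params the model supports
print(info["supported_openai_params"])
```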

22b6b99b34 | Krrish Dholakia | 2024-05-26 14:07:35 -07:00
feat(proxy_server.py): expose new /model_group/info endpoint
returns model-group level info on supported params, max tokens, pricing, etc.
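
Hitting the new endpoint against a locally running proxy might look like this; the URL and key are illustrative:

```python
import requests

resp = requests.get(
    "http://0.0.0.0:4000/model_group/info",          # assumed local proxy address
    headers={"Authorization": "Bearer sk-1234"},     # illustrative proxy key
)
print(resp.json())  # per-model-group supported params, max tokens, pricing, ...
```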

7602c6f436 | Nir Gazit | 2024-05-26 12:01:10 +03:00
Revert "Revert "Log errors in Traceloop Integration""

0ae6b337a3 | Ishaan Jaff | 2024-05-25 17:09:22 -07:00
Merge pull request #3824 from BerriAI/litellm_include_litellm_exception-in-error
[Feature]: Attach litellm exception in error string

25a2f00db6 | Krrish Dholakia | 2024-05-25 13:02:03 -07:00
fix(proxy_server.py): fix model check for /v1/models endpoint when team has restricted access

0083776a14 | Ishaan Jaff | 2024-05-24 21:25:17 -07:00
Revert "Log errors in Traceloop Integration"