Author | Commit | Message | Date
Krrish Dholakia | a2ca3887d1 | feat(aws_secret_manager.py): allows user to keep a hash of the proxy master key in their env | 2024-06-06 15:32:51 -07:00
Krrish Dholakia | e391e30285 | refactor: replace 'traceback.print_exc()' with logging library (allows error logs to be in json format for otel logging) | 2024-06-06 13:47:43 -07:00
Ishaan Jaff | 3df177d0d0 | feat - redact messages from slack alerting | 2024-06-06 10:38:15 -07:00
Raymond1415926 | f9368228c0 | Merge branch 'BerriAI:main' into main | 2024-06-06 10:12:20 -07:00
Krrish Dholakia | 005128addc | feat(router.py): enable setting 'order' for a deployment in model list (allows user to control which model gets called first in model group) | 2024-06-06 09:46:51 -07:00
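A rough sketch of the 'order' setting from 005128addc, assuming 'order' lives in a deployment's litellm_params (the commit message does not pin down the exact field placement); the deployment with the lowest order would be tried first within a model group. Model names are placeholders:

```python
# Hedged sketch of per-deployment 'order' (005128addc). Field placement inside
# litellm_params is an assumption; model names are placeholders.
from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "gpt-group",
            "litellm_params": {"model": "gpt-4o", "order": 1},        # tried first
        },
        {
            "model_name": "gpt-group",
            "litellm_params": {"model": "gpt-3.5-turbo", "order": 2},  # fallback
        },
    ]
)

response = router.completion(
    model="gpt-group",
    messages=[{"role": "user", "content": "hello"}],
)
```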
Raymond Huang | fe8539923d | fix token counter bug | 2024-06-05 23:40:55 -07:00
Sha Ahammed Roze | 2c2315431a | Merge branch 'BerriAI:main' into main | 2024-06-06 10:02:15 +05:30
Krrish Dholakia | 96b556f385 | feat(bedrock_httpx.py): add support for bedrock converse api (closes https://github.com/BerriAI/litellm/issues/4000) | 2024-06-05 21:20:36 -07:00
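For 96b556f385, a minimal sketch of a Bedrock call that would be routed through the new Converse API support; the model ID and region are illustrative, not taken from the commit:

```python
# Illustrative Bedrock call (96b556f385); model ID and region are assumptions.
import litellm

response = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "Summarize the Converse API in one line."}],
    aws_region_name="us-east-1",  # needs AWS credentials configured in the environment
)
print(response.choices[0].message.content)
```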
Ishaan Jaff | d09353a5e7 | Merge branch 'main' into patch-1 | 2024-06-05 13:35:31 -07:00
Sha Ahammed Roze | 01e1a8e518 | Merge branch 'BerriAI:main' into main | 2024-06-05 21:56:41 +05:30
Krrish Dholakia | 129745da90 | fix(azure.py): support dynamic drop params | 2024-06-05 09:03:10 -07:00
Krrish Dholakia | 72a0fa7db5 | feat(utils.py): support dynamically setting 'drop_params' (allows user to turn this on/off for individual calls by passing it in as a completion arg) | 2024-06-05 08:44:04 -07:00
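A minimal sketch of the per-call 'drop_params' behavior from 72a0fa7db5: OpenAI-style params the target provider does not support are dropped for that request instead of raising. The model and extra param below are examples, not from the commit:

```python
# Per-call drop_params sketch (72a0fa7db5); model choice and extra param are illustrative.
import litellm

response = litellm.completion(
    model="anthropic/claude-3-haiku-20240307",
    messages=[{"role": "user", "content": "hi"}],
    frequency_penalty=0.2,  # a param this provider may not accept
    drop_params=True,       # drop unsupported params for this call only
)
```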
sha-ahammed | 93e7b9346c | feat: Add Ollama as a provider in the proxy UI | 2024-06-05 16:48:38 +05:30
Ishaan Jaff | 53e6286622 | fix VertexAIException APIError | 2024-06-04 22:11:11 -07:00
Ishaan Jaff | cb810965b5 | fix langfuse log metadata | 2024-06-04 21:34:49 -07:00
Krish Dholakia | e678dce88b | Merge pull request #4009 from BerriAI/litellm_fix_streaming_cost_cal: fix(utils.py): fix cost calculation for openai-compatible streaming object | 2024-06-04 21:00:22 -07:00
Krish Dholakia | 56b5f4bab8 | Merge pull request #4015 from BerriAI/litellm_stream_options_fix_2: feat(utils.py): Support `stream_options` param across all providers | 2024-06-04 20:59:39 -07:00
Krrish Dholakia | 3dcf287826 | fix(test_completion.py): fix predibase test to be mock + fix optional param mapping for predibase | 2024-06-04 20:06:23 -07:00
Krrish Dholakia | 1336957077 | fix(utils.py): fix | 2024-06-04 19:41:20 -07:00
Krrish Dholakia | e279498970 | feat(utils.py): support 'stream_options' param across all providers (closes https://github.com/BerriAI/litellm/issues/3553) | 2024-06-04 19:03:26 -07:00
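A short sketch of the 'stream_options' pass-through from e279498970, using the OpenAI-style include_usage flag; the model is a placeholder:

```python
# stream_options sketch (e279498970): ask for token usage on the final streamed chunk.
import litellm

stream = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Count to three."}],
    stream=True,
    stream_options={"include_usage": True},
)

for chunk in stream:
    print(chunk)
```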
Krrish Dholakia | 5b9808270b | fix(utils.py): add coverage for text openai and databricks | 2024-06-04 18:27:03 -07:00
Krrish Dholakia | d74ccc6c84 | fix(utils.py): fix stream options to return consistent response object | 2024-06-04 18:17:45 -07:00
Ishaan Jaff | bf81065ac6 | fix - by default log raw curl command on langfuse | 2024-06-04 16:30:25 -07:00
Krrish Dholakia | 7432c6a4d9 | fix(utils.py): fix cost calculation for openai-compatible streaming object | 2024-06-04 10:36:25 -07:00
Krrish Dholakia | 8a0b4f5bef | fix(utils.py): add coverage for azure img gen content policy violation error | 2024-06-04 08:29:30 -07:00
Krrish Dholakia | ae52e7559e | fix(router.py): use litellm.request_timeout as default for router clients | 2024-06-03 14:19:53 -07:00
Ishaan Jaff | 6ee073928b | Merge pull request #3983 from BerriAI/litellm_log_request_boddy_langfuse: [Feat] Log Raw Request from LiteLLM on Langfuse - when `"log_raw_request": true` | 2024-06-03 13:42:06 -07:00
Ishaan Jaff | 7f824e5705 | feat - log raw_request to langfuse / other logging providers | 2024-06-03 07:53:52 -07:00
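For #3983 / 7f824e5705, a hedged sketch of raw-request logging to Langfuse. The commits confirm the `"log_raw_request": true` flag and Langfuse as a destination; passing the flag via request metadata is an assumption here, as is the model name:

```python
# Raw-request logging sketch (#3983 / 7f824e5705). Passing "log_raw_request" via
# metadata is an assumption; only the flag name comes from the commit history.
import litellm

litellm.success_callback = ["langfuse"]  # requires LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY

response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
    metadata={"log_raw_request": True},  # assumed placement of the flag
)
```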
Krrish Dholakia | aa99012397 | fix(utils.py): handle else block for get optional params | 2024-06-03 07:45:44 -07:00
Krrish Dholakia | 594daef07a | fix(utils.py): correctly instrument passing through api version in optional param check | 2024-06-01 19:31:52 -07:00
Krrish Dholakia | 23087295e1 | fix(azure.py): support dropping 'tool_choice=required' for older azure API versions (closes https://github.com/BerriAI/litellm/issues/3876) | 2024-06-01 18:44:50 -07:00
Krish Dholakia | f2ca86b0e7 | Merge pull request #3944 from BerriAI/litellm_fix_parallel_streaming: fix: fix streaming with httpx client | 2024-05-31 21:42:37 -07:00
Krrish Dholakia | ecbb3c54c3 | fix(utils.py): support get_max_tokens() call with same model_name as completion (closes https://github.com/BerriAI/litellm/issues/3921) | 2024-05-31 21:37:51 -07:00
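For ecbb3c54c3, a small sketch of looking up a model's token limit with the same model name string that is passed to completion(); the model name is a placeholder:

```python
# get_max_tokens() sketch (ecbb3c54c3); the model name is a placeholder.
import litellm

model = "gpt-3.5-turbo"
print(litellm.get_max_tokens(model))  # token limit resolved from the same name used below

response = litellm.completion(
    model=model,
    messages=[{"role": "user", "content": "hi"}],
)
```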
Yulong Liu | 4177cf7a59 | add document | 2024-05-31 18:55:22 -07:00
Krrish Dholakia | 3896e3e88f | fix: fix streaming with httpx client (prevent overwriting streams in parallel streaming calls) | 2024-05-31 10:55:18 -07:00
lj | f1fe41db74 | Merge branch 'main' into fix-pydantic-warnings-again | 2024-05-31 11:35:42 +08:00
Krish Dholakia | 73e3dba2f6 | Merge pull request #3928 from BerriAI/litellm_audio_speech_endpoint: feat(main.py): support openai tts endpoint | 2024-05-30 17:30:42 -07:00
Krrish Dholakia | 6b4153ff03 | fix(main.py): add logging to audio_transcription calls | 2024-05-30 16:57:11 -07:00
KX | ddb998fac1 | fix: add missing seed parameter to ollama input (Ollama supports seed per https://github.com/ollama/ollama/blob/main/docs/api.md#parameters and https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values; this adds handling for the seed parameter) | 2024-05-31 01:47:56 +08:00
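For ddb998fac1, a sketch of passing a reproducibility seed through to an Ollama-served model; the model name is illustrative and a local Ollama server is assumed:

```python
# Ollama seed sketch (ddb998fac1); assumes a local Ollama server with this model pulled.
import litellm

response = litellm.completion(
    model="ollama/llama3",
    messages=[{"role": "user", "content": "Pick a random number."}],
    seed=1234,  # forwarded to Ollama's 'seed' option after this fix
)
```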
Nir Gazit | 8aebad9d25 | Revert "Revert "fix: Log errors in Traceloop Integration (reverts previous revert)"" | 2024-05-30 04:06:45 +03:00
Krish Dholakia | 06ae6cad8d | Revert "fix: Log errors in Traceloop Integration (reverts previous revert)" | 2024-05-29 16:30:09 -07:00
Krish Dholakia | 5063f0eab8 | Merge pull request #3846 from nirga/revert-3831-revert-3780-traceloop-failures: fix: Log errors in Traceloop Integration (reverts previous revert) | 2024-05-29 08:54:01 -07:00
Ishaan Jaff | 000f23d005 | Merge branch 'main' into litellm_show_openai_params_model_hub | 2024-05-27 09:27:56 -07:00
Krrish Dholakia | 23542fc1d2 | fix(utils.py): support deepinfra optional params (fixes https://github.com/BerriAI/litellm/issues/3855) | 2024-05-27 09:16:56 -07:00
Ishaan Jaff | 50f1cbb1dd | fix - return supported_openai_params from get_model_info | 2024-05-27 09:00:12 -07:00
Krrish Dholakia | 8e9a3fef81 | feat(proxy_server.py): expose new /model_group/info endpoint (returns model-group level info on supported params, max tokens, pricing, etc.) | 2024-05-26 14:07:35 -07:00
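Combining 50f1cbb1dd and 8e9a3fef81, a hedged sketch of inspecting per-model metadata locally and querying the proxy's /model_group/info endpoint; the proxy URL, master key, and model name are placeholders:

```python
# Sketch for 50f1cbb1dd / 8e9a3fef81; proxy URL, key, and model name are placeholders.
import litellm
import requests

info = litellm.get_model_info("gpt-3.5-turbo")
print(info.get("supported_openai_params"))  # surfaced by 50f1cbb1dd

resp = requests.get(
    "http://localhost:4000/model_group/info",     # default local proxy address
    headers={"Authorization": "Bearer sk-1234"},  # placeholder master key
)
print(resp.json())
```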
Nir Gazit | 5509e9f531 | Revert "Revert "Log errors in Traceloop Integration"" | 2024-05-26 12:01:10 +03:00
Ishaan Jaff | af82336cad | Merge pull request #3824 from BerriAI/litellm_include_litellm_exception-in-error: [Feature]: Attach litellm exception in error string | 2024-05-25 17:09:22 -07:00
Krrish Dholakia | b0afacf7e3 | fix(proxy_server.py): fix model check for /v1/models endpoint when team has restricted access | 2024-05-25 13:02:03 -07:00
Ishaan Jaff | b16c58d521 | Revert "Log errors in Traceloop Integration" | 2024-05-24 21:25:17 -07:00