bf81065ac6 | Ishaan Jaff | 2024-06-04 16:30:25 -07:00
    fix - by default log raw curl command on langfuse

7432c6a4d9 | Krrish Dholakia | 2024-06-04 10:36:25 -07:00
    fix(utils.py): fix cost calculation for openai-compatible streaming object

8a0b4f5bef | Krrish Dholakia | 2024-06-04 08:29:30 -07:00
    fix(utils.py): add coverage for azure img gen content policy violation error

ae52e7559e | Krrish Dholakia | 2024-06-03 14:19:53 -07:00
    fix(router.py): use litellm.request_timeout as default for router clients

6ee073928b | Ishaan Jaff | 2024-06-03 13:42:06 -07:00
    Merge pull request #3983 from BerriAI/litellm_log_request_boddy_langfuse
    [Feat] Log Raw Request from LiteLLM on Langfuse - when `"log_raw_request": true`

7f824e5705 | Ishaan Jaff | 2024-06-03 07:53:52 -07:00
    feat - log raw_request to langfuse / other logging providers

aa99012397 | Krrish Dholakia | 2024-06-03 07:45:44 -07:00
    fix(utils.py): handle else block for get optional params

594daef07a | Krrish Dholakia | 2024-06-01 19:31:52 -07:00
    fix(utils.py): correctly instrument passing through api version in optional param check

23087295e1 | Krrish Dholakia | 2024-06-01 18:44:50 -07:00
    fix(azure.py): support dropping 'tool_choice=required' for older azure API versions
    Closes https://github.com/BerriAI/litellm/issues/3876

f2ca86b0e7 | Krish Dholakia | 2024-05-31 21:42:37 -07:00
    Merge pull request #3944 from BerriAI/litellm_fix_parallel_streaming
    fix: fix streaming with httpx client

ecbb3c54c3 | Krrish Dholakia | 2024-05-31 21:37:51 -07:00
    fix(utils.py): support get_max_tokens() call with same model_name as completion
    Closes https://github.com/BerriAI/litellm/issues/3921

3896e3e88f | Krrish Dholakia | 2024-05-31 10:55:18 -07:00
    fix: fix streaming with httpx client
    prevent overwriting streams in parallel streaming calls

f1fe41db74 | lj | 2024-05-31 11:35:42 +08:00
    Merge branch 'main' into fix-pydantic-warnings-again

73e3dba2f6 | Krish Dholakia | 2024-05-30 17:30:42 -07:00
    Merge pull request #3928 from BerriAI/litellm_audio_speech_endpoint
    feat(main.py): support openai tts endpoint

6b4153ff03 | Krrish Dholakia | 2024-05-30 16:57:11 -07:00
    fix(main.py): add logging to audio_transcription calls

ddb998fac1 | KX | 2024-05-31 01:47:56 +08:00
    fix: add missing seed parameter to ollama input
    Current ollama interfacing does not allow for seed, which is supported in
    https://github.com/ollama/ollama/blob/main/docs/api.md#parameters and
    https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values
    This resolves that by adding in handling of seed parameter.

8aebad9d25 | Nir Gazit | 2024-05-30 04:06:45 +03:00
    Revert "Revert "fix: Log errors in Traceloop Integration (reverts previous revert)""

06ae6cad8d | Krish Dholakia | 2024-05-29 16:30:09 -07:00
    Revert "fix: Log errors in Traceloop Integration (reverts previous revert)"

5063f0eab8 | Krish Dholakia | 2024-05-29 08:54:01 -07:00
    Merge pull request #3846 from nirga/revert-3831-revert-3780-traceloop-failures
    fix: Log errors in Traceloop Integration (reverts previous revert)

000f23d005 | Ishaan Jaff | 2024-05-27 09:27:56 -07:00
    Merge branch 'main' into litellm_show_openai_params_model_hub

23542fc1d2 | Krrish Dholakia | 2024-05-27 09:16:56 -07:00
    fix(utils.py): support deepinfra optional params
    Fixes https://github.com/BerriAI/litellm/issues/3855

50f1cbb1dd | Ishaan Jaff | 2024-05-27 09:00:12 -07:00
    fix - return supported_openai_params from get_model_info

8e9a3fef81 | Krrish Dholakia | 2024-05-26 14:07:35 -07:00
    feat(proxy_server.py): expose new /model_group/info endpoint
    returns model-group level info on supported params, max tokens, pricing, etc.

5509e9f531 | Nir Gazit | 2024-05-26 12:01:10 +03:00
    Revert "Revert "Log errors in Traceloop Integration""

af82336cad | Ishaan Jaff | 2024-05-25 17:09:22 -07:00
    Merge pull request #3824 from BerriAI/litellm_include_litellm_exception-in-error
    [Feature]: Attach litellm exception in error string

b0afacf7e3 | Krrish Dholakia | 2024-05-25 13:02:03 -07:00
    fix(proxy_server.py): fix model check for /v1/models endpoint when team has restricted access

b16c58d521 | Ishaan Jaff | 2024-05-24 21:25:17 -07:00
    Revert "Log errors in Traceloop Integration"

40791ee1f8 | Krish Dholakia | 2024-05-24 19:13:17 -07:00
    Merge pull request #3828 from BerriAI/litellm_outage_alerting
    fix(slack_alerting.py): support region based outage alerting

4536ed6f6e | Krrish Dholakia | 2024-05-24 19:10:33 -07:00
    feat(slack_alerting.py): refactor region outage alerting to do model based alerting instead
    Unable to extract azure region from api base, makes sense to start with model alerting and then move to region

7368406c24 | Krrish Dholakia | 2024-05-24 16:59:16 -07:00
    fix(slack_alerting.py): support region based outage alerting

0f195d6b94 | Krish Dholakia | 2024-05-24 14:23:26 -07:00
    Merge pull request #3780 from nirga/traceloop-failures
    Log errors in Traceloop Integration

43c30a4489 | Nir Gazit | 2024-05-24 22:05:31 +03:00
    fix(traceloop): log errors

2b85d0faf9 | Ishaan Jaff | 2024-05-24 10:45:37 -07:00
    feat - include litellm exception type when raising exception

baa53d94f0 | Krish Dholakia | 2024-05-24 10:05:08 -07:00
    Merge pull request #3812 from afbarbaro/main
    Fix issue with delta being None when Deferred / Async Content Filter is enabled on Azure OpenAI

8dd4838d96 | Andres Barbaro | 2024-05-23 22:42:42 -05:00
    Fix issue with delta being None when Deferred / Async Content Filter is enabled on Azure OpenAI

c50074a0b7 | Krrish Dholakia | 2024-05-23 20:28:54 -07:00
    feat(ui/model_dashboard.tsx): add databricks models via admin ui

edb349a9ab | Krish Dholakia | 2024-05-23 19:23:19 -07:00
    Merge pull request #3808 from BerriAI/litellm_databricks_api
    feat(databricks.py): adds databricks support - completion, async, streaming

e3c5e004c5 | Krrish Dholakia | 2024-05-23 18:22:03 -07:00
    feat(databricks.py): add embedding model support

143a44823a | Krrish Dholakia | 2024-05-23 16:29:46 -07:00
    feat(databricks.py): adds databricks support - completion, async, streaming
    Closes https://github.com/BerriAI/litellm/issues/2160

bff4227f6a | Ishaan Jaff | 2024-05-23 16:27:08 -07:00
    feat - add prixing for vertex_ai image gen

10e1b43751 | Krish Dholakia | 2024-05-21 20:42:21 -07:00
    Merge branch 'main' into litellm_filter_invalid_params

c989b92801 | Krrish Dholakia | 2024-05-21 17:24:51 -07:00
    feat(router.py): Fixes https://github.com/BerriAI/litellm/issues/3769

413be6d805 | Krrish Dholakia | 2024-05-21 14:31:54 -07:00
    fix(utils.py): filter out hf eos token
    Closes https://github.com/BerriAI/litellm/issues/3757

fe0e600062 | alisalim17 | 2024-05-21 11:07:40 +04:00
    Revert "Revert "Logfire Integration""
    This reverts commit b04a8d878a.

db77e41833 | Krish Dholakia | 2024-05-20 17:48:21 -07:00
    Merge pull request #3740 from BerriAI/litellm_return_rejected_response
    feat(proxy_server.py): allow admin to return rejected response as string to user

7e6c9274fc | Ishaan Jaff | 2024-05-20 16:39:41 -07:00
    Merge branch 'main' into litellm_standardize_slack_exception_msg_format

dc55a57d8a | Ishaan Jaff | 2024-05-20 16:34:12 -07:00
    Merge pull request #3716 from BerriAI/litellm_set_cooldown_time_based_on_exception_header
    [Feat] Router/ Proxy - set cooldown_time based on Azure exception headers

233828e16f | Ishaan Jaff | 2024-05-20 16:26:11 -07:00
    fix - standardize slack alerting format

28d1bde250 | Ishaan Jaff | 2024-05-20 14:14:43 -07:00
    Merge pull request #3739 from BerriAI/litellm_add_imagen_support
    [FEAT] Async VertexAI Image Generation

883a9eb69a | Ishaan Jaff | 2024-05-20 13:28:20 -07:00
    add parameter mapping with vertex ai