Krrish Dholakia | 52a2f5150c | fix(utils.py): fix cost calculation for openai-compatible streaming object | 2024-06-04 10:36:25 -07:00
Krrish Dholakia | 5d3674b63d | fix(main.py): fix typing for image gen response | 2024-06-04 08:29:30 -07:00
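For context on the image-generation response type touched in 5d3674b63d, here is a minimal sketch of calling litellm's image generation API. The model name and the OpenAI-style response fields are illustrative assumptions, not details taken from this log.

```python
# Hypothetical usage sketch: litellm.image_generation returns an OpenAI-style
# ImageResponse whose .data entries carry the generated image URL or b64 payload.
import litellm

response = litellm.image_generation(
    model="dall-e-3",  # assumed model name; any supported image model works
    prompt="a watercolor painting of a lighthouse at dusk",
)
print(response.data[0].url)  # assumes OpenAI-style response objects
```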
Krrish Dholakia | 90f5aa7125 | fix(main.py): fix ahealth_check to infer mode when custom_llm_provider/model_name used | 2024-06-03 14:06:36 -07:00
Krrish Dholakia | 9ef83126d7 | fix(utils.py): correctly instrument passing through api version in optional param check | 2024-06-01 19:31:52 -07:00
Krrish Dholakia | 93c9ea160d | fix(openai.py): fix client caching logic | 2024-06-01 16:45:56 -07:00
Krrish Dholakia | a16a1c407a | fix(http_handler.py): allow setting ca bundle path | 2024-06-01 14:48:53 -07:00
Krrish Dholakia | a0fb301b18 | docs(assistants.md): add assistants api to docs | 2024-06-01 10:30:07 -07:00
Krish Dholakia | 8375e9621c | Merge pull request #3954 from BerriAI/litellm_simple_request_prioritization | 2024-05-31 23:29:09 -07:00
    feat(scheduler.py): add request prioritization scheduler
Krrish Dholakia | e49325b234 | fix(router.py): fix cooldown logic for usage-based-routing-v2 pre-call-checks | 2024-05-31 21:32:01 -07:00
Krrish Dholakia | 93c3635b64 | fix: fix streaming with httpx client | 2024-05-31 10:55:18 -07:00
    prevent overwriting streams in parallel streaming calls
Krrish Dholakia | d65b7fe01b | fix(main.py): add logging to audio_transcription calls | 2024-05-30 16:57:11 -07:00
Krrish Dholakia | 93166cdabf | fix(openai.py): fix openai response for /audio/speech endpoint | 2024-05-30 16:41:06 -07:00
Krrish Dholakia | a67cbf47f6 | feat(main.py): support openai tts endpoint | 2024-05-30 14:28:28 -07:00
    Closes https://github.com/BerriAI/litellm/issues/3094
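A minimal sketch of how the OpenAI TTS support added in a67cbf47f6 might be exercised. The model string, voice name, and the stream_to_file helper on the returned object are assumptions based on OpenAI SDK conventions, not details stated in this log.

```python
# Hypothetical sketch: assumes litellm.speech mirrors the OpenAI audio.speech API
# and returns an OpenAI-style binary response with a stream_to_file helper.
import litellm

response = litellm.speech(
    model="openai/tts-1",  # assumed model string
    voice="alloy",         # assumed voice name
    input="Hello from the litellm text-to-speech endpoint.",
)
response.stream_to_file("speech.mp3")  # assumed helper, mirrors the OpenAI SDK
```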
Krrish Dholakia | da56201e80 | fix(main.py): pass api key and api base to openai.py for audio transcription call | 2024-05-29 21:29:01 -07:00
Giri Tatavarty | 51b9178630 | #Fixed mypy errors. The requests package and stubs need to be imported - waiting to hear from Ishaan/Krrish before changing requirements.txt | 2024-05-29 15:08:56 -07:00
Ishaan Jaff | 99e506525c | Revert "Added support for Triton chat completion using trtlllm generate endpo…" | 2024-05-29 13:42:49 -07:00
Ishaan Jaff | e8c1e87ac9 | Merge pull request #3895 from giritatavarty-8451/litellm_triton_chatcompletion_support | 2024-05-29 12:50:31 -07:00
    Added support for Triton chat completion using trtlllm generate endpo…
Krrish Dholakia | f168e35629 | build(config.yml): add pillow to ci/cd | 2024-05-28 21:39:09 -07:00
Krrish Dholakia | 20106715d5 | feat(proxy_server.py): enable batch completion fastest response calls on proxy | 2024-05-28 20:09:31 -07:00
    introduces new `fastest_response` flag for enabling the call
Giri Tatavarty | a58dc68418 | Added support for Triton chat completion using trtlllm generate endpoint and custom infer endpoint | 2024-05-28 07:54:11 -07:00
Krrish Dholakia | 6b50e656b8 | fix(main.py): pass extra headers through for async calls | 2024-05-27 19:11:40 -07:00
Krrish Dholakia | d2e14ca833 | fix(bedrock_httpx.py): fix bedrock ptu model id str encoding | 2024-05-25 10:54:01 -07:00
    Fixes https://github.com/BerriAI/litellm/issues/3805
Krish Dholakia | d25ed9c4d3 | Merge pull request #3828 from BerriAI/litellm_outage_alerting | 2024-05-24 19:13:17 -07:00
    fix(slack_alerting.py): support region based outage alerting
Krrish Dholakia | 8dec87425e | feat(slack_alerting.py): refactor region outage alerting to do model based alerting instead | 2024-05-24 19:10:33 -07:00
    Unable to extract azure region from api base, makes sense to start with model alerting and then move to region
Krrish Dholakia | f8350b9461 | fix(slack_alerting.py): support region based outage alerting | 2024-05-24 16:59:16 -07:00
Ishaan Jaff | 466accd4f5 | Merge pull request #3462 from ffreemt/main | 2024-05-24 09:19:10 -07:00
    Add return_exceptions to batch_completion (retry)
ffreemt | 86d46308bf | Make return-exceptions as default behavior in litellm.batch_completion | 2024-05-24 11:09:11 +08:00
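A minimal sketch of the return-exceptions behavior referenced in #3462 / 86d46308bf: failed requests come back as exception objects inside the result list instead of aborting the whole batch. The model string and message shapes below are illustrative assumptions.

```python
# Hypothetical sketch: litellm.batch_completion fires one completion per message
# list; with exceptions returned in-line, one bad request no longer sinks the batch.
import litellm

results = litellm.batch_completion(
    model="gpt-3.5-turbo",  # assumed model
    messages=[
        [{"role": "user", "content": "Summarize httpx in one sentence."}],
        [{"role": "user", "content": "x" * 10_000_000}],  # likely to fail (context length)
    ],
)

for result in results:
    if isinstance(result, Exception):
        print("request failed:", result)
    else:
        print(result.choices[0].message.content)
```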
Krrish Dholakia | 43353c28b3 | feat(databricks.py): add embedding model support | 2024-05-23 18:22:03 -07:00
Krrish Dholakia | d2229dcd21 | feat(databricks.py): adds databricks support - completion, async, streaming | 2024-05-23 16:29:46 -07:00
    Closes https://github.com/BerriAI/litellm/issues/2160
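A sketch of how the Databricks provider added in d2229dcd21 / 43353c28b3 might be called. The environment variable names, workspace URL shape, and model strings are assumptions; check the litellm docs for the exact values.

```python
# Hypothetical sketch of the databricks provider: chat completion plus embeddings.
import os
import litellm

os.environ["DATABRICKS_API_KEY"] = "dapi-..."  # assumed env var name
os.environ["DATABRICKS_API_BASE"] = (
    "https://<workspace>.cloud.databricks.com/serving-endpoints"  # assumed base URL shape
)

chat = litellm.completion(
    model="databricks/databricks-dbrx-instruct",  # assumed model string
    messages=[{"role": "user", "content": "Hello from litellm"}],
)

emb = litellm.embedding(
    model="databricks/databricks-bge-large-en",  # assumed model string
    input=["litellm supports databricks embeddings"],
)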
Krrish Dholakia | f3d29a6b4a | feat(anthropic.py): support anthropic 'tool_choice' param | 2024-05-21 17:50:44 -07:00
    Closes https://github.com/BerriAI/litellm/issues/3752
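A sketch of what the 'tool_choice' support in f3d29a6b4a enables when calling an Anthropic model through litellm's OpenAI-style interface. The tool definition and the Claude model name are illustrative assumptions.

```python
# Hypothetical sketch: force the model to call a specific tool by passing an
# OpenAI-format tool_choice, which litellm translates for the Anthropic API.
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = litellm.completion(
    model="claude-3-opus-20240229",  # assumed Anthropic model name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)
print(response.choices[0].message.tool_calls)
```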
Ishaan Jaff | 2519879e67 | add ImageObject | 2024-05-20 10:45:37 -07:00
Ishaan Jaff | 24951d44a4 | feat - working httpx requests vertex ai image gen | 2024-05-20 09:51:15 -07:00
Krrish Dholakia | 5d24a72b7e | fix(bedrock_httpx.py): support mapping for bedrock cohere command r text | 2024-05-17 16:13:49 -07:00
Krrish Dholakia | 0258351c61 | fix(main.py): fix async stream handling during bedrock error | 2024-05-16 23:37:59 -07:00
Krrish Dholakia | 92c2e2af6a | fix(bedrock_httpx.py): add async support for bedrock amazon, meta, mistral models | 2024-05-16 22:39:25 -07:00
Krrish Dholakia | 0293f7766a | fix(bedrock_httpx.py): move bedrock ai21 calls to being async | 2024-05-16 22:21:30 -07:00
Krrish Dholakia | 180bc46ca4 | fix(bedrock_httpx.py): move anthropic bedrock calls to httpx | 2024-05-16 21:51:55 -07:00
    Fixing https://github.com/BerriAI/litellm/issues/2921
Krrish Dholakia | 709373b15c | fix(replicate.py): move replicate calls to being completely async | 2024-05-16 17:24:08 -07:00
    Closes https://github.com/BerriAI/litellm/issues/3128
Ishaan Jaff | 97324800ec | Merge pull request #3694 from BerriAI/litellm_allow_setting_anthropic_beta | 2024-05-16 15:48:26 -07:00
    [Feat] Support Anthropic `tools-2024-05-16` - Set Custom Anthropic Custom Headers
Ishaan Jaff | 1fc9bcb184 | feat use OpenAI extra_headers param | 2024-05-16 14:38:17 -07:00
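A minimal sketch of the extra_headers pass-through referenced in #3694 / 1fc9bcb184, used here to opt into an Anthropic beta header. The header value comes from the PR title above; the model name is an assumption.

```python
# Hypothetical sketch: extra_headers is forwarded with the provider request,
# here opting an Anthropic call into the tools-2024-05-16 beta.
import litellm

response = litellm.completion(
    model="claude-3-sonnet-20240229",  # assumed Anthropic model name
    messages=[{"role": "user", "content": "ping"}],
    extra_headers={"anthropic-beta": "tools-2024-05-16"},
)
print(response.choices[0].message.content)
```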
Krrish Dholakia | f43da3597d | test: fix test | 2024-05-15 08:51:40 -07:00
Krrish Dholakia | 1840919ebd | fix(main.py): testing fix | 2024-05-15 08:23:00 -07:00
Edwin Jose George | 81836ebe5d | fix: custom_llm_provider needs to be set before setting timeout | 2024-05-15 22:36:15 +09:30
Krrish Dholakia | b06f989871 | refactor(main.py): trigger new build | 2024-05-14 22:46:44 -07:00
Krrish Dholakia | 3b5c06747d | refactor(main.py): trigger new build | 2024-05-14 22:17:40 -07:00
Krrish Dholakia | 0262c480be | refactor(main.py): trigger new build | 2024-05-14 19:52:23 -07:00
Krrish Dholakia | 298fd9b25c | fix(main.py): ignore model_config param | 2024-05-14 19:03:17 -07:00
Krrish Dholakia | 724d880a45 | test(test_completion.py): handle async watsonx call fail | 2024-05-13 18:40:51 -07:00
Krrish Dholakia | d4123951d9 | test: handle watsonx rate limit error | 2024-05-13 18:27:39 -07:00
Krrish Dholakia | 3694b5e7c0 | refactor(main.py): trigger new build | 2024-05-13 18:12:01 -07:00