Krrish Dholakia | 1f56ea6015 | build: trigger new build | 2024-04-04 10:23:13 -07:00
Krrish Dholakia | b73cd05674 | build: trigger new build | 2024-04-04 10:20:25 -07:00
Krrish Dholakia | 6f64eccafe | refactor(main.py): trigger new build | 2024-04-03 08:01:26 -07:00
Krrish Dholakia | bc1ee5c838 | fix(main.py): support async calls from azure_text | 2024-04-03 07:59:32 -07:00
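A minimal sketch of the async azure_text call this fix enables, using litellm's `atext_completion`; the deployment name, endpoint, and key below are placeholders:

```python
import asyncio
import litellm

async def main():
    # "azure_text/" routes through litellm's Azure text-completion handler.
    response = await litellm.atext_completion(
        model="azure_text/my-gpt-35-instruct-deployment",  # placeholder deployment
        prompt="Say hello.",
        api_base="https://my-resource.openai.azure.com",   # placeholder endpoint
        api_key="AZURE_API_KEY",                           # placeholder key
    )
    print(response.choices[0].text)

asyncio.run(main())
```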
Krrish Dholakia | ed46af19ec | fix(openai.py): return logprobs for text completion calls | 2024-04-02 14:05:56 -07:00
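A short sketch of requesting logprobs on a text completion call, assuming the OpenAI-style `logprobs` parameter is passed through:

```python
import litellm

# Ask for the top-3 token logprobs alongside the completion text.
response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="The capital of France is",
    max_tokens=5,
    logprobs=3,
)
print(response.choices[0].text)
print(response.choices[0].logprobs)  # per-token logprobs returned by this fix
```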
Krrish Dholakia | fba2ae61d3 | fix(main.py): fix elif block | 2024-04-02 09:47:49 -07:00
Krish Dholakia | 221cac0ac2 | Merge pull request #2790 from phact/patch-2: Fix max_tokens type in main.py | 2024-04-02 09:02:34 -07:00
Krrish Dholakia | 812dc7e3cc | refactor(main.py): trigger new build | 2024-04-02 08:51:18 -07:00
Krrish Dholakia | 67f62aa53e | fix(main.py): support text completion input being a list of strings (addresses https://github.com/BerriAI/litellm/issues/2792, https://github.com/BerriAI/litellm/issues/2777) | 2024-04-02 08:50:16 -07:00
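A minimal sketch of the list-of-strings prompt input this fix adds to `litellm.text_completion`, assuming the OpenAI-style behavior of one choice per prompt:

```python
import litellm

# A single call with several prompts; each prompt gets its own choice.
response = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt=["Say hello.", "Say goodbye."],
    max_tokens=5,
)
for choice in response.choices:
    print(choice.index, choice.text)
```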
Sebastián Estévez | 5c4823923e | Fix max_tokens type in main.py | 2024-04-02 00:28:08 -04:00
Krrish Dholakia | 5546f9f10a | fix(main.py): support max retries for transcription calls | 2024-04-01 18:37:53 -07:00
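A sketch of a transcription call with retries, assuming the OpenAI-style `max_retries` parameter named in the commit; the audio file path is a placeholder:

```python
import litellm

# Retry the Whisper transcription on transient failures.
with open("sample_audio.mp3", "rb") as audio_file:  # placeholder file
    response = litellm.transcription(
        model="whisper-1",
        file=audio_file,
        max_retries=3,
    )
print(response.text)
```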
Krrish Dholakia | 5f3b7ba523 | refactor(main.py): trigger new build | 2024-04-01 18:03:46 -07:00
Krrish Dholakia | 82052689e7 | refactor(main.py): trigger new build | 2024-03-30 21:41:14 -07:00
Krrish Dholakia | 5c199e4e4e | fix(main.py): fix translation to text_completions format for async text completion calls | 2024-03-30 09:02:51 -07:00
Krrish Dholakia | fb72b79d2e | refactor(main.py): trigger new build | 2024-03-29 09:24:47 -07:00
Krrish Dholakia | 62ac3e1de4 | fix(sagemaker.py): support 'model_id' param for sagemaker (allow passing inference component param to sagemaker in the same format as we handle this for bedrock) | 2024-03-29 08:43:17 -07:00
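A sketch of passing a SageMaker inference component via `model_id`, mirroring the bedrock-style handling described above; endpoint and component names are placeholders:

```python
import litellm

# "model_id" selects the inference component behind a SageMaker endpoint.
response = litellm.completion(
    model="sagemaker/my-llama2-endpoint",   # placeholder endpoint name
    model_id="my-inference-component",      # placeholder inference component
    messages=[{"role": "user", "content": "Hello, how are you?"}],
)
print(response.choices[0].message.content)
```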
Krish Dholakia | 8f0b4457fe | Merge pull request #2720 from onukura/ollama-batch-embedding: Batch embedding for Ollama | 2024-03-28 14:58:55 -07:00
Krrish Dholakia | 4a2abfd659 | refactor(main.py): trigger new build | 2024-03-28 14:52:47 -07:00
onukura | 1bd60287ba | Add a feature to ollama aembedding to accept batch input | 2024-03-27 21:39:19 +00:00
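A minimal sketch of the batch input this adds to Ollama embeddings via `litellm.aembedding`, assuming a local Ollama server with an embedding model such as `nomic-embed-text` pulled:

```python
import asyncio
import litellm

async def main():
    # One aembedding call now accepts a list of inputs for Ollama.
    response = await litellm.aembedding(
        model="ollama/nomic-embed-text",   # placeholder local model
        input=["first document", "second document"],
    )
    print(len(response.data))  # one embedding per input string

asyncio.run(main())
```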
Krrish Dholakia | 71cb12b0f8 | refactor(main.py): trigger new build | 2024-03-26 21:18:51 -07:00
Krish Dholakia | 4d53b484cb | Merge pull request #2675 from onukura/ollama-embedding: Fix Ollama embedding | 2024-03-26 16:08:28 -07:00
onukura | 3423038601 | Fix ollama api_base to enable remote url | 2024-03-25 16:26:40 +00:00
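A sketch of pointing an Ollama embedding call at a remote host via `api_base`, which this fix enables; the host URL is a placeholder:

```python
import litellm

# api_base overrides the default localhost Ollama endpoint.
response = litellm.embedding(
    model="ollama/nomic-embed-text",
    input=["hello world"],
    api_base="http://ollama.internal.example.com:11434",  # placeholder remote host
)
print(len(response.data))
```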
Krrish Dholakia | 8821b3d243 | feat(main.py): support router.chat.completions.create (allows using router with instructor; https://github.com/BerriAI/litellm/issues/2673) | 2024-03-25 08:26:28 -07:00
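A minimal sketch of the OpenAI-client-style surface this adds to the Router, which is what lets libraries expecting an OpenAI client (e.g. instructor) wrap it; the model list is a placeholder:

```python
import litellm

router = litellm.Router(
    model_list=[
        {
            "model_name": "gpt-3.5-turbo",                 # alias callers use
            "litellm_params": {"model": "gpt-3.5-turbo"},  # underlying deployment
        }
    ]
)

# New OpenAI-client-compatible entry point on the router.
response = router.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```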
Krrish Dholakia | 292cdd81e4 | fix(router.py): fix pre call check logic | 2024-03-23 18:56:08 -07:00
Krrish Dholakia | e8fbe9a9a5 | fix(bedrock.py): support claude 3 function calling when stream=true (https://github.com/BerriAI/litellm/issues/2615) | 2024-03-21 18:39:03 -07:00
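A sketch of streaming a Claude 3 tool call through Bedrock, the combination this fix supports; the tool definition is a toy example:

```python
import litellm

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # toy tool for illustration
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# stream=True now works together with tools for Claude 3 on Bedrock.
stream = litellm.completion(
    model="bedrock/anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta
    if getattr(delta, "tool_calls", None):
        print(delta.tool_calls)
```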
Krrish Dholakia | a626f4abfb | refactor(main.py): trigger new build | 2024-03-21 10:56:44 -07:00
Krrish Dholakia | c9f20c8142 | refactor(main.py): trigger new build | 2024-03-19 21:05:53 -07:00
Krish Dholakia | 09269005db | Merge pull request #2142 from vilmar-hillow/azure_embedding_ad_token: Fixed azure ad token not being processed properly in embedding models | 2024-03-19 11:51:28 -07:00
Krish Dholakia | f522a5236b | Merge pull request #2561 from BerriAI/litellm_batch_writing_db: fix(proxy/utils.py): move to batch writing db updates | 2024-03-18 21:50:47 -07:00
Krrish Dholakia | 5a2b3c1b68 | refactor(main.py): trigger new build | 2024-03-18 21:27:32 -07:00
Krrish Dholakia | c692124d58 | docs(main.py): add timeout to docstring | 2024-03-18 21:23:46 -07:00
Bincheng Li | 794f90dbb7 | fix bug: custom prompt templates registered are never applied to vllm provider | 2024-03-17 15:21:14 +08:00
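A sketch of registering a custom prompt template that the vllm provider should now apply, assuming litellm's `register_prompt_template` helper and its pre_message/post_message role format; the model name and Llama-2-style markers are illustrative:

```python
import litellm

# Register a custom chat template for a model served via the vllm provider.
litellm.register_prompt_template(
    model="meta-llama/Llama-2-7b-chat-hf",  # placeholder model name
    initial_prompt_value="<s>",
    roles={
        "system": {"pre_message": "[INST] <<SYS>>\n", "post_message": "\n<</SYS>>\n"},
        "user": {"pre_message": "", "post_message": " [/INST]"},
        "assistant": {"pre_message": " ", "post_message": "</s>"},
    },
    final_prompt_value="",
)
```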
Krrish Dholakia | aed7f65ef2 | refactor(main.py): trigger new build | 2024-03-16 18:49:54 -07:00
Krish Dholakia | b0d530d029 | Merge pull request #2535 from BerriAI/litellm_fireworks_ai_support: feat(utils.py): add native fireworks ai support | 2024-03-15 10:02:53 -07:00
Krrish Dholakia | b56f9e148a | refactor(main.py): trigger new build | 2024-03-15 09:42:23 -07:00
Krrish Dholakia | 0783a3f247 | feat(utils.py): add native fireworks ai support (addresses https://github.com/BerriAI/litellm/issues/777, https://github.com/BerriAI/litellm/issues/2486) | 2024-03-15 09:09:59 -07:00
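A minimal sketch of calling a Fireworks AI model through the new native provider, assuming the `fireworks_ai/` model prefix and a `FIREWORKS_AI_API_KEY` environment variable; the model path is a placeholder:

```python
import os
import litellm

os.environ["FIREWORKS_AI_API_KEY"] = "fw-..."  # placeholder key

response = litellm.completion(
    model="fireworks_ai/accounts/fireworks/models/mixtral-8x7b-instruct",  # placeholder model path
    messages=[{"role": "user", "content": "Hello from Fireworks"}],
)
print(response.choices[0].message.content)
```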
Krrish Dholakia | ca5654a20f | refactor(main.py): trigger new build | 2024-03-14 13:01:18 -07:00
Krrish Dholakia | ec81664e38 | refactor(main.py): trigger new build | 2024-03-14 12:10:39 -07:00
Krrish Dholakia | d2f47ee45b | fix(parallel_request_limiter.py): handle metadata being none | 2024-03-14 10:02:41 -07:00
Krrish Dholakia | b6f7eb922f | docs(enterprise.md): add prompt injection detection to docs | 2024-03-13 12:37:32 -07:00
Krish Dholakia | ce3c865adb | Merge pull request #2472 from BerriAI/litellm_anthropic_streaming_tool_calling: fix(anthropic.py): support claude-3 streaming with function calling | 2024-03-12 21:36:01 -07:00
Dmitry Supranovich | f69ce1a6cf | Fixed azure ad token not being processed properly in embedding models | 2024-03-12 21:29:24 -04:00
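A sketch of an Azure embedding call authenticated with an AD token instead of an API key, assuming litellm's `azure_ad_token` parameter; deployment, endpoint, API version, and token are placeholders:

```python
import litellm

response = litellm.embedding(
    model="azure/my-embedding-deployment",             # placeholder deployment
    input=["hello world"],
    api_base="https://my-resource.openai.azure.com",   # placeholder endpoint
    api_version="2023-07-01-preview",                  # placeholder API version
    azure_ad_token="eyJ0eXAi...",                      # placeholder AD token
)
print(len(response.data))
```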
Krish Dholakia | bf0adfc246 | Merge pull request #2473 from BerriAI/litellm_fix_compatible_provider_model_name: fix(openai.py): return model name with custom llm provider for openai-compatible endpoints (e.g. mistral, together ai, etc.) | 2024-03-12 12:58:29 -07:00
Ishaan Jaff | 15591d0978 | Merge pull request #2474 from BerriAI/litellm_support_command_r: [New-Model] Cohere/command-r | 2024-03-12 11:11:56 -07:00
Krrish Dholakia | 4dd28e9646 | fix(main.py): trigger new build | 2024-03-12 11:07:14 -07:00
Krrish Dholakia | e94c4f818c | fix(openai.py): return model name with custom llm provider for openai compatible endpoints | 2024-03-12 10:30:10 -07:00
ishaan-jaff | f398d6e48f | (feat) cohere_chat provider | 2024-03-12 10:29:26 -07:00
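A minimal sketch of calling Command-R through the new cohere_chat provider, assuming the `cohere_chat/` model prefix and a `COHERE_API_KEY` environment variable:

```python
import os
import litellm

os.environ["COHERE_API_KEY"] = "co-..."  # placeholder key

response = litellm.completion(
    model="cohere_chat/command-r",
    messages=[{"role": "user", "content": "Summarize what a vector database does."}],
)
print(response.choices[0].message.content)
```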
Krrish Dholakia | 1c6438c267 | fix(anthropic.py): support streaming with function calling | 2024-03-12 09:52:11 -07:00
ishaan-jaff | c5ebbd1868 | (feat) support azure/gpt-instruct models | 2024-03-12 09:30:15 -07:00
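A sketch of calling an Azure gpt-instruct deployment, assuming it is addressed through the azure_text text-completion path shown earlier; deployment, endpoint, and key are placeholders:

```python
import litellm

response = litellm.text_completion(
    model="azure_text/my-gpt-35-instruct-deployment",  # placeholder deployment
    prompt="Complete this sentence: the quick brown fox",
    api_base="https://my-resource.openai.azure.com",   # placeholder endpoint
    api_key="AZURE_API_KEY",                           # placeholder key
    max_tokens=10,
)
print(response.choices[0].text)
```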
Krrish Dholakia | 4586ba554c | refactor(main.py): trigger new build | 2024-03-11 13:57:40 -07:00