Krrish Dholakia | 62b8749fe8 | build: trigger new build | 2024-04-04 10:23:13 -07:00
Krrish Dholakia | 7f3a8d2a5e | build: trigger new build | 2024-04-04 10:20:25 -07:00
Krrish Dholakia | a26732e710 | refactor(main.py): trigger new build | 2024-04-03 08:01:26 -07:00
Krrish Dholakia | 88e8f14b69 | fix(main.py): support async calls from azure_text | 2024-04-03 07:59:32 -07:00
Krrish Dholakia | 1d341970ba | feat(vertex_ai_anthropic.py): add claude 3 on vertex ai support - working .completions() call | 2024-04-02 22:07:39 -07:00
Krrish Dholakia | b07788d2a5 | fix(openai.py): return logprobs for text completion calls | 2024-04-02 14:05:56 -07:00
Krrish Dholakia | 72abe200a9 | fix(main.py): fix elif block | 2024-04-02 09:47:49 -07:00
Krish Dholakia | d95a1f3a28 | Merge pull request #2790 from phact/patch-2: Fix max_tokens type in main.py | 2024-04-02 09:02:34 -07:00
Krrish Dholakia | 2fc7aede12 | refactor(main.py): trigger new build | 2024-04-02 08:51:18 -07:00
Krrish Dholakia | 0d949d71ab | fix(main.py): support text completion input being a list of strings (addresses https://github.com/BerriAI/litellm/issues/2792, https://github.com/BerriAI/litellm/issues/2777) | 2024-04-02 08:50:16 -07:00
Sebastián Estévez | d3a86e7b7f | Fix max_tokens type in main.py | 2024-04-02 00:28:08 -04:00
Krrish Dholakia | ceabf726b0 | fix(main.py): support max retries for transcription calls | 2024-04-01 18:37:53 -07:00
Krrish Dholakia | ca54b62656 | refactor(main.py): trigger new build | 2024-04-01 18:03:46 -07:00
Krrish Dholakia | f5d920e314 | refactor(main.py): trigger new build | 2024-03-30 21:41:14 -07:00
Krrish Dholakia | c0204310ee | fix(main.py): fix translation to text_completions format for async text completion calls | 2024-03-30 09:02:51 -07:00
Krrish Dholakia | 63271846c2 | refactor(main.py): trigger new build | 2024-03-29 09:24:47 -07:00
Krrish Dholakia | d547944556 | fix(sagemaker.py): support 'model_id' param for sagemaker (allow passing the inference component param to sagemaker in the same format as for bedrock) | 2024-03-29 08:43:17 -07:00
Krish Dholakia | 28905c85b6 | Merge pull request #2720 from onukura/ollama-batch-embedding: Batch embedding for Ollama | 2024-03-28 14:58:55 -07:00
Krrish Dholakia | 664663f301 | refactor(main.py): trigger new build | 2024-03-28 14:52:47 -07:00
onukura | f86472518d | Add a feature to ollama aembedding to accept batch input | 2024-03-27 21:39:19 +00:00
Krrish Dholakia | 9375b131ee | refactor(main.py): trigger new build | 2024-03-26 21:18:51 -07:00
Krish Dholakia | 7eb2c7942c | Merge pull request #2675 from onukura/ollama-embedding: Fix Ollama embedding | 2024-03-26 16:08:28 -07:00
onukura | ef69eefcdb | Fix ollama api_base to enable remote url | 2024-03-25 16:26:40 +00:00
Krrish Dholakia | f98aead602 | feat(main.py): support router.chat.completions.create (allows using router with instructor; https://github.com/BerriAI/litellm/issues/2673) | 2024-03-25 08:26:28 -07:00
Krrish Dholakia | b7321ae4ee | fix(router.py): fix pre call check logic | 2024-03-23 18:56:08 -07:00
Krrish Dholakia | 94f55aa6d9 | fix(bedrock.py): support claude 3 function calling when stream=true (https://github.com/BerriAI/litellm/issues/2615) | 2024-03-21 18:39:03 -07:00
Krrish Dholakia | af27a61d76 | refactor(main.py): trigger new build | 2024-03-21 10:56:44 -07:00
Krrish Dholakia | d6624bf6c3 | refactor(main.py): trigger new build | 2024-03-19 21:05:53 -07:00
Krish Dholakia | c840fecdeb | Merge pull request #2142 from vilmar-hillow/azure_embedding_ad_token: Fixed azure ad token not being processed properly in embedding models | 2024-03-19 11:51:28 -07:00
Krish Dholakia | c4dbd0407e | Merge pull request #2561 from BerriAI/litellm_batch_writing_db: fix(proxy/utils.py): move to batch writing db updates | 2024-03-18 21:50:47 -07:00
Krrish Dholakia | 2827acc487 | refactor(main.py): trigger new build | 2024-03-18 21:27:32 -07:00
Krrish Dholakia | 693b5eb376 | docs(main.py): add timeout to docstring | 2024-03-18 21:23:46 -07:00
Bincheng Li | e605b04927 | fix bug: custom prompt templates registered are never applied to vllm provider | 2024-03-17 15:21:14 +08:00
Krrish Dholakia | c69ae8efce | refactor(main.py): trigger new build | 2024-03-16 18:49:54 -07:00
Krish Dholakia | 32ca306123 | Merge pull request #2535 from BerriAI/litellm_fireworks_ai_support: feat(utils.py): add native fireworks ai support | 2024-03-15 10:02:53 -07:00
Krrish Dholakia | 860b06d273 | refactor(main.py): trigger new build | 2024-03-15 09:42:23 -07:00
Krrish Dholakia | 9909f44015 | feat(utils.py): add native fireworks ai support (addresses https://github.com/BerriAI/litellm/issues/777, https://github.com/BerriAI/litellm/issues/2486) | 2024-03-15 09:09:59 -07:00
Krrish Dholakia | 0b6cf3d5cf | refactor(main.py): trigger new build | 2024-03-14 13:01:18 -07:00
Krrish Dholakia | bdd2004691 | refactor(main.py): trigger new build | 2024-03-14 12:10:39 -07:00
Krrish Dholakia | 7876aa2d75 | fix(parallel_request_limiter.py): handle metadata being none | 2024-03-14 10:02:41 -07:00
Krrish Dholakia | 16e3aaced5 | docs(enterprise.md): add prompt injection detection to docs | 2024-03-13 12:37:32 -07:00
Krish Dholakia | 9f2d540ebf | Merge pull request #2472 from BerriAI/litellm_anthropic_streaming_tool_calling: fix(anthropic.py): support claude-3 streaming with function calling | 2024-03-12 21:36:01 -07:00
Dmitry Supranovich | 57ebb9582e | Fixed azure ad token not being processed properly in embedding models | 2024-03-12 21:29:24 -04:00
Krish Dholakia | 0d18f3c0ca | Merge pull request #2473 from BerriAI/litellm_fix_compatible_provider_model_name: fix(openai.py): return model name with custom llm provider for openai-compatible endpoints (e.g. mistral, together ai, etc.) | 2024-03-12 12:58:29 -07:00
Ishaan Jaff | 5172fb1de9 | Merge pull request #2474 from BerriAI/litellm_support_command_r: [New-Model] Cohere/command-r | 2024-03-12 11:11:56 -07:00
Krrish Dholakia | d2286fb93c | fix(main.py): trigger new build | 2024-03-12 11:07:14 -07:00
Krrish Dholakia | 0033613b9e | fix(openai.py): return model name with custom llm provider for openai compatible endpoints | 2024-03-12 10:30:10 -07:00
ishaan-jaff | 7635c764cf | (feat) cohere_chat provider | 2024-03-12 10:29:26 -07:00
Krrish Dholakia | 86ed0aaba8 | fix(anthropic.py): support streaming with function calling | 2024-03-12 09:52:11 -07:00
ishaan-jaff | b193b01f40 | (feat) support azure/gpt-instruct models | 2024-03-12 09:30:15 -07:00