Author | Commit | Message | Date
ishaan-jaff | 105dacb6fa | (chore) completion - move functions lower | 2023-12-26 14:35:59 +05:30
ishaan-jaff | 0428a5cc04 | (fix) optional params - openai/azure. don't overwrite it | 2023-12-26 14:32:59 +05:30
ishaan-jaff | 749860ec3b | (test) openai logprobs | 2023-12-26 14:00:42 +05:30
ishaan-jaff | c1b1d0d15d | (feat) support logprobs, top_logprobs openai | 2023-12-26 14:00:42 +05:30
Krrish Dholakia | 8f3732eeeb | docs(user_keys.md): docs on passing user keys to litellm proxy | 2023-12-26 13:55:28 +05:30
ishaan-jaff | 6f19117fb3 | (feat) add logprobs, top_logprobs to litellm.completion | 2023-12-26 13:39:48 +05:30
ishaan-jaff | eb6d8364be | (test) azure gpt-4 vision test | 2023-12-26 13:18:38 +05:30
Krrish Dholakia | a4633c57c4 | fix(router.py): accept dynamic api key | 2023-12-26 13:16:22 +05:30
ishaan-jaff | a8468e30f0 | (feat) proxy, use --model with --test | 2023-12-26 09:40:58 +05:30
ishaan-jaff | 2616a564a1 | (feat) add langfuse logging tests to ci/cd | 2023-12-26 09:16:13 +05:30
ishaan-jaff | 109f82efee | (fix) langfuse - asycn logger | 2023-12-26 08:49:49 +05:30
ishaan-jaff | cd4c9a6543 | (test) ollama cht | 2023-12-26 00:08:35 +05:30
ishaan-jaff | a2d963ce22 | bump: version 1.15.7 → 1.15.8 | 2023-12-26 00:07:57 +05:30
ishaan-jaff | 5db0fc7a5b | (test) ollama-chat | 2023-12-25 23:59:24 +05:30
ishaan-jaff | dbf46823f8 | (feat) ollama_chat add async stream | 2023-12-25 23:45:27 +05:30
ishaan-jaff | c199d4c1fc | (feat) ollama_chat - add async streaming | 2023-12-25 23:45:01 +05:30
ishaan-jaff | 0f4b5a1446 | (feat) add ollama_chat exception mapping | 2023-12-25 23:43:14 +05:30
ishaan-jaff | 35a68665d1 | (feat) ollama_chat - streaming | 2023-12-25 23:38:47 +05:30
ishaan-jaff | b985d996b2 | (feat) ollama_chat - add streaming support | 2023-12-25 23:38:01 +05:30
ishaan-jaff | 53ab7db8dd | (test) ollama chat | 2023-12-25 23:04:17 +05:30
ishaan-jaff | 763ba913ec | utils - convert ollama_chat params | 2023-12-25 23:04:17 +05:30
ishaan-jaff | 39ea228046 | (feat) ollama chat | 2023-12-25 23:04:17 +05:30
ishaan-jaff | c3aff30464 | (feat) add ollama_chat as a provider | 2023-12-25 23:04:17 +05:30
ishaan-jaff | 043d874ffe | (feat) ollama/chat | 2023-12-25 23:04:17 +05:30
ishaan-jaff | 1742bd8716 | (feat) ollama use /api/chat | 2023-12-25 14:29:10 +05:30
ishaan-jaff | edf2b60765 | (feat) add ollama_chat v0 | 2023-12-25 14:27:10 +05:30
Krrish Dholakia | 79978c44ba | refactor: add black formatting | 2023-12-25 14:11:20 +05:30
ishaan-jaff | f610148398 | (test) ollama json mode | 2023-12-25 14:00:56 +05:30
Krrish Dholakia | abee400fb8 | fix(proxy_server.py): accept keys with none duration | 2023-12-25 13:46:24 +05:30
Krrish Dholakia | 018405b956 | fix(proxy/utils.py): return different exceptions if key is invalid vs. expired (https://github.com/BerriAI/litellm/issues/1230) | 2023-12-25 10:29:44 +05:30
Krrish Dholakia | 72e8c84914 | build(test_streaming.py): fix linting issues | 2023-12-25 07:34:54 +05:30
Krrish Dholakia | 6d73a77b01 | fix(proxy_server.py): raise streaming exceptions | 2023-12-25 07:18:09 +05:30
Krrish Dholakia | 70f4dabff6 | feat(gemini.py): add support for completion calls for gemini-pro (google ai studio) | 2023-12-24 09:42:58 +05:30
Krrish Dholakia | b7a7c3a4e5 | feat(ollama.py): add support for async ollama embeddings | 2023-12-23 18:01:25 +05:30
Krrish Dholakia | 2b9615595b | fix(test_azure_perf.py): fix linting | 2023-12-23 13:25:00 +05:30
Krrish Dholakia | 7e94885616 | test(test_azure_perf.py): add perf testing for azure router streaming | 2023-12-23 13:20:19 +05:30
Krrish Dholakia | 429fbf05d4 | Revert "test(test_azure_perf.py): add perf testing for router streaming" (reverts commit 4fa7f19888) | 2023-12-23 13:19:07 +05:30
Krrish Dholakia | f38867c8b7 | test(test_azure_perf.py): add perf testing for router streaming | 2023-12-23 13:16:49 +05:30
Krrish Dholakia | a23fbe4d25 | test: skip flaky tests | 2023-12-23 12:37:38 +05:30
Krrish Dholakia | d1dea7c87d | fix(utils.py): log user_id to langfuse | 2023-12-23 12:14:09 +05:30
Krish Dholakia | a52bb3ff1c | Merge pull request #1182 from sumanth13131/usage-based-routing-fix (usage_based_routing_fix) | 2023-12-23 11:50:34 +05:30
Krish Dholakia | fc395185af | Merge pull request #1183 from maxdeichmann/improve-langchain-integration (Improve langfuse integration) | 2023-12-23 11:47:36 +05:30
Krish Dholakia | f509ee4ee1 | Merge pull request #1195 from AllentDan/fix-routing (fix least_busy router by updating min_traffic) | 2023-12-23 11:45:35 +05:30
Krish Dholakia | 4ec97e0c97 | Merge pull request #1203 from Manouchehri/bedrock-cloudflare-ai-gateway-1 (Add aws_bedrock_runtime_endpoint support) | 2023-12-23 11:44:04 +05:30
Krish Dholakia | 7aab6061d7 | Merge pull request #1211 from sihyeonn/fix/sh-success-callback (fix: success_callback logic for cost_tracking) | 2023-12-23 11:41:30 +05:30
Krish Dholakia | 36c1089029 | Merge pull request #1213 from neubig/vertex_chat_generate_content (Make vertex ai work with generate_content) | 2023-12-23 11:40:43 +05:30
Krrish Dholakia | 1878392f64 | bump: version 1.15.6 → 1.15.7 | 2023-12-23 10:03:49 +05:30
Krrish Dholakia | c8d3a609e1 | fix(langsmith.py): fix langsmith streaming logging | 2023-12-23 10:02:35 +05:30
Krrish Dholakia | a96bac14af | fix(proxy_server.py): manage budget at user-level not key-level (https://github.com/BerriAI/litellm/issues/1220) | 2023-12-22 15:10:38 +05:30
Krrish Dholakia | 61ab8dd5c1 | fix(proxy_server.py): handle misformatted json body in chat completion request | 2023-12-22 12:30:36 +05:30