ishaan-jaff | 1f04446222 | (fix) bedrock - embedding - support str input | 2024-01-11 23:02:12 +05:30
ishaan-jaff | 9aac1de191 | v0 | 2024-01-11 22:56:18 +05:30
Krrish Dholakia | f32ec52673 | build(pyproject.toml): drop certifi dependency (unused) | 2024-01-10 08:09:03 +05:30
Krrish Dholakia | 556e7d4e1a | fix(openai.py): fix exception raising logic | 2024-01-09 11:58:30 +05:30
Krrish Dholakia | d105751643 | fix(azure.py,-openai.py): raise the correct exceptions for image generation calls | 2024-01-09 11:55:38 +05:30
ishaan-jaff | 3081dc525a | (feat) litellm.completion - support ollama timeout | 2024-01-09 10:34:41 +05:30
Krrish Dholakia | d89a58ec54 | fix(ollama.py): use tiktoken as backup for prompt token counting | 2024-01-09 09:47:18 +05:30
Krrish Dholakia | 045ece4582 | refactor(gemini.py): fix linting issue | 2024-01-08 11:43:33 +05:30
Krrish Dholakia | e4a5a3395c | fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) | 2024-01-08 11:40:56 +05:30
Krish Dholakia | a394eb12db | Merge pull request #1315 from spdustin/feature_allow_claude_prefill (Adds "pre-fill" support for Claude) | 2024-01-08 10:48:15 +05:30
Krrish Dholakia | f300d17176 | fix(gemini.py): better error handling | 2024-01-08 07:32:26 +05:30
Krrish Dholakia | 67ff7797c6 | fix(factory.py): more logging around the image loading for gemini | 2024-01-06 22:50:44 +05:30
Krish Dholakia | 67ecab4b38 | Merge pull request #1344 from BerriAI/litellm_speed_improvements (Litellm speed improvements) | 2024-01-06 22:38:10 +05:30
Krrish Dholakia | 2d1871a1ae | fix(factory.py): support gemini-pro-vision on google ai studio (https://github.com/BerriAI/litellm/issues/1329) | 2024-01-06 22:36:22 +05:30
Krrish Dholakia | 35fd28073e | fix(sagemaker.py): fix the post-call logging logic | 2024-01-06 21:52:58 +05:30
Krrish Dholakia | 4c7d530c2a | fix(openai.py): fix image generation model dump | 2024-01-06 17:55:32 +05:30
Krrish Dholakia | 807b64e68e | perf(azure+openai-files): use model_dump instead of json.loads + model_dump_json | 2024-01-06 15:50:05 +05:30
spdustin@gmail.com | 6520d153e7 | Update factory (and tests) for Claude 2.1 via Bedrock | 2024-01-05 23:32:32 +00:00
Dustin Miller | 7172f83ef4 | Merge branch 'BerriAI:main' into feature_allow_claude_prefill | 2024-01-05 15:15:29 -06:00
ishaan-jaff | a36b1a4890 | (fix) undo - model_dump_json() before logging | 2024-01-05 11:47:16 +05:30
ishaan-jaff | 6e1ea2c44c | (fix) proxy - log response before model_dump_json | 2024-01-05 11:00:02 +05:30
ishaan-jaff | 70bbc2e446 | (fix) azure+cf gateway, health check | 2024-01-04 12:34:07 +05:30
Krrish Dholakia | 62ea95c25b | fix(proxy/rules.md): add docs on setting post-call rules on the proxy | 2024-01-04 11:16:50 +05:30
Dustin Miller | 5f54fc2383 | Adds "pre-fill" support for Claude | 2024-01-03 18:45:36 -06:00
ishaan-jaff | 6672591198 | (fix) init_bedrock_client | 2024-01-01 22:48:56 +05:30
Krrish Dholakia | 7be5f74b70 | fix(aimage_generation): fix response type | 2023-12-30 12:53:24 +05:30
Krrish Dholakia | 4d239f1e65 | fix(openai.py): fix async image gen call | 2023-12-30 12:44:54 +05:30
Krrish Dholakia | b69ffb3738 | fix: support dynamic timeouts for openai and azure | 2023-12-30 12:14:02 +05:30
Krrish Dholakia | 7d55a563ee | fix(main.py): don't set timeout as an optional api param | 2023-12-30 11:47:07 +05:30
ishaan-jaff | 224d38ba48 | (fix) vertex ai - use usage from response | 2023-12-29 16:30:25 +05:30
ishaan-jaff | c69f4f17a5 | (feat) cloudflare - add optional params | 2023-12-29 11:50:09 +05:30
ishaan-jaff | b990fc8324 | (feat) cloudflare ai workers - add completion support | 2023-12-29 11:34:58 +05:30
ishaan-jaff | 796e735881 | (feat) v0 adding cloudflare | 2023-12-29 09:32:29 +05:30
ishaan-jaff | 362bed6ca3 | (fix) together_ai cost tracking | 2023-12-28 22:11:08 +05:30
Krrish Dholakia | 5a48dac83f | fix(vertex_ai.py): support function calling for gemini | 2023-12-28 19:07:04 +05:30
Krrish Dholakia | 8188475c16 | feat(admin_ui.py): support creating keys on admin ui | 2023-12-28 16:59:11 +05:30
Krrish Dholakia | 507b6bf96e | fix(utils.py): use local tiktoken copy | 2023-12-28 11:22:33 +05:30
Krrish Dholakia | 2285282ef8 | feat(health_check.py): more detailed health check calls | 2023-12-28 09:12:57 +05:30
Krrish Dholakia | db6ef70a68 | fix(azure.py): raise streaming exceptions | 2023-12-27 15:43:13 +05:30
Krrish Dholakia | fd5e6efb1d | fix(azure.py,-openai.py): correctly raise errors if streaming calls fail | 2023-12-27 15:08:37 +05:30
Krrish Dholakia | 2269f01c17 | fix: fix linting issues | 2023-12-27 12:21:31 +05:30
dan | c7be18cf46 | updated oobabooga to new api and support for embeddings | 2023-12-26 19:45:28 -05:00
ishaan-jaff | c9be1cfcb1 | (fix) ollama_chat - support function calling + fix for comp | 2023-12-26 20:07:55 +05:30
ishaan-jaff | 4233e42f5d | (feat) ollama_chat acompletion without streaming | 2023-12-26 20:01:51 +05:30
ishaan-jaff | dbf46823f8 | (feat) ollama_chat add async stream | 2023-12-25 23:45:27 +05:30
ishaan-jaff | b985d996b2 | (feat) ollama_chat - add streaming support | 2023-12-25 23:38:01 +05:30
ishaan-jaff | 043d874ffe | (feat) ollama/chat | 2023-12-25 23:04:17 +05:30
ishaan-jaff | 1742bd8716 | (feat) ollama use /api/chat | 2023-12-25 14:29:10 +05:30
ishaan-jaff | edf2b60765 | (feat) add ollama_chat v0 | 2023-12-25 14:27:10 +05:30
Krrish Dholakia | 79978c44ba | refactor: add black formatting | 2023-12-25 14:11:20 +05:30