Commit graph

459 commits

Author SHA1 Message Date
Krrish Dholakia
e990c70beb fix(ollama.py): fix returned error message for streaming error 2024-01-08 23:52:57 +05:30
Krrish Dholakia
3d0ea08f77 refactor(gemini.py): fix linting issue 2024-01-08 11:43:33 +05:30
Krrish Dholakia
b1fd0a164b fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) 2024-01-08 11:40:56 +05:30
Krish Dholakia
4ea3e778f7 Merge pull request #1315 from spdustin/feature_allow_claude_prefill: Adds "pre-fill" support for Claude 2024-01-08 10:48:15 +05:30
Krrish Dholakia
79264b0dab fix(gemini.py): better error handling 2024-01-08 07:32:26 +05:30
Krrish Dholakia
1507217725 fix(factory.py): more logging around the image loading for gemini 2024-01-06 22:50:44 +05:30
Krish Dholakia
439ee3bafc Merge pull request #1344 from BerriAI/litellm_speed_improvements: Litellm speed improvements 2024-01-06 22:38:10 +05:30
Krrish Dholakia
5fd2f945f3 fix(factory.py): support gemini-pro-vision on google ai studio (https://github.com/BerriAI/litellm/issues/1329) 2024-01-06 22:36:22 +05:30
Krrish Dholakia
3577857ed1 fix(sagemaker.py): fix the post-call logging logic 2024-01-06 21:52:58 +05:30
Krrish Dholakia
f2ad13af65 fix(openai.py): fix image generation model dump 2024-01-06 17:55:32 +05:30
Krrish Dholakia
9a4a96f46e perf(azure+openai-files): use model_dump instead of json.loads + model_dump_json 2024-01-06 15:50:05 +05:30
spdustin@gmail.com
6201ab2c21 Update factory (and tests) for Claude 2.1 via Bedrock 2024-01-05 23:32:32 +00:00
Dustin Miller
53e5e1df07 Merge branch 'BerriAI:main' into feature_allow_claude_prefill 2024-01-05 15:15:29 -06:00
ishaan-jaff
79ab1aa35b (fix) undo - model_dump_json() before logging 2024-01-05 11:47:16 +05:30
ishaan-jaff
40b9f1dcb1 (fix) proxy - log response before model_dump_json 2024-01-05 11:00:02 +05:30
ishaan-jaff
234c057e97 (fix) azure+cf gateway, health check 2024-01-04 12:34:07 +05:30
Krrish Dholakia
0f7d03f761 fix(proxy/rules.md): add docs on setting post-call rules on the proxy 2024-01-04 11:16:50 +05:30
Dustin Miller
b10f64face Adds "pre-fill" support for Claude 2024-01-03 18:45:36 -06:00
ishaan-jaff
d1e8d13c4f (fix) init_bedrock_client 2024-01-01 22:48:56 +05:30
Krrish Dholakia
a6719caebd fix(aimage_generation): fix response type 2023-12-30 12:53:24 +05:30
Krrish Dholakia
750432457b fix(openai.py): fix async image gen call 2023-12-30 12:44:54 +05:30
Krrish Dholakia
c33c1d85bb fix: support dynamic timeouts for openai and azure 2023-12-30 12:14:02 +05:30
Krrish Dholakia
77be3e3114 fix(main.py): don't set timeout as an optional api param 2023-12-30 11:47:07 +05:30
ishaan-jaff
739d9e7a78 (fix) vertex ai - use usage from response 2023-12-29 16:30:25 +05:30
ishaan-jaff
dde6bc4fb6 (feat) cloudflare - add optional params 2023-12-29 11:50:09 +05:30
ishaan-jaff
8fcfb7df22 (feat) cloudflare ai workers - add completion support 2023-12-29 11:34:58 +05:30
ishaan-jaff
367e9913dc (feat) v0 adding cloudflare 2023-12-29 09:32:29 +05:30
ishaan-jaff
d79df3a1e9 (fix) together_ai cost tracking 2023-12-28 22:11:08 +05:30
Krrish Dholakia
86403cd14e fix(vertex_ai.py): support function calling for gemini 2023-12-28 19:07:04 +05:30
Krrish Dholakia
cbcf406fd0 feat(admin_ui.py): support creating keys on admin ui 2023-12-28 16:59:11 +05:30
Krrish Dholakia
c4fc28ab0d fix(utils.py): use local tiktoken copy 2023-12-28 11:22:33 +05:30
Krrish Dholakia
3b1685e7c6 feat(health_check.py): more detailed health check calls 2023-12-28 09:12:57 +05:30
Krrish Dholakia
31148922b3 fix(azure.py): raise streaming exceptions 2023-12-27 15:43:13 +05:30
Krrish Dholakia
c9fdbaf898 fix(azure.py,-openai.py): correctly raise errors if streaming calls fail 2023-12-27 15:08:37 +05:30
Krrish Dholakia
c88a8d71f0 fix: fix linting issues 2023-12-27 12:21:31 +05:30
dan
c4dfd9be7c updated oobabooga to new api and support for embeddings 2023-12-26 19:45:28 -05:00
ishaan-jaff
3f6e6e7f55 (fix) ollama_chat - support function calling + fix for comp 2023-12-26 20:07:55 +05:30
ishaan-jaff
3839213d28 (feat) ollama_chat acompletion without streaming 2023-12-26 20:01:51 +05:30
ishaan-jaff
837ce269ae (feat) ollama_chat add async stream 2023-12-25 23:45:27 +05:30
ishaan-jaff
916ba9a6b3 (feat) ollama_chat - add streaming support 2023-12-25 23:38:01 +05:30
ishaan-jaff
03de92eec0 (feat) ollama/chat 2023-12-25 23:04:17 +05:30
ishaan-jaff
d85c19394f (feat) ollama use /api/chat 2023-12-25 14:29:10 +05:30
ishaan-jaff
da4ec6c8b6 (feat) add ollama_chat v0 2023-12-25 14:27:10 +05:30
Krrish Dholakia
4905929de3 refactor: add black formatting 2023-12-25 14:11:20 +05:30
Krrish Dholakia
1262d89ab3 feat(gemini.py): add support for completion calls for gemini-pro (google ai studio) 2023-12-24 09:42:58 +05:30
Krrish Dholakia
eaaad79823 feat(ollama.py): add support for async ollama embeddings 2023-12-23 18:01:25 +05:30
Krish Dholakia
03fd5da5ae Merge pull request #1203 from Manouchehri/bedrock-cloudflare-ai-gateway-1: Add aws_bedrock_runtime_endpoint support 2023-12-23 11:44:04 +05:30
Krish Dholakia
81617534b6 Merge pull request #1213 from neubig/vertex_chat_generate_content: Make vertex ai work with generate_content 2023-12-23 11:40:43 +05:30
Krrish Dholakia
eb2d13e2fb test(test_completion.py-+-test_streaming.py): add ollama endpoint to ci/cd pipeline 2023-12-22 12:21:33 +05:30
Krrish Dholakia
57607f111a fix(ollama.py): use litellm.request timeout for async call timeout 2023-12-22 11:22:24 +05:30