Krrish Dholakia | cc89aa7456 | fix(bedrock.py): add support for sts based boto3 initialization (https://github.com/BerriAI/litellm/issues/1476) | 2024-01-17 12:08:59 -08:00
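For context, a minimal sketch of what STS-based initialization looks like on the boto3 side: temporary credentials come from `sts.assume_role` and are then used to build the Bedrock client. The role ARN, session name, and region are placeholders, and this shows the general pattern rather than the exact code added in this commit.

```python
import boto3

# Assume an IAM role via STS to obtain temporary credentials
# (the role ARN and session name are placeholders).
sts = boto3.client("sts")
assumed = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/bedrock-caller",
    RoleSessionName="litellm-bedrock-session",
)
creds = assumed["Credentials"]

# Build a Bedrock runtime client from the temporary credentials.
bedrock_client = boto3.client(
    "bedrock-runtime",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
    region_name="us-west-2",
)
```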
ishaan-jaff | 118c53af26 | (fix) using base_url Azure | 2024-01-17 10:12:55 -08:00
ishaan-jaff | 82954728a7 | (feat) support base_url with /openai passed for Azure | 2024-01-17 10:03:25 -08:00
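A hedged usage sketch of the base_url behaviour these two commits describe: an Azure endpoint passed with a trailing /openai path. The deployment name, endpoint, key, and api_version are placeholders, not values taken from the commits.

```python
import litellm

# Placeholders throughout; "azure/<deployment>" is litellm's Azure model format.
response = litellm.completion(
    model="azure/my-gpt-4-deployment",
    base_url="https://my-resource.openai.azure.com/openai",  # trailing /openai path
    api_key="my-azure-api-key",
    api_version="2023-07-01-preview",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```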
ishaan-jaff | a49fa3abaf | (v0) fixes for Azure GPT Vision enhancements | 2024-01-17 09:57:16 -08:00
Krrish Dholakia | f7e8bdbb32 | fix(vertex_ai.py): raise exception if vertex ai missing required dependencies | 2024-01-16 16:23:29 -08:00
ishaan-jaff | 66c8eb582c | (feat) sagemaker - map status code and message | 2024-01-15 21:43:16 -08:00
ishaan-jaff | a5f3907334 | (feat) provisioned throughput - bedrock embedding models | 2024-01-13 21:07:38 -08:00
ishaan-jaff | a661149a39 | (feat) bedrock support provisioned throughput | 2024-01-13 15:39:54 -08:00
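A sketch of how Bedrock provisioned throughput is typically addressed from litellm, assuming the provisioned model ARN is passed via a `model_id` kwarg alongside the base model name; the ARN is a placeholder.

```python
import litellm

# The base model name drives prompt formatting; the (placeholder) ARN of the
# provisioned-throughput model is what actually gets invoked. Passing it via
# `model_id` is an assumption based on the commit titles.
response = litellm.completion(
    model="bedrock/anthropic.claude-instant-v1",
    model_id="arn:aws:bedrock:us-west-2:123456789012:provisioned-model/abc12345",
    messages=[{"role": "user", "content": "Hello"}],
)
```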
Krish Dholakia | 84879c0318 | Merge pull request #1429 from dleen/data ([bug] unbound variable in bedrock) | 2024-01-12 22:16:11 +05:30
David Leen | 1660e4ab72 | improve bedrock exception granularity | 2024-01-12 16:38:55 +01:00
David Leen | d0e9c9dce9 | [bug] unbound variable in bedrock (note: the code was written as `json.dumps({})`, even though it is more verbose, to facilitate easier refactoring in the future; fixes #1428) | 2024-01-12 12:33:00 +01:00
Krrish Dholakia | 0e1ea4325c | fix(azure.py): support health checks to text completion endpoints | 2024-01-12 00:13:01 +05:30
ishaan-jaff | 1f04446222 | (fix) bedrock - embedding - support str input | 2024-01-11 23:02:12 +05:30
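A short sketch of the string-input case this embedding fix covers; the Titan embedding model id is real, but treat the call shape as illustrative.

```python
import litellm

# Previously a list of strings was expected; a plain string should work as well.
response = litellm.embedding(
    model="bedrock/amazon.titan-embed-text-v1",
    input="Hello world",
)
print(response)
```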
ishaan-jaff | 9aac1de191 | v0 | 2024-01-11 22:56:18 +05:30
Krrish Dholakia | f32ec52673 | build(pyproject.toml): drop certifi dependency (unused) | 2024-01-10 08:09:03 +05:30
Krrish Dholakia | 556e7d4e1a | fix(openai.py): fix exception raising logic | 2024-01-09 11:58:30 +05:30
Krrish Dholakia | d105751643 | fix(azure.py,-openai.py): raise the correct exceptions for image generation calls | 2024-01-09 11:55:38 +05:30
ishaan-jaff | 3081dc525a | (feat) litellm.completion - support ollama timeout | 2024-01-09 10:34:41 +05:30
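A minimal sketch of the per-request timeout on an Ollama model, assuming a locally running Ollama server; the model name and timeout value are illustrative.

```python
import litellm

# Assumes a local Ollama server with the llama2 model pulled.
response = litellm.completion(
    model="ollama/llama2",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="http://localhost:11434",
    timeout=10,  # seconds; fail fast instead of hanging on a slow local model
)
```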
Krrish Dholakia | d89a58ec54 | fix(ollama.py): use tiktoken as backup for prompt token counting | 2024-01-09 09:47:18 +05:30
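For context, tiktoken-based counting as a fallback generally looks like the sketch below; which encoding the fallback actually selects is an assumption here.

```python
import tiktoken

# cl100k_base is used as a generic encoding for illustration.
enc = tiktoken.get_encoding("cl100k_base")
prompt = "Why is the sky blue?"
print(len(enc.encode(prompt)))  # approximate prompt token count
```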
Krrish Dholakia | 045ece4582 | refactor(gemini.py): fix linting issue | 2024-01-08 11:43:33 +05:30
Krrish Dholakia | e4a5a3395c | fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) | 2024-01-08 11:40:56 +05:30
Krish Dholakia | a394eb12db | Merge pull request #1315 from spdustin/feature_allow_claude_prefill (Adds "pre-fill" support for Claude) | 2024-01-08 10:48:15 +05:30
Krrish Dholakia | f300d17176 | fix(gemini.py): better error handling | 2024-01-08 07:32:26 +05:30
Krrish Dholakia | 67ff7797c6 | fix(factory.py): more logging around the image loading for gemini | 2024-01-06 22:50:44 +05:30
Krish Dholakia | 67ecab4b38 | Merge pull request #1344 from BerriAI/litellm_speed_improvements (Litellm speed improvements) | 2024-01-06 22:38:10 +05:30
Krrish Dholakia | 2d1871a1ae | fix(factory.py): support gemini-pro-vision on google ai studio (https://github.com/BerriAI/litellm/issues/1329) | 2024-01-06 22:36:22 +05:30
Krrish Dholakia | 35fd28073e | fix(sagemaker.py): fix the post-call logging logic | 2024-01-06 21:52:58 +05:30
Krrish Dholakia | 4c7d530c2a | fix(openai.py): fix image generation model dump | 2024-01-06 17:55:32 +05:30
Krrish Dholakia | 807b64e68e | perf(azure+openai-files): use model_dump instead of json.loads + model_dump_json | 2024-01-06 15:50:05 +05:30
spdustin@gmail.com | 6520d153e7 | Update factory (and tests) for Claude 2.1 via Bedrock | 2024-01-05 23:32:32 +00:00
Dustin Miller | 7172f83ef4 | Merge branch 'BerriAI:main' into feature_allow_claude_prefill | 2024-01-05 15:15:29 -06:00
ishaan-jaff | a36b1a4890 | (fix) undo - model_dump_json() before logging | 2024-01-05 11:47:16 +05:30
ishaan-jaff | 6e1ea2c44c | (fix) proxy - log response before model_dump_json | 2024-01-05 11:00:02 +05:30
ishaan-jaff | 70bbc2e446 | (fix) azure+cf gateway, health check | 2024-01-04 12:34:07 +05:30
Krrish Dholakia | 62ea95c25b | fix(proxy/rules.md): add docs on setting post-call rules on the proxy | 2024-01-04 11:16:50 +05:30
Dustin Miller | 5f54fc2383 | Adds "pre-fill" support for Claude | 2024-01-03 18:45:36 -06:00
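The "pre-fill" technique referenced in this commit (and in the merge of #1315 above) amounts to ending the message list with an assistant turn that Claude continues from. A hedged sketch, with placeholder model and content:

```python
import litellm

# Assumes ANTHROPIC_API_KEY is set in the environment.
response = litellm.completion(
    model="claude-2.1",
    messages=[
        {"role": "user", "content": "Describe the weather in Paris as a JSON object."},
        # The trailing assistant message "pre-fills" the start of Claude's reply,
        # nudging it to emit raw JSON.
        {"role": "assistant", "content": "{"},
    ],
)
print(response.choices[0].message.content)
```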
ishaan-jaff | 6672591198 | (fix) init_bedrock_client | 2024-01-01 22:48:56 +05:30
Krrish Dholakia | 7be5f74b70 | fix(aimage_generation): fix response type | 2023-12-30 12:53:24 +05:30
Krrish Dholakia | 4d239f1e65 | fix(openai.py): fix async image gen call | 2023-12-30 12:44:54 +05:30
Krrish Dholakia | b69ffb3738 | fix: support dynamic timeouts for openai and azure | 2023-12-30 12:14:02 +05:30
Krrish Dholakia | 7d55a563ee | fix(main.py): don't set timeout as an optional api param | 2023-12-30 11:47:07 +05:30
ishaan-jaff | 224d38ba48 | (fix) vertex ai - use usage from response | 2023-12-29 16:30:25 +05:30
ishaan-jaff | c69f4f17a5 | (feat) cloudflare - add optional params | 2023-12-29 11:50:09 +05:30
ishaan-jaff | b990fc8324 | (feat) cloudflare ai workers - add completion support | 2023-12-29 11:34:58 +05:30
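A sketch of calling a Cloudflare Workers AI model through litellm.completion; the environment variable names and the cloudflare/@cf/... model path follow litellm's usual provider conventions but should be treated as assumptions, and the values are placeholders.

```python
import os
import litellm

# Placeholder credentials; variable names assumed from litellm's provider conventions.
os.environ["CLOUDFLARE_API_KEY"] = "my-cloudflare-api-token"
os.environ["CLOUDFLARE_ACCOUNT_ID"] = "my-account-id"

response = litellm.completion(
    model="cloudflare/@cf/meta/llama-2-7b-chat-int8",
    messages=[{"role": "user", "content": "Hello"}],
)
```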
ishaan-jaff | 796e735881 | (feat) v0 adding cloudflare | 2023-12-29 09:32:29 +05:30
ishaan-jaff | 362bed6ca3 | (fix) together_ai cost tracking | 2023-12-28 22:11:08 +05:30
Krrish Dholakia | 5a48dac83f | fix(vertex_ai.py): support function calling for gemini | 2023-12-28 19:07:04 +05:30
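A hedged sketch of OpenAI-style function calling routed to Gemini; it assumes Vertex AI credentials and project settings are already configured for litellm, and the tool schema is a placeholder.

```python
import litellm

# OpenAI-format tool definition; the provider layer translates the schema for Gemini.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

response = litellm.completion(
    model="gemini-pro",  # served via Vertex AI; project/location assumed configured
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```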
Krrish Dholakia | 8188475c16 | feat(admin_ui.py): support creating keys on admin ui | 2023-12-28 16:59:11 +05:30
Krrish Dholakia | 507b6bf96e | fix(utils.py): use local tiktoken copy | 2023-12-28 11:22:33 +05:30
Krrish Dholakia | 2285282ef8 | feat(health_check.py): more detailed health check calls | 2023-12-28 09:12:57 +05:30