Krrish Dholakia | 8e9dc09955 | fix(bedrock.py): add support for sts based boto3 initialization (https://github.com/BerriAI/litellm/issues/1476) | 2024-01-17 12:08:59 -08:00
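A minimal sketch of the STS-based initialization pattern this commit refers to, not litellm's exact code: assume an IAM role via STS, then build the Bedrock runtime client from the temporary credentials. The role ARN, session name, and region below are placeholder assumptions.

```python
import boto3

# Assume an IAM role via STS to obtain short-lived credentials (placeholder ARN / session name).
sts_client = boto3.client("sts")
assumed = sts_client.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/bedrock-caller",  # hypothetical role
    RoleSessionName="litellm-bedrock-session",                # hypothetical session name
)
creds = assumed["Credentials"]

# Initialize the Bedrock runtime client with the temporary credentials.
bedrock_runtime = boto3.client(
    "bedrock-runtime",
    region_name="us-west-2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```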
ishaan-jaff | 5a8a5fa0fd | (fix) using base_url Azure | 2024-01-17 10:12:55 -08:00
ishaan-jaff | 7178d01c8f | (feat) support base_url with /openai passed for Azure | 2024-01-17 10:03:25 -08:00
ishaan-jaff | b95d6ec207 | (v0) fixes for Azure GPT Vision enhancements | 2024-01-17 09:57:16 -08:00
Krrish Dholakia | 7cb49ee509 | fix(vertex_ai.py): raise exception if vertex ai missing required dependencies | 2024-01-16 16:23:29 -08:00
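The vertex_ai.py dependency check presumably follows the usual import-guard pattern; a hedged sketch (the exact exception type and message in litellm may differ):

```python
def ensure_vertex_ai_deps() -> None:
    # Raise a clear error if the Vertex AI SDK isn't installed, instead of failing
    # later with an opaque ImportError deep inside the call path.
    try:
        import vertexai  # noqa: F401  (provided by google-cloud-aiplatform)
    except ImportError as e:
        raise ImportError(
            "VertexAI requires 'google-cloud-aiplatform'. "
            "Install it with: pip install google-cloud-aiplatform"
        ) from e
```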
ishaan-jaff | 0e26ef858b | (feat) sagemaker - map status code and message | 2024-01-15 21:43:16 -08:00
ishaan-jaff | 069d060ec9 | (feat) provisioned throughput - bedrock embedding models | 2024-01-13 21:07:38 -08:00
ishaan-jaff | 5e03c9c637 | (feat) bedrock support provisioned throughput | 2024-01-13 15:39:54 -08:00
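For the provisioned-throughput commits, the usual Bedrock pattern is to invoke a provisioned model ARN while the base model id still controls prompt/response formatting. A hedged usage sketch: the ARN is a placeholder, and the `model_id` parameter name is an assumption based on litellm's Bedrock docs rather than these commits.

```python
import litellm

# `model` selects the request/response format; `model_id` (assumed name) points at
# the provisioned-throughput ARN that is actually invoked.
response = litellm.completion(
    model="bedrock/anthropic.claude-instant-v1",
    model_id="arn:aws:bedrock:us-west-2:123456789012:provisioned-model/abc123",  # placeholder ARN
    messages=[{"role": "user", "content": "Hello from provisioned throughput"}],
)
print(response.choices[0].message.content)
```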
Krish Dholakia | 554080804f | Merge pull request #1429 from dleen/data ([bug] unbound variable in bedrock) | 2024-01-12 22:16:11 +05:30
David Leen | a674de8f36 | improve bedrock exception granularity | 2024-01-12 16:38:55 +01:00
David Leen | 8b021fc4cd | [bug] unbound variable in bedrock (note: the code was written as `json.dumps({})`, even though it is more verbose, in order to facilitate easier refactoring in the future; fixes #1428) | 2024-01-12 12:33:00 +01:00
Krrish Dholakia | a7f182b8ec | fix(azure.py): support health checks to text completion endpoints | 2024-01-12 00:13:01 +05:30
ishaan-jaff | a9d812eb8d | (fix) bedrock - embedding - support str input | 2024-01-11 23:02:12 +05:30
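The embedding fix presumably lets a bare string be passed where a list was previously required. A hedged usage sketch with litellm's embedding API (the model name is just an example):

```python
import litellm

# After this fix, `input` can be a single string as well as a list of strings.
single = litellm.embedding(
    model="bedrock/amazon.titan-embed-text-v1",
    input="hello world",
)
batch = litellm.embedding(
    model="bedrock/amazon.titan-embed-text-v1",
    input=["hello world", "goodbye world"],
)
```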
ishaan-jaff | a876748bf5 | v0 | 2024-01-11 22:56:18 +05:30
Krrish Dholakia | ebe752fb61 | build(pyproject.toml): drop certifi dependency (unused) | 2024-01-10 08:09:03 +05:30
Krrish Dholakia | ed6ae8600f | fix(openai.py): fix exception raising logic | 2024-01-09 11:58:30 +05:30
Krrish Dholakia | be1e101b5f | fix(azure.py, openai.py): raise the correct exceptions for image generation calls | 2024-01-09 11:55:38 +05:30
ishaan-jaff | 5f2cbfc711 | (feat) litellm.completion - support ollama timeout | 2024-01-09 10:34:41 +05:30
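A hedged usage sketch for the Ollama timeout support; the local api_base and model name are examples:

```python
import litellm

# Pass a per-request timeout (seconds); a slow local Ollama generation raises a
# timeout error instead of hanging indefinitely.
response = litellm.completion(
    model="ollama/llama2",
    messages=[{"role": "user", "content": "Summarize the repo in one line."}],
    api_base="http://localhost:11434",  # default Ollama endpoint
    timeout=5,
)
```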
Krrish Dholakia | 88d498a54a | fix(ollama.py): use tiktoken as backup for prompt token counting | 2024-01-09 09:47:18 +05:30
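The tiktoken fallback presumably follows this general pattern: prefer the token count reported by the Ollama response and only estimate with tiktoken when it is missing. A sketch only, not litellm's exact code; the `prompt_eval_count` field is Ollama's reported prompt-token count.

```python
import tiktoken

def count_prompt_tokens(prompt: str, response_json: dict) -> int:
    # Use the count reported by Ollama when present.
    reported = response_json.get("prompt_eval_count")
    if reported is not None:
        return reported
    # Fallback: approximate with tiktoken's cl100k_base encoding.
    encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(prompt))
```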
Krrish Dholakia | 3d0ea08f77 | refactor(gemini.py): fix linting issue | 2024-01-08 11:43:33 +05:30
Krrish Dholakia | b1fd0a164b | fix(huggingface_restapi.py): support timeouts for huggingface + openai text completions (https://github.com/BerriAI/litellm/issues/1334) | 2024-01-08 11:40:56 +05:30
Krish Dholakia | 4ea3e778f7 | Merge pull request #1315 from spdustin/feature_allow_claude_prefill (Adds "pre-fill" support for Claude) | 2024-01-08 10:48:15 +05:30
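The "pre-fill" feature merged in PR #1315 relies on Claude continuing from whatever text is already placed in the assistant turn. A hedged sketch of the prompt-construction idea for the Anthropic text-completion format used at the time, not the exact factory.py code:

```python
def build_claude_prompt(messages: list[dict]) -> str:
    # Claude's classic prompt format alternates Human/Assistant turns and normally
    # ends with "Assistant:". If the last message is from the assistant, its content
    # is left as a trailing "pre-fill" that Claude will continue from.
    prompt = ""
    for m in messages:
        role = "Human" if m["role"] in ("user", "system") else "Assistant"
        prompt += f"\n\n{role}: {m['content']}"
    if messages[-1]["role"] != "assistant":
        prompt += "\n\nAssistant:"  # normal case: leave the assistant turn empty
    return prompt

# Pre-fill example: force a JSON-style opening that Claude continues.
msgs = [
    {"role": "user", "content": "List two colors as JSON."},
    {"role": "assistant", "content": '{"colors": ['},
]
print(build_claude_prompt(msgs))
```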
Krrish Dholakia | 79264b0dab | fix(gemini.py): better error handling | 2024-01-08 07:32:26 +05:30
Krrish Dholakia | 1507217725 | fix(factory.py): more logging around the image loading for gemini | 2024-01-06 22:50:44 +05:30
Krish Dholakia | 439ee3bafc | Merge pull request #1344 from BerriAI/litellm_speed_improvements (Litellm speed improvements) | 2024-01-06 22:38:10 +05:30
Krrish Dholakia | 5fd2f945f3 | fix(factory.py): support gemini-pro-vision on google ai studio (https://github.com/BerriAI/litellm/issues/1329) | 2024-01-06 22:36:22 +05:30
Krrish Dholakia | 3577857ed1 | fix(sagemaker.py): fix the post-call logging logic | 2024-01-06 21:52:58 +05:30
Krrish Dholakia | f2ad13af65 | fix(openai.py): fix image generation model dump | 2024-01-06 17:55:32 +05:30
Krrish Dholakia | 9a4a96f46e | perf(azure+openai-files): use model_dump instead of json.loads + model_dump_json | 2024-01-06 15:50:05 +05:30
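The perf commit swaps a serialize-then-parse round trip for a direct dict conversion on Pydantic v2 models; both produce a plain dict, but `model_dump()` skips the intermediate JSON string. The `Usage` model below is only an illustration.

```python
import json
from pydantic import BaseModel

class Usage(BaseModel):
    prompt_tokens: int
    completion_tokens: int

usage = Usage(prompt_tokens=10, completion_tokens=5)

# Before: serialize to a JSON string, then parse it straight back into a dict.
slow = json.loads(usage.model_dump_json())

# After: convert the model to a dict directly, avoiding the round trip.
fast = usage.model_dump()

assert slow == fast
```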
spdustin@gmail.com | 6201ab2c21 | Update factory (and tests) for Claude 2.1 via Bedrock | 2024-01-05 23:32:32 +00:00
Dustin Miller | 53e5e1df07 | Merge branch 'BerriAI:main' into feature_allow_claude_prefill | 2024-01-05 15:15:29 -06:00
ishaan-jaff | 79ab1aa35b | (fix) undo - model_dump_json() before logging | 2024-01-05 11:47:16 +05:30
ishaan-jaff | 40b9f1dcb1 | (fix) proxy - log response before model_dump_json | 2024-01-05 11:00:02 +05:30
ishaan-jaff | 234c057e97 | (fix) azure+cf gateway, health check | 2024-01-04 12:34:07 +05:30
Krrish Dholakia | 0f7d03f761 | fix(proxy/rules.md): add docs on setting post-call rules on the proxy | 2024-01-04 11:16:50 +05:30
Dustin Miller | b10f64face | Adds "pre-fill" support for Claude | 2024-01-03 18:45:36 -06:00
ishaan-jaff | d1e8d13c4f | (fix) init_bedrock_client | 2024-01-01 22:48:56 +05:30
Krrish Dholakia | a6719caebd | fix(aimage_generation): fix response type | 2023-12-30 12:53:24 +05:30
Krrish Dholakia | 750432457b | fix(openai.py): fix async image gen call | 2023-12-30 12:44:54 +05:30
Krrish Dholakia | c33c1d85bb | fix: support dynamic timeouts for openai and azure | 2023-12-30 12:14:02 +05:30
Krrish Dholakia | 77be3e3114 | fix(main.py): don't set timeout as an optional api param | 2023-12-30 11:47:07 +05:30
ishaan-jaff | 739d9e7a78 | (fix) vertex ai - use usage from response | 2023-12-29 16:30:25 +05:30
ishaan-jaff | dde6bc4fb6 | (feat) cloudflare - add optional params | 2023-12-29 11:50:09 +05:30
ishaan-jaff | 8fcfb7df22 | (feat) cloudflare ai workers - add completion support | 2023-12-29 11:34:58 +05:30
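The Cloudflare commits wrap the Workers AI REST endpoint. A hedged sketch of the kind of call such an integration makes, not litellm's exact code: the account id, token env vars, and model are placeholders.

```python
import os
import httpx

# Workers AI chat endpoint: POST /accounts/{account_id}/ai/run/{model}
account_id = os.environ["CLOUDFLARE_ACCOUNT_ID"]  # placeholder env var name
api_token = os.environ["CLOUDFLARE_API_TOKEN"]    # placeholder env var name
model = "@cf/meta/llama-2-7b-chat-int8"

resp = httpx.post(
    f"https://api.cloudflare.com/client/v4/accounts/{account_id}/ai/run/{model}",
    headers={"Authorization": f"Bearer {api_token}"},
    json={"messages": [{"role": "user", "content": "Hello from Workers AI"}]},
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["result"]["response"])
```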
ishaan-jaff | 367e9913dc | (feat) v0 adding cloudflare | 2023-12-29 09:32:29 +05:30
ishaan-jaff | d79df3a1e9 | (fix) together_ai cost tracking | 2023-12-28 22:11:08 +05:30
Krrish Dholakia | 86403cd14e | fix(vertex_ai.py): support function calling for gemini | 2023-12-28 19:07:04 +05:30
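For the Gemini function-calling fix, a hedged sketch of how such a call can look through litellm's OpenAI-style interface. The tool definition is an example, the Vertex model name is an assumption, and Vertex project/location configuration is assumed to be set up separately.

```python
import litellm

# Example tool in OpenAI's function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Assumes Vertex AI project/location are configured via litellm's vertex settings.
response = litellm.completion(
    model="vertex_ai/gemini-pro",  # example model name
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)
```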
Krrish Dholakia | cbcf406fd0 | feat(admin_ui.py): support creating keys on admin ui | 2023-12-28 16:59:11 +05:30
Krrish Dholakia | c4fc28ab0d | fix(utils.py): use local tiktoken copy | 2023-12-28 11:22:33 +05:30
Krrish Dholakia | 3b1685e7c6 | feat(health_check.py): more detailed health check calls | 2023-12-28 09:12:57 +05:30