Ishaan Jaff | b09f38e835 | Merge pull request #1381 from BerriAI/litellm_content_policy_violation_exception ([Feat] Add litellm.ContentPolicyViolationError) | 2024-01-09 17:18:29 +05:30
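A minimal sketch of catching the new exception type added in this PR; the model and prompt are illustrative and an OPENAI_API_KEY is assumed to be set.

```python
import litellm

try:
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "hello"}],
    )
    print(response.choices[0].message.content)
except litellm.ContentPolicyViolationError as e:
    # Raised when the provider rejects the request for violating its content policy
    print(f"content policy violation: {e}")
```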
ishaan-jaff | 650a6a8640 | (chore) remove deprecated completion_with_config() tests | 2024-01-09 17:13:06 +05:30
ishaan-jaff | 66b23ecbb5 | (v0) add ContentPolicyViolationError | 2024-01-09 16:33:03 +05:30
ishaan-jaff | 9313bda4c8 | (feat) completion_cost - embeddings + raise Exception | 2024-01-05 13:11:23 +05:30
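A sketch of the completion_cost behavior described in the commit above, applied to an embedding response; the embedding model is illustrative and pricing coverage depends on litellm's cost map.

```python
import litellm

# Get an embedding response, then ask litellm for its estimated cost.
response = litellm.embedding(model="text-embedding-ada-002", input=["hello world"])

# Per the commit, completion_cost handles embedding responses and raises
# an exception for models it cannot price.
try:
    cost = litellm.completion_cost(completion_response=response)
    print(f"estimated cost: ${cost:.8f}")
except Exception as e:
    print(f"could not compute cost: {e}")
```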
ishaan-jaff | 0e8809abf2 | (feat) add xinference as an embedding provider | 2024-01-02 15:32:26 +05:30
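A sketch of calling a self-hosted Xinference embedding model through the new provider; the model name and api_base below are assumptions for a local deployment.

```python
import litellm

# "xinference/<model>" routes the call to an Xinference server; the model
# name and local URL here are placeholders for your own deployment.
response = litellm.embedding(
    model="xinference/bge-base-en",
    input=["litellm supports xinference embeddings"],
    api_base="http://127.0.0.1:9997/v1",  # assumed local Xinference endpoint
)
print(response)
```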
fatih | 783f5378f4 | update azure turbo namings | 2024-01-01 13:03:08 +03:00
ishaan-jaff | 806551ff99 | (fix) use openai token counter for azure llms | 2023-12-29 15:37:46 +05:30
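The fix above affects litellm.token_counter for azure/* models; a small sketch, with the deployment name and message purely illustrative.

```python
import litellm

messages = [{"role": "user", "content": "How many tokens is this request?"}]

# Per the fix above, Azure deployments fall back to the OpenAI tokenizer,
# so counts for azure/* models line up with their OpenAI base models.
num_tokens = litellm.token_counter(model="azure/gpt-35-turbo", messages=messages)
print(num_tokens)
```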
ishaan-jaff | 796e735881 | (feat) v0 adding cloudflare | 2023-12-29 09:32:29 +05:30
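A hedged sketch of a completion call against Cloudflare Workers AI via the new v0 provider; the model slug and the CLOUDFLARE_API_KEY / CLOUDFLARE_ACCOUNT_ID env var names follow litellm's usual provider pattern and are assumptions here.

```python
import os
import litellm

# Assumed credential names for Cloudflare Workers AI; check your litellm
# version's docs if these differ.
os.environ["CLOUDFLARE_API_KEY"] = "sk-..."
os.environ["CLOUDFLARE_ACCOUNT_ID"] = "your-account-id"

response = litellm.completion(
    model="cloudflare/@cf/meta/llama-2-7b-chat-int8",  # illustrative Workers AI model slug
    messages=[{"role": "user", "content": "Say hello from the edge."}],
)
print(response.choices[0].message.content)
```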
ishaan-jaff | 2a147579ec | (feat) add voyage ai embeddings | 2023-12-28 17:10:15 +05:30
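A sketch for the new Voyage AI embedding route; the model name and the VOYAGE_API_KEY env var are assumptions.

```python
import os
import litellm

os.environ["VOYAGE_API_KEY"] = "pa-..."  # assumed env var name for Voyage AI

response = litellm.embedding(
    model="voyage/voyage-01",  # illustrative Voyage model name
    input=["embed this sentence with voyage ai"],
)
print(response)
```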
Krrish Dholakia | 606de01ac0 | fix(utils.py): allow text completion input to be either model or engine | 2023-12-27 17:24:16 +05:30
Krrish Dholakia | 85549c3d66 | fix(google_kms.py): support enums for key management system | 2023-12-27 13:19:33 +05:30
Krrish Dholakia | 6f695838e5 | feat(utils.py): support google kms for secret management (https://github.com/BerriAI/litellm/issues/1235) | 2023-12-26 15:39:40 +05:30
ishaan-jaff | c3aff30464 | (feat) add ollama_chat as a provider | 2023-12-25 23:04:17 +05:30
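A sketch of routing through the new ollama_chat provider, which targets Ollama's chat endpoint; the model name and local api_base are assumptions for a default Ollama install.

```python
import litellm

# "ollama_chat/<model>" uses Ollama's chat API rather than its generate API;
# llama2 and the default port are illustrative.
response = litellm.completion(
    model="ollama_chat/llama2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    api_base="http://localhost:11434",  # default local Ollama server
)
print(response.choices[0].message.content)
```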
Krrish Dholakia | 79978c44ba | refactor: add black formatting | 2023-12-25 14:11:20 +05:30
Krrish Dholakia | 70f4dabff6 | feat(gemini.py): add support for completion calls for gemini-pro (google ai studio) | 2023-12-24 09:42:58 +05:30
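A sketch of a Google AI Studio gemini-pro completion call added above; the GEMINI_API_KEY env var name is assumed.

```python
import os
import litellm

os.environ["GEMINI_API_KEY"] = "..."  # Google AI Studio key (assumed env var name)

# "gemini/<model>" routes to Google AI Studio rather than Vertex AI.
response = litellm.completion(
    model="gemini/gemini-pro",
    messages=[{"role": "user", "content": "Write a haiku about gradients."}],
)
print(response.choices[0].message.content)
```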
Krrish Dholakia | 51cb16a015 | feat(main.py): add support for image generation endpoint | 2023-12-16 21:07:29 -08:00
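A sketch of the new image generation entry point; the model name is illustrative and an OPENAI_API_KEY is assumed.

```python
import litellm

# litellm.image_generation mirrors the OpenAI images API; dall-e-3 is illustrative.
response = litellm.image_generation(
    prompt="a watercolor painting of a proxy server",
    model="dall-e-3",
)
print(response.data)
```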
ishaan-jaff | 3fd00393be | init vertex_vision_models | 2023-12-16 18:37:00 +05:30
ishaan-jaff | a5540bf24d | (feat) proxy logs: dynamodb - set table name | 2023-12-15 21:38:44 +05:30
ishaan-jaff | 0530d16595 | (feat) add openai.NotFoundError | 2023-12-15 10:18:02 +05:30
Krrish Dholakia | bb5b883316 | fix(main.py): support async streaming for text completions endpoint | 2023-12-14 13:56:32 -08:00
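A hedged sketch of the async streaming path for text completions fixed above, assuming atext_completion is the async counterpart of text_completion and that stream=True yields an async iterator of chunks.

```python
import asyncio
import litellm

async def main():
    # Assumes atext_completion is the async variant of text_completion;
    # with stream=True it yields chunks as they arrive.
    response = await litellm.atext_completion(
        model="gpt-3.5-turbo-instruct",
        prompt="Once upon a time",
        stream=True,
    )
    async for chunk in response:
        print(chunk)

asyncio.run(main())
```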
ishaan-jaff | 072fdac48c | (feat) caching - add supported call types | 2023-12-14 22:27:14 +05:30
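A sketch of scoping the cache to specific call types, per the commit above; the supported_call_types parameter name is taken from the feature description and may differ in other litellm versions.

```python
import litellm
from litellm.caching import Cache

# Cache only completion + embedding calls (parameter name per the commit;
# treat it as an assumption if your litellm version differs).
litellm.cache = Cache(type="local", supported_call_types=["completion", "embedding"])

# Identical requests should be served from the cache on the second call.
for _ in range(2):
    response = litellm.completion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "cache me"}],
    )
    print(response.choices[0].message.content)
```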
ishaan-jaff | 95454e5176 | (feat) mistral - add exception mapping | 2023-12-14 18:57:39 +05:30
ishaan-jaff | 303d9aa286 | (feat) add mistral api | 2023-12-14 18:17:48 +05:30
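A sketch of calling the new Mistral API provider; the MISTRAL_API_KEY env var and mistral-tiny model name follow the provider's own naming and are assumptions here.

```python
import os
import litellm

os.environ["MISTRAL_API_KEY"] = "..."  # assumed env var name

response = litellm.completion(
    model="mistral/mistral-tiny",
    messages=[{"role": "user", "content": "Summarize what a mixture of experts is."}],
)
print(response.choices[0].message.content)
```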
Krrish Dholakia | 853508e8c0 | fix(utils.py): support caching for embedding + log cache hits | 2023-12-13 18:37:30 -08:00
Krrish Dholakia | 72b9d4c5e8 | test(test_amazing_vertex_completion.py): fix testing | 2023-12-13 16:41:26 -08:00
Krrish Dholakia | 43b160d70d | feat(vertex_ai.py): adds support for gemini-pro on vertex ai | 2023-12-13 10:26:30 -08:00
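A sketch of the Vertex AI route for gemini-pro added above; vertex_project and vertex_location are litellm's usual Vertex parameters, with placeholder values, and Google Cloud application-default credentials are assumed.

```python
import litellm

# Requires Google Cloud application-default credentials; project and
# location values below are placeholders.
response = litellm.completion(
    model="vertex_ai/gemini-pro",
    messages=[{"role": "user", "content": "Hello from Vertex AI"}],
    vertex_project="my-gcp-project",
    vertex_location="us-central1",
)
print(response.choices[0].message.content)
```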
Krrish Dholakia | a65c8919fc | fix(router.py): fix least-busy routing | 2023-12-08 20:29:49 -08:00
ishaan-jaff | 1177c54636 | (feat) router - add model_group_alias_map | 2023-12-06 20:13:33 -08:00
ishaan-jaff | bac8125e5c | (feat) litellm - add _async_failure_callback | 2023-12-06 14:43:47 -08:00
Krrish Dholakia | a18bdb3f2e | fix(bedrock.py): adding support for cohere embeddings | 2023-12-06 13:25:18 -08:00
Frank Colson | 3c6f9333ac | Use litellm logging convention | 2023-12-05 22:28:23 -07:00
Krrish Dholakia | d1a525b6c9 | feat(utils.py): add async success callbacks for custom functions | 2023-12-04 16:42:40 -08:00
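A sketch of registering a custom async success callback, per the commit above; the callback signature shown (kwargs, response, start time, end time) follows litellm's custom-callback shape but should be treated as an assumption for your version.

```python
import asyncio
import litellm

async def log_success(kwargs, completion_response, start_time, end_time):
    # Custom async hook fired after each successful call.
    print("success:", kwargs.get("model"))

# Register the coroutine as a success callback.
litellm.success_callback = [log_success]

async def main():
    await litellm.acompletion(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": "trigger the callback"}],
    )

asyncio.run(main())
```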
Krrish Dholakia | 5b0968b380 | fix(__init__.py): fix linting error | 2023-12-01 20:08:08 -08:00
Krrish Dholakia | fbdcde1a54 | fix(proxy_server.py): fix linting errors | 2023-12-01 19:45:09 -08:00
Krrish Dholakia | 284fb64f4d | feat: support for azure key vault | 2023-12-01 19:36:06 -08:00
Krrish Dholakia | 60d6b6bc37 | fix(router.py): fix exponential backoff to use retry-after if present in headers | 2023-11-28 17:25:03 -08:00
ishaan-jaff | 00454df83f | (fix) add timeout to __init__ litellm | 2023-11-27 07:49:18 -08:00
Krrish Dholakia | 8884ceb606 | fix(proxy_server.py): expose a /health endpoint | 2023-11-25 18:28:47 -08:00
Krrish Dholakia | 68168cc743 | fix(router.py): fix retry logic | 2023-11-24 13:27:44 -08:00
Krrish Dholakia | 27fd144950 | docs(simple_proxy.md): add tutorial for doing fallbacks + retries + timeouts on the proxy | 2023-11-24 12:20:38 -08:00
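The tutorial above covers fallbacks, retries, and timeouts on the proxy; the same knobs exist on the Python Router, sketched below with illustrative deployments, placeholder API keys, and an assumed fallback mapping.

```python
from litellm import Router

model_list = [
    {
        "model_name": "gpt-4",
        "litellm_params": {"model": "gpt-4", "api_key": "sk-..."},
    },
    {
        "model_name": "gpt-3.5-turbo",
        "litellm_params": {"model": "gpt-3.5-turbo", "api_key": "sk-..."},
    },
]

# If gpt-4 fails, fall back to gpt-3.5-turbo; retry up to 3 times with a 30s timeout.
router = Router(
    model_list=model_list,
    fallbacks=[{"gpt-4": ["gpt-3.5-turbo"]}],
    num_retries=3,
    timeout=30,
)

response = router.completion(
    model="gpt-4",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)
```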
Krrish Dholakia | 9bb2c7ee0f | fix(utils.py): add param mapping for perplexity, anyscale, deepinfra | 2023-11-22 10:04:27 -08:00
Krrish Dholakia | 7fb3a71b47 | refactor(proxy_server.py): using celery workers instead of rq for concurrency | 2023-11-21 16:31:56 -08:00
Krrish Dholakia | 68c955409d | refactor(proxy_server.py): refactoring background rq worker | 2023-11-21 13:47:09 -08:00
Krrish Dholakia | b8e62f3d0c | feat(proxy_server.py): EXPERIMENTAL: adding queuing endpoints to openai proxy server | 2023-11-21 12:06:23 -08:00
Krrish Dholakia | c7e2cbd995 | fix(utils.py): adding support for rules + mythomax/alpaca prompt template | 2023-11-20 18:58:15 -08:00
Krrish Dholakia | 952dd61e0e | fix(init.py): exposing apiconnectionerror | 2023-11-20 08:12:29 -08:00
ishaan-jaff | e9f6741b0b | (v1.0+ breaking change) get_max_tokens -> return int | 2023-11-17 10:38:50 -08:00
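The breaking change above means get_max_tokens returns a plain int instead of a dict; a one-line sketch with an illustrative model name.

```python
import litellm

# Post-v1.0, get_max_tokens returns an int (the model's max token window)
# rather than a dict.
max_tokens = litellm.get_max_tokens("gpt-3.5-turbo")
assert isinstance(max_tokens, int)
print(max_tokens)
```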
Krrish Dholakia | f14bd24b46 | fix(openai.py): fix linting issues | 2023-11-16 11:01:28 -08:00
Krrish Dholakia | 4bd471644e | fix(openai.py): switch back to using requests instead of httpx | 2023-11-15 18:25:21 -08:00
Krrish Dholakia | ef4e5b9636 | test: set request timeout at request level | 2023-11-15 17:42:31 -08:00