| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Krrish Dholakia | 6575143460 | feat(proxy_server.py): return litellm version in response headers | 2024-05-08 16:00:08 -07:00 |
| Krrish Dholakia | 48c2c3d78a | fix(utils.py): fix streaming to not return usage dict (Fixes https://github.com/BerriAI/litellm/issues/3237) | 2024-04-24 08:06:07 -07:00 |
| Krrish Dholakia | 4905929de3 | refactor: add black formatting | 2023-12-25 14:11:20 +05:30 |
| maqsoodshaik | 0f89c3375a | this commit fixes #883 | 2023-11-23 12:45:38 +01:00 |
| ishaan-jaff | 50f883a2fb | (fix) pydantic errors with response.time | 2023-11-20 18:28:19 -08:00 |
| Krrish Dholakia | 8c104e9c6a | fix(azure.py-+-proxy_server.py): fix function calling response object + support router on proxy | 2023-11-15 13:15:16 -08:00 |
| Krrish Dholakia | 45b6f8b853 | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00 |
| ishaan-jaff | 63928fa166 | (feat) use usage class for model responses for cohere, hf, tg ai, cohere | 2023-10-27 09:58:47 -07:00 |
| Krrish Dholakia | 7572086231 | style: fix linting errors | 2023-10-16 17:35:08 -07:00 |
| ishaan-jaff | 599be6a374 | raise vllm error | 2023-09-08 15:27:01 -07:00 |
| Krrish Dholakia | 6b3cb18983 | fix linting issues | 2023-09-06 20:43:59 -07:00 |
| Krrish Dholakia | 35cf6ef0a1 | batch completions for vllm now works too | 2023-09-06 19:26:19 -07:00 |
| Krrish Dholakia | 4cfcabd919 | adding support for vllm | 2023-09-06 18:07:44 -07:00 |