Author | Commit | Message | Date
Krrish Dholakia | e7d1840d5f | fix(main.py): fix caching for router | 2023-11-11 17:45:23 -08:00
Krrish Dholakia | c0a757a25f | refactor(azure.py): working azure completion calls with openai v1 sdk | 2023-11-11 16:44:39 -08:00
Krrish Dholakia | 1ec07c0aba | refactor(openai.py): working openai chat + text completion for openai v1 sdk | 2023-11-11 16:25:10 -08:00
Krrish Dholakia | a5ec85b1f2 | refactor(openai.py): making it compatible for openai v1 [BREAKING CHANGE] | 2023-11-11 15:33:02 -08:00
Krrish Dholakia | 54b4130d54 | fix(text_completion.py): fix routing logic | 2023-11-10 15:46:37 -08:00
Krrish Dholakia | 548605def8 | fix(utils.py): return function call as part of response object | 2023-11-10 11:02:10 -08:00
Krrish Dholakia | 67e8b12a09 | fix(utils.py): fix cached responses - translate dict to objects | 2023-11-10 10:38:20 -08:00
Pratham Soni | fc2f4cecdc | add custom open ai models to asyncio call | 2023-11-09 20:47:46 -08:00
Krrish Dholakia | af7468e9bc | fix(main.py): accepting azure deployment_id | 2023-11-09 18:16:02 -08:00
Krrish Dholakia | 1affa89ad7 | test: fix linting issues | 2023-11-09 16:50:43 -08:00
Krrish Dholakia | 272a6dc9b0 | refactor(azure.py): enabling async streaming with aiohttp | 2023-11-09 16:41:06 -08:00
Krrish Dholakia | 9b278f567b | refactor(openai.py): support aiohttp streaming | 2023-11-09 16:15:30 -08:00
Krrish Dholakia | 1d46891ceb | fix(azure.py): adding support for aiohttp calls on azure + openai | 2023-11-09 10:40:33 -08:00
Krrish Dholakia | 8ee4b1f603 | feat(utils.py): enable returning complete response when stream=true | 2023-11-09 09:17:51 -08:00
Krrish Dholakia | 678249ee09 | refactor(azure.py): moving embeddings to http call | 2023-11-08 19:07:21 -08:00
Krrish Dholakia | 880768f83d | refactor(openai.py): moving embedding calls to http | 2023-11-08 19:01:17 -08:00
Krrish Dholakia | e66373bd47 | refactor(openai.py): moving openai text completion calls to http | 2023-11-08 18:40:03 -08:00
Krrish Dholakia | decf86b145 | refactor(openai.py): moving openai chat completion calls to http | 2023-11-08 17:40:41 -08:00
Krrish Dholakia | 17f5e46080 | refactor(azure.py): moving azure openai calls to http calls | 2023-11-08 16:52:18 -08:00
ishaan-jaff | 11ee52207e | (feat) add streaming for text_completion | 2023-11-08 11:58:07 -08:00
ishaan-jaff | f8bcf32c4f | (feat) parallel HF text completion + completion_with_retries show exception | 2023-11-06 17:58:06 -08:00
ishaan-jaff | fdded281a9 | (fix) bug fix: completion, text_completion, check if optional params are not None and pass to LLM | 2023-11-06 13:17:19 -08:00
ishaan-jaff | 70fc10c010 | (fix) linting fixes | 2023-11-06 13:02:11 -08:00
ishaan-jaff | de66e42fd0 | (fix) text_completion naming | 2023-11-06 12:47:06 -08:00
ishaan-jaff | 82e44c7f84 | (fix) text completion linting | 2023-11-06 11:53:50 -08:00
ishaan-jaff | 2a15da509f | (fix) text_completion fixes | 2023-11-06 09:11:10 -08:00
ishaan-jaff | a2f8ab7eb1 | (feat) text_completion add docstring | 2023-11-06 08:36:09 -08:00
Krrish Dholakia | f7c5595a0d | fix(main.py): fixing print_verbose | 2023-11-04 14:41:34 -07:00
Krrish Dholakia | a83b07b310 | test(test_text_completion.py): fixing print verbose | 2023-11-04 14:03:09 -07:00
Krrish Dholakia | d0b23a2722 | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00
ishaan-jaff | 8521078793 | (feat) text completion response now OpenAI Object | 2023-11-03 22:13:52 -07:00
ishaan-jaff | a2b9ffdd61 | (fix) remove print statements | 2023-11-03 16:45:28 -07:00
ishaan-jaff | 1f8e29a1b4 | (feat) text_com support batches for non openai llms | 2023-11-03 16:36:38 -07:00
Krrish Dholakia | 127972a80b | build(litellm_server/utils.py): add support for general settings + num retries as a module variable | 2023-11-02 20:56:41 -07:00
ishaan-jaff | 395411d78f | (fix) linting fix | 2023-11-02 17:28:45 -07:00
Krrish Dholakia | 33c1118080 | feat(completion()): enable setting prompt templates via completion() | 2023-11-02 16:24:01 -07:00
ishaan-jaff | 36a2266382 | (feat) add setting input_type for cohere | 2023-11-02 10:16:35 -07:00
Krrish Dholakia | 943f9d9432 | fix(main.py): expose custom llm provider for text completions | 2023-11-02 07:55:54 -07:00
ishaan-jaff | 39b570dd81 | (feat) text completion set top_n_tokens for tgi | 2023-11-01 18:25:13 -07:00
ishaan-jaff | ad1afd7d36 | (fix) stream_chunk_builder | 2023-11-01 14:53:09 -07:00
ishaan-jaff | 0668d8d81e | (feat) embedding() add bedrock/amazon.titan-embed-text-v1 | 2023-11-01 13:55:28 -07:00
ishaan-jaff | f73289d1fc | (docs) add num_retries to docstring | 2023-11-01 10:55:56 -07:00
stefan | 608ddc244f | Use supplied headers | 2023-11-01 20:31:16 +07:00
ishaan-jaff | 098e399931 | (fix) add usage tracking in callback | 2023-10-31 23:02:54 -07:00
Krrish Dholakia | 2cf06a3235 | feat(utils.py): accept context window fallback dictionary | 2023-10-31 22:32:36 -07:00
Krrish Dholakia | 5ade263079 | style(main.py): fix linting issues | 2023-10-31 19:23:14 -07:00
Krrish Dholakia | b9e617c654 | feat(completion()): adding num_retries (https://github.com/BerriAI/litellm/issues/728) | 2023-10-31 19:14:55 -07:00
ishaan-jaff | 19177ae041 | (feat) add support for echo for HF logprobs | 2023-10-31 18:20:59 -07:00
ishaan-jaff | 525e5476f6 | (feat) textcompletion - transform hf log probs to openai text completion | 2023-10-31 17:15:35 -07:00
Krrish Dholakia | b98a58d1b1 | test(test_completion.py): re-add bedrock + sagemaker testing | 2023-10-31 16:49:13 -07:00