Author | Commit | Message | Date
Krrish Dholakia | 8a3b771e50 | fix(tests): fixing response objects for testing | 2023-11-13 14:39:30 -08:00
Krrish Dholakia | d4de55b053 | fix(together_ai.py): exception mapping for tgai | 2023-11-13 13:17:15 -08:00
ishaan-jaff | 27cbd7d895 | (fix) deepinfra with openai v1.0.0 | 2023-11-13 09:51:22 -08:00
Krrish Dholakia | c5c3096a47 | build(main.py): trigger testing | 2023-11-11 19:20:48 -08:00
Krrish Dholakia | 45b6f8b853 | refactor: fixing linting issues | 2023-11-11 18:52:28 -08:00
Krrish Dholakia | c6ce3fedcd | fix(main.py): fix caching for router | 2023-11-11 17:45:23 -08:00
Krrish Dholakia | 39c2597c33 | refactor(azure.py): working azure completion calls with openai v1 sdk | 2023-11-11 16:44:39 -08:00
Krrish Dholakia | d0bd932b3c | refactor(openai.py): working openai chat + text completion for openai v1 sdk | 2023-11-11 16:25:10 -08:00
Krrish Dholakia | d3323ba637 | refactor(openai.py): making it compatible for openai v1 BREAKING CHANGE: | 2023-11-11 15:33:02 -08:00
Krrish Dholakia | 41c94d50e2 | fix(text_completion.py): fix routing logic | 2023-11-10 15:46:37 -08:00
Krrish Dholakia | 18a8bd5543 | fix(utils.py): return function call as part of response object | 2023-11-10 11:02:10 -08:00
Krrish Dholakia | a4c9e6bd46 | fix(utils.py): fix cached responses - translate dict to objects | 2023-11-10 10:38:20 -08:00
Pratham Soni | 2f37baa690 | add custom open ai models to asyncio call | 2023-11-09 20:47:46 -08:00
Krrish Dholakia | 249cde3d40 | fix(main.py): accepting azure deployment_id | 2023-11-09 18:16:02 -08:00
Krrish Dholakia | b9e6989e41 | test: fix linting issues | 2023-11-09 16:50:43 -08:00
Krrish Dholakia | e12bff6d7f | refactor(azure.py): enabling async streaming with aiohttp | 2023-11-09 16:41:06 -08:00
Krrish Dholakia | c053782d96 | refactor(openai.py): support aiohttp streaming | 2023-11-09 16:15:30 -08:00
Krrish Dholakia | 86ef2a02f7 | fix(azure.py): adding support for aiohttp calls on azure + openai | 2023-11-09 10:40:33 -08:00
Krrish Dholakia | 9bfbdc18fb | feat(utils.py): enable returning complete response when stream=true | 2023-11-09 09:17:51 -08:00
Krrish Dholakia | 6f4707bbb3 | refactor(azure.py): moving embeddings to http call | 2023-11-08 19:07:21 -08:00
Krrish Dholakia | 70311502c8 | refactor(openai.py): moving embedding calls to http | 2023-11-08 19:01:17 -08:00
Krrish Dholakia | c2cbdb23fd | refactor(openai.py): moving openai text completion calls to http | 2023-11-08 18:40:03 -08:00
Krrish Dholakia | c57ed0a9d7 | refactor(openai.py): moving openai chat completion calls to http | 2023-11-08 17:40:41 -08:00
Krrish Dholakia | 53abc31c27 | refactor(azure.py): moving azure openai calls to http calls | 2023-11-08 16:52:18 -08:00
ishaan-jaff | 2a751c277f | (feat) add streaming for text_completion | 2023-11-08 11:58:07 -08:00
ishaan-jaff | 2498d95dc5 | (feat) parallel HF text completion + completion_with_retries show exception | 2023-11-06 17:58:06 -08:00
ishaan-jaff | b4797bec3b | (fix) bug fix: completion, text_completion, check if optional params are not None and pass to LLM | 2023-11-06 13:17:19 -08:00
ishaan-jaff | f591d79376 | (fix) linting fixes | 2023-11-06 13:02:11 -08:00
ishaan-jaff | 9d65867354 | (fix) text_completion naming | 2023-11-06 12:47:06 -08:00
ishaan-jaff | a2f2fd3841 | (fix) text completion linting | 2023-11-06 11:53:50 -08:00
ishaan-jaff | 1407ef15a8 | (fix) text_completion fixes | 2023-11-06 09:11:10 -08:00
ishaan-jaff | cac3148dff | (feat) text_completion add docstring | 2023-11-06 08:36:09 -08:00
Krrish Dholakia | 5b3978eff4 | fix(main.py): fixing print_verbose | 2023-11-04 14:41:34 -07:00
Krrish Dholakia | 763ecf681a | test(test_text_completion.py): fixing print verbose | 2023-11-04 14:03:09 -07:00
Krrish Dholakia | 6b40546e59 | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00
ishaan-jaff | d4430fc51e | (feat) text completion response now OpenAI Object | 2023-11-03 22:13:52 -07:00
ishaan-jaff | 6c4816e214 | (fix) remove print statements | 2023-11-03 16:45:28 -07:00
ishaan-jaff | 0fa7c1ec3a | (feat) text_com support batches for non openai llms | 2023-11-03 16:36:38 -07:00
Krrish Dholakia | e3a1c58dd9 | build(litellm_server/utils.py): add support for general settings + num retries as a module variable | 2023-11-02 20:56:41 -07:00
ishaan-jaff | 3f1b4c0759 | (fix) linting fix | 2023-11-02 17:28:45 -07:00
Krrish Dholakia | 512a1637eb | feat(completion()): enable setting prompt templates via completion() | 2023-11-02 16:24:01 -07:00
ishaan-jaff | 03860984eb | (feat) add setting input_type for cohere | 2023-11-02 10:16:35 -07:00
Krrish Dholakia | 740460f390 | fix(main.py): expose custom llm provider for text completions | 2023-11-02 07:55:54 -07:00
ishaan-jaff | 8ca7af3a63 | (feat) text completion set top_n_tokens for tgi | 2023-11-01 18:25:13 -07:00
ishaan-jaff | 863867fe00 | (fix) stream_chunk_builder | 2023-11-01 14:53:09 -07:00
ishaan-jaff | 2ad81bdd7b | (feat) embedding() add bedrock/amazon.titan-embed-text-v1 | 2023-11-01 13:55:28 -07:00
ishaan-jaff | 01d90691f9 | (docs) add num_retries to docstring | 2023-11-01 10:55:56 -07:00
stefan | bbc82f3afa | Use supplied headers | 2023-11-01 20:31:16 +07:00
ishaan-jaff | d1f2593dc0 | (fix) add usage tracking in callback | 2023-10-31 23:02:54 -07:00
Krrish Dholakia | 7762ae7762 | feat(utils.py): accept context window fallback dictionary | 2023-10-31 22:32:36 -07:00