Commit graph

375 commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| ishaan-jaff | 9d65867354 | (fix) text_completion naming | 2023-11-06 12:47:06 -08:00 |
| ishaan-jaff | a2f2fd3841 | (fix) text completion linting | 2023-11-06 11:53:50 -08:00 |
| ishaan-jaff | 1407ef15a8 | (fix) text_completion fixes | 2023-11-06 09:11:10 -08:00 |
| ishaan-jaff | cac3148dff | (feat) text_completion add docstring | 2023-11-06 08:36:09 -08:00 |
| Krrish Dholakia | 5b3978eff4 | fix(main.py): fixing print_verbose | 2023-11-04 14:41:34 -07:00 |
| Krrish Dholakia | 763ecf681a | test(test_text_completion.py): fixing print verbose | 2023-11-04 14:03:09 -07:00 |
| Krrish Dholakia | 6b40546e59 | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00 |
| ishaan-jaff | d4430fc51e | (feat) text completion response now OpenAI Object | 2023-11-03 22:13:52 -07:00 |
| ishaan-jaff | 6c4816e214 | (fix) remove print statements | 2023-11-03 16:45:28 -07:00 |
| ishaan-jaff | 0fa7c1ec3a | (feat) text_com support batches for non openai llms | 2023-11-03 16:36:38 -07:00 |
| Krrish Dholakia | e3a1c58dd9 | build(litellm_server/utils.py): add support for general settings + num retries as a module variable | 2023-11-02 20:56:41 -07:00 |
| ishaan-jaff | 3f1b4c0759 | (fix) linting fix | 2023-11-02 17:28:45 -07:00 |
| Krrish Dholakia | 512a1637eb | feat(completion()): enable setting prompt templates via completion() | 2023-11-02 16:24:01 -07:00 |
| ishaan-jaff | 03860984eb | (feat) add setting input_type for cohere | 2023-11-02 10:16:35 -07:00 |
| Krrish Dholakia | 740460f390 | fix(main.py): expose custom llm provider for text completions | 2023-11-02 07:55:54 -07:00 |
| ishaan-jaff | 8ca7af3a63 | (feat) text completion set top_n_tokens for tgi | 2023-11-01 18:25:13 -07:00 |
| ishaan-jaff | 863867fe00 | (fix) stream_chunk_builder | 2023-11-01 14:53:09 -07:00 |
| ishaan-jaff | 2ad81bdd7b | (feat) embedding() add bedrock/amazon.titan-embed-text-v1 | 2023-11-01 13:55:28 -07:00 |
| ishaan-jaff | 01d90691f9 | (docs) add num_retries to docstring | 2023-11-01 10:55:56 -07:00 |
| stefan | bbc82f3afa | Use supplied headers | 2023-11-01 20:31:16 +07:00 |
| ishaan-jaff | d1f2593dc0 | (fix) add usage tracking in callback | 2023-10-31 23:02:54 -07:00 |
| Krrish Dholakia | 7762ae7762 | feat(utils.py): accept context window fallback dictionary | 2023-10-31 22:32:36 -07:00 |
| Krrish Dholakia | f3efd566c9 | style(main.py): fix linting issues | 2023-10-31 19:23:14 -07:00 |
| Krrish Dholakia | 125642563c | feat(completion()): adding num_retries (https://github.com/BerriAI/litellm/issues/728) | 2023-10-31 19:14:55 -07:00 |
| ishaan-jaff | ce462824be | (feat) add support for echo for HF logprobs | 2023-10-31 18:20:59 -07:00 |
| ishaan-jaff | 9223f7cc7a | (feat) textcompletion - transform hf log probs to openai text completion | 2023-10-31 17:15:35 -07:00 |
| Krrish Dholakia | 4d95756432 | test(test_completion.py): re-add bedrock + sagemaker testing | 2023-10-31 16:49:13 -07:00 |
| ishaan-jaff | de47058e32 | (feat) text_completion return raw openai response for text_completion requests | 2023-10-31 15:31:24 -07:00 |
| ishaan-jaff | 4875af17a1 | (fix) linting errors | 2023-10-31 14:43:10 -07:00 |
| ishaan-jaff | b4e14aed6b | (feat) text_completion add support for passing prompt as array | 2023-10-31 14:29:43 -07:00 |
| Krrish Dholakia | 3743893e76 | fix(main.py): removing print_verbose | 2023-10-30 20:37:12 -07:00 |
| Krrish Dholakia | 6b715597e9 | test(test_async_fn.py): more logging | 2023-10-30 19:29:37 -07:00 |
| Krrish Dholakia | 75736cb852 | test(test_async_fn.py): adding more logging | 2023-10-30 19:11:07 -07:00 |
| Krrish Dholakia | 0ed3917b09 | feat(main.py): add support for maritalk api | 2023-10-30 17:36:51 -07:00 |
| ishaan-jaff | 9f72ce9fc6 | (fix) improve batch_completion_models + multiple deployments, if 1 model fails, return result from 2nd | 2023-10-30 17:20:07 -07:00 |
| ishaan-jaff | 43b450319e | (docs) add docstring for batch_completion | 2023-10-30 14:31:34 -07:00 |
| ishaan-jaff | f52b36a338 | (docs) docstring for completion_with_retries | 2023-10-30 14:20:09 -07:00 |
| ishaan-jaff | 43b8387334 | (docs) add doc string for aembedding | 2023-10-30 13:59:56 -07:00 |
| ishaan-jaff | 7037913f9d | (fix) update acompletion docstring | 2023-10-30 13:53:32 -07:00 |
| ishaan-jaff | cac320f74e | (fix) remove bloat - ratelimitmanager | 2023-10-27 18:11:39 -07:00 |
| Krrish Dholakia | afe14c8a96 | fix(utils.py/completion_with_fallbacks): accept azure deployment name in rotations | 2023-10-27 16:00:42 -07:00 |
| Krrish Dholakia | c1b2553827 | fix(utils.py): adding support for anyscale models | 2023-10-25 09:08:10 -07:00 |
| Krrish Dholakia | f12dc5df21 | fix(vertex_ai.py): fix output parsing | 2023-10-24 12:08:22 -07:00 |
| ishaan-jaff | 6373f6bddd | (feat) add async embeddings | 2023-10-23 13:59:37 -07:00 |
| Krrish Dholakia | cd0e699bcf | fix(main.py): multiple deployments fix - run in parallel | 2023-10-21 14:28:50 -07:00 |
| ishaan-jaff | 0b0564167c | (fix) embedding() using get_llm_provider | 2023-10-20 15:00:08 -07:00 |
| ishaan-jaff | 114d8fda65 | (feat) native perplexity support | 2023-10-20 14:29:07 -07:00 |
| Krrish Dholakia | 1f1cf7a11c | feat(main.py): support multiple deployments in 1 completion call | 2023-10-20 13:01:53 -07:00 |
| Krrish Dholakia | 4b48af7c3c | fix(anthropic.py-+-bedrock.py): anthropic prompt format | 2023-10-20 10:56:15 -07:00 |
| Krrish Dholakia | 00993f3575 | fix: allow api base to be set for all providers (enables proxy use cases) | 2023-10-19 19:07:42 -07:00 |
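
Several of these commits extend the `completion()` call itself, notably 125642563c (adding `num_retries`) and 7762ae7762 (accepting a context window fallback dictionary). Below is a minimal sketch of how those two options might be combined, assuming the keyword arguments match the commit messages and that an `OPENAI_API_KEY` is set in the environment; the model names and the fallback mapping are illustrative, not taken from the repo:

```python
# Sketch only: assumes litellm's completion() accepts num_retries and
# context_window_fallback_dict keyword arguments, per commits 125642563c
# and 7762ae7762. Requires OPENAI_API_KEY; model names are illustrative.
from litellm import completion

response = completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Summarize these release notes."}],
    num_retries=2,  # retry transient provider failures
    # if the prompt exceeds the model's context window, fall back to the mapped model
    context_window_fallback_dict={"gpt-3.5-turbo": "gpt-3.5-turbo-16k"},
)
print(response.choices[0].message.content)
```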
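
Commits d4430fc51e, b4e14aed6b, and 0fa7c1ec3a touch `text_completion()`: the response becomes an OpenAI-style object and `prompt` may be passed as an array, including batches for non-OpenAI models. A hedged sketch under those assumptions; the instruct model name is only an example:

```python
# Sketch: assumes text_completion() mirrors the OpenAI completions API,
# accepting a list of prompts and returning an OpenAI-style response object
# (per commits b4e14aed6b and d4430fc51e). Model name is illustrative.
from litellm import text_completion

response = text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt=["Say hello in French.", "Say hello in German."],  # batch of prompts
    max_tokens=16,
)
for choice in response.choices:
    print(choice.text)
```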
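
Finally, commits 43b450319e and 9f72ce9fc6 reference `batch_completion` and `batch_completion_models`, which fan one call out across multiple conversations or deployments. A rough sketch of the batch call, assuming `batch_completion()` takes a single model plus a list of message lists and returns one response per conversation; the prompts are placeholders:

```python
# Sketch: assumes batch_completion() accepts a list of message lists and
# returns responses in the same order (per commit 43b450319e's docstring work).
from litellm import batch_completion

responses = batch_completion(
    model="gpt-3.5-turbo",
    messages=[
        [{"role": "user", "content": "What does num_retries do?"}],
        [{"role": "user", "content": "What is a context window fallback?"}],
    ],
)
for r in responses:
    print(r.choices[0].message.content)
```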