| Author | Commit | Message | Date |
|---|---|---|---|
| ishaan-jaff | 2a15da509f | (fix) text_completion fixes | 2023-11-06 09:11:10 -08:00 |
| ishaan-jaff | a2f8ab7eb1 | (feat) text_completion add docstring | 2023-11-06 08:36:09 -08:00 |
| Krrish Dholakia | f7c5595a0d | fix(main.py): fixing print_verbose | 2023-11-04 14:41:34 -07:00 |
| Krrish Dholakia | a83b07b310 | test(test_text_completion.py): fixing print verbose | 2023-11-04 14:03:09 -07:00 |
| Krrish Dholakia | d0b23a2722 | refactor(all-files): removing all print statements; adding pre-commit + flake8 to prevent future regressions | 2023-11-04 12:50:15 -07:00 |
| ishaan-jaff | 8521078793 | (feat) text completion response now OpenAI Object | 2023-11-03 22:13:52 -07:00 |
| ishaan-jaff | a2b9ffdd61 | (fix) remove print statements | 2023-11-03 16:45:28 -07:00 |
| ishaan-jaff | 1f8e29a1b4 | (feat) text_com support batches for non openai llms | 2023-11-03 16:36:38 -07:00 |
| Krrish Dholakia | 127972a80b | build(litellm_server/utils.py): add support for general settings + num retries as a module variable | 2023-11-02 20:56:41 -07:00 |
| ishaan-jaff | 395411d78f | (fix) linting fix | 2023-11-02 17:28:45 -07:00 |
| Krrish Dholakia | 33c1118080 | feat(completion()): enable setting prompt templates via completion() | 2023-11-02 16:24:01 -07:00 |
| ishaan-jaff | 36a2266382 | (feat) add setting input_type for cohere | 2023-11-02 10:16:35 -07:00 |
| Krrish Dholakia | 943f9d9432 | fix(main.py): expose custom llm provider for text completions | 2023-11-02 07:55:54 -07:00 |
| ishaan-jaff | 39b570dd81 | (feat) text completion set top_n_tokens for tgi | 2023-11-01 18:25:13 -07:00 |
| ishaan-jaff | ad1afd7d36 | (fix) stream_chunk_builder | 2023-11-01 14:53:09 -07:00 |
| ishaan-jaff | 0668d8d81e | (feat) embedding() add bedrock/amazon.titan-embed-text-v1 | 2023-11-01 13:55:28 -07:00 |
| ishaan-jaff | f73289d1fc | (docs) add num_retries to docstring | 2023-11-01 10:55:56 -07:00 |
| stefan | 608ddc244f | Use supplied headers | 2023-11-01 20:31:16 +07:00 |
| ishaan-jaff | 098e399931 | (fix) add usage tracking in callback | 2023-10-31 23:02:54 -07:00 |
| Krrish Dholakia | 2cf06a3235 | feat(utils.py): accept context window fallback dictionary | 2023-10-31 22:32:36 -07:00 |
| Krrish Dholakia | 5ade263079 | style(main.py): fix linting issues | 2023-10-31 19:23:14 -07:00 |
| Krrish Dholakia | b9e617c654 | feat(completion()): adding num_retries (https://github.com/BerriAI/litellm/issues/728) | 2023-10-31 19:14:55 -07:00 |
| ishaan-jaff | 19177ae041 | (feat) add support for echo for HF logprobs | 2023-10-31 18:20:59 -07:00 |
| ishaan-jaff | 525e5476f6 | (feat) textcompletion - transform hf log probs to openai text completion | 2023-10-31 17:15:35 -07:00 |
| Krrish Dholakia | b98a58d1b1 | test(test_completion.py): re-add bedrock + sagemaker testing | 2023-10-31 16:49:13 -07:00 |
| ishaan-jaff | 80c6920709 | (feat) text_completion return raw openai response for text_completion requests | 2023-10-31 15:31:24 -07:00 |
| ishaan-jaff | b99b137f10 | (fix) linting errors | 2023-10-31 14:43:10 -07:00 |
| ishaan-jaff | daeb4a7f9e | (feat) text_completion add support for passing prompt as array | 2023-10-31 14:29:43 -07:00 |
| Krrish Dholakia | 83fd829b49 | fix(main.py): removing print_verbose | 2023-10-30 20:37:12 -07:00 |
| Krrish Dholakia | 655789ee10 | test(test_async_fn.py): more logging | 2023-10-30 19:29:37 -07:00 |
| Krrish Dholakia | 2c05513cd4 | test(test_async_fn.py): adding more logging | 2023-10-30 19:11:07 -07:00 |
| Krrish Dholakia | 147d69f230 | feat(main.py): add support for maritalk api | 2023-10-30 17:36:51 -07:00 |
| ishaan-jaff | 85cd1faddd | (fix) improve batch_completion_models + multiple deployments, if 1 model fails, return result from 2nd | 2023-10-30 17:20:07 -07:00 |
| ishaan-jaff | d9f5989a7f | (docs) add docstring for batch_completion | 2023-10-30 14:31:34 -07:00 |
| ishaan-jaff | e0e468a56b | (docs) docstring for completion_with_retries | 2023-10-30 14:20:09 -07:00 |
| ishaan-jaff | fed0fec658 | (docs) add doc string for aembedding | 2023-10-30 13:59:56 -07:00 |
| ishaan-jaff | 263c5055ac | (fix) update acompletion docstring | 2023-10-30 13:53:32 -07:00 |
| ishaan-jaff | 19497d0c9a | (fix) remove bloat - ratelimitmanager | 2023-10-27 18:11:39 -07:00 |
| Krrish Dholakia | daa7aed7a4 | fix(utils.py/completion_with_fallbacks): accept azure deployment name in rotations | 2023-10-27 16:00:42 -07:00 |
| Krrish Dholakia | 715ea54544 | fix(utils.py): adding support for anyscale models | 2023-10-25 09:08:10 -07:00 |
| Krrish Dholakia | 98c25b08cd | fix(vertex_ai.py): fix output parsing | 2023-10-24 12:08:22 -07:00 |
| ishaan-jaff | a0651533f6 | (feat) add async embeddings | 2023-10-23 13:59:37 -07:00 |
| Krrish Dholakia | 8072366a5e | fix(main.py): multiple deployments fix - run in parallel | 2023-10-21 14:28:50 -07:00 |
| ishaan-jaff | c8f89f3484 | (fix) embedding() using get_llm_provider | 2023-10-20 15:00:08 -07:00 |
| ishaan-jaff | d4c81814f1 | (feat) native perplexity support | 2023-10-20 14:29:07 -07:00 |
| Krrish Dholakia | dcf431dbbe | feat(main.py): support multiple deployments in 1 completion call | 2023-10-20 13:01:53 -07:00 |
| Krrish Dholakia | 2f9e112c14 | fix(anthropic.py-+-bedrock.py): anthropic prompt format | 2023-10-20 10:56:15 -07:00 |
| Krrish Dholakia | 18a6facdb3 | fix: allow api base to be set for all providers (enables proxy use cases) | 2023-10-19 19:07:42 -07:00 |
| Krrish Dholakia | a415c79b8b | fix(anthropic.py): enable api base to be customized | 2023-10-19 18:45:29 -07:00 |
| Krrish Dholakia | 44cafb5bac | docs(proxy_server.md): update proxy server docs to include multi-agent autogen tutorial | 2023-10-17 09:22:34 -07:00 |