Krish Dholakia
34bdf36eab
Add inference providers support for Hugging Face (#8258) (#9738) (#9773)
* Add inference providers support for Hugging Face (#8258)
* add first version of inference providers for huggingface
* temporarily skipping tests
* Add documentation
* Fix titles
* remove max_retries from params and clean up
* add suggestions
* use llm http handler
* update doc
* add suggestions
* run formatters
* add tests
* revert
* revert
* rename file
* set maxsize for lru cache
* fix embeddings
* fix inference url
* fix tests following breaking change in main
* use ChatCompletionRequest
* fix tests and lint
* [Hugging Face] Remove outdated chat completion tests and fix embedding tests (#9749)
* remove or fix tests
* fix link in doc
* fix(config_settings.md): document hf api key
---------
Co-authored-by: célina <hanouticelina@gmail.com>
2025-04-05 10:50:15 -07:00
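A minimal usage sketch of the route added by this commit, assuming the provider-prefixed model string (`huggingface/<provider>/<model-id>`) and the `HF_TOKEN` key documented in the config_settings.md bullet; both names are taken from the PR description rather than the merged docs:

```python
# Sketch of calling the Hugging Face inference-providers route (#9773).
# The "huggingface/<provider>/<model>" string and HF_TOKEN are assumptions
# drawn from the PR description, not verified against the merged docs.
import os
import litellm

response = litellm.completion(
    model="huggingface/together/deepseek-ai/DeepSeek-R1",  # assumed provider-prefixed model id
    messages=[{"role": "user", "content": "Say hello"}],
    api_key=os.environ.get("HF_TOKEN"),  # assumed HF API key env var
)
print(response.choices[0].message.content)
```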
Ishaan Jaff
caa848649c
(Bug fix) - Using include_usage for /completions requests + unit testing (#8484)
* pass stream options (#8419)
* test_completion_streaming_usage_metrics
* test_text_completion_include_usage
---------
Co-authored-by: Kaushik Deka <55996465+Kaushikdkrikhanu@users.noreply.github.com>
2025-02-11 20:29:04 -08:00
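A sketch of the behavior covered by test_text_completion_include_usage; forwarding `stream_options` on /completions requests is inferred from the PR bullets, so verify against the merged tests before relying on it:

```python
# Sketch of include_usage on a streaming /completions request (#8484).
# stream_options passthrough is an assumption based on the PR description.
import litellm

stream = litellm.text_completion(
    model="gpt-3.5-turbo-instruct",
    prompt="Hello",
    stream=True,
    stream_options={"include_usage": True},  # OpenAI-style flag requesting a final usage chunk
)
for chunk in stream:
    print(chunk)  # the last chunk should carry the populated usage block
```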
Ishaan Jaff
58ce30acee
(fix) litellm.text_completion raises a non-blocking error on simple usage (#6546)
* unit test test_huggingface_text_completion_logprobs
* fix return value of TextCompletionHandler convert_chat_to_text_completion
* fix hf rest api
* fix test_huggingface_text_completion_logprobs
* fix linting errors
* fix import LiteLLMResponseObjectHandler
* fix test for LiteLLMResponseObjectHandler
* fix test text completion
2024-11-04 15:47:48 -08:00
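A sketch of the path exercised by test_huggingface_text_completion_logprobs; the model id is a placeholder, and the point is only that requesting logprobs no longer triggers the non-blocking error when the chat response is converted to a text completion:

```python
# Sketch of text_completion with logprobs against a Hugging Face model (#6546).
# The model id is hypothetical; only litellm.text_completion itself is from the source.
import litellm

response = litellm.text_completion(
    model="huggingface/mistralai/Mistral-7B-v0.1",  # hypothetical HF model id
    prompt="good morning",
    logprobs=1,
)
choice = response.choices[0]
print(choice.text)
print(choice.logprobs)  # populated (or None) instead of raising the error
```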