llama-stack/llama_toolchain/inference
Celina Hanouti 736092f6bc
[Inference] Use huggingface_hub inference client for TGI adapter (#53)
* Use huggingface_hub inference client for TGI inference

* Update the default value for TGI URL

* Use InferenceClient.text_generation for TGI inference

* Apply post-review fixes and split the TGI adapter into local and Inference Endpoints variants

* Update CLI reference and add typing

* Rename TGI Adapter class

* Use HfApi to get the namespace when it is not provided in the HF endpoint name

* Remove unnecessary method argument

* Improve TGI adapter initialization condition

* Move helper into impl file + fix merge conflicts
2024-09-12 09:11:35 -07:00
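The commit above replaces direct HTTP calls to TGI with huggingface_hub's `InferenceClient.text_generation`. A minimal sketch of that pattern (the function name, URL, and parameter values here are illustrative assumptions, not code from this repo):

```python
from huggingface_hub import InferenceClient


def generate(prompt: str, url: str = "http://localhost:8080") -> str:
    """Query a running TGI server through huggingface_hub's InferenceClient.

    The default URL is a placeholder; point it at your own TGI endpoint.
    """
    # InferenceClient can target a TGI server directly via its base URL.
    client = InferenceClient(model=url)
    # text_generation maps onto TGI's /generate API.
    return client.text_generation(prompt, max_new_tokens=64)
```

Constructing the client makes no network request; only calling `generate` requires a live TGI server at the given URL.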
adapters/            [Inference] Use huggingface_hub inference client for TGI adapter (#53)         2024-09-12 09:11:35 -07:00
api/                 API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51)  2024-09-03 22:39:39 -07:00
meta_reference/      API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51)  2024-09-03 22:39:39 -07:00
quantization/        API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51)  2024-09-03 22:39:39 -07:00
__init__.py          Initial commit                                                                 2024-07-23 08:32:33 -07:00
client.py            API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51)  2024-09-03 22:39:39 -07:00
event_logger.py      formatting                                                                     2024-08-14 17:03:43 -04:00
prepare_messages.py  API Updates: fleshing out RAG APIs, introduce "llama stack" CLI command (#51)  2024-09-03 22:39:39 -07:00
providers.py         [Inference] Use huggingface_hub inference client for TGI adapter (#53)         2024-09-12 09:11:35 -07:00