* Use huggingface_hub inference client for TGI inference
* Update the default value for the TGI URL
* Use InferenceClient.text_generation for TGI inference
* Apply post-review fixes and split the TGI adapter into local and Inference Endpoints variants
* Update CLI reference and add typing
* Rename TGI Adapter class
* Use HfApi to get the namespace when not provided in the HF endpoint name
* Remove unnecessary method argument
* Improve TGI adapter initialization condition
* Move helper into impl file + fix merge conflicts
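For context, here is a minimal sketch of what TGI inference through `huggingface_hub`'s `InferenceClient.text_generation` can look like. The server URL and generation parameters below are illustrative assumptions, not the adapter's actual defaults or configuration.

```python
from huggingface_hub import InferenceClient

# Hypothetical local TGI endpoint; the adapter's real default URL may differ.
TGI_URL = "http://localhost:8080"

# Pointing InferenceClient at a URL routes requests to that deployed server
# rather than to a model on the Hugging Face Hub.
client = InferenceClient(model=TGI_URL)

# text_generation calls TGI's generate endpoint; with stream=False and the
# default details=False it returns the generated text as a plain string.
response = client.text_generation(
    "Briefly explain what TGI is.",
    max_new_tokens=128,
    stream=False,
)
print(response)
```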
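Similarly, a sketch of resolving the namespace via `HfApi` when an Inference Endpoints name is given without one, as the commit list describes. The helper and its name-parsing convention are assumptions for illustration, not the adapter's actual implementation.

```python
from huggingface_hub import HfApi


def resolve_endpoint(endpoint_name: str, token: str | None = None) -> tuple[str, str]:
    """Split an endpoint name of the form '<namespace>/<name>'.

    If no namespace is present, fall back to the token owner's namespace
    via HfApi.whoami(). Hypothetical helper for illustration only.
    """
    if "/" in endpoint_name:
        namespace, name = endpoint_name.split("/", maxsplit=1)
    else:
        # whoami() returns account details for the token; "name" is the
        # account's namespace on the Hub.
        namespace = HfApi(token=token).whoami()["name"]
        name = endpoint_name
    return namespace, name
```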
Directory contents:

* __init__.py
* build_conda_env.sh
* build_container.sh
* common.sh
* configure.py
* datatypes.py
* distribution.py
* distribution_registry.py
* dynamic.py
* package.py
* server.py
* start_conda_env.sh
* start_container.sh