llama-stack-mirror/llama_stack/providers
Matthew Farrellee c3fc859257 feat: add dynamic model registration support to TGI inference
Add a new `overwrite_completion_id` feature to `OpenAIMixin` to deal with TGI always returning `id=""` in completion responses.
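The idea can be sketched as follows. This is an illustrative approximation, not the actual llama-stack implementation: the class name `OpenAIMixin` comes from the commit message, but the attribute placement, method name `_maybe_overwrite_id`, and id format are assumptions.

```python
import uuid

class OpenAIMixin:
    """Illustrative sketch of the overwrite_completion_id behavior
    described above (hypothetical, not the real llama-stack class)."""

    # When True, responses get a locally generated id regardless of
    # what the backend returned.
    overwrite_completion_id: bool = False

    def _maybe_overwrite_id(self, completion: dict) -> dict:
        # TGI returns id="" on completions; substitute a locally
        # generated id so downstream consumers get a usable key.
        if self.overwrite_completion_id or not completion.get("id"):
            completion["id"] = f"chatcmpl-{uuid.uuid4().hex}"
        return completion

mixin = OpenAIMixin()
mixin.overwrite_completion_id = True
resp = mixin._maybe_overwrite_id({"id": "", "choices": []})
```

Generating the replacement id client-side keeps the rest of the response untouched while giving each completion a unique, non-empty identifier.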

Tested with:

tgi: `docker run --gpus all --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference --model-id Qwen/Qwen3-0.6B`

stack: `TGI_URL=http://localhost:8080 uv run llama stack build --image-type venv --distro ci-tests --run`

test: `./scripts/integration-tests.sh --stack-config http://localhost:8321 --setup tgi --subdirs inference --pattern openai`
2025-09-11 02:02:02 -04:00
inline | chore: Updating documentation, adding exception handling for Vector Stores in RAG Tool, more tests on migration, and migrate off of inference_api for context_retriever for RAG (#3367) | 2025-09-11 14:20:11 +02:00
registry | feat: add Azure OpenAI inference provider support (#3396) | 2025-09-11 13:48:38 +02:00
remote | feat: add dynamic model registration support to TGI inference | 2025-09-11 02:02:02 -04:00
utils | feat: add dynamic model registration support to TGI inference | 2025-09-11 02:02:02 -04:00
__init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00
datatypes.py | feat: create unregister shield API endpoint in Llama Stack (#2853) | 2025-08-05 07:33:46 -07:00