# llama-stack-mirror/llama_stack/providers/utils
Latest commit f4ab154ade by Matthew Farrellee
feat: add dynamic model registration support to TGI inference (#3417)
# What does this PR do?

Adds dynamic model registration support to the TGI inference provider.
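
Roughly, dynamic registration means the provider discovers which model the TGI endpoint is serving instead of relying on a static list. Below is a minimal sketch of that idea, assuming TGI's `GET /info` endpoint reports the served `model_id`; the `register_models` helper and registry shape are illustrative, not the actual Llama Stack API.

```python
# Minimal sketch of dynamic model discovery against a TGI endpoint.
# Assumes TGI's GET /info returns JSON containing a "model_id" field.
import json
import urllib.request


def discover_tgi_models(base_url: str) -> list[str]:
    # TGI serves a single model per instance; /info reports which one.
    with urllib.request.urlopen(f"{base_url}/info") as resp:
        info = json.load(resp)
    return [info["model_id"]]


def register_models(base_url: str, registry: dict[str, str]) -> None:
    # Hypothetical registry: maps a provider-visible model id to the endpoint serving it.
    for model_id in discover_tgi_models(base_url):
        registry[model_id] = base_url


if __name__ == "__main__":
    registry: dict[str, str] = {}
    register_models("http://localhost:8080", registry)
    print(registry)
```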

Adds a new `overwrite_completion_id` option to `OpenAIMixin` to handle TGI always returning `id=""` in completion responses.
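
The gist of the workaround: when the backend returns an empty completion id, generate one client-side so downstream consumers (e.g. an inference store) still get a unique identifier. A minimal, self-contained sketch of that pattern follows; the class and attribute names are illustrative, not the actual `OpenAIMixin` code.

```python
import uuid
from dataclasses import dataclass


@dataclass
class ChatCompletionResponse:
    id: str
    content: str


class CompletionIdOverwriteMixin:
    # Hypothetical flag: adapters whose backends return unusable ids opt in.
    overwrite_completion_id: bool = False

    def _maybe_overwrite_id(self, response: ChatCompletionResponse) -> ChatCompletionResponse:
        # TGI always returns id="", so generate a unique id client-side;
        # callers need it to correlate responses.
        if self.overwrite_completion_id or not response.id:
            response.id = f"chatcmpl-{uuid.uuid4().hex}"
        return response


class TGIAdapter(CompletionIdOverwriteMixin):
    overwrite_completion_id = True  # TGI always returns an empty id

    def chat_completion(self, prompt: str) -> ChatCompletionResponse:
        # Stand-in for the real TGI call, which returns an empty id.
        raw = ChatCompletionResponse(id="", content=f"echo: {prompt}")
        return self._maybe_overwrite_id(raw)


if __name__ == "__main__":
    print(TGIAdapter().chat_completion("hello").id)  # e.g. chatcmpl-3f2a...
```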

## Test Plan

tgi: `docker run --gpus all --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference --model-id Qwen/Qwen3-0.6B`

stack: `TGI_URL=http://localhost:8080 uv run llama stack build --image-type venv --distro ci-tests --run`

test: `./scripts/integration-tests.sh --stack-config http://localhost:8321 --setup tgi --subdirs inference --pattern openai`
Committed: 2025-09-15 15:52:40 -04:00
| Name | Last commit | Date |
|------|-------------|------|
| bedrock | fix: use lambda pattern for bedrock config env vars (#3307) | 2025-09-05 10:45:11 +02:00 |
| common | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| datasetio | chore(misc): make tests and starter faster (#3042) | 2025-08-05 14:55:05 -07:00 |
| inference | feat: add dynamic model registration support to TGI inference (#3417) | 2025-09-15 15:52:40 -04:00 |
| kvstore | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |
| memory | chore(migrate apis): move VectorDBWithIndex from embeddings to openai_embeddings (#3294) | 2025-08-31 14:48:35 -07:00 |
| responses | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| scoring | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| sqlstore | fix(inference_store): on duplicate chat completion IDs, replace (#3408) | 2025-09-10 14:34:18 -07:00 |
| telemetry | chore: logging perf improvments (#3393) | 2025-09-10 11:52:23 -07:00 |
| tools | fix: show descriptive MCP server connection errors instead of generic 500s (#3256) | 2025-09-04 13:25:02 -07:00 |
| vector_io | feat: migrate to FIPS-validated cryptographic algorithms (#3423) | 2025-09-12 11:18:19 +02:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| pagination.py | chore(refact): move paginate_records fn outside of datasetio (#2137) | 2025-05-12 10:56:14 -07:00 |
| scheduler.py | refactor(logging): rename llama_stack logger categories (#3065) | 2025-08-21 17:31:04 -07:00 |