llama-stack-mirror/tests/integration/inference
Matthew Farrellee c3fc859257 feat: add dynamic model registration support to TGI inference
Add a new `overwrite_completion_id` feature to `OpenAIMixin` to deal with TGI always returning `id=""`.
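
A minimal sketch of how such an id overwrite could work; the class and attribute names below are illustrative, not the actual `OpenAIMixin` implementation:

```python
# Hedged sketch: regenerate completion ids locally for providers (like TGI)
# that always return id="". Names here are hypothetical, not llama-stack code.
import uuid
from dataclasses import dataclass


@dataclass
class Completion:
    id: str
    content: str


class InferenceAdapter:
    # Hypothetical flag: providers whose ids are unusable opt in to having
    # the id regenerated locally before the response is returned.
    overwrite_completion_id: bool = False

    def _maybe_overwrite_id(self, completion: Completion) -> Completion:
        # Replace the provider-supplied id with a locally generated one when
        # enabled (or when the provider sent an empty id), so downstream
        # consumers always see a unique, non-empty id.
        if self.overwrite_completion_id or not completion.id:
            completion.id = f"cmpl-{uuid.uuid4().hex}"
        return completion


class TGIAdapter(InferenceAdapter):
    overwrite_completion_id = True  # TGI always returns id=""
```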

Test with:

tgi: `docker run --gpus all --shm-size 1g -p 8080:80 -v /data:/data ghcr.io/huggingface/text-generation-inference --model-id Qwen/Qwen3-0.6B`

stack: `TGI_URL=http://localhost:8080 uv run llama stack build --image-type venv --distro ci-tests --run`

test: `./scripts/integration-tests.sh --stack-config http://localhost:8321 --setup tgi --subdirs inference --pattern openai`
2025-09-11 02:02:02 -04:00
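
For context on the dynamic registration side, one way to do it is to ask the running TGI server which model it serves instead of configuring it statically. A hedged sketch using TGI's public `/info` endpoint (the helper name is hypothetical, not the adapter's actual code):

```python
# Hedged sketch: discover the model served by a TGI instance via GET /info,
# which returns JSON including the loaded model_id.
import httpx


def discover_tgi_model(base_url: str) -> str:
    # Query TGI's /info endpoint for metadata about the loaded model.
    resp = httpx.get(f"{base_url}/info", timeout=10.0)
    resp.raise_for_status()
    return resp.json()["model_id"]  # e.g. "Qwen/Qwen3-0.6B"


if __name__ == "__main__":
    print(discover_tgi_model("http://localhost:8080"))
```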
__init__.py fix: remove ruff N999 (#1388) 2025-03-07 11:14:04 -08:00
dog.png refactor: tests/unittests -> tests/unit; tests/api -> tests/integration 2025-03-04 09:57:00 -08:00
test_batch_inference.py feat: add batch inference API to llama stack inference (#1945) 2025-04-12 11:41:12 -07:00
test_embedding.py fix: fix the error type in embedding test case (#3197) 2025-08-21 16:19:51 -07:00
test_openai_completion.py feat: add dynamic model registration support to TGI inference 2025-09-11 02:02:02 -04:00
test_openai_embeddings.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00
test_text_inference.py feat: add Azure OpenAI inference provider support (#3396) 2025-09-11 13:48:38 +02:00
test_vision_inference.py feat(ci): add support for running vision inference tests (#2972) 2025-07-31 11:50:42 -07:00
vision_test_1.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
vision_test_2.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00
vision_test_3.jpg feat: introduce llama4 support (#1877) 2025-04-05 11:53:35 -07:00