llama-stack-mirror/llama_stack/templates/remote-vllm
Dmitry Rogozhkin 241a42bb26 docs: add example for intel gpu in vllm remote
This PR adds instructions for setting up a vLLM remote endpoint on an Intel GPU for the
remote-vllm Llama Stack distribution.
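
As an illustrative aside (not part of the PR itself): the remote-vllm distribution talks to vLLM through its OpenAI-compatible API. A minimal endpoint smoke test could look like the sketch below, assuming a vLLM server is already serving meta-llama/Llama-3.2-3B-Instruct; the URL, port, and API key value are assumptions for illustration.

```python
# Hypothetical smoke test of a vLLM OpenAI-compatible endpoint.
# Assumes vLLM is serving meta-llama/Llama-3.2-3B-Instruct at localhost:8000;
# the URL, port, and api_key value are assumptions, not taken from the PR.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # vLLM's OpenAI-compatible endpoint (assumed)
    api_key="not-needed",                 # vLLM ignores the key unless auth is configured
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
    max_tokens=32,
)
print(response.choices[0].message.content)
```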

* Verified with manual tests of the configured remote-vllm distribution against a vLLM
  endpoint running on a system with an Intel GPU (see the client sketch after the test
  command below)
* Also verified with the CI pytests (see the command line below). Tests pass to the same
  extent as on the Nvidia A10 setup (some tests do fail, which appears to be a known
  issue with the remote vLLM Llama Stack distribution)

```
pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=http://localhost:5001 \
   --text-model=meta-llama/Llama-3.2-3B-Instruct
```
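
For reference, a manual check in the spirit of the first bullet could hit the same stack endpoint with the Python client. This is a hedged sketch, assuming the llama-stack-client package is installed and that the distribution at http://localhost:5001 (the same URL the pytest --stack-config above points to) has meta-llama/Llama-3.2-3B-Instruct registered; method and field names are assumptions, not taken from the PR.

```python
# Hypothetical manual check against the running Llama Stack distribution at
# http://localhost:5001. Client method and response field names are assumptions.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5001")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.completion_message.content)
```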

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-04-15 07:15:37 -07:00
| File | Last commit | Date |
| --- | --- | --- |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| build.yaml | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| doc_template.md | docs: add example for intel gpu in vllm remote | 2025-04-15 07:15:37 -07:00 |
| run-with-safety.yaml | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| run.yaml | chore: Revert "chore(telemetry): remove service_name entirely" (#1785) | 2025-03-25 14:42:05 -07:00 |
| vllm.py | test: Add default vLLM URL in remote-vllm template (#1736) | 2025-03-21 07:31:59 -07:00 |