llama-stack-mirror/docs/source/distributions
Dmitry Rogozhkin 241a42bb26 docs: add example for intel gpu in vllm remote
This PR adds instructions for setting up a vLLM remote endpoint for the
vllm-remote Llama Stack distribution.
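
For orientation, a minimal sketch of what such an endpoint might look like on an Intel GPU host (the `--device xpu` flag and exact invocation are assumptions here, not the PR's verified instructions; see the added docs for the actual steps):

```
# Hedged sketch: start vLLM's OpenAI-compatible server for the test model.
# --device xpu targets Intel GPUs; availability depends on your vLLM build.
vllm serve meta-llama/Llama-3.2-3B-Instruct \
   --device xpu \
   --port 8000
```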

* Verified with manual tests of the configured vllm-remote distribution
  against a vLLM endpoint running on a system with an Intel GPU.
* Also verified with CI pytests (see the command line below). Tests pass to
  the same extent as on the NVIDIA A10 setup (a few tests fail, which appears
  to be a known issue with the vllm-remote Llama Stack distribution).

```
pytest -s -v tests/integration/inference/test_text_inference.py \
   --stack-config=http://localhost:5001 \
   --text-model=meta-llama/Llama-3.2-3B-Instruct
```
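
For reference, the Llama Stack server those tests target at `http://localhost:5001` can be brought up against the vLLM endpoint roughly as follows (a hedged sketch: the `remote-vllm` template name and the `VLLM_URL`/`INFERENCE_MODEL` variables follow the distribution's conventions but are assumptions, not commands from this PR):

```
# Hedged sketch: run the vllm-remote Llama Stack distribution on port 5001,
# pointing inference at the externally running vLLM endpoint.
llama stack run remote-vllm \
   --port 5001 \
   --env INFERENCE_MODEL=meta-llama/Llama-3.2-3B-Instruct \
   --env VLLM_URL=http://localhost:8000/v1
```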

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-04-15 07:15:37 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| ondevice_distro | docs: Fix trailing whitespace error (#1669) | 2025-03-17 08:53:30 -07:00 |
| remote_hosted_distro | feat: Add nemo customizer (#1448) | 2025-03-25 11:01:10 -07:00 |
| self_hosted_distro | docs: add example for intel gpu in vllm remote | 2025-04-15 07:15:37 -07:00 |
| building_distro.md | fix: Use CONDA_DEFAULT_ENV presence as a flag to use conda mode (#1555) | 2025-03-27 17:13:22 -04:00 |
| configuration.md | docs: Update quickstart page to structure things a little more for the novices (#1873) | 2025-04-10 14:09:00 -07:00 |
| importing_as_library.md | docs: update importing_as_library.md (#1863) | 2025-04-07 12:31:04 +02:00 |
| index.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| kubernetes_deployment.md | docs: Avoid bash script syntax highlighting for dark mode (#1918) | 2025-04-09 15:43:43 -07:00 |
| list_of_distributions.md | docs: Updated documentation and Sphinx configuration (#1845) | 2025-03-31 13:08:05 -07:00 |
| starting_llama_stack_server.md | docs: Update quickstart page to structure things a little more for the novices (#1873) | 2025-04-10 14:09:00 -07:00 |