llama-stack/llama_stack
Dmitry Rogozhkin 935e706b15
docs: fix remote-vllm instructions (#1805)
# What does this PR do?

* Fix location of `run.yaml` relative to the cloned llama stack
repository
* Drop `-it` from `docker run` commands, since an interactive TTY is not
needed when running services
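To illustrate the second change: `-i` (keep stdin open) and `-t` (allocate a TTY) only matter when a human is attached to the container, e.g. for an interactive shell. A long-running service needs neither. The image name, port, and `run.yaml` mount path below are illustrative assumptions, not values taken from this PR:

```shell
# Interactive use: -it is appropriate because you get a shell inside
# the container (image name is an assumption for illustration).
docker run -it --entrypoint bash llamastack/distribution-remote-vllm

# Service use: no TTY needed; run detached with -d instead.
# Port and config path here are assumed, not from this PR.
docker run -d \
  -p 8321:8321 \
  -v ./run.yaml:/app/run.yaml \
  llamastack/distribution-remote-vllm \
  --config /app/run.yaml
```

Dropping `-it` also avoids the "the input device is not a TTY" error Docker raises when these flags are used in non-interactive contexts such as CI scripts.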

## Test Plan

* Verified running the llama stack by following the updated instructions

CC: @ashwinb

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-03-27 10:19:51 -04:00
| Path | Last commit | Date |
|---|---|---|
| apis | feat(api): don't return a payload on file delete (#1640) | 2025-03-25 17:12:36 -07:00 |
| cli | fix: multiple issues with getting_started notebook (#1795) | 2025-03-26 10:59:12 -07:00 |
| distribution | fix: multiple issues with getting_started notebook (#1795) | 2025-03-26 10:59:12 -07:00 |
| models/llama | feat: Support "stop" parameter in remote:vLLM (#1715) | 2025-03-24 12:42:55 -07:00 |
| providers | feat(api): don't return a payload on file delete (#1640) | 2025-03-25 17:12:36 -07:00 |
| strong_typing | fix: Support types.UnionType in schemas (#1721) | 2025-03-20 09:54:02 -07:00 |
| templates | docs: fix remote-vllm instructions (#1805) | 2025-03-27 10:19:51 -04:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| env.py | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| log.py | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00 |
| schema_utils.py | chore: make mypy happy with webmethod (#1758) | 2025-03-22 08:17:23 -07:00 |