Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-07-01 20:18:50 +00:00
## What does this PR do?

See issue: #747 -- `uv` is just plain better. This PR does the bare minimum: it replaces `pip install` with `uv pip install` and ensures `uv` exists in the environment.

## Test Plan

First: create a new conda environment and run `uv pip install -e .` on `llama-stack` -- all is good.

Next: run `llama stack build --template together` followed by `llama stack run together` -- all good.

Next: run `llama stack build --template together --image-name yoyo` followed by `llama stack run together --image-name yoyo` -- all good.

Next: fresh conda, `uv pip install -e .`, then `llama stack build --template together --image-type venv` -- all good.

Docker: `llama stack build --template together --image-type container` works!
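As a minimal sketch of the approach described above -- prefer `uv pip install` but make sure `uv` is actually present -- the helper below checks the PATH before choosing an install command. This is a hypothetical illustration, not the actual llama-stack implementation; the function name `pip_install_command` is invented for this example.

```python
import shutil


def pip_install_command(args):
    """Build an install command, preferring `uv pip install` when available.

    Hypothetical helper: if `uv` is found on PATH, route the install
    through it; otherwise fall back to plain `pip install`.
    """
    if shutil.which("uv"):
        return ["uv", "pip", "install", *args]
    return ["pip", "install", *args]
```

A caller would then hand the resulting argument list to `subprocess.run`, so the same code path works whether or not `uv` has been installed yet.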