llama-stack/llama_stack
Charlie Doern 025f615868
feat: add support for running in a venv (#1018)
# What does this PR do?

Add `--image-type` to `llama stack run`, which takes `conda`, `container`, or
`venv`. Also add `start_venv.sh`, which starts the stack using a venv.

resolves #1007
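
For context, here is a minimal sketch of what a venv launcher like `start_venv.sh` could look like. The argument handling and variable names are assumptions, not the merged script; the server invocation mirrors the log in the test plan below.

```
#!/usr/bin/env bash
# Hypothetical sketch of a start_venv.sh-style launcher; argument
# handling and variable names are assumptions, not the merged script.
set -euo pipefail

VENV_PATH="$1"      # virtual environment to activate
YAML_CONFIG="$2"    # run configuration, e.g. ollama-run.yaml
PORT="${3:-8321}"   # server port, matching the default seen below

# Activate the venv, then hand off to the stack server.
source "$VENV_PATH/bin/activate"
exec python -m llama_stack.distribution.server.server \
  --yaml-config "$YAML_CONFIG" \
  --port "$PORT"
```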

## Test Plan

running locally:

`llama stack build --template ollama --image-type venv`
`llama stack run --image-type venv ~/.llama/distributions/ollama/ollama-run.yaml`
...
```
llama stack run --image-type venv ~/.llama/distributions/ollama/ollama-run.yaml
Using run configuration: /Users/charliedoern/.llama/distributions/ollama/ollama-run.yaml
+ python -m llama_stack.distribution.server.server --yaml-config /Users/charliedoern/.llama/distributions/ollama/ollama-run.yaml --port 8321
Using config file: /Users/charliedoern/.llama/distributions/ollama/ollama-run.yaml
Run configuration:
apis:
- agents
- datasetio
...
```
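
For completeness, a plausible end-to-end flow pairing standard Python venv tooling with the commands above (whether `llama stack build --image-type venv` provisions the venv itself or expects an existing one is an assumption here):

```
# Create and activate a virtual environment (standard Python tooling;
# the path is arbitrary).
python -m venv ~/.venvs/llama-stack
source ~/.venvs/llama-stack/bin/activate

# Build the ollama template for the venv image type, then run it,
# mirroring the test plan above.
llama stack build --template ollama --image-type venv
llama stack run --image-type venv ~/.llama/distributions/ollama/ollama-run.yaml
```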

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-02-12 11:13:04 -05:00
| Path | Last commit | Date |
| --- | --- | --- |
| apis | feat: make telemetry attributes be dict[str,PrimitiveType] (#1055) | 2025-02-11 15:10:17 -08:00 |
| cli | feat: add support for running in a venv (#1018) | 2025-02-12 11:13:04 -05:00 |
| distribution | feat: add support for running in a venv (#1018) | 2025-02-12 11:13:04 -05:00 |
| providers | feat: Support tool calling for streaming chat completion in remote vLLM provider (#1063) | 2025-02-12 06:17:21 -08:00 |
| scripts | fix: Gaps in doc codegen (#1035) | 2025-02-10 13:24:15 -08:00 |
| templates | fix: a bad newline in ollama docs (#1036) | 2025-02-10 14:27:17 -08:00 |
| __init__.py | export LibraryClient | 2024-12-13 12:08:00 -08:00 |