llama-stack/llama_stack/distribution/ui
Michael Clifford 9657105304
feat: Add tools page to playground (#1904)
# What does this PR do?

This PR adds an additional page to the playground called "Tools". This
page connects to a llama-stack server and lists all the available LLM
models, built-in tools, and MCP tools in the sidebar. Users can select
whatever combination of model and tools they want from the sidebar for
their agent. Once the selections are made, users can chat with their
agent, similar to the RAG page, and test out agent tool use.

closes #1902 

## Test Plan

Ran the following commands with a llama-stack server running, and the updated
playground worked as expected.
```
export LLAMA_STACK_ENDPOINT="http://localhost:8321"
streamlit run llama_stack/distribution/ui/app.py
```


Signed-off-by: Michael Clifford <mcliffor@redhat.com>
2025-04-09 15:26:52 +02:00
| Name | Last commit | Last updated |
|------|-------------|--------------|
| `modules` | build: format codebase imports using ruff linter (#1028) | 2025-02-13 10:06:21 -08:00 |
| `page` | feat: Add tools page to playground (#1904) | 2025-04-09 15:26:52 +02:00 |
| `__init__.py` | move playground ui to llama-stack repo (#536) | 2024-11-26 22:04:21 -08:00 |
| `app.py` | feat: Add tools page to playground (#1904) | 2025-04-09 15:26:52 +02:00 |
| `Containerfile` | fix: Playground Container Issue (#1868) | 2025-04-09 11:45:15 +02:00 |
| `README.md` | feat: Created Playground Containerfile and Image Workflow (#1256) | 2025-03-18 09:26:49 -07:00 |
| `requirements.txt` | fix: Playground Container Issue (#1868) | 2025-04-09 11:45:15 +02:00 |

# (Experimental) Llama Stack UI

## Docker Setup

⚠️ This is a work in progress.
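
Until the Docker flow is finalized, a rough sketch of building and running the UI from the `Containerfile` in this directory could look like the following. The image name, build context, and host networking details are assumptions, not a documented workflow; port 8501 is Streamlit's default.

```
# Build the playground image from this directory's Containerfile (run from the repo root).
docker build -f llama_stack/distribution/ui/Containerfile -t llama-stack-ui .

# Run it, pointing the UI at a reachable Llama Stack server (placeholder endpoint below).
docker run -p 8501:8501 -e LLAMA_STACK_ENDPOINT="http://host.docker.internal:8321" llama-stack-ui
```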

## Developer Setup

1. Start up the Llama Stack API server. See the Llama Stack documentation for more details.

   ```
   llama stack build --template together --image-type conda

   llama stack run together
   ```
2. (Optional) Register datasets and eval tasks as resources if you want to run pre-configured evaluation flows (e.g., the Evaluations (Generation + Scoring) page). A sketch for verifying these registrations follows this list.

   ```
   llama-stack-client datasets register \
     --dataset-id "mmlu" \
     --provider-id "huggingface" \
     --url "https://huggingface.co/datasets/llamastack/evals" \
     --metadata '{"path": "llamastack/evals", "name": "evals__mmlu__details", "split": "train"}' \
     --schema '{"input_query": {"type": "string"}, "expected_answer": {"type": "string"}, "chat_completion_input": {"type": "string"}}'
   ```

   ```
   llama-stack-client benchmarks register \
     --eval-task-id meta-reference-mmlu \
     --provider-id meta-reference \
     --dataset-id mmlu \
     --scoring-functions basic::regex_parser_multiple_choice_answer
   ```
3. Start the Streamlit UI.

   ```
   cd llama_stack/distribution/ui
   pip install -r requirements.txt
   streamlit run app.py
   ```
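
Before opening the UI, you can check that the optional resources from step 2 were registered. This is an illustrative sketch; it assumes the matching `list` subcommands are available in your `llama-stack-client` version:

```
# Confirm the dataset and benchmark registered in step 2 are visible to the server.
llama-stack-client datasets list
llama-stack-client benchmarks list
```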

## Environment Variables

| Environment Variable | Description | Default Value |
|----------------------|-------------|---------------|
| LLAMA_STACK_ENDPOINT | The endpoint for the Llama Stack | http://localhost:8321 |
| FIREWORKS_API_KEY | API key for Fireworks provider | (empty string) |
| TOGETHER_API_KEY | API key for Together provider | (empty string) |
| SAMBANOVA_API_KEY | API key for SambaNova provider | (empty string) |
| OPENAI_API_KEY | API key for OpenAI provider | (empty string) |
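
For example, to point the UI at your server and supply a provider key, export the variables before launching Streamlit. The values below are placeholders:

```
# Placeholder values; substitute your own server address and provider key.
export LLAMA_STACK_ENDPOINT="http://localhost:8321"
export TOGETHER_API_KEY="<your-together-api-key>"
streamlit run app.py
```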