
(Experimental) Llama Stack UI

Docker Setup

⚠️ This is a work in progress.

Developer Setup

  1. Start up the Llama Stack API server (see the Llama Stack documentation for details):
llama stack build --template together --image-type conda

llama stack run together
  2. (Optional) Register datasets and eval tasks as resources if you want to run pre-configured evaluation flows (e.g., the Evaluations (Generation + Scoring) page):
llama-stack-client datasets register \
--dataset-id "mmlu" \
--provider-id "huggingface" \
--url "https://huggingface.co/datasets/llamastack/evals" \
--metadata '{"path": "llamastack/evals", "name": "evals__mmlu__details", "split": "train"}' \
--schema '{"input_query": {"type": "string"}, "expected_answer": {"type": "string"}, "chat_completion_input": {"type": "string"}}'
llama-stack-client benchmarks register \
--eval-task-id meta-reference-mmlu \
--provider-id meta-reference \
--dataset-id mmlu \
--scoring-functions basic::regex_parser_multiple_choice_answer
  3. Start the Streamlit UI:
uv run --with ".[ui]" streamlit run llama_stack/distribution/ui/app.py
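The UI is configured through the environment variables listed below. As a minimal illustration (assuming the server from step 1 and a Together provider key; substitute your own values), you can set them inline before launching:

export LLAMA_STACK_ENDPOINT=http://localhost:8321    # server started in step 1
export TOGETHER_API_KEY=<your-together-api-key>      # only needed for the Together provider
uv run --with ".[ui]" streamlit run llama_stack/distribution/ui/app.py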

Environment Variables

Environment Variable   Description                               Default Value
LLAMA_STACK_ENDPOINT   The endpoint for the Llama Stack server   http://localhost:8321
FIREWORKS_API_KEY      API key for the Fireworks provider        (empty string)
TOGETHER_API_KEY       API key for the Together provider         (empty string)
SAMBANOVA_API_KEY      API key for the SambaNova provider        (empty string)
OPENAI_API_KEY         API key for the OpenAI provider           (empty string)
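As a quick sanity check (a minimal sketch; the fallback expansion is just standard shell, not something the UI requires), you can confirm the endpoint the UI will use is reachable with the llama-stack-client CLI:

export LLAMA_STACK_ENDPOINT=${LLAMA_STACK_ENDPOINT:-http://localhost:8321}   # use the documented default if unset
llama-stack-client configure --endpoint $LLAMA_STACK_ENDPOINT                # point the CLI at the same server
llama-stack-client models list                                               # smoke test: should list the served models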