llama-stack/llama_stack/distribution/ui
Latest commit: f12011794b by Michael Clifford
fix: Updated tools playground to allow vdb selection (#1960)
# What does this PR do?

This PR lets users select an existing vector database (vdb) to use with their agent on the Tools page of the playground. The drop-down menu for selecting a vdb only appears when the RAG tool is selected. Without this change, there is no way for a user to specify which vdb the RAG tool should use on the Tools page. The RAG options are intentionally kept sparse here, since the full set of RAG options is exposed on the RAG page.
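A minimal sketch of the behavior described above, assuming a Streamlit multiselect per tool and the client's `vector_dbs.list()` call; the variable names (`selected_tools`, `selected_vector_dbs`) are illustrative, not the actual identifiers used in the Tools page:

```python
# Sketch only: a vdb drop-down that is shown only while the RAG tool is selected.
import streamlit as st
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# Let the user pick which built-in tools the agent may use.
selected_tools = st.multiselect(
    "Tools",
    options=["builtin::rag", "builtin::websearch"],
)

selected_vector_dbs = []
if "builtin::rag" in selected_tools:
    # The vector DB drop-down only appears when the RAG tool is active.
    vector_db_ids = [vdb.identifier for vdb in client.vector_dbs.list()]
    selected_vector_dbs = st.multiselect("Select vector DBs", options=vector_db_ids)
```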

## Test Plan

Without these changes, the RAG tool throws the following error:
`name: knowledge_search) does not have any content`

With these changes, the RAG tool works as expected.
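For context on why the error above goes away, the selected vdb ids have to reach the agent's RAG tool arguments. A hedged sketch of that wiring, following the `builtin::rag` toolgroup-with-args pattern from the Llama Stack RAG docs (the exact plumbing inside the Tools page may differ, and the model id is just an example):

```python
# Sketch: pass the user's vector DB selection to the agent's RAG tool.
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent

client = LlamaStackClient(base_url="http://localhost:8321")
selected_vector_dbs = ["my_documents"]  # e.g. the values chosen in the drop-down

agent = Agent(
    client,
    model="meta-llama/Llama-3.3-70B-Instruct",  # example model id
    instructions="You are a helpful assistant.",
    tools=[
        {
            "name": "builtin::rag/knowledge_search",
            "args": {"vector_db_ids": selected_vector_dbs},
        }
    ],
)
```

Without a vdb attached, the knowledge_search tool has nothing to retrieve from, which is the error the Test Plan reproduces.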

Signed-off-by: Michael Clifford <mcliffor@redhat.com>
2025-04-17 09:29:40 +02:00
| Path | Last commit | Date |
| --- | --- | --- |
| `modules` | fix: add tavily_search option to playground api (#1909) | 2025-04-09 15:56:41 +02:00 |
| `page` | fix: Updated tools playground to allow vdb selection (#1960) | 2025-04-17 09:29:40 +02:00 |
| `__init__.py` | move playground ui to llama-stack repo (#536) | 2024-11-26 22:04:21 -08:00 |
| `app.py` | feat: Add tools page to playground (#1904) | 2025-04-09 15:26:52 +02:00 |
| `Containerfile` | fix: Playground Container Issue (#1868) | 2025-04-09 11:45:15 +02:00 |
| `README.md` | chore: simplify running the demo UI (#1907) | 2025-04-09 11:22:29 -07:00 |
| `requirements.txt` | chore: simplify running the demo UI (#1907) | 2025-04-09 11:22:29 -07:00 |

# (Experimental) Llama Stack UI

## Docker Setup

⚠️ This is a work in progress.

## Developer Setup

1. Start up the Llama Stack API server. More details are available in the Llama Stack documentation. A quick verification sketch follows this list.

   ```bash
   llama stack build --template together --image-type conda

   llama stack run together
   ```
2. (Optional) Register datasets and eval tasks as resources if you want to run pre-configured evaluation flows (e.g., the Evaluations (Generation + Scoring) page).

   ```bash
   llama-stack-client datasets register \
     --dataset-id "mmlu" \
     --provider-id "huggingface" \
     --url "https://huggingface.co/datasets/llamastack/evals" \
     --metadata '{"path": "llamastack/evals", "name": "evals__mmlu__details", "split": "train"}' \
     --schema '{"input_query": {"type": "string"}, "expected_answer": {"type": "string"}, "chat_completion_input": {"type": "string"}}'
   ```

   ```bash
   llama-stack-client benchmarks register \
     --eval-task-id meta-reference-mmlu \
     --provider-id meta-reference \
     --dataset-id mmlu \
     --scoring-functions basic::regex_parser_multiple_choice_answer
   ```
3. Start the Streamlit UI.

   ```bash
   uv run --with ".[ui]" streamlit run llama_stack/distribution/ui/app.py
   ```
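
Once the server from step 1 is running (and, optionally, the resources from step 2 are registered), a short Python check can confirm the UI has something to talk to. This is a sketch rather than part of the playground, and it only assumes the standard `llama_stack_client` list calls:

```python
# Quick sanity check before launching the UI (sketch, not part of the repo).
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

# The server from step 1 should report at least one available model.
print([m.identifier for m in client.models.list()])

# If you completed step 2, the dataset and benchmark should show up here.
print([d.identifier for d in client.datasets.list()])
print([b.identifier for b in client.benchmarks.list()])
```

The UI itself is served by Streamlit, which listens on http://localhost:8501 by default.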

## Environment Variables

| Environment Variable | Description | Default Value |
| --- | --- | --- |
| `LLAMA_STACK_ENDPOINT` | The endpoint for the Llama Stack | `http://localhost:8321` |
| `FIREWORKS_API_KEY` | API key for Fireworks provider | (empty string) |
| `TOGETHER_API_KEY` | API key for Together provider | (empty string) |
| `SAMBANOVA_API_KEY` | API key for SambaNova provider | (empty string) |
| `OPENAI_API_KEY` | API key for OpenAI provider | (empty string) |
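
These variables map directly onto how a client can be constructed for the playground. A minimal sketch, assuming the UI reads them from `os.environ` with the defaults above and forwards provider keys as `provider_data`; the key names inside `provider_data` are assumptions here and should be checked against the provider configs:

```python
# Sketch of how the documented environment variables could be consumed.
# Defaults mirror the table above; the real wiring inside the UI may differ.
import os

from llama_stack_client import LlamaStackClient

client = LlamaStackClient(
    base_url=os.environ.get("LLAMA_STACK_ENDPOINT", "http://localhost:8321"),
    provider_data={
        # provider_data key names are assumptions; check the provider configs.
        "fireworks_api_key": os.environ.get("FIREWORKS_API_KEY", ""),
        "together_api_key": os.environ.get("TOGETHER_API_KEY", ""),
        "sambanova_api_key": os.environ.get("SAMBANOVA_API_KEY", ""),
        "openai_api_key": os.environ.get("OPENAI_API_KEY", ""),
    },
)
```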