
# (Experimental) Llama Stack UI

## Docker Setup

⚠️ This is a work in progress.

## Developer Setup

1. Start up the Llama Stack API server (more details in the Llama Stack documentation):

   ```bash
   llama stack build --template together --image-type conda
   llama stack run together
   ```

2. (Optional) Register datasets and eval tasks as resources if you want to run pre-configured evaluation flows (e.g., the Evaluations (Generation + Scoring) page):

   ```bash
   llama-stack-client datasets register \
     --dataset-id "mmlu" \
     --provider-id "huggingface" \
     --url "https://huggingface.co/datasets/llamastack/evals" \
     --metadata '{"path": "llamastack/evals", "name": "evals__mmlu__details", "split": "train"}' \
     --schema '{"input_query": {"type": "string"}, "expected_answer": {"type": "string"}, "chat_completion_input": {"type": "string"}}'
   ```

   ```bash
   llama-stack-client benchmarks register \
     --eval-task-id meta-reference-mmlu \
     --provider-id meta-reference \
     --dataset-id mmlu \
     --scoring-functions basic::regex_parser_multiple_choice_answer
   ```

3. Start the Streamlit UI:

   ```bash
   cd llama_stack/distribution/ui
   pip install -r requirements.txt
   streamlit run app.py
   ```
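Before launching the UI, it can help to confirm the API server from step 1 is actually reachable. Below is a minimal standard-library sketch, not part of the playground itself; the `/v1/health` path is an assumption, so substitute whatever health route your server version exposes:

```python
import urllib.error
import urllib.request


def stack_is_reachable(base_url: str, timeout: float = 2.0) -> bool:
    """Return True if a Llama Stack server answers at base_url.

    The /v1/health route is an assumption; adjust it to the health
    endpoint your server version actually exposes.
    """
    try:
        with urllib.request.urlopen(f"{base_url}/v1/health", timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        # Connection refused, DNS failure, timeout, etc.
        return False


if __name__ == "__main__":
    print(stack_is_reachable("http://localhost:8321"))
```

If this prints `False`, check that `llama stack run` is still active and that `LLAMA_STACK_ENDPOINT` (see below) matches the server's address and port.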

## Environment Variables

| Environment Variable   | Description                          | Default Value            |
|------------------------|--------------------------------------|--------------------------|
| `LLAMA_STACK_ENDPOINT` | The endpoint for the Llama Stack     | `http://localhost:8321`  |
| `FIREWORKS_API_KEY`    | API key for the Fireworks provider   | (empty string)           |
| `TOGETHER_API_KEY`     | API key for the Together provider    | (empty string)           |
| `SAMBANOVA_API_KEY`    | API key for the SambaNova provider   | (empty string)           |
| `OPENAI_API_KEY`       | API key for the OpenAI provider      | (empty string)           |
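The defaults in the table follow the usual environment-variable pattern, which can be sketched as below. This is illustrative only; the UI's actual configuration code may resolve these values differently:

```python
import os

# Illustrative resolution of the playground's settings: each variable
# falls back to the default from the table above when unset.
LLAMA_STACK_ENDPOINT = os.environ.get("LLAMA_STACK_ENDPOINT", "http://localhost:8321")
FIREWORKS_API_KEY = os.environ.get("FIREWORKS_API_KEY", "")
TOGETHER_API_KEY = os.environ.get("TOGETHER_API_KEY", "")
SAMBANOVA_API_KEY = os.environ.get("SAMBANOVA_API_KEY", "")
OPENAI_API_KEY = os.environ.get("OPENAI_API_KEY", "")

print(LLAMA_STACK_ENDPOINT)
```

To point the UI at a remote server, export `LLAMA_STACK_ENDPOINT` (and any provider keys you need) in the shell before running `streamlit run app.py`.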