# What does this PR do?

This PR improves the Tools page in the LlamaStack Playground UI by enhancing the readability of the active tool list shown in the sidebar.

- Previously, active tools were displayed in a flat JSON array with verbose identifiers (e.g., `builtin::code_interpreter:code_interpreter`).
- This PR updates the logic to group tools by their toolgroup (e.g., `builtin::websearch`) and renders each tool name in a simplified, human-readable format (e.g., `web_search`).
- This change improves usability when working with multiple toolgroups, especially in configurations involving MCP tools or complex tool identifiers.

Before and After Comparison:

**Before**



**After**



## Test Plan

- Followed the [LlamaStack UI Developer Setup instructions](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/distribution/ui)
- Ran the Streamlit UI via: `uv run --with ".[ui]" streamlit run llama_stack/distribution/ui/app.py`
- Selected multiple built-in toolgroups (e.g., `code_interpreter`, `websearch`, `wolfram_alpha`) from the sidebar.
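The grouping described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code; the identifier format `toolgroup:tool_name` (with the toolgroup itself containing `::`) is assumed from the example in the description:

```python
from collections import defaultdict

def group_tools(identifiers):
    """Group verbose tool identifiers (e.g. 'builtin::websearch:web_search')
    by toolgroup, keeping only the short, human-readable tool name."""
    grouped = defaultdict(list)
    for ident in identifiers:
        # Split on the last single colon: everything before it is the
        # toolgroup, everything after it is the tool name.
        toolgroup, _, name = ident.rpartition(":")
        grouped[toolgroup].append(name)
    return dict(grouped)
```

For example, `group_tools(["builtin::websearch:web_search"])` returns `{"builtin::websearch": ["web_search"]}`, which the sidebar can then render as a toolgroup header with a short name beneath it.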
# (Experimental) Llama Stack UI
## Docker Setup

⚠️ This is a work in progress.
## Developer Setup
- Start up the Llama Stack API server. More details here.

```shell
llama stack build --template together --image-type conda
llama stack run together
```
- (Optional) Register datasets and eval tasks as resources if you want to run pre-configured evaluation flows (e.g., the Evaluations (Generation + Scoring) page).

```shell
llama-stack-client datasets register \
  --dataset-id "mmlu" \
  --provider-id "huggingface" \
  --url "https://huggingface.co/datasets/llamastack/evals" \
  --metadata '{"path": "llamastack/evals", "name": "evals__mmlu__details", "split": "train"}' \
  --schema '{"input_query": {"type": "string"}, "expected_answer": {"type": "string", "chat_completion_input": {"type": "string"}}}'
```

```shell
llama-stack-client benchmarks register \
  --eval-task-id meta-reference-mmlu \
  --provider-id meta-reference \
  --dataset-id mmlu \
  --scoring-functions basic::regex_parser_multiple_choice_answer
```
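The `basic::regex_parser_multiple_choice_answer` scoring function referenced above extracts a letter choice from the model's output via a regular expression and compares it with the expected answer. The snippet below is a rough illustration of that idea only, not the actual implementation; the pattern and function name are hypothetical:

```python
import re

# Hypothetical pattern: look for phrasings like "The answer is (B)" or "Answer: B".
ANSWER_PATTERN = re.compile(r"(?:answer is|Answer:)\s*\(?([A-D])\)?", re.IGNORECASE)

def score_multiple_choice(generation: str, expected: str) -> float:
    """Return 1.0 if the parsed letter matches the expected answer, else 0.0."""
    match = ANSWER_PATTERN.search(generation)
    if match is None:
        return 0.0  # no parseable answer in the generation
    return 1.0 if match.group(1).upper() == expected.upper() else 0.0
```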
- Start the Streamlit UI:

```shell
uv run --with ".[ui]" streamlit run llama_stack/distribution/ui/app.py
```
## Environment Variables
| Environment Variable | Description | Default Value |
|----------------------|-------------|---------------|
| `LLAMA_STACK_ENDPOINT` | The endpoint for the Llama Stack | `http://localhost:8321` |
| `FIREWORKS_API_KEY` | API key for the Fireworks provider | (empty string) |
| `TOGETHER_API_KEY` | API key for the Together provider | (empty string) |
| `SAMBANOVA_API_KEY` | API key for the SambaNova provider | (empty string) |
| `OPENAI_API_KEY` | API key for the OpenAI provider | (empty string) |
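Reading these variables in Python with the defaults from the table can be done with `os.getenv`. A minimal sketch (the `get_config` helper and key names in the returned dict are illustrative, not part of the UI's actual code):

```python
import os

def get_config():
    """Read UI configuration from the environment, falling back to the
    defaults listed in the table above (empty string when no key is set)."""
    return {
        "endpoint": os.getenv("LLAMA_STACK_ENDPOINT", "http://localhost:8321"),
        "fireworks_api_key": os.getenv("FIREWORKS_API_KEY", ""),
        "together_api_key": os.getenv("TOGETHER_API_KEY", ""),
        "sambanova_api_key": os.getenv("SAMBANOVA_API_KEY", ""),
        "openai_api_key": os.getenv("OPENAI_API_KEY", ""),
    }
```

For example, exporting `LLAMA_STACK_ENDPOINT=http://myhost:8321` before launching the UI would override the default endpoint.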