# What does this PR do?

- Adds local persistence for the HF dataset provider, so registered datasets survive a server restart.
- Follow-up to https://github.com/meta-llama/llama-stack/pull/375.

## Test Plan

**e2e**

1. Fresh `llama stack run` with the YAML below.
2. Kill the server.
3. Restart `llama stack run` with the same YAML and confirm the dataset is still registered (see the verification sketch after the checklist).

```yaml
datasets:
  - dataset_id: mmlu
    provider_id: huggingface-0
    url:
      uri: https://huggingface.co/datasets/llamastack/evals
    metadata:
      path: llamastack/evals
      name: evals__mmlu__details
      split: train
    dataset_schema:
      input_query:
        type: string
      expected_answer:
        type: string
```

<img width="686" alt="image" src="https://github.com/user-attachments/assets/d7737931-6a7d-400a-a17d-fef6cbd97eea">

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md), Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
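To make step 3 of the test plan concrete, below is a minimal verification sketch using the `llama-stack-client` Python SDK. It is not part of the PR; the base URL, the `identifier` attribute on the returned dataset objects, and the `mmlu` check are assumptions to adapt to your own run configuration.

```python
# Minimal sketch (assumptions noted above): after killing and restarting
# `llama stack run`, confirm the dataset registered before the restart is
# still listed, i.e. that registration persisted locally.
from llama_stack_client import LlamaStackClient

# Assumption: the stack is serving on localhost:5000; adjust to match your run YAML.
client = LlamaStackClient(base_url="http://localhost:5000")

datasets = client.datasets.list()
identifiers = [d.identifier for d in datasets]  # assumes an `identifier` field on each dataset
print(identifiers)

# The dataset registered via the run YAML should survive the kill/restart cycle.
assert "mmlu" in identifiers, "dataset registration did not persist across restart"
```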