llama-stack/llama_stack/templates

Xi Yan · 5287b437ae · 2025-03-17 16:55:45 -07:00

# feat(api): (1/n) datasets api clean up (#1573)
## PR Stack
- https://github.com/meta-llama/llama-stack/pull/1573
- https://github.com/meta-llama/llama-stack/pull/1625
- https://github.com/meta-llama/llama-stack/pull/1656
- https://github.com/meta-llama/llama-stack/pull/1657
- https://github.com/meta-llama/llama-stack/pull/1658
- https://github.com/meta-llama/llama-stack/pull/1659
- https://github.com/meta-llama/llama-stack/pull/1660

**Client SDK**
- https://github.com/meta-llama/llama-stack-client-python/pull/203

**CI**
- 1391130488
<img width="1042" alt="image"
src="https://github.com/user-attachments/assets/69636067-376d-436b-9204-896e2dd490ca"
/>
  - the `test_rag_agent_with_attachments` failure is flaky and not related to this PR

## Doc
<img width="789" alt="image"
src="https://github.com/user-attachments/assets/b88390f3-73d6-4483-b09a-a192064e32d9"
/>


## Client Usage
```python
client.datasets.register(
    source={
        "type": "uri",
        "uri": "lsfs://mydata.jsonl",
    },
    schema="jsonl_messages",
    # optional 
    dataset_id="my_first_train_data"
)

# quick prototype debugging
client.datasets.register(
    data_reference={
        "type": "rows",
        "rows": [
            {"messages": [...]},
        ],
    },
    schema="jsonl_messages",
)
```
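
Once registered, a dataset can be looked up or removed by its `dataset_id`. A minimal sketch, assuming the existing `list` / `retrieve` / `unregister` endpoints of the Python client SDK (not changed by this PR) and an illustrative local server URL:

```python
from llama_stack_client import LlamaStackClient

# illustrative: point the client at a locally running stack
client = LlamaStackClient(base_url="http://localhost:8321")

# list all registered datasets
for ds in client.datasets.list():
    print(ds.identifier)

# fetch a single dataset's metadata by id
ds = client.datasets.retrieve("my_first_train_data")

# remove the registration when no longer needed
client.datasets.unregister("my_first_train_data")
```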

## Test Plan
- CI: 1387805545

```
LLAMA_STACK_CONFIG=fireworks pytest -v tests/integration/datasets/test_datasets.py
```

```
LLAMA_STACK_CONFIG=fireworks pytest -v tests/integration/scoring/test_scoring.py
```

```
pytest -v -s --nbval-lax ./docs/notebooks/Llama_Stack_Benchmark_Evals.ipynb
```
| Directory / File | Last commit | Date |
|------------------|-------------|------|
| bedrock | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| cerebras | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| ci-tests | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| dell | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| dev | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| experimental-post-training | feat: [post training] support save hf safetensor format checkpoint (#845) | 2025-02-25 23:29:08 -08:00 |
| fireworks | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| groq | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| hf-endpoint | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| hf-serverless | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| meta-reference-gpu | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| meta-reference-quantized-gpu | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| nvidia | feat: added nvidia as safety provider (#1248) | 2025-03-17 14:39:23 -07:00 |
| ollama | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| open-benchmark | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |
| passthrough | fix: passthrough provider template + fix (#1612) | 2025-03-13 09:44:26 -07:00 |
| remote-vllm | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| sambanova | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| tgi | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| together | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| vllm-gpu | test: add unit test to ensure all config types are instantiable (#1601) | 2025-03-12 22:29:58 -07:00 |
| __init__.py | Auto-generate distro yamls + docs (#468) | 2024-11-18 14:57:06 -08:00 |
| template.py | feat(api): (1/n) datasets api clean up (#1573) | 2025-03-17 16:55:45 -07:00 |