Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-22 20:40:00 +00:00
Fix warnings in builds for documentation

commit 6e17e2ccf2 (parent b5a6ecc331)

16 changed files with 55 additions and 0 deletions
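
Most of the hunks below add MyST front matter to provider reference pages (or add just the orphan: true line where front matter already exists). Setting orphan: true marks a page as intentionally unlinked, which silences Sphinx's "document isn't included in any toctree" warning during the docs build. A minimal sketch of the pattern on a hypothetical page (the page name is illustrative, not taken from this diff):

---
orphan: true
---

# example-provider

The remaining hunks take the other approach and add the missing pages to an existing {toctree} directive instead; see the sketch after the providers index hunk further down.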
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::meta-reference
 
 ## Description
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # remote::nvidia
 
 ## Description
@@ -1,6 +1,7 @@
 ---
+orphan: true
 ---
 
 # HuggingFace SFTTrainer
 
 [HuggingFace SFTTrainer](https://huggingface.co/docs/trl/en/sft_trainer) is an inline post training provider for Llama Stack. It allows you to run supervised fine tuning on a variety of models using many datasets
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # Post_Training Providers
 
 This section contains documentation for all available providers for the **post_training** API.
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::huggingface
 
 ## Description
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::torchtune
 
 ## Description
@@ -1,6 +1,7 @@
 ---
+orphan: true
 ---
 
 # NVIDIA NEMO
 
 [NVIDIA NEMO](https://developer.nvidia.com/nemo-framework) is a remote post training provider for Llama Stack. It provides enterprise-grade fine-tuning capabilities through NVIDIA's NeMo Customizer service.
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # remote::nvidia
 
 ## Description
@@ -1,6 +1,7 @@
 ---
+orphan: true
 ---
 
 # TorchTune
 
 [TorchTune](https://github.com/pytorch/torchtune) is an inline post training provider for Llama Stack. It provides a simple and efficient way to fine-tune language models using PyTorch.
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::basic
 
 ## Description
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::braintrust
 
 ## Description
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::llm-as-judge
 
 ## Description
@@ -11,4 +11,5 @@ See the [Adding a New API Provider](new_api_provider.md) which describes how to
 :hidden:
 
 new_api_provider
+testing
 ```
@@ -8,6 +8,7 @@ This section provides an overview of the distributions available in Llama Stack.
 :maxdepth: 3
 list_of_distributions
 building_distro
 starting_llama_stack_server
+customizing_run_yaml
 importing_as_library
 configuration
@@ -89,4 +89,14 @@ Is associated with the ToolGroup resources.
 :maxdepth: 1
 
 tool_runtime/index
 ```
+
+## Files Providers
+
+This section contains documentation for all available providers for the **files** API.
+
+```{toctree}
+:maxdepth: 1
+
+files/index
+```
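
The hunk above resolves the same class of warning the other way: instead of orphaning the new files provider docs, it links them from the providers index by appending a Files Providers section with its own {toctree} directive. A minimal sketch of that pattern, using an illustrative entry name rather than one from the repository:

```{toctree}
:maxdepth: 1

example_api/index
```

A page registered in a toctree this way appears in the navigation, so it needs no orphan front matter.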
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # remote::bing-search
 
 ## Description