mirror of https://github.com/meta-llama/llama-stack.git
synced 2025-12-22 20:40:00 +00:00

Fix warnings in builds for documentation

This commit is contained in:
parent b5a6ecc331
commit 6e17e2ccf2

16 changed files with 55 additions and 0 deletions
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::meta-reference
 
 ## Description
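For context: `orphan: true` in the front matter is the Sphinx metadata field that marks a page as intentionally excluded from every toctree, which silences the "document isn't included in any toctree" build warning this commit targets. A minimal sketch of the resulting file header (heading and body are illustrative):

```markdown
---
orphan: true
---

# inline::meta-reference

## Description
```

Pages that are instead reachable through a toctree do not need the flag; it is only for standalone documents.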
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # remote::nvidia
 
 ## Description
@@ -1,6 +1,7 @@
 ---
 orphan: true
 ---
+
 # HuggingFace SFTTrainer
 
 [HuggingFace SFTTrainer](https://huggingface.co/docs/trl/en/sft_trainer) is an inline post training provider for Llama Stack. It allows you to run supervised fine tuning on a variety of models using many datasets.
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # Post_Training Providers
 
 This section contains documentation for all available providers for the **post_training** API.
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::huggingface
 
 ## Description
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::torchtune
 
 ## Description
@@ -1,6 +1,7 @@
 ---
 orphan: true
 ---
+
 # NVIDIA NEMO
 
 [NVIDIA NEMO](https://developer.nvidia.com/nemo-framework) is a remote post training provider for Llama Stack. It provides enterprise-grade fine-tuning capabilities through NVIDIA's NeMo Customizer service.
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # remote::nvidia
 
 ## Description
@@ -1,6 +1,7 @@
 ---
 orphan: true
 ---
+
 # TorchTune
 
 [TorchTune](https://github.com/pytorch/torchtune) is an inline post training provider for Llama Stack. It provides a simple and efficient way to fine-tune language models using PyTorch.
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::basic
 
 ## Description
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::braintrust
 
 ## Description
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # inline::llm-as-judge
 
 ## Description
@@ -11,4 +11,5 @@ See the [Adding a New API Provider](new_api_provider.md) which describes how to
 :hidden:
 
 new_api_provider
+testing
 ```
@@ -8,6 +8,7 @@ This section provides an overview of the distributions available in Llama Stack.
 :maxdepth: 3
 list_of_distributions
 building_distro
+starting_llama_stack_server
 customizing_run_yaml
 importing_as_library
 configuration
@@ -89,4 +89,14 @@ Is associated with the ToolGroup resources.
 :maxdepth: 1
 
 tool_runtime/index
+```
+
+## Files Providers
+
+This section contains documentation for all available providers for the **files** API.
+
+```{toctree}
+:maxdepth: 1
+
+files/index
 ```
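For reference, the toctree additions above use the MyST directive form of Sphinx's `toctree`; a minimal sketch of the pattern (document names are illustrative):

````markdown
```{toctree}
:maxdepth: 1
:hidden:

getting_started
providers/index
```
````

Entries are document paths relative to the current file, without the `.md` extension; `:hidden:` registers them in the navigation tree without rendering a visible list on the page, which is why adding a page here also clears its "not included in any toctree" warning.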
@@ -1,3 +1,7 @@
+---
+orphan: true
+---
+
 # remote::bing-search
 
 ## Description