commit acefea7821
parent 980f2ae039

    format

4 changed files with 27 additions and 30 deletions
docs/source/distribution_dev/index.md (new file, 19 lines)
@@ -0,0 +1,19 @@
# Llama Stack Developer Guide

## Key Concepts

### API Provider
A Provider is what makes the API real -- it supplies the actual implementation backing the API.

As an example, for Inference, the implementation could be backed by open-source libraries such as `[ torch | vLLM | TensorRT ]`.

A provider can also be just a pointer to a remote REST service -- for example, cloud providers or dedicated inference providers could serve these APIs.
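To make the two flavors concrete, here is a minimal sketch of a local provider and a remote provider implementing the same Inference contract. This is illustrative only: the class names, the `/completion` endpoint, and the use of `httpx` are assumptions for the sketch, not the actual llama-stack API.

```python
# Hypothetical sketch: both provider flavors satisfy the same API contract.
from abc import ABC, abstractmethod

import httpx  # assumed HTTP client; any client would do


class InferenceProvider(ABC):
    """The API surface every provider must implement."""

    @abstractmethod
    def completion(self, prompt: str) -> str: ...


class LocalTorchProvider(InferenceProvider):
    """Backed by local code, e.g. a torch model loaded in-process."""

    def completion(self, prompt: str) -> str:
        # a real implementation would call model.generate(...) here
        return f"<local completion for: {prompt}>"


class RemoteRESTProvider(InferenceProvider):
    """Just a pointer to a remote REST service serving the same API."""

    def __init__(self, base_url: str) -> None:
        self.base_url = base_url

    def completion(self, prompt: str) -> str:
        resp = httpx.post(f"{self.base_url}/completion", json={"prompt": prompt})
        resp.raise_for_status()
        return resp.json()["completion"]
```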
### Distribution

A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix-and-match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally but choose a cloud provider for a large model. Regardless, the higher-level APIs your app works with don't need to change at all. You can even move across the server / mobile-device boundary, always using the same uniform set of APIs for developing Generative AI applications.
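As a toy illustration of the mix-and-match idea (reusing the hypothetical classes from the sketch above; this is not how llama-stack actually wires a Distribution, and the URL is a placeholder):

```python
# Toy assembly: each API slot is filled by whichever provider you choose.
distribution = {
    # large model served by a remote cloud endpoint
    "inference": RemoteRESTProvider(base_url="https://inference.example.com"),
    # small model running in-process on local hardware
    "safety": LocalTorchProvider(),
}

# Application code depends only on the uniform API surface, so swapping
# the provider behind "inference" requires no application changes.
print(distribution["inference"].completion("Hello, Llama Stack!"))
```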
```{toctree}
:maxdepth: 1

building_distro
```