llama-stack-mirror/distributions
Xi Yan 23210e8679
llama stack distributions / templates / docker refactor (#266)
* docker compose ollama

* comment

* update compose file

* readme for distributions

* readme

* move distribution folders

* move distribution/templates to distributions/

* rename

* kill distribution/templates

* readme

* readme

* build/developer cookbook/new api provider

* developer cookbook

* readme

* readme

* [bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)

* fix case where memory bank is registered without provider_id

* memory test

* agents unit test

* Add an option to not use elastic agents for meta-reference inference (#269)

* Allow overriding checkpoint_dir via config

* Small rename

* Make all methods `async def` again; add completion() for meta-reference (#270)

PR #201 made several changes while trying to fix issues with getting the stream=False branches of the inference and agents APIs working. As part of this, it made a change which was slightly gratuitous: namely, making chat_completion() and its brethren plain "def" instead of "async def".

The rationale was that this allowed callers within llama-stack to use it as:

```
async for chunk in api.chat_completion(params)
```

However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```

Bonus: Added a completion() implementation for the meta-reference provider. Technically should have been another PR :)
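
To make the calling convention concrete, here is a minimal, self-contained sketch (the class and method bodies are hypothetical stand-ins, not the actual meta-reference code): with `async def`, the coroutine must first be awaited to obtain the async generator, which is then iterated with `async for`.

```
# Sketch only; ExampleInferenceAPI is a hypothetical stand-in for an inference API.
import asyncio
from typing import AsyncIterator


class ExampleInferenceAPI:
    async def chat_completion(self, params: dict) -> AsyncIterator[str]:
        async def _stream() -> AsyncIterator[str]:
            for token in ["Hello", ", ", "world", "\n"]:
                yield token

        # Returning the async generator from an `async def` method is what
        # produces the call pattern `async for chunk in await api.chat_completion(...)`.
        return _stream()


async def main() -> None:
    api = ExampleInferenceAPI()
    async for chunk in await api.chat_completion({"stream": True}):
        print(chunk, end="")


asyncio.run(main())
```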

* Improve an important error message

* update ollama for llama-guard3

* Add vLLM inference provider for OpenAI compatible vLLM server (#178)

This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
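
Because the server speaks the OpenAI API, one quick way to sanity-check it (independent of llama-stack) is to point the standard OpenAI Python client at it; the endpoint URL and model name below are placeholder assumptions, not values taken from this PR.

```
# Sketch only: assumes a vLLM server is already running with an
# OpenAI-compatible endpoint at http://localhost:8000/v1 (placeholder).
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed vLLM endpoint
    api_key="EMPTY",                      # vLLM typically ignores the key
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(response.choices[0].message.content)
```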

* Create .readthedocs.yaml

Trying out readthedocs

* Update event_logger.py (#275)

spelling error

* vllm

* build templates

* delete templates

* tmp add back build to avoid merge conflicts

* vllm

* vllm

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: raghotham <rsm@meta.com>
Co-authored-by: nehal-a2z <nehal@coderabbit.ai>
2024-10-21 11:17:53 -07:00
| Name | Last commit | Last commit date |
|---|---|---|
| bedrock | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| databricks | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| fireworks | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| hf-endpoint | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| hf-serverless | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| meta-reference-gpu | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| ollama | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| tgi | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| together | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| vllm | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| README.md | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |

Llama Stack Distribution

A Distribution is where APIs and Providers are assembled together to provide a consistent whole to the end application developer. You can mix and match providers -- some could be backed by local code and some could be remote. As a hobbyist, you can serve a small model locally but choose a cloud provider for a large model. Regardless, the higher-level APIs your app works with don't need to change at all. You can even imagine moving across the server / mobile-device boundary, always using the same uniform set of APIs for developing Generative AI applications.
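
As a rough conceptual sketch of that mix-and-match idea (the classes below are hypothetical stand-ins, not the actual llama-stack provider interfaces), the application code talks to one uniform inference interface while the backing provider is swapped freely:

```
# Conceptual sketch only; these protocol and provider classes are hypothetical.
from typing import Protocol


class Inference(Protocol):
    def chat_completion(self, prompt: str) -> str: ...


class LocalProvider:
    """Backed by local code, e.g. a small model served on this machine."""
    def chat_completion(self, prompt: str) -> str:
        return f"[local model] reply to: {prompt}"


class RemoteProvider:
    """Backed by a remote/cloud endpoint serving a larger model."""
    def __init__(self, endpoint: str) -> None:
        self.endpoint = endpoint

    def chat_completion(self, prompt: str) -> str:
        return f"[remote model @ {self.endpoint}] reply to: {prompt}"


def app(inference: Inference) -> None:
    # Application code depends only on the uniform API, not on the provider.
    print(inference.chat_completion("Plan a weekend trip."))


app(LocalProvider())
app(RemoteProvider("https://example.com/inference"))
```

Swapping `LocalProvider` for `RemoteProvider` changes nothing in `app()`, which is the same property a Distribution gives you at the level of the full API surface.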

Quick Start Llama Stack Distributions Guide

| Distribution | Llama Stack Docker | Start This Distribution | Inference | Agents | Memory | Safety | Telemetry |
|---|---|---|---|---|---|---|---|
| Meta Reference | llamastack/distribution-meta-reference-gpu | Guide | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| Ollama | llamastack/distribution-ollama | Guide | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
| TGI | llamastack/distribution-tgi | Guide | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |