llama-stack/llama_stack/distribution
Xi Yan 23210e8679
llama stack distributions / templates / docker refactor (#266)
* docker compose ollama

* comment

* update compose file

* readme for distributions

* readme

* move distribution folders

* move distribution/templates to distributions/

* rename

* kill distribution/templates

* readme

* readme

* build/developer cookbook/new api provider

* developer cookbook

* readme

* readme

* [bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)

* fix case where memory bank is registered without provider_id

* memory test

* agents unit test

* Add an option to not use elastic agents for meta-reference inference (#269)

* Allow overriding checkpoint_dir via config

* Small rename

* Make all methods `async def` again; add completion() for meta-reference (#270)

PR #201 made several changes while trying to fix issues with the stream=False branches of the inference and agents APIs. As part of this, it made one change that was slightly gratuitous: turning chat_completion() and its brethren into "def" instead of "async def".

The rationale was that this allowed callers within llama-stack to use it as:

```
async for chunk in api.chat_completion(params)
```

However, this caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods anyway (which are completely isolated), this choice was not ideal. Let's revert so the call now looks like:

```
async for chunk in await api.chat_completion(params)
```
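
A minimal sketch of the resulting calling convention, using a hypothetical `InferenceAPI` class rather than the actual meta-reference implementation: because the method is `async def`, the call itself is awaited, and the awaited result is the stream to iterate over.

```python
import asyncio
from typing import AsyncIterator


class InferenceAPI:
    # Hypothetical stand-in for a llama-stack inference API implementation.
    async def chat_completion(self, params: dict) -> AsyncIterator[str]:
        async def _stream() -> AsyncIterator[str]:
            # Pretend these are streamed response chunks.
            for token in ["Hello", ", ", "world"]:
                yield token

        return _stream()


async def main() -> None:
    api = InferenceAPI()
    # The call is awaited first; iterating happens over what it returns.
    async for chunk in await api.chat_completion({"stream": True}):
        print(chunk, end="")
    print()


asyncio.run(main())
```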

Bonus: Added a completion() implementation for the meta-reference provider. Technically this should have been a separate PR :)

* Improve an important error message

* update ollama for llama-guard3

* Add vLLM inference provider for OpenAI compatible vLLM server (#178)

This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
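
As an illustration (not the provider implementation itself), "OpenAI compatible" means the vLLM server exposes the /v1 endpoints that the standard `openai` client can talk to. The server URL, API key, and model name below are placeholders; adjust them to whatever your vLLM server is serving.

```python
from openai import OpenAI

# Point the stock OpenAI client at a locally running vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model name
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```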

* Create .readthedocs.yaml

Trying out Read the Docs

* Update event_logger.py (#275)

Fix a spelling error

* vllm

* build templates

* delete templates

* tmp add back build to avoid merge conflicts

* vllm

* vllm

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: raghotham <rsm@meta.com>
Co-authored-by: nehal-a2z <nehal@coderabbit.ai>
2024-10-21 11:17:53 -07:00
| Name | Last commit | Date |
|------|-------------|------|
| `routers` | Improve an important error message | 2024-10-19 17:19:54 -07:00 |
| `server` | Small rename | 2024-10-18 14:41:38 -07:00 |
| `templates` | llama stack distributions / templates / docker refactor (#266) | 2024-10-21 11:17:53 -07:00 |
| `utils` | Add an introspection "Api.inspect" API | 2024-10-02 15:41:14 -07:00 |
| `__init__.py` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `build.py` | Kill a derpy import | 2024-10-03 11:25:58 -07:00 |
| `build_conda_env.sh` | fix prompt guard (#177) | 2024-10-03 11:07:53 -07:00 |
| `build_container.sh` | [CLI] avoid configure twice (#171) | 2024-10-03 11:20:54 -07:00 |
| `common.sh` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `configure.py` | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| `configure_container.sh` | docker: Check for selinux before using --security-opt (#167) | 2024-10-02 10:37:41 -07:00 |
| `datatypes.py` | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| `distribution.py` | A bit cleanup to avoid breakages | 2024-10-02 21:31:09 -07:00 |
| `inspect.py` | Remove "routing_table" and "routing_key" concepts for the user (#201) | 2024-10-10 10:24:13 -07:00 |
| `request_headers.py` | provider_id => provider_type, adapter_id => adapter_type | 2024-10-02 14:05:59 -07:00 |
| `resolver.py` | Small rename | 2024-10-18 14:41:38 -07:00 |
| `start_conda_env.sh` | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| `start_container.sh` | docker: Check for selinux before using --security-opt (#167) | 2024-10-02 10:37:41 -07:00 |