Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-06-27 18:50:41 +00:00)
llama stack distributions / templates / docker refactor (#266)
* docker compose ollama
* comment
* update compose file
* readme for distributions
* readme
* move distribution folders
* move distribution/templates to distributions/
* rename
* kill distribution/templates
* readme
* readme
* build/developer cookbook/new api provider
* developer cookbook
* readme
* readme
* [bugfix] fix case for agent when memory bank registered without specifying provider_id (#264)
  * fix case where memory bank is registered without provider_id
  * memory test
  * agents unit test
* Add an option to not use elastic agents for meta-reference inference (#269)
  * Allow overriding checkpoint_dir via config
  * Small rename
* Make all methods `async def` again; add completion() for meta-reference (#270)

  PR #201 made several changes while trying to fix issues with the stream=False branches of the inference and agents APIs. As part of this, it made one change that was slightly gratuitous: turning chat_completion() and its brethren into "def" instead of "async def". The rationale was that this let callers within llama-stack write:

  ```
  async for chunk in api.chat_completion(params)
  ```

  However, it caused unnecessary confusion for several folks. Given that clients (e.g., llama-stack-apps) use the SDK methods (which are completely isolated) anyway, this choice was not ideal. Let's revert so the call now looks like:

  ```
  async for chunk in await api.chat_completion(params)
  ```

  Bonus: added a completion() implementation for the meta-reference provider. Technically this should have been a separate PR :)
* Improve an important error message
* update ollama for llama-guard3
* Add vLLM inference provider for OpenAI compatible vLLM server (#178)

  This PR adds a vLLM inference provider for an OpenAI-compatible vLLM server.
* Create .readthedocs.yaml

  Trying out readthedocs.
* Update event_logger.py (#275)

  Fixes a spelling error.
* vllm
* build templates
* delete templates
* tmp add back build to avoid merge conflicts
* vllm
* vllm

---------

Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
Co-authored-by: Ashwin Bharambe <ashwin@meta.com>
Co-authored-by: Yuan Tang <terrytangyuan@gmail.com>
Co-authored-by: raghotham <rsm@meta.com>
Co-authored-by: nehal-a2z <nehal@coderabbit.ai>
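To make the calling convention reverted in #270 concrete, here is a minimal, self-contained sketch. It is not llama-stack code: the `Api` class, the `params` dict, and the token stream are hypothetical stand-ins; only the `async for chunk in await ...` pattern comes from the commit message above.

```python
import asyncio
from typing import AsyncIterator


class Api:
    """Hypothetical stand-in for an inference API surface (not llama-stack code)."""

    async def chat_completion(self, params: dict) -> AsyncIterator[str]:
        # Because chat_completion is `async def`, calling it returns a
        # coroutine; the caller must await it to obtain the async iterator.
        async def stream() -> AsyncIterator[str]:
            for token in ("hello", " ", "world"):
                yield token

        return stream()


async def main() -> None:
    api = Api()
    # The convention the commit reverts to: await first, then iterate.
    async for chunk in await api.chat_completion({"stream": True}):
        print(chunk, end="")
    print()


asyncio.run(main())
```

Had chat_completion stayed a plain `def` returning the async generator directly, the `await` would be unnecessary, which is exactly the ambiguity the revert removes.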
Parent: c995219731
Commit: 23210e8679

32 changed files with 850 additions and 335 deletions
```diff
@@ -60,15 +60,15 @@ def available_providers() -> List[ProviderSpec]:
             module="llama_stack.providers.adapters.inference.ollama",
         ),
     ),
-    # remote_provider_spec(
-    #     api=Api.inference,
-    #     adapter=AdapterSpec(
-    #         adapter_type="vllm",
-    #         pip_packages=["openai"],
-    #         module="llama_stack.providers.adapters.inference.vllm",
-    #         config_class="llama_stack.providers.adapters.inference.vllm.VLLMImplConfig",
-    #     ),
-    # ),
+    # remote_provider_spec(
+    #     api=Api.inference,
+    #     adapter=AdapterSpec(
+    #         adapter_type="vllm",
+    #         pip_packages=["openai"],
+    #         module="llama_stack.providers.adapters.inference.vllm",
+    #         config_class="llama_stack.providers.adapters.inference.vllm.VLLMImplConfig",
+    #     ),
+    # ),
     remote_provider_spec(
         api=Api.inference,
         adapter=AdapterSpec(
```
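The hunk above leaves the vLLM registry entry commented out; PR #178's adapter becomes active once it is uncommented. As a rough sketch of the enabled form: the field values below are taken verbatim from the diff, but the import path is an assumption for illustration, not confirmed from the repo.

```python
# Sketch only: this import path is assumed for illustration; the helpers
# may live in a different module in the actual repository.
from llama_stack.providers.datatypes import AdapterSpec, Api, remote_provider_spec

# Registry entry for a vLLM server speaking the OpenAI-compatible API,
# mirroring the commented-out block in the hunk above.
vllm_spec = remote_provider_spec(
    api=Api.inference,
    adapter=AdapterSpec(
        adapter_type="vllm",
        pip_packages=["openai"],  # client dependency for the OpenAI-compatible endpoint
        module="llama_stack.providers.adapters.inference.vllm",
        config_class="llama_stack.providers.adapters.inference.vllm.VLLMImplConfig",
    ),
)
```

Returning an entry like this from available_providers() is what makes the adapter discoverable by the stack's provider registry.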