# Llama Stack Developer Cookbook
Based on your developer needs, below are references to guides to help you get started.
## Hosted Llama Stack Endpoint
- Developer Need: I want to connect to a Llama Stack endpoint to build my applications.
- Effort: 1min
- Guide:
  - Check out our DeepLearning course on building Llama Stack apps on a pre-hosted Llama Stack endpoint (a connection sketch follows below).
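
To get a feel for what connecting looks like, here is a minimal sketch using the llama-stack-client Python SDK. The endpoint URL and model name are placeholders, and parameter names may differ slightly between SDK versions, so treat the course and the SDK docs as the source of truth.

```python
# Minimal sketch, assuming `pip install llama-stack-client`.
# The base_url and model name below are placeholders; substitute the
# hosted endpoint and model you were given.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="https://your-hosted-endpoint:5000")

# Send a single-turn chat completion request to the hosted endpoint.
response = client.inference.chat_completion(
    model="Llama3.1-8B-Instruct",
    messages=[{"role": "user", "content": "Hello, Llama Stack!"}],
)
print(response.completion_message.content)
```
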
## Local meta-reference Llama Stack Server
- Developer Need: I want to start a local Llama Stack server with my GPU using meta-reference implementations.
- Effort: 5min
- Guide:
  - Please see our Getting Started Guide on starting up a meta-reference Llama Stack server; a quick sanity check from Python is sketched below.
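
Once the server is running, a quick way to confirm it is reachable is to list its registered models from Python. This sketch assumes the llama-stack-client SDK and a server listening on localhost; the port and the model field names are assumptions that may vary by version, so adjust them to match your setup.

```python
# Minimal sketch, assuming `pip install llama-stack-client` and a
# meta-reference server already started via `llama stack run`.
# Replace the port with the one your server is actually listening on.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5000")

# Print the identifiers of the models the local distribution serves.
for model in client.models.list():
    print(model.identifier)
```
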
## Llama Stack Server with Remote Providers
- Developer Need: I want a Llama Stack distribution with a remote provider.
- Effort: 10min
- Guide:
  - Please see our Distributions Guide on starting up distributions with remote providers.
## On-Device (iOS) Llama Stack
- Developer Need: I want to use Llama Stack on-device.
- Effort: 1.5hr
- Guide:
  - Please see our iOS Llama Stack SDK implementations.
## Assemble your own Llama Stack Distribution
- Developer Need: I want to assemble my own distribution with API providers of my choosing.
- Effort: 30min
- Guide:
  - Please see our Building Distribution guide for assembling your own Llama Stack distribution with your choice of API providers.
## Adding a New API Provider
- Developer Need: I want to add a new API provider to Llama Stack.
- Effort: 3hr
- Guide:
  - Please see our Adding a New API Provider guide for adding a new API provider; a rough sketch of what a provider looks like follows below.
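
To set expectations before diving into the guide: at a high level, an inference provider is a class that implements the Inference protocol's async methods. The sketch below is purely illustrative; the import path, class name, and method signature are assumptions, and the Adding a New API Provider guide is the authoritative reference.

```python
# Illustrative sketch only -- the module path and signatures here are
# assumptions, not the exact llama-stack interfaces. Follow the provider
# guide for the real protocol definitions and registration steps.
from llama_stack.apis.inference import Inference  # path may differ


class MyInferenceAdapter(Inference):
    """Hypothetical adapter that forwards requests to an external service."""

    def __init__(self, url: str) -> None:
        self.url = url

    async def chat_completion(self, model, messages, stream=False, **kwargs):
        # Translate Llama Stack messages into the downstream API's request,
        # call the remote service, and map its response back into Llama
        # Stack's chat completion response types.
        raise NotImplementedError("See the Adding a New API Provider guide")
```
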