changes from PR review
This commit is contained in:
parent b9cfd98897
commit 97d4ff4227
12 changed files with 3 additions and 2 deletions
```diff
@@ -6,7 +6,7 @@

 [](https://pypi.org/project/llama-stack/)
 [](https://discord.gg/llama-stack)

-[**Quick Start**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html)
+[**Quick Start**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html) | [**Zero2Hero Guide**](https://github.com/meta-llama/llama-stack/tree/main/docs/zero_to_hero_guide)

 This repository contains the Llama Stack API specifications as well as API Providers and Llama Stack Distributions.
```
```diff
@@ -103,6 +103,7 @@ Please checkout our [Documentations](https://llama-stack.readthedocs.io/en/lates
 * Quick guide to start a Llama Stack server.
 * [Jupyter notebook](./docs/getting_started.ipynb) to walk through how to use simple text and vision inference llama_stack_client APIs
 * The complete Llama Stack lesson [Colab notebook](https://colab.research.google.com/drive/1dtVmxotBsI4cGZQNsJRYPrLiDeT0Wnwt) of the new [Llama 3.2 course on Deeplearning.ai](https://learn.deeplearning.ai/courses/introducing-multimodal-llama-3-2/lesson/8/llama-stack).
+* The [Zero2Hero Guide](https://github.com/meta-llama/llama-stack/tree/main/docs/zero_to_hero_guide) that guides you through all the key components of Llama Stack with code samples.
 * [Contributing](CONTRIBUTING.md)
 * [Adding a new API Provider](https://llama-stack.readthedocs.io/en/latest/api_providers/new_api_provider.html) to walk through how to add a new API provider.
```
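The Jupyter notebook bullet above points at the llama_stack_client inference APIs. For orientation only (this is not part of the diff), a simple text-inference call looks roughly like the sketch below; the base URL, port, model identifier, and exact parameter names are assumptions and have varied between llama_stack_client releases.

```python
# Minimal sketch of a text inference call via llama_stack_client.
# base_url, port, model name, and parameter names are assumptions;
# consult docs/getting_started.ipynb for the exact API of your release.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:5001")  # assumed port

response = client.inference.chat_completion(
    model="meta-llama/Llama-3.2-3B-Instruct",  # assumed model identifier
    messages=[{"role": "user", "content": "What is Llama Stack?"}],
)
print(response.completion_message.content)
```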
````diff
@@ -121,7 +121,7 @@ export SAFETY_MODEL="meta-llama/Llama-Guard-3-1B"
 3. **Run the Llama Stack**:
    - Run the stack with the command shared by the API from earlier:
      ```bash
-     llama stack run /Users/username/.llama/distributions/llamastack-ollama/ollama-run.yaml \
+     llama stack run ollama \
        --port $LLAMA_STACK_PORT \
        --env INFERENCE_MODEL=$INFERENCE_MODEL \
        --env SAFETY_MODEL=$SAFETY_MODEL \
````
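Once `llama stack run ollama` is listening on `$LLAMA_STACK_PORT`, a quick smoke test is to list the models the server has registered. A minimal sketch, assuming the port exported in the earlier step and that the `llama-stack-client` package is installed; field names may differ by release.

```python
# Quick check that the freshly started stack is reachable.
# Assumes LLAMA_STACK_PORT matches the --port value passed to `llama stack run`.
import os

from llama_stack_client import LlamaStackClient

port = os.environ.get("LLAMA_STACK_PORT", "5001")  # 5001 is an assumed fallback
client = LlamaStackClient(base_url=f"http://localhost:{port}")

for model in client.models.list():
    print(model.identifier)  # attribute name assumed; may vary by client version
```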