diff --git a/README.md b/README.md
index 617e5117b..c2105067e 100644
--- a/README.md
+++ b/README.md
@@ -9,15 +9,15 @@
 [**Quick Start**](https://llama-stack.readthedocs.io/en/latest/getting_started/index.html) | [**Documentation**](https://llama-stack.readthedocs.io/en/latest/index.html) | [**Colab Notebook**](./docs/getting_started.ipynb)
-
 ### ✨🎉 Llama 4 Support 🎉✨
 
 We released [Version 0.2.0](https://github.com/meta-llama/llama-stack/releases/tag/v0.2.0) with support for the Llama 4 herd of models released by Meta.
-You can now run Llama 4 models on Llama Stack.
+<details>
+
+<summary>You can now run Llama 4 models on Llama Stack (click for details)</summary>
 
 *Note you need 8xH100 GPU-host to run these models*
-
 ```bash
 pip install -U llama_stack

@@ -67,6 +67,9 @@ print(f"Assistant> {response.completion_message.content}")
 As more providers start supporting Llama 4, you can use them in Llama Stack as well. We are adding to the list. Stay tuned!
+
+</details>
+
 ### Overview
 
 Llama Stack standardizes the core building blocks that simplify AI application development. It codifies best practices across the Llama ecosystem. More specifically, it provides
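The second hunk's header references the client call whose output the README prints (`print(f"Assistant> {response.completion_message.content}")`). For context, that call can be sketched as follows — a minimal sketch, assuming the `llama_stack_client` Python package, a local server URL (`http://localhost:8321`), and a Llama 4 model id; none of these specifics appear in the diff itself:

```python
# Minimal sketch of querying a Llama Stack server from Python.
# Assumptions (not from the diff): the `llama_stack_client` package,
# a server at http://localhost:8321, and the model id below.
try:
    from llama_stack_client import LlamaStackClient
except ImportError:  # keep the sketch importable without the client installed
    LlamaStackClient = None

MODEL_ID = "meta-llama/Llama-4-Scout-17B-16E-Instruct"  # assumed model id


def build_messages(prompt: str) -> list:
    # Chat-style message list in the shape the inference API expects.
    return [{"role": "user", "content": prompt}]


def ask(prompt: str, base_url: str = "http://localhost:8321") -> str:
    if LlamaStackClient is None:
        raise RuntimeError("install the client first: pip install llama-stack-client")
    client = LlamaStackClient(base_url=base_url)
    response = client.inference.chat_completion(
        model_id=MODEL_ID,
        messages=build_messages(prompt),
    )
    # Same field the diff's hunk-header context line prints.
    return response.completion_message.content
```

Against a running server (on a host meeting the 8xH100 note above), `print(f"Assistant> {ask('Hello')}")` would reproduce the kind of output shown in the hunk header.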