diff --git a/README.md b/README.md
index 0a8cf5a3b..092f722c2 100644
--- a/README.md
+++ b/README.md
@@ -11,7 +11,7 @@

 ### ✨🎉 Llama 4 Support 🎉✨

-We release [Version 0.2.0](https://github.com/meta-llama/llama-stack/releases/tag/v0.2.0) with support for the Llama 4 herd of models released by Meta.
+We released [Version 0.2.0](https://github.com/meta-llama/llama-stack/releases/tag/v0.2.0) with support for the Llama 4 herd of models released by Meta.

 You can now run Llama 4 models on Llama Stack.

diff --git a/docs/source/distributions/importing_as_library.md b/docs/source/distributions/importing_as_library.md
index 29a5669b3..967a18b54 100644
--- a/docs/source/distributions/importing_as_library.md
+++ b/docs/source/distributions/importing_as_library.md
@@ -17,7 +17,7 @@ client = LlamaStackAsLibraryClient(
     # provider_data is optional, but if you need to pass in any provider specific data, you can do so here.
     provider_data={"tavily_search_api_key": os.environ["TAVILY_SEARCH_API_KEY"]},
 )
-await client.initialize()
+client.initialize()
 ```
 This will parse your config and set up any inline implementations and remote clients needed for your implementation.