forked from phoenix-oss/llama-stack-mirror
sync readme.md to index.md (#860)
# What does this PR do?
README has some new content that is being synced to index.md
parent a6a4270eef
commit 7df40da5fa
2 changed files with 11 additions and 1 deletion
@@ -1,6 +1,12 @@
 # Llama Stack
 
-Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of APIs with implementations from leading service providers, enabling seamless transitions between development and production environments.
+Llama Stack defines and standardizes the core building blocks needed to bring generative AI applications to market. It provides a unified set of APIs with implementations from leading service providers, enabling seamless transitions between development and production environments. More specifically, it provides
+
+- **Unified API layer** for Inference, RAG, Agents, Tools, Safety, Evals, and Telemetry.
+- **Plugin architecture** to support the rich ecosystem of implementations of the different APIs in different environments like local development, on-premises, cloud, and mobile.
+- **Prepackaged verified distributions** which offer a one-stop solution for developers to get started quickly and reliably in any environment
+- **Multiple developer interfaces** like CLI and SDKs for Python, Node, iOS, and Android
+- **Standalone applications** as examples for how to build production-grade AI applications with Llama Stack
 
 We focus on making it easy to build production applications with the Llama model family - from the latest Llama 3.3 to specialized models like Llama Guard for safety.
 
@@ -46,6 +46,10 @@ Llama Stack addresses these challenges through a service-oriented, API-first approach
 - Federation and fallback support
 - No vendor lock-in
 
+**Robust Ecosystem**
+- Llama Stack is already integrated with distribution partners (cloud providers, hardware vendors, and AI-focused companies).
+- Ecosystem offers tailored infrastructure, software, and services for deploying Llama models.
+
 
 ### Our Philosophy
 
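For readers of the synced index.md, the "Unified API layer" and Python SDK bullets above are easier to picture with a concrete call. The snippet below is a minimal sketch, not part of this change: it assumes the `llama-stack-client` Python package and a Llama Stack distribution already serving on `http://localhost:8321`, and the model identifier is illustrative; substitute one that is registered with your server.

```python
# Minimal sketch of the unified Inference API via the Python SDK.
# Assumes `pip install llama-stack-client` and a running Llama Stack
# server; the base URL and model id below are placeholders.
from llama_stack_client import LlamaStackClient

client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.3-70B-Instruct",  # replace with a model registered in your distribution
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "In one sentence, what is Llama Stack?"},
    ],
)

# The generated text is carried on the completion message.
print(response.completion_message.content)
```

Because the API layer is provider-agnostic, the same call shape is meant to work whether the distribution routes inference to a local runtime or a remote provider.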