# Llama Stack Documentation
Here's a collection of comprehensive guides, examples, and resources for building AI applications with Llama Stack. For the complete documentation, visit our ReadTheDocs page.
## Content
Try out Llama Stack's capabilities through our detailed Jupyter notebooks:
- Building AI Applications Notebook - A comprehensive guide to building production-ready AI applications using Llama Stack
- Benchmark Evaluations Notebook - Detailed performance evaluations and benchmarking results
- Zero-to-Hero Guide - Step-by-step guide for getting started with Llama Stack
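
As a quick taste of what these notebooks walk through, here is a minimal sketch of talking to a Llama Stack server from Python with the `llama_stack_client` SDK. It assumes you already have a server running locally and the client package installed; the base URL, port, and model identifier below are placeholders to replace with values from your own setup.

```python
# Minimal sketch, not a prescribed setup: query a locally running Llama Stack
# server for a chat completion using the Python client SDK.
from llama_stack_client import LlamaStackClient

# Placeholder base URL; point this at wherever your Llama Stack server listens.
client = LlamaStackClient(base_url="http://localhost:8321")

response = client.inference.chat_completion(
    model_id="meta-llama/Llama-3.2-3B-Instruct",  # placeholder model identifier
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is Llama Stack?"},
    ],
)
print(response.completion_message.content)
```

The notebooks listed above cover this flow in much more depth, from standing up a server to running evaluations.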