# Meta Reference Quantized Distribution
The `llamastack/distribution-meta-reference-quantized-gpu` distribution consists of the following provider configurations:
| API | Inference | Agents | Memory | Safety | Telemetry |
|-----|-----------|--------|--------|--------|-----------|
| **Provider(s)** | meta-reference-quantized | meta-reference | meta-reference, remote::pgvector, remote::chroma | meta-reference | meta-reference |
The only difference from the `meta-reference-gpu` distribution is that this one supports more efficient inference via quantization (e.g., fp8, int4).
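To make the provider table concrete, below is a minimal sketch of what the `run.yaml` mounted into the container might look like. The field names (`apis`, `providers`, `provider_type`, the `quantization` block) and the overall schema are assumptions that vary across llama-stack versions, so treat this as illustrative only and generate a real config with the llama-stack tooling.

```yaml
# Illustrative sketch only -- the actual run.yaml schema depends on your
# llama-stack version; generate a real config via the llama-stack CLI.
apis:
  - inference
  - agents
  - memory
  - safety
  - telemetry
providers:
  inference:
    - provider_type: meta-reference-quantized   # fp8/int4-capable inference
      config:
        model: <your-downloaded-llama-model>    # hypothetical placeholder
        quantization:
          type: fp8                             # or int4
  agents:
    - provider_type: meta-reference
  memory:
    - provider_type: meta-reference             # or remote::pgvector / remote::chroma
  safety:
    - provider_type: meta-reference
  telemetry:
    - provider_type: meta-reference
```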
## Start the Distribution (Single Node GPU)
> **Note:** This assumes you have access to a GPU on the machine where you start the local server.
> **Note:** `~/.llama` should be the path containing the downloaded weights of the Llama models.
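Before mounting that directory into the container, it is worth checking that it is actually populated. The `llama download` command below comes from the llama-stack CLI, but its exact flags differ by version; the placeholders are hypothetical, so check `llama download --help` for the real invocation.

```bash
# Sanity-check that ~/.llama actually contains model checkpoints before
# mounting it into the container:
ls ~/.llama/checkpoints

# If it is empty, download a model first. Flags and placeholders are
# illustrative and may differ by llama-stack version.
llama download --source meta --model-id <model-id> --meta-url <signed-url>
```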
To download and run a pre-built Docker container, you may use the following command:
```bash
docker run -it -p 5000:5000 -v ~/.llama:/root/.llama \
  -v ./run.yaml:/root/my-run.yaml \
  --gpus=all \
  llamastack/distribution-meta-reference-quantized-gpu \
  --yaml_config /root/my-run.yaml
```
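Once the container is up, you can smoke-test the server from the host. The endpoint path and payload shape below are assumptions based on typical llama-stack deployments, not confirmed by this page; the exact routes depend on your version, so adjust to whatever your server's API reference reports.

```bash
# Hypothetical smoke test -- endpoint path and payload shape may differ
# across llama-stack versions; consult your server's API reference.
curl -X POST http://localhost:5000/inference/chat_completion \
  -H "Content-Type: application/json" \
  -d '{
        "model": "<your-model-id>",
        "messages": [{"role": "user", "content": "Hello!"}]
      }'
```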
## Alternative (Build and start distribution locally via conda)

- You may check out the Getting Started guide for more details on building locally via conda and starting up the distribution; a rough sketch of that flow follows.
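The command names below come from the llama-stack CLI, but the interactive prompts and flags vary by version, so defer to the Getting Started guide for the authoritative steps; `<name-from-build>` is a placeholder for whatever name you give the stack during the build step.

```bash
# Sketch of the conda-based flow; exact prompts/flags depend on your
# llama-stack version (see `llama stack --help`).
llama stack build                      # choose the meta-reference-quantized inference provider
llama stack configure <name-from-build>
llama stack run <name-from-build>
```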