From 9c0dac5832a39ebd151d90866b77aa5f370d9aa6 Mon Sep 17 00:00:00 2001
From: Dalton Flanagan <6599399+dltn@users.noreply.github.com>
Date: Fri, 1 Nov 2024 18:11:26 -0400
Subject: [PATCH] Update ios_setup.md

---
 docs/source/getting_started/ios_setup.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/docs/source/getting_started/ios_setup.md b/docs/source/getting_started/ios_setup.md
index d08f388ee..0acace108 100644
--- a/docs/source/getting_started/ios_setup.md
+++ b/docs/source/getting_started/ios_setup.md
@@ -5,6 +5,12 @@ We offer both remote and on-device use of Llama Stack in Swift via two component
 1. [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift/)
 2. [LocalInferenceImpl](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/impls/ios/inference)
 
+```{image} ../../_static/remote_or_local.gif
+:alt: Seamlessly switching between local, on-device inference and remote hosted inference
+:width: 412px
+:align: center
+```
+
 ## Remote Only
 
 If you don't want to run inference on-device, then you can connect to any hosted Llama Stack distribution with #1.
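
For the "Remote Only" path described in the patched doc, a minimal Swift sketch of what connecting to a hosted Llama Stack distribution via llama-stack-client-swift (component #1) might look like is shown below. The type and method names here (`RemoteInference`, `chatCompletion`, the generated `Components.Schemas` request types) and the endpoint URL are assumptions from memory, not verified against the library; check the llama-stack-client-swift README for the actual API.

```swift
// Hypothetical sketch of remote-only usage with llama-stack-client-swift.
// Names below are assumptions; consult the library's README for the real API.
import Foundation
import LlamaStackClient

func runRemoteChat() async throws {
    // Point the client at a hosted Llama Stack distribution; no on-device
    // model is needed in this mode.
    let inference = RemoteInference(url: URL(string: "http://localhost:5000")!)

    // Stream a simple chat completion and print text chunks as they arrive.
    for await chunk in try await inference.chatCompletion(
        request: Components.Schemas.ChatCompletionRequest(
            messages: [
                .UserMessage(Components.Schemas.UserMessage(
                    content: .case1("Hello from iOS!"),
                    role: .user))
            ],
            model: "Llama3.1-8B-Instruct",
            stream: true)
    ) {
        switch chunk.event.delta {
        case .case1(let text):
            print(text, terminator: "")
        default:
            break
        }
    }
}
```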