mirror of https://github.com/meta-llama/llama-stack.git
synced 2025-07-30 07:39:38 +00:00
Update ios_setup.md
This commit is contained in:
parent bc9725ad47
commit 9c0dac5832
1 changed file with 6 additions and 0 deletions
@@ -5,6 +5,12 @@ We offer both remote and on-device use of Llama Stack in Swift via two components:
1. [llama-stack-client-swift](https://github.com/meta-llama/llama-stack-client-swift/)
2. [LocalInferenceImpl](https://github.com/meta-llama/llama-stack/tree/main/llama_stack/providers/impls/ios/inference)
```{image} ../../_static/remote_or_local.gif
:alt: Seamlessly switching between local, on-device inference and remote hosted inference
:width: 412px
:align: center
```
## Remote Only
If you don't want to run inference on-device, you can connect to any hosted Llama Stack distribution using component #1, llama-stack-client-swift.
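
As a rough illustration of that remote-only path, here is a minimal sketch based on the usage shown in the llama-stack-client-swift README. The `RemoteInference` client and the generated `Components.Schemas` request types come from that README; exact type and parameter names (e.g. `model_id`, `.case1`) may differ between client versions, and the server URL and model identifier below are placeholders.

```swift
import Foundation
import LlamaStackClient

// Point the client at any hosted Llama Stack distribution (placeholder URL).
let inference = RemoteInference(url: URL(string: "http://localhost:5000")!)

// Must run inside an async context (e.g. a Task or an async function).
for await chunk in try await inference.chatCompletion(
  request: Components.Schemas.ChatCompletionRequest(
    messages: [
      .user(
        Components.Schemas.UserMessage(
          content: .case1("Hello Llama!"),
          role: .user
        )
      )
    ],
    model_id: "meta-llama/Llama-3.1-8B-Instruct",  // placeholder model
    stream: true
  )
) {
  // Print streamed text deltas as they arrive.
  switch chunk.event.delta {
  case .text(let text):
    print(text.text, terminator: "")
  default:
    break
  }
}
```

Because the same inference protocol backs both components, swapping this remote client for the on-device provider (#2) later should not require restructuring the calling code.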