diff --git a/docs/source/distributions/ondevice_distro/android_sdk.md b/docs/source/distributions/ondevice_distro/android_sdk.md
index 4fa6eaf70..a097a2adf 100644
--- a/docs/source/distributions/ondevice_distro/android_sdk.md
+++ b/docs/source/distributions/ondevice_distro/android_sdk.md
@@ -24,7 +24,7 @@ The key files in the app are `ExampleLlamaStackLocalInference.kt`, `ExampleLlama
 Add the following dependency in your `build.gradle.kts` file:
 ```
 dependencies {
- implementation("com.llama.llamastack:llama-stack-client-kotlin:0.1.4.2")
+ implementation("com.llama.llamastack:llama-stack-client-kotlin:0.2.2")
 }
 ```
 This will download jar files in your gradle cache in a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/`
@@ -37,11 +37,7 @@ For local inferencing, it is required to include the ExecuTorch library into you
 Include the ExecuTorch library by:
 1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/tree/latest-release/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
-2. Move the script to the top level of your Android app where the app directory resides:
-
-
-
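For context on the change above, a sketch of how the bumped client artifact and the prebuilt ExecuTorch archive could sit side by side in `app/build.gradle.kts`. The `libs/executorch.aar` path is an assumption about where `download-prebuilt-et-lib.sh` drops the archive; verify it against the script before relying on it.

```kotlin
// app/build.gradle.kts -- sketch, not part of the diff above
dependencies {
    // Kotlin client, bumped to 0.2.2 by this change
    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.2.2")

    // Prebuilt ExecuTorch AAR; assumes download-prebuilt-et-lib.sh
    // places it under app/libs/ (verify against the script)
    implementation(files("libs/executorch.aar"))
}
```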
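And a minimal sketch of wiring up the on-device client once both dependencies resolve, following the builder pattern shown in the SDK's README. The class name `LlamaStackClientLocalClient`, its package path, and the builder setters are assumptions to check against the 0.2.2 release.

```kotlin
import com.llama.llamastack.client.local.LlamaStackClientLocalClient

// Assumed API shape (builder pattern per the SDK README); verify the
// class, package, and setter names against llama-stack-client-kotlin 0.2.2.
fun buildLocalClient(modelPath: String, tokenizerPath: String) =
    LlamaStackClientLocalClient.builder()
        .modelPath(modelPath)         // on-device model, e.g. a .pte file
        .tokenizerPath(tokenizerPath) // matching tokenizer file
        .temperature(0.75f)           // sampling temperature
        .build()
```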