Bump kotlin docs to 0.0.54.1

Riandy Riandy 2024-12-07 06:14:14 +08:00
parent e4a2948684
commit 17554805f0


@@ -8,7 +8,7 @@ Features:
- Remote Inferencing: Perform inferencing tasks remotely with Llama models hosted on a remote connection (or serverless localhost).
- Simple Integration: With easy-to-use APIs, a developer can quickly integrate Llama Stack in their Android app. The difference with local vs remote inferencing is also minimal.
-Latest Release Notes: [v0.0.54](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.0.54)
+Latest Release Notes: [v0.0.54.1](https://github.com/meta-llama/llama-stack-client-kotlin/releases/tag/v0.0.54.1)
## Android Demo App
Check out our demo app to see how to integrate Llama Stack into your Android app: [Android Demo App](https://github.com/meta-llama/llama-stack-apps/tree/main/examples/android_app)
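The two feature bullets above describe the local/remote split that the demo app implements in `LlamaStackLocalInference.kt` and `LlamaStackRemoteInference.kt`. As a purely illustrative sketch (not the demo app's actual code and not the library's API; all names below are hypothetical), the "minimal difference" claim usually comes down to hiding both paths behind one interface:

```kotlin
// Illustrative sketch only: InferenceBackend, RemoteBackend, and LocalBackend
// are hypothetical names and do not come from llama-stack-client-kotlin.
interface InferenceBackend {
    suspend fun complete(prompt: String): String
}

class RemoteBackend(private val baseUrl: String) : InferenceBackend {
    // A real implementation would call the Llama Stack client configured with baseUrl.
    override suspend fun complete(prompt: String): String =
        "remote completion for: $prompt (via $baseUrl)"
}

class LocalBackend : InferenceBackend {
    // A real implementation would call the ExecuTorch-backed local client.
    override suspend fun complete(prompt: String): String =
        "local completion for: $prompt"
}
```

With this shape, calling code takes an `InferenceBackend` and does not change when switching between local and remote inference.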
@@ -22,7 +22,7 @@ The key files in the app are `LlamaStackLocalInference.kt`, `LlamaStackRemoteInf
Add the following dependency in your `build.gradle.kts` file:
```
dependencies {
-    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54")
+    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54.1")
}
```
This will download jar files in your gradle cache in a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/`
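An optional refinement on top of the snippet above (an assumption, not something this commit or the upstream docs add): keep the version string in a single property of `build.gradle.kts`, so bumps like 0.0.54 to 0.0.54.1 touch only one line.

```kotlin
// build.gradle.kts (sketch, not from the upstream docs): centralize the client version.
val llamaStackClientVersion = "0.0.54.1"

dependencies {
    implementation("com.llama.llamastack:llama-stack-client-kotlin:$llamaStackClientVersion")
}
```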
@@ -34,10 +34,10 @@ If you plan on doing remote inferencing this is sufficient to get started.
For local inferencing, it is required to include the ExecuTorch library into your app.
Include the ExecuTorch library by:
-1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/blob/release/0.0.54/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
+1. Download the `download-prebuilt-et-lib.sh` script file from the [llama-stack-client-kotlin-client-local](https://github.com/meta-llama/llama-stack-client-kotlin/blob/release/0.0.54.1/llama-stack-client-kotlin-client-local/download-prebuilt-et-lib.sh) directory to your local machine.
2. Move the script to the top level of your Android app where the app directory resides:
<p align="center">
-<img src="https://raw.githubusercontent.com/meta-llama/llama-stack-client-kotlin/refs/heads/release/0.0.54/doc/img/example_android_app_directory.png" style="width:300px">
+<img src="https://raw.githubusercontent.com/meta-llama/llama-stack-client-kotlin/refs/heads/release/0.0.54.1/doc/img/example_android_app_directory.png" style="width:300px">
</p>
3. Run `sh download-prebuilt-et-lib.sh` to create an `app/libs` directory and download the `executorch.aar` in that path. This generates an ExecuTorch library for the XNNPACK delegate with commit: [0a12e33](https://github.com/pytorch/executorch/commit/0a12e33d22a3d44d1aa2af5f0d0673d45b962553).
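Step 3 leaves `executorch.aar` under `app/libs`. If the app module does not already pick that directory up, one assumed way to wire it in from `app/build.gradle.kts` is a plain file dependency (the upstream README may already cover this; the snippet is a sketch):

```kotlin
// app/build.gradle.kts (sketch): reference the prebuilt ExecuTorch AAR that
// download-prebuilt-et-lib.sh placed under app/libs.
dependencies {
    implementation(files("libs/executorch.aar"))
}
```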
@@ -129,7 +129,7 @@ The purpose of this section is to share more details with users that would like
### Prerequisite
You must complete the following steps:
-1. Clone the repo (`git clone https://github.com/meta-llama/llama-stack-client-kotlin.git -b release/0.0.54`)
+1. Clone the repo (`git clone https://github.com/meta-llama/llama-stack-client-kotlin.git -b release/0.0.54.1`)
2. Port the appropriate ExecuTorch libraries over into your Llama Stack Kotlin library environment.
```
cd llama-stack-client-kotlin-client-local