Fix pre-commit lint errors

pre-commit run --all-files
Riandy Riandy 2024-12-07 04:52:14 +08:00
parent a40855fcd9
commit 4cbb2e7037

@@ -1,6 +1,6 @@
# Llama Stack Client Kotlin API Library
We are excited to share a guide for a Kotlin library that brings the benefits of Llama Stack to your Android device. This library is a set of SDKs that provide a simple and effective way to integrate AI capabilities into your Android app, whether you use local (on-device) or remote inference.
Features:
- Local Inferencing: Run Llama models purely on-device with real-time processing. We currently utilize ExecuTorch as the local inference distributor and may support others in the future.
@@ -25,7 +25,7 @@ dependencies {
implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54") implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54")
} }
``` ```
This will download the jar files into your Gradle cache, in a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/`.
If you plan on doing remote inferencing, this is sufficient to get started.
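For reference, a minimal app-module `build.gradle.kts` might look like the sketch below; the repository block is an assumption about where the artifact is resolved from, so adjust it to whatever repositories your project already declares.
```
// Minimal sketch of an app-module build.gradle.kts; the repository choice is an
// assumption -- use the repositories your project already configures.
repositories {
    mavenCentral()
}

dependencies {
    // Remote inferencing needs only this dependency; local (on-device) inferencing
    // requires the additional setup described elsewhere in this README.
    implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54")
}
```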
@@ -56,7 +56,7 @@ Breaking down the demo app, this section will show the core pieces that are used
### Setup Remote Inferencing
Start a Llama Stack server on localhost. Here is an example of how you can do this using the fireworks.ai distribution:
```
conda create -n stack-fireworks python=3.10
conda activate stack-fireworks
pip install llama-stack==0.0.54
llama stack build --template fireworks --image-type conda
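# Typical next steps (the variable name, template, and port below are assumptions;
# check the distribution docs for your llama-stack version):
#   export FIREWORKS_API_KEY=<your_api_key>
#   llama stack run fireworks --port 5050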
@@ -69,7 +69,7 @@ Other inference providers: [Table](https://llama-stack.readthedocs.io/en/latest/
How to set a remote localhost endpoint in the Demo App: [Settings](https://github.com/meta-llama/llama-stack-apps/tree/main/examples/android_app#settings)
### Initialize the Client
A client serves as the primary interface for interacting with a specific inference type and its associated parameters. Only after the client is initialized can you configure and start inferences; a minimal inference sketch follows the client examples below.
<table>
<tr>
@@ -94,7 +94,7 @@ client = LlamaStackClientLocalClient
// remoteURL is a string like "http://localhost:5050"
client = LlamaStackClientOkHttpClient
    .builder()
    .baseUrl(remoteURL)
    .build()
```
</td>
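Once a client has been built, a first inference call might look roughly like the sketch below. This is a sketch only: the method and type names used here (`inference()`, `chatCompletion`, `InferenceChatCompletionParams`, `UserMessage`, `modelId`) are assumptions about the generated API surface and may not match the SDK exactly, so consult the generated documentation for the real signatures.
```
// Hedged sketch of a first inference call once `client` is initialized.
// Method and type names (inference(), chatCompletion, InferenceChatCompletionParams,
// UserMessage, modelId) are assumptions and may differ in the generated SDK.
val params = InferenceChatCompletionParams.builder()
    .modelId("meta-llama/Llama-3.2-3B-Instruct") // example model identifier
    .messages(
        listOf(
            UserMessage.builder()
                .content("Hello! What can you do?")
                .build()
        )
    )
    .build()

val response = client.inference().chatCompletion(params)
println(response)
```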
@@ -244,4 +244,4 @@ We'd like to extend our thanks to the ExecuTorch team for providing their suppor
---
The API interface is generated using the OpenAPI standard with [Stainless](https://www.stainlessapi.com/).