From 5bdf6d8530a4e44b60d3c9f66b8f251ad60445a7 Mon Sep 17 00:00:00 2001
From: Riandy Riandy
Date: Thu, 5 Dec 2024 11:06:49 +0800
Subject: [PATCH] ran pre-commit hook fix

Fix lints
---
 .../distributions/ondevice_distro/android_sdk.md | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/docs/source/distributions/ondevice_distro/android_sdk.md b/docs/source/distributions/ondevice_distro/android_sdk.md
index 3bff16aa9..5a4e67e7e 100644
--- a/docs/source/distributions/ondevice_distro/android_sdk.md
+++ b/docs/source/distributions/ondevice_distro/android_sdk.md
@@ -1,6 +1,6 @@
 # Llama Stack Client Kotlin API Library
 
-We are excited to share a guide for a Kotlin Library that brings front the benefits of Llama Stack to your Android device. This library is a set of SDKs that provide a simple and effective way to integrate AI capabilities into your Android app whether it is local (on-device) or remote inference. 
+We are excited to share a guide for a Kotlin Library that brings front the benefits of Llama Stack to your Android device. This library is a set of SDKs that provide a simple and effective way to integrate AI capabilities into your Android app whether it is local (on-device) or remote inference.
 
 Features:
 - Local Inferencing: Run Llama models purely on-device with real-time processing. We currently utilize ExecuTorch as the local inference distributor and may support others in the future.
@@ -26,7 +26,7 @@ dependencies {
 implementation("com.llama.llamastack:llama-stack-client-kotlin:0.0.54")
 }
 ```
-This will download jar files in your gradle cache in a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/` 
+This will download jar files in your gradle cache in a directory like `~/.gradle/caches/modules-2/files-2.1/com.llama.llamastack/`
 
 If you plan on doing remote inferencing this is sufficient to get started.
 
@@ -58,7 +58,7 @@ Breaking down the demo app, this section will show the core pieces that are used
 ### Setup Remote Inferencing
 Start a Llama Stack server on localhost. Here is an example of how you can do this using the firework.ai distribution:
 ```
-conda create -n stack-fireworks python=3.10 
+conda create -n stack-fireworks python=3.10
 conda activate stack-fireworks
 pip install llama-stack=0.0.54
 llama stack build --template fireworks --image-type conda
@@ -71,7 +71,7 @@ Other inference providers: [Table](https://llama-stack.readthedocs.io/en/latest/
 TODO: Link to Demo App on how to set this remote localhost in the Settings.
 
 ### Initialize the Client
-A client serves as the primary interface for interacting with a specific inference type and its associated parameters. Only after client is initialized then you can configure and start inferences. 
+A client serves as the primary interface for interacting with a specific inference type and its associated parameters. Only after client is initialized then you can configure and start inferences.
@@ -80,7 +80,7 @@ A client serves as the primary interface for interacting with a specific inferen
-   
+
 client = LlamaStackClientLocalClient
                     .builder()
                     .modelPath(modelPath)
@@ -94,7 +94,7 @@ client = LlamaStackClientLocalClient
 ```// remoteURL is a string like "http://localhost:5050"
 client = LlamaStackClientOkHttpClient
                 .builder()
-                .baseUrl(remoteURL) 
+                .baseUrl(remoteURL)
                 .build()
 ```
@@ -243,4 +243,4 @@ This library throws exceptions in a single hierarchy for easy handling:
 
 ---
 
-The API interface is generated using the OpenAPI standard with [Stainless](https://www.stainlessapi.com/).
\ No newline at end of file
+The API interface is generated using the OpenAPI standard with [Stainless](https://www.stainlessapi.com/).
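
Reviewer note: the hunks around lines 80 and 94 only strip whitespace from the client-initialization snippets in the doc. For convenience, here is a minimal, self-contained Kotlin sketch of the remote-inference setup those snippets describe. The builder chain (`builder()`, `.baseUrl(remoteURL)`, `.build()`) and the example URL come straight from the doc; the import path and the `main()` wrapper are assumptions and may not match the published llama-stack-client-kotlin package layout exactly.

```
// Sketch of the remote-inference client setup shown in the patched doc.
// NOTE: the import path below is an assumption; check the published
// llama-stack-client-kotlin artifact for the exact package name.
import com.llama.llamastack.client.okhttp.LlamaStackClientOkHttpClient

fun main() {
    // remoteURL is a string like "http://localhost:5050", pointing at the
    // Llama Stack server started in the "Setup Remote Inferencing" step.
    val remoteURL = "http://localhost:5050"

    // Builder chain copied from the documentation snippet.
    val client = LlamaStackClientOkHttpClient
        .builder()
        .baseUrl(remoteURL)
        .build()

    // The configured client can now be handed to the app's inference code.
    println("Remote client configured against $remoteURL: $client")
}
```

The on-device path follows the same shape, using the `LlamaStackClientLocalClient.builder().modelPath(modelPath)` chain shown in the hunk at line 80.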