forked from phoenix-oss/llama-stack-mirror
[docs] update documentations (#356)
* move docs -> source
* Add files via upload
* mv image
* Add files via upload
* colocate iOS setup doc
* delete image
* Add files via upload
* fix
* delete image
* Add files via upload
* Update developer_cookbook.md
* toctree
* wip subfolder
* docs update
* subfolder
* updates
* name
* updates
* index
* updates
* refactor structure
* depth
* docs
* content
* docs
* getting started
* distributions
* fireworks
* fireworks
* update
* theme
* theme
* theme
* pdj theme
* pytorch theme
* css
* theme
* agents example
* format
* index
* headers
* copy button
* test tabs
* test tabs
* fix
* tabs
* tab
* tabs
* sphinx_design
* quick start commands
* size
* width
* css
* css
* download models
* asthetic fix
* tab format
* update
* css
* width
* css
* docs
* tab based
* tab
* tabs
* docs
* style
* image
* css
* color
* typo
* update docs
* missing links
* list templates
* links
* links update
* troubleshooting
* fix
* distributions
* docs
* fix table
* kill llamastack-local-gpu/cpu
* Update index.md
* Update index.md
* mv ios_setup.md
* Update ios_setup.md
* Add remote_or_local.gif
* Update ios_setup.md
* release notes
* typos
* Add ios_setup to index
* nav bar
* hide torctree
* ios image
* links update
* rename
* rename
* docs
* rename
* links
* distributions
* distributions
* distributions
* distributions
* remove release
* remote

---------

Co-authored-by: dltn <6599399+dltn@users.noreply.github.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
This commit is contained in:
parent
ac93dd89cf
commit
c810a4184d
37 changed files with 1777 additions and 2154 deletions
@@ -36,7 +36,7 @@
     "1. Get Docker container\n",
     "```\n",
     "$ docker login\n",
-    "$ docker pull llamastack/llamastack-local-gpu\n",
+    "$ docker pull llamastack/llamastack-meta-reference-gpu\n",
     "```\n",
     "\n",
     "2. pip install the llama stack client package \n",
@@ -61,15 +61,15 @@
     "```\n",
     "For GPU inference, you need to set these environment variables for specifying local directory containing your model checkpoints, and enable GPU inference to start running docker container.\n",
     "$ export LLAMA_CHECKPOINT_DIR=~/.llama\n",
-    "$ llama stack configure llamastack-local-gpu\n",
+    "$ llama stack configure llamastack-meta-reference-gpu\n",
     "```\n",
     "Follow the prompts as part of configure.\n",
     "Here is a sample output \n",
     "```\n",
-    "$ llama stack configure llamastack-local-gpu\n",
+    "$ llama stack configure llamastack-meta-reference-gpu\n",
     "\n",
-    "Could not find /home/hjshah/.conda/envs/llamastack-llamastack-local-gpu/llamastack-local-gpu-build.yaml. Trying docker image name instead...\n",
-    "+ podman run --network host -it -v /home/hjshah/.llama/builds/docker:/app/builds llamastack-local-gpu llama stack configure ./llamastack-build.yaml --output-dir /app/builds\n",
+    "Could not find ~/.conda/envs/llamastack-llamastack-meta-reference-gpu/llamastack-meta-reference-gpu-build.yaml. Trying docker image name instead...\n",
+    "+ podman run --network host -it -v ~/.llama/builds/docker:/app/builds llamastack-meta-reference-gpu llama stack configure ./llamastack-build.yaml --output-dir /app/builds\n",
     "\n",
     "Configuring API `inference`...\n",
     "=== Configuring provider `meta-reference` for API inference...\n",
@@ -155,7 +155,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "# For this notebook we will be working with the latest Llama3.2 vision models \n",
+    "# For this notebook we will be working with the latest Llama3.2 vision models\n",
     "model = \"Llama3.2-11B-Vision-Instruct\""
    ]
   },
@@ -182,7 +182,7 @@
    }
   ],
   "source": [
-    "# Simple text example \n",
+    "# Simple text example\n",
    "iterator = client.inference.chat_completion(\n",
    "    model=model,\n",
    "    messages=[\n",
@@ -224,13 +224,13 @@
   ],
   "source": [
    "import base64\n",
-    "import mimetypes \n",
+    "import mimetypes\n",
    "\n",
    "from PIL import Image\n",
    "\n",
-    "# We define a simple utility function to take a local image and \n",
-    "# convert it to as base64 encoded data url \n",
-    "# that can be passed to the server. \n",
+    "# We define a simple utility function to take a local image and\n",
+    "# convert it to as base64 encoded data url\n",
+    "# that can be passed to the server.\n",
    "def data_url_from_image(file_path):\n",
    "    mime_type, _ = mimetypes.guess_type(file_path)\n",
    "    if mime_type is None:\n",
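The hunk above only trims trailing whitespace in the notebook's data-URL helper, and the diff truncates the function body after the `mimetypes.guess_type` check. For context, here is a minimal self-contained sketch of such a helper; the signature and first two statements match the diff, while everything after the `None` check is an assumption about how the notebook completes it, not its exact code:

```python
import base64
import mimetypes


def data_url_from_image(file_path):
    # Guess the MIME type from the file extension (these lines match the diff).
    mime_type, _ = mimetypes.guess_type(file_path)
    if mime_type is None:
        # From here on is an assumption; the diff truncates the body.
        raise ValueError("Could not determine MIME type of the file")

    # Read the raw bytes and wrap them in a base64-encoded data: URL.
    with open(file_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime_type};base64,{encoded}"
```

The resulting `data:<mime>;base64,<payload>` string can be sent inline to the server in place of a file path or remote URL.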
@@ -273,7 +273,7 @@
    "        {\n",
    "            \"role\": \"user\",\n",
    "            \"content\": [\n",
-    "              { \"image\": { \"uri\": data_url } }, \n",
+    "              { \"image\": { \"uri\": data_url } },\n",
    "              \"Write a haiku describing the image\"\n",
    "            ]\n",
    "        }\n",
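The final hunk trims a trailing space in the notebook's mixed image-and-text message. The shape of that user message can be sketched as a plain dict; the helper name below is hypothetical, and only the content layout (an image given as a data: URL followed by a text instruction) comes from the diff:

```python
def build_image_message(data_url, prompt):
    # Hypothetical helper: assembles the user message shown in the diff,
    # with an image passed inline as a data: URL before the text prompt.
    return {
        "role": "user",
        "content": [
            {"image": {"uri": data_url}},
            prompt,
        ],
    }


# In the notebook, a message like this is passed in the `messages` list
# of the chat completion call, alongside the selected vision model.
message = build_image_message(
    "data:image/png;base64,AA==",
    "Write a haiku describing the image",
)
```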