kill llamastack-local-gpu/cpu

Xi Yan 2024-11-01 13:21:59 -07:00
parent 0bc087c81a
commit f1f8aa2029
2 changed files with 13 additions and 13 deletions

(file 1 of 2)

@@ -17,7 +17,7 @@ services:
     depends_on:
       text-generation-inference:
         condition: service_healthy
-    image: llamastack/llamastack-local-cpu
+    image: llamastack/llamastack-tgi
     network_mode: "host"
     volumes:
       - ~/.llama:/root/.llama
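For reference, the new-side image from this hunk can be pulled directly; a minimal sketch (relying on the implicit `latest` tag, which is an assumption here):

$ docker pull llamastack/llamastack-tgi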

(file 2 of 2)

@@ -36,7 +36,7 @@
     "1. Get Docker container\n",
     "```\n",
     "$ docker login\n",
-    "$ docker pull llamastack/llamastack-local-gpu\n",
+    "$ docker pull llamastack/llamastack-meta-reference-gpu\n",
     "```\n",
     "\n",
     "2. pip install the llama stack client package \n",
@@ -61,15 +61,15 @@
     "```\n",
     "For GPU inference, you need to set these environment variables for specifying local directory containing your model checkpoints, and enable GPU inference to start running docker container.\n",
     "$ export LLAMA_CHECKPOINT_DIR=~/.llama\n",
-    "$ llama stack configure llamastack-local-gpu\n",
+    "$ llama stack configure llamastack-meta-reference-gpu\n",
     "```\n",
     "Follow the prompts as part of configure.\n",
     "Here is a sample output \n",
     "```\n",
-    "$ llama stack configure llamastack-local-gpu\n",
+    "$ llama stack configure llamastack-meta-reference-gpu\n",
     "\n",
-    "Could not find /home/hjshah/.conda/envs/llamastack-llamastack-local-gpu/llamastack-local-gpu-build.yaml. Trying docker image name instead...\n",
-    "+ podman run --network host -it -v /home/hjshah/.llama/builds/docker:/app/builds llamastack-local-gpu llama stack configure ./llamastack-build.yaml --output-dir /app/builds\n",
+    "Could not find ~/.conda/envs/llamastack-llamastack-meta-reference-gpu/llamastack-meta-reference-gpu-build.yaml. Trying docker image name instead...\n",
+    "+ podman run --network host -it -v ~/.llama/builds/docker:/app/builds llamastack-meta-reference-gpu llama stack configure ./llamastack-build.yaml --output-dir /app/builds\n",
     "\n",
     "Configuring API `inference`...\n",
     "=== Configuring provider `meta-reference` for API inference...\n",