llama-stack/llama_stack/providers/remote/inference
Hardik Shah a84e7669f0
feat: Add a new template for dell (#978)
- Added new template `dell` and its documentation (see the quick template check below)
- Updated docs
- [minor] uv fix I came across
- Ran codegen for all templates
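
To confirm the template is registered, the available templates can be listed (a quick check, assuming the CLI's `--list-templates` flag):

```bash
# the new `dell` template should appear in this listing
llama stack build --list-templates
```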

Tested with:

```bash
export INFERENCE_PORT=8181
export DEH_URL=http://0.0.0.0:$INFERENCE_PORT
export INFERENCE_MODEL=meta-llama/Llama-3.1-8B-Instruct
export CHROMADB_HOST=localhost
export CHROMADB_PORT=6601
export CHROMA_URL=http://$CHROMADB_HOST:$CHROMADB_PORT
export CUDA_VISIBLE_DEVICES=0
export LLAMA_STACK_PORT=8321

# build the stack template 
llama stack build --template=dell 

# start the TGI inference server 
podman run --rm -it --network host \
  -v $HOME/.cache/huggingface:/data \
  -e HF_TOKEN=$HF_TOKEN \
  -p $INFERENCE_PORT:$INFERENCE_PORT \
  --gpus $CUDA_VISIBLE_DEVICES \
  ghcr.io/huggingface/text-generation-inference \
  --dtype bfloat16 --usage-stats off --sharded false \
  --cuda-memory-fraction 0.7 \
  --model-id $INFERENCE_MODEL \
  --port $INFERENCE_PORT --hostname 0.0.0.0

# start chroma-db for vector-io (aka RAG)
podman run --rm -it --network host --name chromadb \
  -v .:/chroma/chroma \
  -e IS_PERSISTENT=TRUE \
  chromadb/chroma:latest \
  --port $CHROMADB_PORT --host $(hostname)

# build docker 
llama stack build --template=dell --image-type=container

# run the llama stack server (via docker)
# NOTE: mount the llama-stack / llama-models source directories if testing local changes
podman run -it \
  --network host \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ~/.llama:/root/.llama \
  -v /home/hjshah/git/llama-stack:/app/llama-stack-source \
  -v /home/hjshah/git/llama-models:/app/llama-models-source \
  localhost/distribution-dell:dev \
  --port $LLAMA_STACK_PORT \
  --env INFERENCE_MODEL=$INFERENCE_MODEL \
  --env DEH_URL=$DEH_URL \
  --env CHROMA_URL=$CHROMA_URL

# test the server 
cd <PATH_TO_LLAMA_STACK_REPO>
LLAMA_STACK_BASE_URL=http://0.0.0.0:$LLAMA_STACK_PORT pytest -s -v tests/client-sdk/agents/test_agents.py

```
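
Beyond the pytest suite, the running server can be smoke-tested with the `llama-stack-client` CLI (a minimal sketch, assuming `llama-stack-client` is installed and pointed at the server via its `configure` subcommand):

```bash
# point the client at the freshly started server
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT

# the configured model (meta-llama/Llama-3.1-8B-Instruct) should be listed
llama-stack-client models list

# one-shot chat completion through the TGI-backed inference provider
llama-stack-client inference chat-completion --message "hello, what model are you?"
```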

---------

Co-authored-by: Hardik Shah <hjshah@fb.com>
2025-02-06 14:14:39 -08:00
bedrock Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
cerebras Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
databricks Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
fireworks Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
groq Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
nvidia feat: Add a new template for dell (#978) 2025-02-06 14:14:39 -08:00
ollama Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
runpod Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
sambanova Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
sample [remove import *] clean up import *'s (#689) 2024-12-27 15:45:44 -08:00
tgi Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
together Support sys_prompt behavior in inference (#937) 2025-02-03 23:35:16 -08:00
vllm Fix incorrect handling of chat completion endpoint in remote::vLLM (#951) 2025-02-06 10:45:19 -08:00
__init__.py impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00