llama-stack-mirror/llama_stack
Russell Bryant f73e247ba1
Inline vLLM inference provider (#181)
This is just like the `local` template, which uses `meta-reference` for everything,
except that this one uses `vllm` for inference.
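
For context, a minimal sketch of the vLLM offline Python API this provider builds on — not the provider's actual code. The Hugging Face model ID below is an assumption; the provider resolves llama-stack model names like `Llama3.2-1B-Instruct` to local checkpoints.

```python
# Sketch only -- not the provider's code. Shows the vLLM Python API that
# backs inference here. The Hugging Face model ID is an assumption; the
# provider resolves llama-stack model names to local checkpoints.
from vllm import LLM, SamplingParams

# Load the model on a single GPU (tensor_parallel_size defaults to 1).
llm = LLM(model="meta-llama/Llama-3.2-1B-Instruct")

params = SamplingParams(temperature=0.7, max_tokens=64)

# generate() returns one RequestOutput per prompt.
outputs = llm.generate(["Write me a 2 sentence poem about the moon."], params)
print(outputs[0].outputs[0].text)
```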

Docker works, but so far `conda` is a bit easier to use with the vllm
provider. The default container base image does not include all the
libraries required for every vLLM feature; additional CUDA dependencies
are needed.

I started changing the base image used in this template, but that also
required changes to the Dockerfile, so it was getting too involved to
include in this first PR.

Working so far:

* `python -m llama_stack.apis.inference.client localhost 5000 --model Llama3.2-1B-Instruct --stream True`
* `python -m llama_stack.apis.inference.client localhost 5000 --model Llama3.2-1B-Instruct --stream False`

Example:

```
$ python -m llama_stack.apis.inference.client localhost 5000 --model Llama3.2-1B-Instruct --stream False
User>hello world, write me a 2 sentence poem about the moon
Assistant>
The moon glows bright in the midnight sky
A beacon of light,
```

I have only tested these models:

* `Llama3.1-8B-Instruct` - across 4 GPUs (tensor_parallel_size = 4)
* `Llama3.2-1B-Instruct` - on a single GPU (tensor_parallel_size = 1)
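
For reference, streaming with tensor parallelism looks roughly like this at the vLLM level. This is only a hedged sketch of vLLM's async engine API, not this provider's code; the model ID and request id are placeholders, and `tensor_parallel_size=4` matches the 4-GPU case above.

```python
# Sketch only -- not this provider's code. Shows vLLM's async engine API with
# the tensor_parallel_size=4 setting used for the 4-GPU Llama3.1-8B test.
# The Hugging Face model ID and request id are placeholders/assumptions.
import asyncio

from vllm import SamplingParams
from vllm.engine.arg_utils import AsyncEngineArgs
from vllm.engine.async_llm_engine import AsyncLLMEngine


async def main() -> None:
    engine = AsyncLLMEngine.from_engine_args(
        AsyncEngineArgs(
            model="meta-llama/Llama-3.1-8B-Instruct",  # placeholder model ID
            tensor_parallel_size=4,  # shard the weights across 4 GPUs
        )
    )
    params = SamplingParams(temperature=0.7, max_tokens=64)

    # generate() yields RequestOutput objects carrying the cumulative text so
    # far, which is the kind of output a --stream True response is built from.
    async for output in engine.generate(
        "Write me a 2 sentence poem about the moon.", params, request_id="req-0"
    ):
        print(output.outputs[0].text)


asyncio.run(main())
```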
2024-10-05 23:34:16 -07:00
apis inference: Add model option to client (#170) 2024-10-03 11:18:57 -07:00
cli avoid jq since non-standard on macOS 2024-10-04 10:11:43 -04:00
distribution Inline vLLM inference provider (#181) 2024-10-05 23:34:16 -07:00
providers Inline vLLM inference provider (#181) 2024-10-05 23:34:16 -07:00
scripts Add a test for CLI, but not fully done so disabled 2024-09-19 13:27:07 -07:00
__init__.py API Updates (#73) 2024-09-17 19:51:35 -07:00