Commit graph

115 commits

Author SHA1 Message Date
Xi Yan
4d5f7459aa
[bugfix] Fix logprobs on meta-reference impl (#213)
* fix log probs

* add back LogProbsConfig

* error handling

* bugfix
2024-10-07 19:42:39 -07:00
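A minimal sketch of the logprobs request path this fix touches; the endpoint, field names, and `LogProbsConfig` shape here are assumptions for illustration, not the exact meta-reference API:

```
# Hypothetical sketch of requesting log probabilities from the stack's
# inference endpoint; field names are assumed, not taken from the PR.
import requests

resp = requests.post(
    "http://localhost:5000/inference/completion",  # assumed route
    json={
        "model": "Llama3.2-1B-Instruct",
        "content": "hello world",
        "logprobs": {"top_k": 5},  # assumed LogProbsConfig shape
    },
)
resp.raise_for_status()
for token_logprobs in resp.json().get("logprobs", []):
    print(token_logprobs)
```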
Mindaugas
53d440e952
Fix ValueError in case chunks are empty (#206) 2024-10-07 08:55:06 -07:00
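The fix above guards against an empty chunk list; a minimal sketch of the pattern, with an illustrative aggregation standing in for the real embedding call:

```
def average_chunk_length(chunks: list[str]) -> float:
    # Aggregating over zero chunks is the kind of call that raises
    # ValueError; an early return sidesteps it.
    if not chunks:
        return 0.0
    return sum(len(c) for c in chunks) / len(chunks)

print(average_chunk_length([]))            # 0.0 instead of an exception
print(average_chunk_length(["a", "bcd"]))  # 2.0
```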
Russell Bryant
a4e775c465
download: improve help text (#204) 2024-10-07 08:40:04 -07:00
Ashwin Bharambe
4263764493 Fix adapter_id -> adapter_type for Weaviate 2024-10-07 06:46:32 -07:00
Zain Hasan
f4f7618120
add Weaviate memory adapter (#95) 2024-10-06 22:21:50 -07:00
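Together with the `adapter_id -> adapter_type` rename above, a Weaviate memory provider entry might be declared roughly like this; the field names are an assumption based on the rename, not the exact spec:

```
# Hypothetical provider entry for a Weaviate memory adapter.
weaviate_memory_provider = {
    "provider_type": "remote::weaviate",  # assumed naming convention
    "adapter_type": "weaviate",           # post-rename field (was adapter_id)
    "config": {
        "host": "localhost",
        "port": 8080,
    },
}
```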
Xi Yan
27587f32bc fix db path 2024-10-06 11:46:08 -07:00
Xi Yan
cfe3ad33b3 fix db path 2024-10-06 11:45:35 -07:00
Prithu Dasgupta
7abab7604b
add databricks provider (#83)
* add databricks provider

* update provider and test
2024-10-05 23:35:54 -07:00
Russell Bryant
f73e247ba1
Inline vLLM inference provider (#181)
This is just like `local` using `meta-reference` for everything except
it uses `vllm` for inference.

Docker works, but so far `conda` is a bit easier to use with the vllm
provider. The default container base image does not include all the
necessary libraries for all vllm features; more CUDA dependencies are
needed.

I started changing the base image used in this template, but it also
required changes to the Dockerfile, so it was getting too involved to
include in the first PR.

Working so far:

* `python -m llama_stack.apis.inference.client localhost 5000 --model Llama3.2-1B-Instruct --stream True`
* `python -m llama_stack.apis.inference.client localhost 5000 --model Llama3.2-1B-Instruct --stream False`

Example:

```
$ python -m llama_stack.apis.inference.client localhost 5000 --model Llama3.2-1B-Instruct --stream False
User>hello world, write me a 2 sentence poem about the moon
Assistant>
The moon glows bright in the midnight sky
A beacon of light,
```

I have only tested these models:

* `Llama3.1-8B-Instruct` - across 4 GPUs (tensor_parallel_size = 4)
* `Llama3.2-1B-Instruct` - on a single GPU (tensor_parallel_size = 1)
2024-10-05 23:34:16 -07:00
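For reference, the tensor-parallel settings tested above map onto vLLM's public engine arguments roughly as follows; this is a sketch against the upstream `vllm` package, and the inline provider's own wiring may differ:

```
from vllm import LLM, SamplingParams

# tensor_parallel_size=4 shards the model across 4 GPUs, matching the
# Llama3.1-8B-Instruct test above; use 1 for the single-GPU 1B model.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct", tensor_parallel_size=4)
outputs = llm.generate(
    ["hello world, write me a 2 sentence poem about the moon"],
    SamplingParams(max_tokens=64),
)
print(outputs[0].outputs[0].text)
```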
Mindaugas
9d16129603
Add 'url' property to Redis KV config (#192) 2024-10-05 11:26:26 -07:00
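A `url` property typically lets the whole connection be given as one string instead of separate host/port/db fields; a sketch with the standard `redis` package (the KV store's own config plumbing may differ):

```
import redis

# One URL in place of separate host/port/db settings.
r = redis.Redis.from_url("redis://localhost:6379/0")
r.set("llama_stack:example", "value")
print(r.get("llama_stack:example"))
```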
Dalton Flanagan
441052b0fd avoid jq since non-standard on macOS 2024-10-04 10:11:43 -04:00
AshleyT3
734f59d3b8
Check that the model is found before use. (#182) 2024-10-03 23:24:47 -07:00
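A minimal sketch of the kind of fail-fast guard this adds; names are illustrative:

```
def resolve_model(model_id: str, registry: dict[str, object]) -> object:
    # Raise a clear error up front instead of crashing later with an
    # obscure KeyError/AttributeError deep in the inference path.
    model = registry.get(model_id)
    if model is None:
        raise ValueError(
            f"Model `{model_id}` not found; known models: {sorted(registry)}"
        )
    return model
```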
Ashwin Bharambe
f913b57397 fix fp8 imports 2024-10-03 14:40:21 -07:00
Ashwin Bharambe
7f49315822 Kill a derpy import 2024-10-03 11:25:58 -07:00
Xi Yan
62d266f018
[CLI] avoid configure twice (#171)
* avoid configure twice

* cleanup tmp config

* update output msg

* address comment

* update msg

* script update
2024-10-03 11:20:54 -07:00
Russell Bryant
06db9213b1
inference: Add model option to client (#170)
I was running this client for testing purposes, and being able to
specify which model to use is a convenient addition. This change makes
that possible.
2024-10-03 11:18:57 -07:00
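A sketch of what such a `--model` option looks like with `argparse`; the real client may use a different CLI library:

```
import argparse

parser = argparse.ArgumentParser(description="inference client (sketch)")
parser.add_argument("host")
parser.add_argument("port", type=int)
parser.add_argument(
    "--model",
    default="Llama3.1-8B-Instruct",
    help="which model the server should run inference with",
)
args = parser.parse_args()
print(f"querying {args.model} at {args.host}:{args.port}")
```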
Ashwin Bharambe
210b71b0ba
fix prompt guard (#177)
Several other fixes to `configure`. Add support for 1B/3B models in ollama.
2024-10-03 11:07:53 -07:00
Xi Yan
b9b1e8b08b
[bugfix] conda path lookup (#179)
* fix conda lookup

* comments
2024-10-03 10:45:16 -07:00
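Conda environment paths can be resolved by asking conda itself rather than trusting shell state; a sketch of the lookup pattern (the fix in #179 may do this differently):

```
import json
import subprocess

def find_conda_env(name: str) -> str | None:
    # `conda env list --json` prints {"envs": ["/path/to/envs/foo", ...]}.
    out = subprocess.check_output(["conda", "env", "list", "--json"])
    for path in json.loads(out)["envs"]:
        if path.rstrip("/").split("/")[-1] == name:
            return path
    return None

print(find_conda_env("mystack"))  # env name is illustrative
```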
Ashwin Bharambe
e9f6150588 A bit cleanup to avoid breakages 2024-10-02 21:31:09 -07:00
Ashwin Bharambe
988a9cada3 Don't ask for Api.inspect in stack build 2024-10-02 21:10:56 -07:00
Ashwin Bharambe
19ce6bf009 Don't validate prompt-guard anymore 2024-10-02 20:43:57 -07:00
Xi Yan
703ab9385f fix routing table key list 2024-10-02 18:23:31 -07:00
Ashwin Bharambe
8d049000e3 Add an introspection "Api.inspect" API 2024-10-02 15:41:14 -07:00
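An introspection API of this shape typically enumerates what the running stack serves; a hypothetical sketch of the idea (the real `Api.inspect` surface is whatever the commit defines):

```
from dataclasses import dataclass

@dataclass
class RouteInfo:
    route: str
    method: str
    provider_types: list[str]

def inspect_routes() -> list[RouteInfo]:
    # Hypothetical: report each served endpoint and the providers behind it.
    return [
        RouteInfo("/inference/chat_completion", "POST", ["meta-reference"]),
        RouteInfo("/memory/insert", "POST", ["weaviate"]),
    ]

for r in inspect_routes():
    print(r.method, r.route, "->", ", ".join(r.provider_types))
```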
Adrian Cole
01d93be948
Adds markdown-link-check and fixes a broken link (#165)
Signed-off-by: Adrian Cole <adrian.cole@elastic.co>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2024-10-02 14:26:20 -07:00
Ashwin Bharambe
fe4aabd690 provider_id => provider_type, adapter_id => adapter_type 2024-10-02 14:05:59 -07:00
Ashwin Bharambe
df68db644b Refactoring distribution/distribution.py
This file was becoming too large, and it was unclear what it housed. Split it
into pieces.
2024-10-02 14:03:02 -07:00
Ashwin Bharambe
546f05bd3f No automatic pager 2024-10-02 12:26:09 -07:00
Russell Bryant
204eb6d810
docker: Check for selinux before using --security-opt (#167)
Before using `--security-opt label=disable`, check that SELinux is
enabled. Otherwise, the option is not relevant.

This fixes errors on macOS.

Closes #166

Signed-off-by: Russell Bryant <rbryant@redhat.com>
2024-10-02 10:37:41 -07:00
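The idea is to probe for SELinux before adding the flag; a Python rendering of the check (the actual change is in a shell script, and its probe may differ):

```
import shutil
import subprocess

def selinux_enabled() -> bool:
    # getenforce does not exist on macOS, so the flag is skipped there;
    # on Linux it reports Enforcing/Permissive/Disabled.
    getenforce = shutil.which("getenforce")
    if getenforce is None:
        return False
    status = subprocess.check_output([getenforce], text=True).strip()
    return status.lower() in ("enforcing", "permissive")

docker_args = ["docker", "run"]
if selinux_enabled():
    docker_args += ["--security-opt", "label=disable"]
print(docker_args)
```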
Ashwin Bharambe
227b69e6e6 Fix sample memory impl 2024-10-02 10:13:09 -07:00
Ashwin Bharambe
335dea849a fix sample impls 2024-10-02 10:10:31 -07:00
Ashwin Bharambe
bf0d111c53 Fix build script 2024-10-02 10:04:23 -07:00
Ashwin Bharambe
4a75d922a9 Make Llama Guard 1B the default 2024-10-02 09:48:26 -07:00
Ashwin Bharambe
cc5029a716 Add special case for prompt guard 2024-10-02 08:43:12 -07:00
Ashwin Bharambe
eb2d8a31a5
Add a RoutableProvider protocol, support for multiple routing keys (#163)
* Update configure.py to use multiple routing keys for safety
* Refactor distribution/datatypes into a providers/datatypes
* Cleanup
2024-09-30 17:30:21 -07:00
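A sketch of what a routing-key protocol of this shape can look like with `typing.Protocol`; the method name and semantics are assumptions for illustration:

```
from typing import List, Protocol

class RoutableProvider(Protocol):
    # Assumed shape: a provider declares whether it can serve a set of
    # routing keys (e.g. model or shield identifiers).
    async def validate_routing_keys(self, routing_keys: List[str]) -> None:
        ...

class ExampleSafetyProvider:
    async def validate_routing_keys(self, routing_keys: List[str]) -> None:
        known = {"llama_guard", "prompt_guard"}
        unknown = set(routing_keys) - known
        if unknown:
            raise ValueError(f"Unknown routing keys: {sorted(unknown)}")
```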
Xi Yan
73decb3781 re-build from name 2024-09-30 16:22:52 -07:00
Xi Yan
4897bf2f85 allow --name to re-build from config 2024-09-30 16:18:12 -07:00
Xi Yan
d28c3dfe0f
[CLI] simplify docker run (#159)
* bake run.yaml inside docker, simplify run

* add docker template examples

* delete generated Dockerfile

* unique deps

* clean up debug

* default entrypoint

* address comments, update output msg

* update msg

* build output msg

* configure msg

* unique special_deps

* remove quotes in configure
2024-09-30 15:04:04 -07:00
Russell Bryant
8db49de961
docker: Install in editable mode for dev purposes (#160)
While rebuilding a stack using the `docker` image type and having
`LLAMA_STACK_DIR` set so it installs `llama_stack` from my local
source, I noticed that once built, it just used the image build cache
and didn't pull in changes to my source.

1. Install in editable mode (`pip install -e`) for dev purposes.

2. Mount the source into the container for `configure` and `run` so
   that the editable install works.

Signed-off-by: Russell Bryant <rbryant@redhat.com>
2024-09-30 11:56:31 -07:00
Russell Bryant
cb36be320f
Fix podman+selinux compatibility (#132)
When I ran `llama stack configure` for my `docker` based stack on my
system using podman + SELinux (CentOS Stream 9), the `podman run`
command failed because SELinux blocked access to the volume mount.

As a simple fix, disable SELinux label checking.

Signed-off-by: Russell Bryant <rbryant@redhat.com>
2024-09-29 20:19:44 -07:00
moritalous
2bd785354d
fix broken bedrock inference provider (#151) 2024-09-29 20:17:58 -07:00
Byung Chun Kim
2f096ca509
Accept a model identifier, not the model itself (#153) 2024-09-29 20:16:50 -07:00
Ashwin Bharambe
5bf679cab6
Pull (extract) provider data from the provider instead of pushing from the top (#148) 2024-09-29 20:00:51 -07:00
Xi Yan
f6a6598d1a
[bugfix] fix #146 (#147)
* more robust image type

* lint
2024-09-28 17:47:00 -07:00
Xi Yan
6a8c2ae1df
[CLI] remove dependency on CONDA_PREFIX in CLI (#144)
* remove dependency on CONDA_PREFIX in CLI

* lint

* typo

* more robust
2024-09-28 16:46:47 -07:00
Ashwin Bharambe
fe460ba103 Avoid importing a lot of stuff 2024-09-28 16:06:10 -07:00
Xi Yan
4ae8c63a2b pre-commit lint 2024-09-28 16:04:41 -07:00
Ashwin Bharambe
ced5fb6388 Small cleanup for together safety implementation 2024-09-28 15:47:35 -07:00
Yogish Baliga
940968ee3f
fixing safety inference and safety adapter for new API spec. Pinned t… (#105)
* fixing safety inference and safety adapter for the new API spec. Pinned the llama_models version to 0.0.24, as the latest version (0.0.35) changed the model descriptor name. I was also getting a missing-package error at runtime, hence added the dependency to requirements.txt

* support Llama 3.2 models in Together inference adapter and cleanup Together safety adapter

* fixing model names

* adding vision guard to Together safety
2024-09-28 15:45:38 -07:00
Ashwin Bharambe
0a3999a9a4
Use inference APIs for executing Llama Guard (#121)
We should use the Inference APIs to execute Llama Guard instead of needing to use HuggingFace modeling code directly. The actual inference is handled by the Inference provider.
2024-09-28 15:40:06 -07:00
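Conceptually, the shield becomes a thin client of whatever inference provider is configured; a hypothetical sketch (the prompt templating, verdict parsing, and response shape here are simplified assumptions, not the real Llama Guard templates):

```
async def run_llama_guard(inference, user_message: str) -> bool:
    # `inference` is any provider implementing chat completion; the
    # shield no longer loads HuggingFace modeling code itself.
    response = await inference.chat_completion(
        model="Llama-Guard-3-1B",  # illustrative model id
        messages=[{"role": "user", "content": user_message}],
    )
    # Llama Guard replies beginning with "safe" or "unsafe".
    verdict = response.completion_message.content.strip()  # assumed shape
    return verdict.startswith("safe")
```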
Xi Yan
6236634d84
[bugfix] fix duplicate api endpoints (#139)
* fix server api to serve

* remove print
2024-09-27 15:32:50 -07:00
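Duplicate endpoint registration is typically fixed by de-duplicating while preserving order; a generic sketch of the pattern (not the literal server code):

```
def unique_endpoints(endpoints: list[tuple[str, str]]) -> list[tuple[str, str]]:
    # dict.fromkeys keeps the first occurrence and preserves order,
    # so each (method, route) pair is served exactly once.
    return list(dict.fromkeys(endpoints))

routes = [
    ("POST", "/inference/chat_completion"),
    ("POST", "/inference/chat_completion"),  # duplicate
    ("GET", "/models/list"),
]
print(unique_endpoints(routes))
```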