Commit graph

70 commits

Author SHA1 Message Date
Ashwin Bharambe
8c351fe432 build: Bump version to 0.1.8 2025-03-23 16:01:10 -07:00
Ashwin Bharambe
93cfade8c9 ci: Bump version to 0.1.7 2025-03-14 15:21:26 -07:00
yyymeta
a626b7bce3
feat: [new open benchmark] BFCL_v3 (#1578)
# What does this PR do?
Create a new dataset, BFCL_v3, from
https://gorilla.cs.berkeley.edu/blogs/13_bfcl_v3_multi_turn.html

Overall, each question asks the model to perform a task described in
natural language, and a set of available functions and their schemas is
provided for the model to choose from. The model is required to write
the function call, including the function name and parameters, to
achieve the stated purpose. The results are validated against the
provided ground truth to make sure that the generated function call and
the ground-truth function call are syntactically and semantically
equivalent, by checking their ASTs.
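
For illustration only (this is not the benchmark's actual validation code), a minimal sketch of AST-based equivalence checking for function calls, assuming the calls are expressed in Python syntax:

```python
import ast


def calls_equivalent(generated: str, ground_truth: str) -> bool:
    """Return True if both strings parse to the same function-call AST."""
    try:
        gen_tree = ast.parse(generated, mode="eval")
        ref_tree = ast.parse(ground_truth, mode="eval")
    except SyntaxError:
        return False
    # ast.dump normalizes each tree into a comparable string; a real
    # validator would also normalize keyword-argument order, types, etc.
    return ast.dump(gen_tree) == ast.dump(ref_tree)


# Example: same call, only whitespace differs -> considered equivalent
assert calls_equivalent(
    "get_weather(city='Berlin', unit='celsius')",
    "get_weather(city='Berlin',  unit='celsius')",
)
```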



## Test Plan

Start the server with:

```
llama stack run ./llama_stack/templates/ollama/run.yaml
```

Then send traffic:
```
llama-stack-client eval run-benchmark "bfcl" --model-id meta-llama/Llama-3.2-3B-Instruct --output-dir /tmp/gpqa --num-examples 2
```




2025-03-14 12:50:49 -07:00
Ashwin Bharambe
bc8daf7fea
fix: include jinja2 as a core llama-stack dependency (#1529)
We removed `llama-models` as a dep, which was previously pulling this in
for us. This did not get caught in the release process because the
distros we use for testing (fireworks / together) pull it in via
sentence-transformers, which we don't use in all distros (notably
ollama).

See #1511 

## Test Plan

Ran `llama-stack-ops/actions/test-and-cut/main.sh` with
`ONLY_TEST_DONT_CUT=1 COMMIT_ID=origin/fix_jinja2`, making it build the
ollama docker image. Ran the image to ensure it does not error out with
a jinja2 dependency error. (Unfortunately there is another error with
sqlite_vec there.)
2025-03-10 14:59:11 -07:00
Ashwin Bharambe
0db3a2f511 fix: run pre-commit due to release script bumps 2025-03-07 16:31:42 -08:00
ehhuang
1257288361
build: add 'tiktoken' to deps (#1483)
Summary:

Test Plan:
2025-03-07 12:36:02 -08:00
Sébastien Han
ffa32af930
build: bump llama-stack-client version (#1469)
## What does this PR do?

Use 0.1.5.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-07 11:42:38 -08:00
Ashwin Bharambe
8bbd52bb9f
chore: remove dependency on llama_models completely (#1344) 2025-03-01 12:48:08 -08:00
Charlie Doern
de878e15a9
fix: pre-commit updates (#1243)
# What does this PR do?

PR #1139 caused pre-commit failures on main, likely due to an improper
rebase before merge. Run pre-commit on main and commit the changes.

See the runs here:
3775148428

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-02-24 17:20:29 -08:00
Sébastien Han
9bbe34694d
ci: add mypy for static type checking (#1101)
# What does this PR do?

- Enable mypy to run in the CI on a subset of the repository
- Fix a few mypy errors
- Run mypy from pre-commit
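
For context, a generic illustration (not one of the actual errors fixed in this PR) of the kind of issue mypy flags and how it is typically resolved:

```python
from typing import Optional

# Before the fix, the signature read `def greet(name: str = None) -> str:`,
# which mypy rejects with:
#   error: Incompatible default for argument "name"
#   (default has type "None", argument has type "str")
def greet(name: Optional[str] = None) -> str:
    # Annotating the argument as Optional[str] and handling None keeps the
    # declared return type accurate.
    if name is None:
        name = "world"
    return f"Hello, {name}!"
```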

Signed-off-by: Sébastien Han <seb@redhat.com>
 
2025-02-21 13:15:40 -08:00
Sébastien Han
69eebaf5bf
build: add missing dev dependencies for unit tests (#1004)
# What does this PR do?
Added necessary dependencies to ensure successful execution of unit
tests. Without these, the following command would fail due to missing
imports:

```
uv run pytest -v -k "ollama" \
     --inference-model=llama3.2:3b-instruct-fp16 \
     llama_stack/providers/tests/inference/test_model_registration.py
```

Signed-off-by: Sébastien Han <seb@redhat.com>

## Test Plan
Run:

```
ollama run llama3.2:3b-instruct-fp16 --keepalive 2m &
uv run pytest -v -k "ollama" --inference-model=llama3.2:3b-instruct-fp16 llama_stack/providers/tests/inference/test_model_registration.py

```

You can observe that some tests pass while others fail, but the test
run itself now completes without the missing-import failures.

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Ashwin Bharambe <ashwin.bharambe@gmail.com>
2025-02-19 22:26:11 -08:00
Sébastien Han
00613d9014
build: resync uv and deps on 0.1.3 (#1108)
# What does this PR do?

The bot just updated the project to 0.1.3 in
https://github.com/meta-llama/llama-stack/commits?author=github-actions%5Bbot%5D
but the deps need to be synced.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-02-14 12:26:04 -08:00
Ashwin Bharambe
314ee09ae3
chore: move all Llama Stack types from llama-models to llama-stack (#1098)
llama-models should have extremely minimal cruft. Its sole purpose
should be didactic -- show the simplest implementation of the llama
models and document the prompt formats, etc.

This PR is the complement to
https://github.com/meta-llama/llama-models/pull/279

## Test Plan

Ensure all `llama` CLI `model` sub-commands work:

```bash
llama model list
llama model download --model-id ...
llama model prompt-format -m ...
```

Ran tests:
```bash
cd tests/client-sdk
LLAMA_STACK_CONFIG=fireworks pytest -s -v inference/
LLAMA_STACK_CONFIG=fireworks pytest -s -v vector_io/
LLAMA_STACK_CONFIG=fireworks pytest -s -v agents/
```

Create a fresh venv (`uv venv && source .venv/bin/activate`) and run
`llama stack build --template fireworks --image-type venv` followed by
`llama stack run together --image-type venv`; the server runs.

Also checked that the OpenAPI generator can run and there is no change
in the generated files as a result.

```bash
cd docs/openapi_generator
sh run_openapi_generator.sh
```
2025-02-14 09:10:59 -08:00
Sarthak Deshpande
80ba9deab1
chore: Updated requirements.txt (#1017)
# What does this PR do?

Updated requirements.txt

---------

Co-authored-by: sarthakdeshpande <sarthak.deshpande@engati.com>
2025-02-08 11:50:35 -08:00
Ashwin Bharambe
f98efe68c9
Misc fixes (#944)
- Make sure torch + torchvision go together as deps, otherwise bad stuff
happens
- Add a pre-commit for requirements.txt
2025-02-03 14:08:47 -08:00
Ashwin Bharambe
6344b2429b Kill requirements.txt 2025-01-31 22:38:58 -08:00
Ashwin Bharambe
05d73dd4fd Bump version to 0.1.0 2025-01-24 09:50:07 -08:00
Ashwin Bharambe
d6fcdefec7 Bump version to 0.0.63 2024-12-17 23:15:27 -08:00
Ashwin Bharambe
eea478618d Bump version to 0.0.62 2024-12-17 18:19:47 -08:00
Ashwin Bharambe
02b43be9d7 Bump version to 0.0.61 2024-12-10 10:18:44 -08:00
Ashwin Bharambe
1ad691bb04 Bump version to 0.0.60 2024-12-09 22:19:51 -08:00
Ashwin Bharambe
baae4f7b51 Bump version to 0.0.59 2024-12-09 21:22:20 -08:00
Ashwin Bharambe
2c5c73f7ca Bump version to 0.0.58 2024-12-06 08:36:00 -08:00
dltn
4c7b1a8fb3 Bump version to 0.0.57 2024-12-02 19:48:46 -08:00
Dinesh Yeduguru
fe48b9fb8c Bump version to 0.0.56 2024-11-30 12:27:31 -08:00
Ashwin Bharambe
45fd73218a Bump version to 0.0.55 2024-11-23 09:03:58 -08:00
Ashwin Bharambe
2137b0af40 Bump version to 0.0.54 2024-11-21 16:28:30 -08:00
Ashwin Bharambe
dd5466e17d Bump version to 0.0.53 2024-11-19 16:44:15 -08:00
Ashwin Bharambe
394519d68a Add llama-stack-client as a legitimate dependency for llama-stack 2024-11-19 11:44:35 -08:00
Xi Yan
f6aaa9c708 Bump version to 0.0.50 2024-11-08 17:28:39 -08:00
Ashwin Bharambe
3ca294c359 Bump version to 0.0.49 2024-11-04 20:38:00 -08:00
Xi Yan
4d60ab8531 Bump version to 0.0.48 2024-11-04 17:37:32 -08:00
Ashwin Bharambe
8a3b64d1be Bump version to 0.0.47 2024-10-27 22:30:38 -07:00
Ashwin Bharambe
426d821e7f Bump version to 0.0.46 2024-10-25 13:10:55 -07:00
Ashwin Bharambe
0538cc297e Bump version to 0.0.45 2024-10-24 12:14:18 -07:00
Ashwin Bharambe
8aa8847b4a Bump version to 0.0.44 2024-10-24 08:41:39 -07:00
Xi Yan
dbb5ce43fc Bump version to 0.0.43 2024-10-21 19:10:01 -07:00
Xi Yan
209cd3d35e Bump version to 0.0.42 2024-10-14 11:13:04 -07:00
Ashwin Bharambe
89d24a07f0 Bump version to 0.0.41 2024-10-10 10:27:03 -07:00
Ashwin Bharambe
bfb0e92034 Bump version to 0.0.40 2024-10-04 09:33:43 -07:00
Ashwin Bharambe
dc75aab547 Add setuptools dependency 2024-10-04 09:30:54 -07:00
Dalton Flanagan
441052b0fd avoid jq since non-standard on macOS 2024-10-04 10:11:43 -04:00
Dalton Flanagan
9bf2e354ae CLI now requires jq 2024-10-04 10:05:59 -04:00
Ashwin Bharambe
8d41e6caa9 Bump version to 0.0.39 2024-10-03 11:31:03 -07:00
Ashwin Bharambe
c02a90e4c8 Bump version to 0.0.38 2024-10-03 05:42:47 -07:00
Ashwin Bharambe
9b93ee2c2b Bump version to 0.0.37 2024-10-02 10:15:08 -07:00
Ashwin Bharambe
a80b707ff8 Ensure we always ask for pydantic>=2 2024-10-02 06:29:06 -07:00
Ashwin Bharambe
c8fa26482d Bump version to 0.0.36 2024-09-25 11:58:15 -07:00
Ashwin Bharambe
a227edb480 Bump version to 0.0.35 2024-09-25 10:34:59 -07:00
Ashwin Bharambe
56aed59eb4
Support for Llama3.2 models and Swift SDK (#98) 2024-09-25 10:29:58 -07:00