Commit graph

1746 commits

Author SHA1 Message Date
wesley chun
0431a6e90b
docs: colorize Discord badge & add icon in README (#1865)
Update "chat" badge on README to make it more visible for visitors;
changing the look from


![image](https://github.com/user-attachments/assets/630be671-a937-4841-8009-93e8eea1cbe1)

... to ...


![image](https://github.com/user-attachments/assets/cfcb946a-e266-48da-bd50-c994cf1e3a9d)
2025-04-08 14:42:47 -04:00
ehhuang
031a40bec0
fix: type (#1898)
# What does this PR do?


## Test Plan
2025-04-08 09:07:25 -07:00
Michael Clifford
c6e93e32f6
feat: Updated playground rag to use session id for persistent conversation (#1870)
# What does this PR do?

This PR updates the [playground RAG
example](llama_stack/distribution/ui/page/playground/rag.py) so that the
agent is able to use its builtin conversation history. Here we are using
streamlit's `cache_resource` functionality to prevent the agent from
re-initializing after every interaction as well as storing its
session_id in the `session_state`. This allows the agent in the RAG
example to behave more closely to how it works using the python-client
directly.

Closes #1869 
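
For context, a minimal sketch of the caching approach described above, assuming the `llama_stack_client` Agent API; the model name, base URL, and exact constructor/method signatures are illustrative and may differ from the actual playground code:

```python
import streamlit as st
from llama_stack_client import LlamaStackClient
from llama_stack_client.lib.agents.agent import Agent


@st.cache_resource
def create_agent():
    # cache_resource keeps the same Agent object across Streamlit reruns,
    # so the agent is not re-initialized after every interaction
    client = LlamaStackClient(base_url="http://localhost:8321")
    return Agent(
        client,
        model="meta-llama/Llama-3.3-70B-Instruct",
        instructions="You are a helpful assistant.",
    )


agent = create_agent()

# keep one session per browser session so the agent retains its conversation history
if "agent_session_id" not in st.session_state:
    st.session_state["agent_session_id"] = agent.create_session(session_name="rag-demo-session")

if prompt := st.chat_input("Ask a question"):
    response = agent.create_turn(
        messages=[{"role": "user", "content": prompt}],
        session_id=st.session_state["agent_session_id"],
        stream=False,
    )
```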

## Test Plan

Without these changes, if you ask it "What is 2 + 2?" followed by
"What did I just ask?", it will provide an obviously incorrect
answer.

With these changes, you can ask the same series of questions and it will
provide the correct answer.


Signed-off-by: Michael Clifford <mcliffor@redhat.com>
2025-04-08 09:46:13 +02:00
ehhuang
7b4eb0967e
test: verification on provider's OAI endpoints (#1893)
# What does this PR do?


## Test Plan
```
export MODEL=accounts/fireworks/models/llama4-scout-instruct-basic
LLAMA_STACK_CONFIG=verification pytest -s -v tests/integration/inference \
  --vision-model $MODEL --text-model $MODEL
```
2025-04-07 23:06:28 -07:00
Ashwin Bharambe
530d4bdfe1
refactor: move all llama code to models/llama out of meta reference (#1887)
# What does this PR do?

Move bits around. This makes the copies from llama-models _much_ easier
to maintain and ensures we don't entangle meta-reference-specific
tidbits into llama-models code, even by accident.

Also, kills the meta-reference-quantized-gpu distro and rolls
quantization deps into meta-reference-gpu.

## Test Plan

```
LLAMA_MODELS_DEBUG=1 \
  with-proxy llama stack run meta-reference-gpu \
  --env INFERENCE_MODEL=meta-llama/Llama-4-Scout-17B-16E-Instruct \
  --env INFERENCE_CHECKPOINT_DIR=<DIR> \
  --env MODEL_PARALLEL_SIZE=4 \
  --env QUANTIZATION_TYPE=fp8_mixed
```

Start a server with and without quantization. Point integration tests to
it using:

```
pytest -s -v  tests/integration/inference/test_text_inference.py \
   --stack-config http://localhost:8321 --text-model meta-llama/Llama-4-Scout-17B-16E-Instruct
```
2025-04-07 15:03:58 -07:00
Matthew Farrellee
c52ccc4bbd
docs: update importing_as_library.md (#1863)
`LlamaStackAsLibraryClient.initialize` is not async and cannot be awaited.
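
A minimal sketch of the corrected usage this doc fix describes (the `"ollama"` config name is illustrative):

```python
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient

client = LlamaStackAsLibraryClient("ollama")
client.initialize()  # synchronous -- writing `await client.initialize()` would fail
```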
2025-04-07 12:31:04 +02:00
Francisco Arceo
c1973f6528
docs: Fix typo in README.md (#1880)
# What does this PR do?
Fix typo
2025-04-07 11:58:33 +02:00
Hardik Shah
28e262ecdc
feat: make multi-turn tool call tests work with llama4 (#1886)
Running full tool calling end-to-end required some updates:
- Remove `python_start` and `python_end` tags
- Tool call messages and tool response messages should end with
`<|eom|>`
- The system prompt needed updates
```
You are a helpful assistant who can answer general questions or invoke tools when necessary.
In addition to tool calls, you should also augment your responses by using the tool outputs.
```

### Test Plan 
- Start server with meta-reference 
```
LLAMA_STACK_DISABLE_VERSION_CHECK=1 LLAMA_MODELS_DEBUG=1 INFERENCE_MODEL=meta-llama/$MODEL  llama stack run meta-reference-gpu 
``` 
- Added **NEW** tests with 5 test cases for multi-turn tool calls 
```
pytest -s -v --stack-config http://localhost:8321 tests/integration/inference/test_text_inference.py --text-model meta-llama/Llama-4-Scout-17B-16E-Instruct
``` 
- Also verified all vision and agent tests pass
2025-04-06 19:14:21 -07:00
Ashwin Bharambe
5a31e66a91 fix: update llama-stack-client dependency to fix integration tests 2025-04-06 19:11:05 -07:00
ehhuang
378f0de439
docs: llama4 getting started nb (#1878)
# What does this PR do?


## Test Plan
2025-04-06 18:51:34 -07:00
Ashwin Bharambe
3f92b2bf85 fix: kill the usage of python_start and python_end tokens 2025-04-05 19:00:26 -07:00
Ashwin Bharambe
3021c87271 fix: bump version to 0.2.1 for bugfix release 2025-04-05 16:05:37 -07:00
raghotham
fd7ab37c14
docs: fixing sphinx imports (#1884)
# What does this PR do?

## Test Plan
2025-04-05 14:21:45 -07:00
Hardik Shah
e2213265bc
docs: Update README.md (#1879)
to mention the GPU requirement
2025-04-05 12:15:55 -07:00
Ashwin Bharambe
b8f1561956
feat: introduce llama4 support (#1877)
As the title says. Details are in the README and elsewhere.
2025-04-05 11:53:35 -07:00
Francisco Arceo
23a99a4b22
docs: Minor updates to docs to make them a little friendlier to new users (#1871)
# What does this PR do?
This PR modifies some of the docs to (1) better match the mental model
of software engineers building AI applications, starting with RAG and
then moving to Agents, and (2) align the navbar somewhat more closely
with the diagram on the home page.

## Test Plan
N/A Tested locally.

# Documentation
Take a look at the screenshots below for before and after.
## Before 
![Screenshot 2025-04-03 at 10 39
32 PM](https://github.com/user-attachments/assets/c4dc9998-3e46-43b0-8425-892c94ec3a6a)

## After
![Screenshot 2025-04-03 at 10 38
37 PM](https://github.com/user-attachments/assets/05670fcd-e56b-42dd-8af2-07b81f941d40)

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-04-04 08:10:35 -04:00
Ihar Hrachyshka
66d6c2580e
chore: more mypy checks (ollama, vllm, ...) (#1777)
# What does this PR do?

- **chore: mypy for strong_typing**
- **chore: mypy for remote::vllm**
- **chore: mypy for remote::ollama**
- **chore: mypy for providers.datatype**

---------

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-04-01 17:12:39 +02:00
Ihar Hrachyshka
d5e0f32485
ci: pin github actions to hashes (#1776)
# What does this PR do?

Let dependabot bump them via PRs (with human oversight).

Fixes #1775

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-04-01 17:09:39 +02:00
Francisco Arceo
19f504e9e2
docs: Updating docs to source from CONTRIBUTING.md (#1850)
# What does this PR do?
Another for https://github.com/meta-llama/llama-stack/issues/1815

This links the `CONTRIBUTING.md` file directly so that we don't have to
maintain two different files.

Also I updated the title for RAG under Building AI Applications.

## Changes
Here's what the Contributing page looks like, as proof that it sources
directly from the markdown file.

![Screenshot 2025-04-01 at 12 43
51 AM](https://github.com/user-attachments/assets/f7021d29-eec3-44ad-a5b3-55c4480ea9ac)

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-04-01 14:50:04 +02:00
Rashmi Pawar
c169c164b3
fix: NVIDIA embedding results in InternalServerError (#1851)
Closes #1819 

## Test Plan

```bash
pytest -v tests/integration/inference/test_embedding.py  --stack-config=http://localhost:5002 --embedding-model=nvidia/llama-3.2-nv-embedqa-1b-v2
=============================================================================== test session starts ================================================================================
platform linux -- Python 3.10.0, pytest-8.3.5, pluggy-1.5.0 -- /home/ubuntu/miniconda/envs/nvidia-1/bin/python
cachedir: .pytest_cache
rootdir: /home/ubuntu/llama-stack
configfile: pyproject.toml
plugins: anyio-4.9.0
collected 23 items                                                                                                                                                                 

tests/integration/inference/test_embedding.py::test_embedding_text[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-list[string]] PASSED                                                [  4%]
tests/integration/inference/test_embedding.py::test_embedding_text[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-list[text]] PASSED                                                  [  8%]
tests/integration/inference/test_embedding.py::test_embedding_image[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-list[url,base64]] XFAIL (nvidia/llama-3.2-nv-embedqa-1b-v2 doe...) [ 13%]
tests/integration/inference/test_embedding.py::test_embedding_image[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-list[url,string,base64,text]] XFAIL (nvidia/llama-3.2-nv-embed...) [ 17%]
tests/integration/inference/test_embedding.py::test_embedding_truncation[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-long-end] PASSED                                              [ 21%]
tests/integration/inference/test_embedding.py::test_embedding_truncation[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-long-start] PASSED                                            [ 26%]
tests/integration/inference/test_embedding.py::test_embedding_truncation[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-short-end] PASSED                                             [ 30%]
tests/integration/inference/test_embedding.py::test_embedding_truncation[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-short-start] PASSED                                           [ 34%]
tests/integration/inference/test_embedding.py::test_embedding_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-long-text-None] PASSED                                  [ 39%]
tests/integration/inference/test_embedding.py::test_embedding_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-long-text-none] PASSED                                  [ 43%]
tests/integration/inference/test_embedding.py::test_embedding_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-long-str-None] PASSED                                   [ 47%]
tests/integration/inference/test_embedding.py::test_embedding_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-long-str-none] PASSED                                   [ 52%]
tests/integration/inference/test_embedding.py::test_embedding_output_dimension[emb=nvidia/llama-3.2-nv-embedqa-1b-v2] PASSED                                                 [ 56%]
tests/integration/inference/test_embedding.py::test_embedding_task_type[emb=nvidia/llama-3.2-nv-embedqa-1b-v2] PASSED                                                        [ 60%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-None] PASSED                                             [ 65%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-none] PASSED                                             [ 69%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-end] PASSED                                              [ 73%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-start] PASSED                                            [ 78%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-NONE] PASSED                                       [ 82%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-END] PASSED                                        [ 86%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-START] PASSED                                      [ 91%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-left] PASSED                                       [ 95%]
tests/integration/inference/test_embedding.py::test_embedding_text_truncation_error[emb=nvidia/llama-3.2-nv-embedqa-1b-v2-right] PASSED                                      [100%]

===================================================================== 21 passed, 2 xfailed, 1 warning in 7.18s =====================================================================
```


cc: @dglogo @mattf @sumitb
2025-04-01 13:31:29 +02:00
Ihar Hrachyshka
0a895c70d1
fix(api): don't return list for runtime tools (#1686)
# What does this PR do?

Don't return a bare list for runtime tools. Instead, return a Response object for
pagination and consistency with other APIs.
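
Roughly, the change means the tool listing returns a wrapper object rather than a bare list; a hedged sketch of the shape (the class and field names here are assumptions, not necessarily the exact API types):

```python
from pydantic import BaseModel


class ToolDef(BaseModel):
    name: str
    description: str | None = None


class ListToolDefsResponse(BaseModel):
    data: list[ToolDef]  # tools live under `data`, consistent with other list APIs


# Before: the endpoint returned `list[ToolDef]`; after: it returns a response object.
def list_runtime_tools() -> ListToolDefsResponse:
    return ListToolDefsResponse(data=[ToolDef(name="web_search")])
```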

---------

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-04-01 09:53:11 +02:00
Ashwin Bharambe
b440a1dc42
test: make sure integration tests runs against the server (#1743)
Previously, the integration tests started the server, but never really
used it because `--stack-config=ollama` uses the ollama template and the
inline "llama stack as library" client, not the HTTP client.

This PR makes sure we test it both ways.

We also add agents tests to the mix.

## Test Plan 

GitHub

---------

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Sébastien Han <seb@redhat.com>
2025-03-31 22:38:47 +02:00
Sébastien Han
2ffa2b77ed
refactor: extract pagination logic into shared helper function (#1770)
# What does this PR do?

Move pagination logic from LocalFS and HuggingFace implementations into
a common helper function to ensure consistent pagination behavior across
providers. This reduces code duplication and centralizes pagination
logic in one place.
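
A hedged sketch of what such a shared pagination helper might look like (the function and field names are illustrative, not the exact ones in the PR):

```python
from typing import Any


def paginate_records(
    records: list[dict[str, Any]],
    start_index: int | None = None,
    limit: int | None = None,
) -> dict[str, Any]:
    """Return one page of records plus the index where the next page starts."""
    start = start_index or 0
    end = len(records) if limit is None else min(start + limit, len(records))
    page = records[start:end]
    return {
        "data": page,
        "next_start_index": end if end < len(records) else None,
    }
```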


## Test Plan

Run this script:

```
from llama_stack_client import LlamaStackClient

# Initialize the client
client = LlamaStackClient(base_url="http://localhost:8321")

# Register a dataset
response = client.datasets.register(
    purpose="eval/messages-answer",  # or "eval/question-answer" or "post-training/messages"
    source={"type": "uri", "uri": "huggingface://datasets/llamastack/simpleqa?split=train"},
    dataset_id="my_dataset",  # optional, will be auto-generated if not provided
    metadata={"description": "My evaluation dataset"},  # optional
)

# Verify the dataset was registered by listing all datasets
datasets = client.datasets.list()
print(f"Registered datasets: {[d.identifier for d in datasets]}")

# You can then access the data using the datasetio API
# rows = client.datasets.iterrows(dataset_id="my_dataset", start_index=1, limit=2)
rows = client.datasets.iterrows(dataset_id="my_dataset")
print(f"Data: {rows.data}")
```

And play with `start_index` and `limit`.


Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-31 13:08:29 -07:00
Francisco Arceo
d495922949
docs: Updated documentation and Sphinx configuration (#1845)
# What does this PR do?

The goal of this PR is to make the pages easier to navigate by surfacing
the child pages on the navbar, updating some of the copy, and moving some of
the files around.

Some changes:
1. Clarified titles
2. Restructured "Distributions" more formally into its own page, consistent
with Providers, and added some clarity to the child pages to
surface them and make them easier to navigate
3. Updated the sphinx config to not collapse navigation by default
4. Updated the copyright year to be calculated dynamically (see the sketch after this list)
5. Moved `docs/source/distributions/index.md` ->
`docs/source/distributions/starting_llama_stack_server.md`
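
For item 4, a minimal sketch of a dynamically calculated copyright year in Sphinx's `conf.py` (an assumption for illustration, not the exact lines in the PR):

```python
# docs/source/conf.py
import datetime

project = "llama-stack"
copyright = f"{datetime.date.today().year}, Meta"
```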

Another for https://github.com/meta-llama/llama-stack/issues/1815

## Test Plan
Tested locally and the pages build (see example screenshots below).

## Documentation
###  Before:
![Screenshot 2025-03-31 at 1 09
21 PM](https://github.com/user-attachments/assets/98e34f76-f0d9-4055-8e2c-441b1e7d8f6a)

### After:
![Screenshot 2025-03-31 at 1 08
52 PM](https://github.com/user-attachments/assets/dfb6b8ad-3a1d-46b6-8f54-0c553664093f)

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-03-31 13:08:05 -07:00
Francisco Arceo
60430da48a
docs: Update readme for integration tests (#1846)
# What does this PR do?
Update README for integration tests

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-03-31 22:00:02 +02:00
Francisco Arceo
9b478f3756
docs: Adding darkmode to documentation (#1843)
# What does this PR do?
docs: Adding darkmode to documentation


## Test Plan
Tested locally. 

Here's the look:
![Screenshot 2025-03-31 at 9 43
05 AM](https://github.com/user-attachments/assets/5989dbc8-ba03-4710-ad8d-6d4b9ac79786)


## Issues

Related to https://github.com/meta-llama/llama-stack/issues/1815 

Closes https://github.com/meta-llama/llama-stack/issues/1844

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-03-31 08:31:53 -07:00
Yuan Tang
7e51a83eac
docs: Add link to integration tests instructions and minor clarification (#1838)
# What does this PR do?

* Added `--text-model` to the example command.
* Added a link to the integration tests instructions and a note on specifying
models.

This is to avoid confusion when all tests are skipped because no model
is provided.

Signed-off-by: Yuan Tang <terrytangyuan@gmail.com>
2025-03-31 11:37:42 +02:00
Xi Yan
90efafafb7
chore: change context to content for agent (#1840) 2025-03-30 10:33:58 -07:00
ehhuang
3a2314dcef
fix(telemetry): library client does not log span (#1833) 2025-03-29 14:55:31 -07:00
Anamika
d8a8a734b5
fix: update sink name for traces and metrics in LlamaStack 0.1.8 (#1836)
# What does this PR do?
This PR updates the sink name configuration for traces and metrics in
LlamaStack to align with the latest changes introduced in version 0.1.8.
Previously, when using the `otel` sink along with other sinks (like
`console` and `sqlite`), the system threw a **ValueError** with the
message:

```shell
Value error, 'otel' is not a valid TelemetrySink [type=value_error, input_value='console,otel,sqlite', input_type=str]
For further information visit https://errors.pydantic.dev/2.10/v/value_error
``` 
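
Based on the sink names mentioned in the Test Plan below, the fix amounts to the config accepting the split `otel_trace`/`otel_metric` sink names; a hedged sketch of such an enum and its parsing (the full member set is an assumption):

```python
from enum import Enum


class TelemetrySink(str, Enum):
    CONSOLE = "console"
    SQLITE = "sqlite"
    OTEL_TRACE = "otel_trace"    # replaces the old combined "otel" sink
    OTEL_METRIC = "otel_metric"


# e.g. parsing a comma-separated sinks setting like "console,otel_trace,sqlite"
sinks = [TelemetrySink(s) for s in "console,otel_trace,sqlite".split(",")]
```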

## Test Plan
- **Test 1:**
  Ran the LlamaStack server with a configuration containing
  `console,otel,sqlite` as sinks.
   - **Expected result:** No errors related to invalid sink names.
   - **Result:** The system ran without throwing a `ValueError`.

- **Test 2:**
  Verified that the `otel_trace` and `otel_metric` sinks now work in
  combination with other sinks (`console`, `sqlite`).
   - **Expected result:** Telemetry data is correctly sent to all specified
   sinks without errors.
   - **Result:** All telemetry data was successfully sent to the specified
   sinks.
2025-03-29 10:09:08 -07:00
Matthew Farrellee
a4c086cee0
fix: skip apis with no providers during llama stack build (#1835)
# What does this PR do?
closes #1834 

## Test Plan
`llama stack build` runs successfully
2025-03-29 08:39:35 -07:00
ehhuang
a182705ade
fix(telemetry): query_spans (#1831)
# What does this PR do?
https://github.com/meta-llama/llama-stack/pull/1828 removed the
`__root_span__` attribute, which is still needed.

## Test Plan
added telemetry integration test:

```
LLAMA_STACK_CONFIG=http://localhost:5001 pytest -s -v tests/integration/telemetry \
  --safety-shield meta-llama/Llama-Guard-3-8B \
  --text-model accounts/fireworks/models/llama-v3p3-70b-instruct
```
2025-03-28 20:58:17 -07:00
Francisco Arceo
74a2584cdb
chore: Updating Milvus Client calls to be non-blocking (#1830)
# What does this PR do?
This PR converts blocking Milvus Client calls to non-blocking.

Another one for https://github.com/meta-llama/llama-stack/issues/1489
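
A hedged illustration of the general pattern for making a blocking client call non-blocking; the `MilvusClient.search` call and its arguments are just an example, not necessarily the exact call converted in this PR:

```python
import asyncio

from pymilvus import MilvusClient


async def query_chunks(client: MilvusClient, collection: str, query_vector: list[float]):
    # MilvusClient.search is blocking; run it in a worker thread so the event loop stays free
    return await asyncio.to_thread(
        client.search,
        collection_name=collection,
        data=[query_vector],
        limit=5,
    )
```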

## Test Plan

I ran the integration tests from
https://github.com/meta-llama/llama-stack/pull/1467 with:
```
pytest -s -v tests/integration/vector_io/test_vector_io.py \
  --stack-config inference=sentence-transformers,vector_io=inline::milvus \
  --embedding-model all-miniLM-L6-V2  --env MILVUS_DB_PATH=/tmp/moo.db

INFO     2025-03-28 21:35:22,726 tests.integration.conftest:41 tests: Setting DISABLE_CODE_SANDBOX=1 for macOS          
/Users/farceo/dev/llama-stack/.venv/lib/python3.10/site-packages/pytest_asyncio/plugin.py:207: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset.
The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"

  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
=============================================================================================================================================================================================================================================================== test session starts ===============================================================================================================================================================================================================================================================
platform darwin -- Python 3.10.16, pytest-8.3.4, pluggy-1.5.0 -- /Users/farceo/dev/llama-stack/.venv/bin/python3
cachedir: .pytest_cache
metadata: {'Python': '3.10.16', 'Platform': 'macOS-15.3.1-arm64-arm-64bit', 'Packages': {'pytest': '8.3.4', 'pluggy': '1.5.0'}, 'Plugins': {'cov': '6.0.0', 'html': '4.1.1', 'metadata': '3.1.1', 'asyncio': '0.25.3', 'anyio': '4.8.0', 'nbval': '0.11.0'}}
rootdir: /Users/farceo/dev/llama-stack
configfile: pyproject.toml
plugins: cov-6.0.0, html-4.1.1, metadata-3.1.1, asyncio-0.25.3, anyio-4.8.0, nbval-0.11.0
asyncio: mode=strict, asyncio_default_fixture_loop_scope=None
collected 7 items                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 

tests/integration/vector_io/test_vector_io.py::test_vector_db_retrieve[emb=all-miniLM-L6-V2] PASSED
tests/integration/vector_io/test_vector_io.py::test_vector_db_register[emb=all-miniLM-L6-V2] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=all-miniLM-L6-V2-test_case0] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=all-miniLM-L6-V2-test_case1] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=all-miniLM-L6-V2-test_case2] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=all-miniLM-L6-V2-test_case3] PASSED
tests/integration/vector_io/test_vector_io.py::test_insert_chunks[emb=all-miniLM-L6-V2-test_case4] PASSED

========================================================================================================================================================================================================================================================= 7 passed, 2 warnings in 40.33s ==========================================================================================================================================================================================================================================================
```


Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-03-28 22:14:07 -04:00
github-actions[bot]
daa34909a0 build: Bump version to 0.1.9 2025-03-29 00:22:35 +00:00
github-actions[bot]
b7ab1a9710 build: Bump version to 0.1.19 2025-03-29 00:18:38 +00:00
ehhuang
e58c7f6c37
fix(telemetry): root span not yet received (#1828)
# What does this PR do?
closes #1725 

In https://github.com/meta-llama/llama-stack/pull/1759's attempt to make
trace_id consistent between llama stack and otel exports, it incorrectly set
the span_id in context, which caused the root span to have a parent ID,
leading to the issue in #1725.

This PR reverts #1759's change to set the parent context. We will need
to follow up with a proper way to do this.

## Test Plan
<img width="1868" alt="image"
src="https://github.com/user-attachments/assets/15e9ac18-8541-461d-b261-c4e124388cc3"
/>
2025-03-28 14:40:17 -07:00
Xi Yan
7e7bea66ba
fix: skip code interp (#1827)
# What does this PR do?
- This is a flaky test that depends on model output.


## Test Plan
<img width="853" alt="image"
src="https://github.com/user-attachments/assets/e7607877-22a9-48e3-adac-e991d1070ec0"
/>


2025-03-28 12:58:08 -07:00
Francisco Arceo
af6594f670
fix: Adding chunk_size_in_tokens to playground rag_tool insert (#1826)
# What does this PR do?
Adds `chunk_size_in_tokens` to the playground `rag_tool` insert call.

Closes #1825
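
The call in question looks roughly like this, mirroring the `rag_tool.insert` usage shown in the benchmark script further down this log; the 512 value is illustrative:

```python
client.tool_runtime.rag_tool.insert(
    documents=documents,
    vector_db_id=vector_db_id,
    chunk_size_in_tokens=512,  # the parameter added to the playground insert call
)
```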

## Test Plan
Tested locally.


Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-03-28 15:56:25 -04:00
Francisco Arceo
37b6da37ba
docs: Document sqlite-vec faiss comparison (#1821)
# What does this PR do?
This PR documents and benchmarks the performance tradeoffs between
sqlite-vec and FAISS inline VectorDB providers.

Closes https://github.com/meta-llama/llama-stack/issues/1165

## Test Plan

The test was run using this script:

<details>
<summary>CLICK TO SHOW SCRIPT 👋  </summary>

```python

import cProfile
import os
import uuid
import time
import random
import string
import matplotlib.pyplot as plt
import pandas as pd
from termcolor import cprint
from llama_stack_client.types import Document
from llama_stack.distribution.library_client import LlamaStackAsLibraryClient
from memory_profiler import profile
from line_profiler import LineProfiler

os.environ["INFERENCE_MODEL"] = "llama3.2:3b-instruct-fp16"
os.environ["LLAMA_STACK_CONFIG"] = "ollama"

def generate_random_chars(count=400):
    return ''.join(random.choices(string.ascii_letters, k=count))

def generate_documents(num_docs: int, num_chars: int):
    documents = [
        Document(
            document_id=f"doc-{i}",
            content=f"Document content for document {i} - {generate_random_chars(count=num_chars)}",
            mime_type="text/plain",
            metadata={},
        )
        for i in range(num_docs)
    ]
    return documents


@profile
def benchmark_write(client, vector_db_id, documents, batch_size=100):
    write_times = []
    for i in range(0, len(documents), batch_size):
        batch = documents[i:i + batch_size]
        start_time = time.time()
        client.tool_runtime.rag_tool.insert(
            documents=batch,
            vector_db_id=vector_db_id,
            chunk_size_in_tokens=512,
        )
        end_time = time.time()
        write_times.append(end_time - start_time)

    return write_times

@profile
def benchmark_read(client, provider_id, vector_db_id, user_prompts):
    response_times = []
    for prompt in user_prompts:
        start_time = time.time()
        response = client.vector_io.query(
            vector_db_id=vector_db_id,
            query=prompt,
        )
        end_time = time.time()
        response_times.append(end_time - start_time)
    return response_times

def profile_functions():
    profiler = LineProfiler()
    profiler.add_function(benchmark_write)
    profiler.add_function(benchmark_read)
    return profiler


def plot_results(output, batch_size):
    # Create a DataFrame for easy manipulation
    df_sqlite = pd.DataFrame(output['sqlite-vec'])
    df_faiss = pd.DataFrame(output['faiss'])

    df_sqlite['write_times'] *= 1000
    df_faiss['write_times'] *= 1000

    avg_write_sqlite = df_sqlite['write_times'].mean()
    avg_write_faiss = df_faiss['write_times'].mean()
    avg_read_sqlite = df_sqlite['read_times'].mean()
    avg_read_faiss = df_faiss['read_times'].mean()

    plt.figure(figsize=(12, 6))
    plt.hist(df_sqlite['write_times'], bins=10, alpha=0.5, color='blue', label='sqlite-vec Write Times')
    plt.hist(df_faiss['write_times'], bins=10, alpha=0.5, color='red', label='faiss Write Times')
    plt.axvline(avg_write_sqlite, color='blue', linestyle='--',
                label=f'Average Write Time (sqlite-vec): {avg_write_sqlite:.3f} ms')
    plt.axvline(avg_write_faiss, color='red', linestyle='--',
                label=f'Average Write Time (faiss): {avg_write_faiss:.3f} ms')
    plt.title(f'Histogram of Write Times for sqlite-vec and faiss\nn = {df_faiss.shape[0]} with batch size = {batch_size}')
    plt.xlabel('Time (milliseconds)')
    plt.ylabel('Density')
    plt.legend()
    plt.savefig('write_time_comparison.png')
    plt.close()

    plt.figure(figsize=(12, 6))
    plt.hist(df_sqlite['read_times'], bins=10, alpha=0.5, color='blue', label='sqlite-vec Read Times')
    plt.hist(df_faiss['read_times'], bins=10, alpha=0.5, color='red', label='faiss Read Times')
    plt.axvline(avg_read_sqlite, color='blue', linestyle='--',
                label=f'Average Read Time (sqlite-vec): {avg_read_sqlite:.3f} ms')
    plt.axvline(avg_read_faiss, color='red', linestyle='--',
                label=f'Average Read Time (faiss): {avg_read_faiss:.3f} ms')
    plt.title(f'Histogram of Read Times for sqlite-vec and faiss\nn = {df_faiss.shape[0]}')
    plt.xlabel('Time (milliseconds)')
    plt.ylabel('Density')
    plt.legend()
    plt.savefig('read_time_comparison.png')
    plt.close()

    plt.figure(figsize=(12, 6))
    plt.plot(df_sqlite.index, df_sqlite['write_times'],
             marker='o', markersize=4, linestyle='-', color='blue',
             label='sqlite-vec Write Times')
    plt.plot(df_faiss.index, df_faiss['write_times'],
             marker='x', markersize=4, linestyle='-', color='red',
             label='faiss Write Times')

    plt.title(f'Write Times by Operation Sequence\n(batch size = {batch_size})')
    plt.xlabel('Write Operation Sequence')
    plt.ylabel('Time (milliseconds)')
    plt.legend()
    plt.grid(True, linestyle='--', alpha=0.7)
    plt.tight_layout()
    plt.savefig('write_time_sequence.png')
    plt.close()
    # Print out the summary table
    print("\nPerformance Summary for sqlite-vec:")
    print(df_sqlite)

    # Print out the summary table
    print("\nPerformance Summary for faiss:")
    print(df_faiss)


def main():
    # Initialize the client
    client = LlamaStackAsLibraryClient("ollama")
    vector_db_id = f"test-vector-db-{uuid.uuid4().hex}"
    _ = client.initialize()

    # Generate a large dataset
    num_chars = 50
    num_docs = 100
    num_writes = 100
    write_batch_size = 100
    num_reads = 100

    documents = generate_documents(num_docs * write_batch_size, num_chars)
    user_prompts = [
        f"Tell me about document {i}" for i in range(1, num_reads + 1)
    ]

    providers = ["sqlite-vec", "faiss"]
    output = {
        provider_id: {"write_times": None, "read_times": None} for provider_id in providers
    }

    # Benchmark writes and reads for SQLite and Faiss
    for provider_id in providers:
        cprint(f"Benchmarking provider: {provider_id}", "yellow")
        client.vector_dbs.register(
            provider_id=provider_id,
            vector_db_id=vector_db_id,
            embedding_model="all-MiniLM-L6-v2",
            embedding_dimension=384,
        )
        write_times = benchmark_write(client, vector_db_id, documents, write_batch_size)

        average_write_time_ms = sum(write_times) / len(write_times) * 1000.
        cprint(f"Average write time for {provider_id} is {average_write_time_ms:.2f} milliseconds for {num_writes} runs", "blue")

        cprint(f"Benchmarking reads for provider: {provider_id}", "yellow")
        read_times = benchmark_read(client, provider_id, vector_db_id, user_prompts)

        average_read_time_ms = sum(read_times) / len(read_times) * 1000.
        cprint(f"Average read time for {provider_id} is {average_read_time_ms:.2f} milliseconds for {num_reads} runs", "blue")

        client.vector_dbs.unregister(vector_db_id=vector_db_id)
        output[provider_id]['write_times'] = write_times
        output[provider_id]['read_times'] = read_times
    # Generate plots and summary
    plot_results(output, write_batch_size)


if __name__ == "__main__":
    cProfile.run('main()', 'profile_output.prof')
```
</details>

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-03-28 17:41:33 +01:00
Sébastien Han
a4f458e1c1
ci: add myself to CODEOWNERS (#1823)
Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-28 09:37:42 -07:00
Ihar Hrachyshka
18bac27d4e
fix: Use CONDA_DEFAULT_ENV presence as a flag to use conda mode (#1555)
# What does this PR do?

This is the second attempt to switch to system packages by default, now
with a hack to detect a conda environment, in which case the conda
image-type is used.

Note: Conda will only be used when --image-name is unset *and*
CONDA_DEFAULT_ENV is set. This means that users without conda will
correctly fall back to using system packages when no --image-* arguments
are passed at all.
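
A hedged sketch of the detection heuristic described above (variable and function names are illustrative, not the actual CLI code):

```python
import os


def resolve_image_type(image_type: str | None, image_name: str | None) -> str:
    """Pick conda only when nothing was requested explicitly and a conda env is active."""
    if image_type:
        return image_type
    if image_name is None and os.environ.get("CONDA_DEFAULT_ENV"):
        return "conda"
    return "env"  # fall back to environment (system) packages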


## Test Plan

Uses virtualenv:

```
$ llama stack build --template ollama --image-type venv
$ llama stack run --image-type venv ~/.llama/distributions/ollama/ollama-run.yaml
[...]
Using virtual environment: /home/ec2-user/src/llama-stack/schedule/.local
[...]
```

Uses system packages (virtualenv already initialized):

```
$ llama stack run ~/.llama/distributions/ollama/ollama-run.yaml
[...]
INFO     2025-03-27 20:46:22,882 llama_stack.cli.stack.run:142 server: No image type or image name provided. Assuming environment packages.
[...]
```

Attempt to run from environment packages without necessary packages
installed:
```
$ python -m venv barebones
$ . ./barebones/bin/activate
$ pip install -e . # to install llama command
$ llama stack run ~/.llama/distributions/ollama/ollama-run.yaml
[...]
ModuleNotFoundError: No module named 'fastapi'
```

^ failed as expected because the environment doesn't have necessary
packages installed.

Now install some packages in the new environment:

```
$ pip install fastapi opentelemetry-api opentelemetry-sdk opentelemetry-exporter-otlp aiosqlite ollama openai datasets faiss-cpu mcp autoevals
$ llama stack run ~/.llama/distributions/ollama/ollama-run.yaml
[...]
Uvicorn running on http://['::', '0.0.0.0']:8321 (Press CTRL+C to quit)
```

Now see if setting CONDA_DEFAULT_ENV will change what happens by
default:

```
$ export CONDA_DEFAULT_ENV=base
$ llama stack run ~/.llama/distributions/ollama/ollama-run.yaml
[...]
Using conda environment: base
Conda environment base does not exist.
[...]
```

---------

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-27 17:13:22 -04:00
Xi Yan
b5c27f77ad
chore: clean up distro doc (#1804)
# What does this PR do?
- Hide the distro docs (docker needs to be thoroughly tested).


## Test Plan
- docs

2025-03-27 12:12:14 -07:00
Ihar Hrachyshka
81393afb35
chore: require data field for all List*Response models (#1799)
# What does this PR do?

No violators are currently in-tree. This is just hardening the API specs
for future consistency.

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-27 18:15:16 +01:00
Dmitry Rogozhkin
935e706b15
docs: fix remote-vllm instructions (#1805)
# What does this PR do?

* Fix the location of `run.yaml` relative to the cloned llama stack
repository
* Drop `-it` from `docker run` commands since it's not needed when running
services

## Test Plan

* Verified running the llama stack following the updated instructions

CC: @ashwinb

Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
2025-03-27 10:19:51 -04:00
Antonin Stefanutti
9d9ab7e7dd
chore: Remove style tags from log formatter (#1808)
# What does this PR do?

Set a formatter for the log file handler that does not pollute log messages
with color tags.
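
A minimal sketch of the idea, assuming the standard `logging` module; the actual handler/formatter setup in llama-stack may differ:

```python
import logging

file_handler = logging.FileHandler("server.log")
# Plain formatter for the file handler: no Rich/ANSI color markup ends up in the log file.
file_handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s:%(lineno)d %(levelname)s: %(message)s")
)
logging.getLogger().addHandler(file_handler)
```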

## Test Plan

Successfully tested with `LLAMA_STACK_LOG_FILE=server.log llama stack
run ...`
2025-03-27 10:18:21 -04:00
Sébastien Han
e3578b1c1b
chore: remove distributions dir (#1809)
# What does this PR do?

Follow-up to https://github.com/meta-llama/llama-stack/pull/1801. Moves
the deps files to llama_stack/templates.

Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-27 09:03:39 -04:00
Sébastien Han
626313b4c8
fix: resolve precommit error (#1810)
Signed-off-by: Sébastien Han <seb@redhat.com>
2025-03-27 08:16:00 -04:00
Xi Yan
cfd30d2ad5
fix: update agents test (#1796)
# What does this PR do?
- We no longer query the vector db when uploading documents as attachments.


## Test Plan
```
pytest --stack-config="http://localhost:8321" -v tests/integration/agents/test_agents.py --text-model meta-llama/Llama-3.3-70B-Instruct
```

```
pytest --stack-config=fireworks -v tests/integration/agents/test_agents.py --text-model meta-llama/Llama-3.3-70B-Instruct --record-responses
```
<img width="1160" alt="image"
src="https://github.com/user-attachments/assets/90700f79-c002-4474-bb41-7bc0a39dc91c"
/>


2025-03-26 22:00:43 -07:00
Ihar Hrachyshka
193e531216
chore: re-enable isort enforcement (#1802)
# What does this PR do?

Re-enable isort enforcement.

It was disabled in 1a73f8305b, probably by
mistake.

Signed-off-by: Ihar Hrachyshka <ihar.hrachyshka@gmail.com>
2025-03-26 15:22:17 -07:00
Xi Yan
742020b94a
chore: remove distributions folder (#1801)
# What does this PR do?

- The distributions folder references templates and has dead docker
compose scripts.


## Test Plan


2025-03-26 15:07:54 -07:00