llama-stack-mirror/tests/unit
Francisco Arceo 82f13fe83e
feat: Add ChunkMetadata to Chunk (#2497)
# What does this PR do?
Adding `ChunkMetadata` so we can properly delete embeddings later.

More specifically, this PR refactors and extends the chunk metadata
handling in the vector database and introduces a distinction between
metadata used for model context and backend-only metadata required for
chunk management, storage, and retrieval. It also improves chunk ID
generation and propagation throughout the stack, enhances test coverage,
and adds new utility modules.

```python
class ChunkMetadata(BaseModel):
    """
    `ChunkMetadata` is backend metadata for a `Chunk` that is used to store additional information about the chunk that
        will NOT be inserted into the context during inference, but is required for backend functionality.
        Use `metadata` in `Chunk` for metadata that will be used during inference.
    """
    document_id: str | None = None
    chunk_id: str | None = None
    source: str | None = None
    created_timestamp: int | None = None
    updated_timestamp: int | None = None
    chunk_window: str | None = None
    chunk_tokenizer: str | None = None
    chunk_embedding_model: str | None = None
    chunk_embedding_dimension: int | None = None
    content_token_count: int | None = None
    metadata_token_count: int | None = None
```
Eventually we can migrate `document_id` out of the `metadata` field entirely. I've introduced the changes so that `ChunkMetadata` stays backwards compatible with the existing `metadata` field.
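
For illustration, here is a minimal sketch (not code from this PR) of how a chunk can carry both kinds of metadata side by side, using the `ChunkMetadata` model defined above. The simplified `Chunk` class, the field values, and the embedding model name are assumptions made for the example; the point is that `metadata` stays inference-facing while `chunk_metadata` holds the backend-only bookkeeping.

```python
import time

from pydantic import BaseModel, Field


class Chunk(BaseModel):
    """Simplified stand-in for the real Chunk model, for illustration only."""

    content: str
    # Inference-facing metadata: inserted into the model context at query time.
    metadata: dict = Field(default_factory=dict)
    # Backend-only metadata: used for storage, retrieval, and deletion, never for inference.
    chunk_metadata: ChunkMetadata | None = None


now = int(time.time())
chunk = Chunk(
    content="Llama Stack supports pluggable vector IO providers.",
    metadata={"document_id": "doc-42"},  # kept here for backwards compatibility
    chunk_metadata=ChunkMetadata(
        document_id="doc-42",
        chunk_id="doc-42:0",           # illustrative ID scheme, not the PR's generator
        source="docs/vector_io.md",
        created_timestamp=now,
        updated_timestamp=now,
        chunk_embedding_model="all-MiniLM-L6-v2",
        chunk_embedding_dimension=384,
        content_token_count=12,
    ),
)
```

With this split, the backend can later locate and delete a chunk's embedding via `chunk_id` without those bookkeeping fields ever leaking into the inference context.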

Closes https://github.com/meta-llama/llama-stack/issues/2501 

## Test Plan
Added unit tests
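
Not the tests from this PR verbatim, but a hedged sketch of the kind of checks they cover: every `ChunkMetadata` field is optional (so legacy chunks without backend metadata remain valid) and populated metadata round-trips cleanly. The import path is assumed for illustration.

```python
# Sketch only; assumes ChunkMetadata is importable from the vector IO API module.
from llama_stack.apis.vector_io import ChunkMetadata


def test_chunk_metadata_fields_default_to_none():
    # Every backend field is optional, so chunks created before this change stay valid.
    meta = ChunkMetadata()
    assert meta.chunk_id is None
    assert meta.document_id is None
    assert meta.created_timestamp is None


def test_chunk_metadata_round_trips_through_serialization():
    meta = ChunkMetadata(
        document_id="doc-42",
        chunk_id="doc-42:0",
        content_token_count=7,
    )
    assert ChunkMetadata(**meta.model_dump()) == meta
```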

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
2025-06-25 15:55:23 -04:00
| Name | Last commit | Date |
|------|-------------|------|
| cli | chore: enable pyupgrade fixes (#1806) | 2025-05-01 14:23:50 -07:00 |
| distribution | feat: fine grained access control policy (#2264) | 2025-06-03 14:51:12 -07:00 |
| files | feat: support pagination in inference/responses stores (#2397) | 2025-06-16 22:43:35 -07:00 |
| models | chore: remove usage of load_tiktoken_bpe (#2276) | 2025-06-02 07:33:37 -07:00 |
| providers | feat: Add ChunkMetadata to Chunk (#2497) | 2025-06-25 15:55:23 -04:00 |
| rag | feat: Add ChunkMetadata to Chunk (#2497) | 2025-06-25 15:55:23 -04:00 |
| registry | feat: fine grained access control policy (#2264) | 2025-06-03 14:51:12 -07:00 |
| server | feat: Add url field to PaginatedResponse and populate it using route … (#2419) | 2025-06-16 11:19:48 +02:00 |
| utils | feat: support auth attributes in inference/responses stores (#2389) | 2025-06-20 10:24:45 -07:00 |
| `__init__.py` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `conftest.py` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `fixtures.py` | chore: Add fixtures to conftest.py (#2067) | 2025-05-06 13:57:48 +02:00 |
| `README.md` | chore: bump python supported version to 3.12 (#2475) | 2025-06-24 09:22:04 +05:30 |

# Llama Stack Unit Tests

You can run the unit tests by running:

```bash
source .venv/bin/activate
./scripts/unit-tests.sh [PYTEST_ARGS]
```

Any additional arguments are passed to pytest. For example, you can specify a test directory, a specific test file, or any pytest flags (e.g., `-vvv` for verbosity). If no test directory is specified, it defaults to `tests/unit`, for example:

```bash
./scripts/unit-tests.sh tests/unit/registry/test_registry.py -vvv
```

If you'd like to run against a non-default version of Python (the default is currently 3.12), pass the `PYTHON_VERSION` variable as follows:

```bash
source .venv/bin/activate
PYTHON_VERSION=3.13 ./scripts/unit-tests.sh
```