Mirror of https://github.com/meta-llama/llama-stack.git
Recent commits:

chore: Enable keyword search for Milvus inline (#3073)

With https://github.com/milvus-io/milvus-lite/pull/294, Milvus Lite supports keyword search using BM25. When keyword search was introduced, it was explicitly disabled for inline Milvus. This PR removes the need for that check and enables `inline::milvus` for tests.

Run llama stack with `inline::milvus` enabled:

```
pytest tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes --stack-config=http://localhost:8321 --embedding-model=all-MiniLM-L6-v2 -v
```

```
INFO 2025-08-07 17:06:20,932 tests.integration.conftest:64 tests: Setting DISABLE_CODE_SANDBOX=1 for macOS
=========================================================================================== test session starts ============================================================================================
platform darwin -- Python 3.12.11, pytest-7.4.4, pluggy-1.5.0 -- /Users/vnarsing/miniconda3/envs/stack-client/bin/python
cachedir: .pytest_cache
metadata: {'Python': '3.12.11', 'Platform': 'macOS-14.7.6-arm64-arm-64bit', 'Packages': {'pytest': '7.4.4', 'pluggy': '1.5.0'}, 'Plugins': {'asyncio': '0.23.8', 'cov': '6.0.0', 'timeout': '2.2.0', 'socket': '0.7.0', 'html': '3.1.1', 'langsmith': '0.3.39', 'anyio': '4.8.0', 'metadata': '3.0.0'}}
rootdir: /Users/vnarsing/go/src/github/meta-llama/llama-stack
configfile: pyproject.toml
plugins: asyncio-0.23.8, cov-6.0.0, timeout-2.2.0, socket-0.7.0, html-3.1.1, langsmith-0.3.39, anyio-4.8.0, metadata-3.0.0
asyncio: mode=Mode.AUTO
collected 3 items

tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes[None-None-all-MiniLM-L6-v2-None-384-vector] PASSED  [ 33%]
tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes[None-None-all-MiniLM-L6-v2-None-384-keyword] PASSED [ 66%]
tests/integration/vector_io/test_openai_vector_stores.py::test_openai_vector_store_search_modes[None-None-all-MiniLM-L6-v2-None-384-hybrid] PASSED  [100%]

============================================================================================ 3 passed in 4.75s =============================================================================================
```

Signed-off-by: Varsha Prasad Narsing <varshaprasad96@gmail.com>
Co-authored-by: Francisco Arceo <arceofrancisco@gmail.com>

chore: Fixup main pre commit (#3204)

build: Bump version to 0.2.18

chore: Faster npm pre-commit (#3206)

Adds npm installation to pre-commit.yml, caches the UI, and removes node installation during pre-commit.
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>

checking in for tonight, wip moving to agents api
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>

remove log
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>

updated
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>

fix: disable ui-prettier & ui-eslint (#3207)

chore(pre-commit): add pre-commit hook to enforce llama_stack logger usage (#3061)

This PR adds a pre-commit step that enforces use of the `llama_stack` logger. Currently, various parts of the codebase use different loggers; since a custom `llama_stack` logger exists and is already used, it is better to standardize on it.
Signed-off-by: Mustafa Elbehery <melbeher@redhat.com>
Co-authored-by: Matthew Farrellee <matt@cs.wisc.edu>
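A minimal sketch of the pattern this hook standardizes on, matching the hint emitted by the `check-log-usage` hook in the config below; the `name` and `category` arguments are illustrative assumptions, not a verbatim signature:

```python
# Sketch only: use the project logger helper instead of `import logging`.
# The name/category arguments are assumed for illustration.
from llama_stack.log import get_logger

logger = get_logger(name=__name__, category="core")
logger.info("standardized llama_stack logger in use")
```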
fix: fix `openai_embeddings` for asymmetric embedding NIMs (#3205)

NVIDIA asymmetric embedding models (e.g., `nvidia/llama-3.2-nv-embedqa-1b-v2`) require an `input_type` parameter that is not present in the standard OpenAI embeddings API. This PR adds `input_type="query"` as the default and updates the documentation to suggest using the `embedding` API for passage embeddings. (A request sketch follows this commit list.)

Resolves #2892

```
pytest -s -v tests/integration/inference/test_openai_embeddings.py --stack-config="inference=nvidia" --embedding-model="nvidia/llama-3.2-nv-embedqa-1b-v2" --env NVIDIA_API_KEY={nvidia_api_key} --env NVIDIA_BASE_URL="https://integrate.api.nvidia.com"
```

cleaning up
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>

updating session manager to cache messages locally
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>

fix linter
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>

more cleanup
Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
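For the asymmetric-embedding NIM fix above, a minimal request sketch, assuming the `openai` Python client pointed at the NVIDIA endpoint from the test command; `extra_body` is the client's standard pass-through for non-OpenAI fields such as `input_type`, and the `/v1` suffix on the base URL is an assumption:

```python
# Sketch: embed a query against an asymmetric NVIDIA NIM through the
# OpenAI-compatible embeddings API. The provider now defaults to
# input_type="query"; "passage" would be used for document embeddings.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://integrate.api.nvidia.com/v1",  # assumed /v1 suffix
    api_key=os.environ["NVIDIA_API_KEY"],
)

response = client.embeddings.create(
    model="nvidia/llama-3.2-nv-embedqa-1b-v2",
    input=["What is Llama Stack?"],
    extra_body={"input_type": "query"},
)
print(len(response.data[0].embedding))
```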
201 lines | 6.8 KiB | YAML
exclude: 'build/'

default_language_version:
  python: python3.12
  node: "22"

repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v5.0.0  # Latest stable version
    hooks:
      - id: check-merge-conflict
        args: ['--assume-in-merge']
      - id: trailing-whitespace
        exclude: '\.py$'  # Exclude Python files as Ruff already handles them
      - id: check-added-large-files
        args: ['--maxkb=1000']
      - id: end-of-file-fixer
        exclude: '^(.*\.svg|.*\.md)$'
      - id: no-commit-to-branch
      - id: check-yaml
        args: ["--unsafe"]
      - id: detect-private-key
      - id: mixed-line-ending
        args: [--fix=lf]  # Forces to replace line ending by LF (line feed)
      - id: check-executables-have-shebangs
      - id: check-json
      - id: check-shebang-scripts-are-executable
      - id: check-symlinks
      - id: check-toml

  - repo: https://github.com/Lucas-C/pre-commit-hooks
    rev: v1.5.5
    hooks:
      - id: insert-license
        files: \.py$|\.sh$
        args:
          - --license-filepath
          - docs/license_header.txt

  - repo: https://github.com/astral-sh/ruff-pre-commit
    rev: v0.12.2
    hooks:
      - id: ruff
        args: [ --fix ]
        exclude: ^llama_stack/strong_typing/.*$
      - id: ruff-format

  - repo: https://github.com/adamchainz/blacken-docs
    rev: 1.19.1
    hooks:
      - id: blacken-docs
        additional_dependencies:
          - black==24.3.0

  - repo: https://github.com/astral-sh/uv-pre-commit
    rev: 0.7.20
    hooks:
      - id: uv-lock

  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.16.1
    hooks:
      - id: mypy
        additional_dependencies:
          - uv==0.6.2
          - mypy
          - pytest
          - rich
          - types-requests
          - pydantic
        pass_filenames: false

  # - repo: https://github.com/tcort/markdown-link-check
  #   rev: v3.11.2
  #   hooks:
  #     - id: markdown-link-check
  #       args: ['--quiet']
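  # Repo-local hooks: each runs a project script or an inline shell check
  # (codegen, workflow SHA-pinning, __init__.py presence, async-test and
  # logging policy) rather than a published pre-commit plugin.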
  - repo: local
    hooks:
      - id: distro-codegen
        name: Distribution Template Codegen
        additional_dependencies:
          - uv==0.7.8
        entry: uv run --group codegen ./scripts/distro_codegen.py
        language: python
        pass_filenames: false
        require_serial: true
        files: ^llama_stack/templates/.*$|^llama_stack/providers/.*/inference/.*/models\.py$
      - id: provider-codegen
        name: Provider Codegen
        additional_dependencies:
          - uv==0.7.8
        entry: uv run --group codegen ./scripts/provider_codegen.py
        language: python
        pass_filenames: false
        require_serial: true
        files: ^llama_stack/providers/.*$
      - id: openapi-codegen
        name: API Spec Codegen
        additional_dependencies:
          - uv==0.7.8
        entry: sh -c 'uv run ./docs/openapi_generator/run_openapi_generator.sh > /dev/null'
        language: python
        pass_filenames: false
        require_serial: true
        files: ^llama_stack/apis/|^docs/openapi_generator/
      - id: check-workflows-use-hashes
        name: Check GitHub Actions use SHA-pinned actions
        entry: ./scripts/check-workflows-use-hashes.sh
        language: system
        pass_filenames: false
        require_serial: true
        always_run: true
        files: ^\.github/workflows/.*\.ya?ml$
      - id: check-init-py
        name: Check for missing __init__.py files
        entry: ./scripts/check-init-py.sh
        language: system
        pass_filenames: false
        require_serial: true
        always_run: true
        files: ^llama_stack/.*$
      - id: forbid-pytest-asyncio
        name: Block @pytest.mark.asyncio and @pytest_asyncio.fixture
        entry: bash
        language: system
        types: [python]
        pass_filenames: true
        args:
          - -c
          - |
            grep -EnH '^[^#]*@pytest\.mark\.asyncio|@pytest_asyncio\.fixture' "$@" && {
              echo;
              echo "❌ Do not use @pytest.mark.asyncio or @pytest_asyncio.fixture."
              echo "   pytest is already configured with async-mode=auto."
              echo;
              exit 1;
            } || true
      - id: generate-ci-docs
        name: Generate CI documentation
        additional_dependencies:
          - uv==0.7.8
        entry: uv run ./scripts/gen-ci-docs.py
        language: python
        pass_filenames: false
        require_serial: true
        files: ^.github/workflows/.*$
      # ui-prettier and ui-eslint are disabled until we can avoid `npm ci`, which is slow and may fail -
      # npm error `npm ci` can only install packages when your package.json and package-lock.json or npm-shrinkwrap.json are in sync. Please update your lock file with `npm install` before continuing.
      # npm error Invalid: lock file's llama-stack-client@0.2.17 does not satisfy llama-stack-client@0.2.18
      # and until we have infra for installing prettier and next via npm -
      # Lint UI code with ESLint.....................................................Failed
      # - hook id: ui-eslint
      # - exit code: 127
      # > ui@0.1.0 lint
      # > next lint --fix --quiet
      # sh: line 1: next: command not found
      #
      # - id: ui-prettier
      #   name: Format UI code with Prettier
      #   entry: bash -c 'cd llama_stack/ui && npm ci && npm run format'
      #   language: system
      #   files: ^llama_stack/ui/.*\.(ts|tsx)$
      #   pass_filenames: false
      #   require_serial: true
      # - id: ui-eslint
      #   name: Lint UI code with ESLint
      #   entry: bash -c 'cd llama_stack/ui && npm run lint -- --fix --quiet'
      #   language: system
      #   files: ^llama_stack/ui/.*\.(ts|tsx)$
      #   pass_filenames: false
      #   require_serial: true

      - id: check-log-usage
        name: Ensure 'llama_stack.log' usage for logging
        entry: bash
        language: system
        types: [python]
        pass_filenames: true
        args:
          - -c
          - |
            matches=$(grep -EnH '^[^#]*\b(import\s+logging|from\s+logging\b)' "$@" | grep -v -e '#\s*allow-direct-logging' || true)
            if [ -n "$matches" ]; then
              # GitHub Actions annotation format
              while IFS=: read -r file line_num rest; do
                echo "::error file=$file,line=$line_num::Do not use 'import logging' or 'from logging import' in $file. Use the custom logger instead: from llama_stack.log import get_logger; logger = get_logger(). If direct logging is truly needed, add: # allow-direct-logging"
              done <<< "$matches"
              exit 1
            fi
            exit 0
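# The `ci:` block below configures the hosted pre-commit.ci service:
# commit messages for its autofix/autoupdate runs and the update cadence.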
ci:
  autofix_commit_msg: 🎨 [pre-commit.ci] Auto format from pre-commit.com hooks
  autoupdate_commit_msg: ⬆ [pre-commit.ci] pre-commit autoupdate
  autofix_prs: true
  autoupdate_branch: ''
  autoupdate_schedule: weekly
  skip: []
  submodules: false