Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-10-04 04:04:14 +00:00)
Implements optional idempotency for batch creation using the `idem_tok` parameter:

* **Core idempotency**: Same token + same parameters returns the existing batch
* **Conflict detection**: Same token + different parameters raises HTTP 409 ConflictError
* **Metadata order independence**: Different key ordering does not affect idempotency

**API changes:**
- Add optional `idem_tok` parameter to the `create_batch()` method
- Enhanced API documentation with idempotency extensions

**Implementation:**
- Reference provider supports idempotent batch creation
- ConflictError for proper HTTP 409 status code mapping
- Comprehensive parameter validation

**Testing:**
- Unit tests: focused tests covering core scenarios with parametrized conflict detection
- Integration tests: tests validating real OpenAI client behavior

This enables client-side retry safety and prevents duplicate batch creation when the same idempotency token is reused, following REST API conventions.

Closes #3144
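For illustration, a minimal sketch of the caller-facing behavior described above. It assumes the reference provider's `create_batch()` accepts the keyword arguments shown and that `ConflictError` is importable from the path in the comment; both are assumptions, not confirmed by this change.

```python
# Sketch of the idempotency semantics described above. The ConflictError import
# path and the exact create_batch() signature are assumptions.
import pytest

from llama_stack.apis.common.errors import ConflictError  # assumed location


async def demo_idempotent_create(provider):
    kwargs = dict(
        input_file_id="file_abc123",
        endpoint="/v1/chat/completions",
        completion_window="24h",
        metadata={"priority": "high", "test": "true"},
    )

    # Same token + same parameters: the existing batch is returned, not a duplicate.
    first = await provider.create_batch(idem_tok="token-1", **kwargs)
    second = await provider.create_batch(idem_tok="token-1", **kwargs)
    assert first.id == second.id

    # Metadata key order does not affect the idempotency check.
    reordered = dict(kwargs, metadata={"test": "true", "priority": "high"})
    assert (await provider.create_batch(idem_tok="token-1", **reordered)).id == first.id

    # Same token + different parameters: ConflictError maps to HTTP 409.
    with pytest.raises(ConflictError):
        await provider.create_batch(
            idem_tok="token-1",
            input_file_id="file_abc123",
            endpoint="/v1/embeddings",
            completion_window="24h",
        )
```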
54 lines · 1.7 KiB · Python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

"""Shared fixtures for batches provider unit tests."""

import tempfile
from pathlib import Path
from unittest.mock import AsyncMock

import pytest

from llama_stack.providers.inline.batches.reference.batches import ReferenceBatchesImpl
from llama_stack.providers.inline.batches.reference.config import ReferenceBatchesImplConfig
from llama_stack.providers.utils.kvstore import kvstore_impl
from llama_stack.providers.utils.kvstore.config import SqliteKVStoreConfig


@pytest.fixture
async def provider():
    """Create a test provider instance with temporary database."""
    with tempfile.TemporaryDirectory() as tmpdir:
        db_path = Path(tmpdir) / "test_batches.db"
        kvstore_config = SqliteKVStoreConfig(db_path=str(db_path))
        config = ReferenceBatchesImplConfig(kvstore=kvstore_config)

        # Create kvstore and mock APIs
        kvstore = await kvstore_impl(config.kvstore)
        mock_inference = AsyncMock()
        mock_files = AsyncMock()
        mock_models = AsyncMock()

        provider = ReferenceBatchesImpl(config, mock_inference, mock_files, mock_models, kvstore)
        await provider.initialize()

        # unit tests should not require background processing
        provider.process_batches = False

        yield provider

        await provider.shutdown()


@pytest.fixture
def sample_batch_data():
    """Sample batch data for testing."""
    return {
        "input_file_id": "file_abc123",
        "endpoint": "/v1/chat/completions",
        "completion_window": "24h",
        "metadata": {"test": "true", "priority": "high"},
    }
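As a usage note, a test module could consume these fixtures roughly as follows. This is a sketch only: it assumes an async test plugin (e.g. pytest-asyncio or anyio) is configured so the unmarked async `provider` fixture works, and that the object returned by `create_batch()` exposes an `id` attribute.

```python
# Hedged usage sketch for the fixtures above. Assumes pytest-asyncio (or anyio)
# is configured for async fixtures/tests, and that create_batch() returns an
# object with an `id` attribute; both are assumptions, not part of conftest.py.
import pytest


@pytest.mark.asyncio
async def test_create_batch_is_idempotent(provider, sample_batch_data):
    # Reusing the same idem_tok with identical parameters should return the
    # batch created by the first call instead of creating a second one.
    first = await provider.create_batch(idem_tok="retry-token", **sample_batch_data)
    second = await provider.create_batch(idem_tok="retry-token", **sample_batch_data)
    assert first.id == second.id
```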