llama-stack-mirror/tests/unit/providers
Matthew Farrellee 68877f331e feat: Add optional idempotency support to batches API
Implements optional idempotency for batch creation via the `idem_tok` parameter:

* **Core idempotency**: Same token + parameters returns existing batch
* **Conflict detection**: Same token + different parameters raises HTTP 409 ConflictError
* **Metadata order independence**: Different key ordering doesn't affect idempotency
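
A minimal usage sketch of these semantics, assuming an async `batches` object that exposes `create_batch()`; the file ids, metadata values, and token string are illustrative, and parameter names other than `idem_tok` are assumed to follow the OpenAI-style batches API:

```python
# Sketch only: `batches` stands for whatever provider/client exposes create_batch().
async def demo(batches):
    first = await batches.create_batch(
        input_file_id="file-abc123",
        endpoint="/v1/chat/completions",
        completion_window="24h",
        metadata={"team": "eval", "run": "nightly"},
        idem_tok="retry-safe-token-1",
    )

    # Same token + identical parameters: the existing batch is returned.
    # Metadata key ordering differs here but does not affect the result.
    second = await batches.create_batch(
        input_file_id="file-abc123",
        endpoint="/v1/chat/completions",
        completion_window="24h",
        metadata={"run": "nightly", "team": "eval"},
        idem_tok="retry-safe-token-1",
    )
    assert first.id == second.id

    # Same token + different parameters: rejected with ConflictError (HTTP 409).
    await batches.create_batch(
        input_file_id="file-other",
        endpoint="/v1/chat/completions",
        completion_window="24h",
        metadata={"team": "eval"},
        idem_tok="retry-safe-token-1",
    )  # raises ConflictError
```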

**API changes:**
- Add optional `idem_tok` parameter to `create_batch()` method
- Extended the API documentation to describe the idempotency behavior
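
Roughly, the extended signature looks like the sketch below; parameter names other than `idem_tok` are assumed to mirror the OpenAI-style batches API, and the return type name is illustrative:

```python
async def create_batch(
    self,
    input_file_id: str,
    endpoint: str,
    completion_window: str,
    metadata: dict[str, str] | None = None,
    idem_tok: str | None = None,  # new: optional idempotency token
) -> "BatchObject":  # illustrative return type name
    ...
```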

**Implementation:**
- Reference provider supports idempotent batch creation
- `ConflictError` maps parameter conflicts to an HTTP 409 status code
- Comprehensive parameter validation
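
A minimal sketch of how the provider-side check can work, assuming a simple token-keyed store; the helper names, store interface, and placeholder `ConflictError` class below are hypothetical, not the actual implementation:

```python
import hashlib
import json


class ConflictError(Exception):
    """Placeholder for the error type that maps to HTTP 409 in this change."""


def request_fingerprint(input_file_id, endpoint, completion_window, metadata):
    # Canonicalize the request so metadata key ordering does not change the result.
    canonical = json.dumps(
        {
            "input_file_id": input_file_id,
            "endpoint": endpoint,
            "completion_window": completion_window,
            "metadata": dict(sorted((metadata or {}).items())),
        },
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()


async def maybe_reuse_batch(store, idem_tok, fingerprint):
    # `store` is a hypothetical token -> (fingerprint, batch) mapping.
    existing = await store.get(idem_tok)
    if existing is None:
        return None  # first use of this token: caller creates a new batch
    stored_fingerprint, batch = existing
    if stored_fingerprint != fingerprint:
        raise ConflictError(
            f"idempotency token {idem_tok!r} was already used with different parameters"
        )
    return batch  # same token + same parameters: return the existing batch
```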

**Testing:**
- Unit tests: focused tests covering core scenarios with parametrized conflict detection
- Integration tests: validate the behavior through a real OpenAI client
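
As a rough idea of the parametrized conflict-detection tests (a sketch only: the fixture name, async test plugin, exact parameter variations, and the `ConflictError` import location are assumptions):

```python
import pytest

# ConflictError would be imported from wherever this change defines it
# (import path intentionally omitted in this sketch).


@pytest.mark.asyncio  # assumes pytest-asyncio or an equivalent async test setup
@pytest.mark.parametrize(
    "field, new_value",
    [
        ("input_file_id", "file-other"),
        ("endpoint", "/v1/embeddings"),
        ("metadata", {"team": "other"}),
    ],
)
async def test_create_batch_idem_tok_conflict(batches_provider, field, new_value):
    params = dict(
        input_file_id="file-abc123",
        endpoint="/v1/chat/completions",
        completion_window="24h",
        metadata={"team": "eval"},
        idem_tok="tok-1",
    )
    await batches_provider.create_batch(**params)

    # Reusing the token with any changed parameter should surface ConflictError.
    params[field] = new_value
    with pytest.raises(ConflictError):
        await batches_provider.create_batch(**params)
```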

This enables client-side retry safety and prevents duplicate batch creation
when the same idempotency token is reused, following REST API best practices.
2025-08-08 08:08:08 -04:00
| Name | Last commit | Date |
|---|---|---|
| agent | fix: Fix list_sessions() (#3114) | 2025-08-13 07:46:26 -07:00 |
| agents | refactor(responses): move stuff into some utils and add unit tests (#3158) | 2025-08-15 00:05:36 +00:00 |
| batches | feat: Add optional idempotency support to batches API | 2025-08-08 08:08:08 -04:00 |
| inference | feat: Add clear error message when API key is missing (#2992) | 2025-07-31 16:33:16 -04:00 |
| nvidia | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |
| utils | fix: use ChatCompletionMessageFunctionToolCall (#3142) | 2025-08-14 10:27:00 -07:00 |
| vector_io | feat: Implement hybrid search in Milvus (#2644) | 2025-08-07 09:42:03 +02:00 |
| test_configs.py | chore(rename): move llama_stack.distribution to llama_stack.core (#2975) | 2025-07-30 23:30:53 -07:00 |