llama-stack-mirror/tests/unit/providers
skamenan7 a14f79a362 fix(vector-io): handle missing document_id in insert_chunks
Fixed KeyError when chunks don't have document_id in metadata or chunk_metadata.
Updated logging to safely extract document_id using getattr, and updated RAG memory
to handle the different document_id locations. Added a test for the missing document_id scenarios.

Fixes issue #3494 where /v1/vector-io/insert would crash with KeyError.
2025-09-25 13:59:10 -04:00
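
As a minimal sketch of the lookup described in the commit message above: a chunk's document_id may live in chunk.metadata, in chunk.chunk_metadata, or be absent entirely, and the fix falls back through those locations instead of indexing directly. The Chunk/ChunkMetadata classes and the helper name here are illustrative assumptions, not the actual llama-stack types.

```python
# Hypothetical sketch of the safe document_id extraction; field names mirror
# the commit description, not the real llama-stack implementation.
from dataclasses import dataclass, field
from typing import Any, Optional


@dataclass
class ChunkMetadata:
    document_id: Optional[str] = None


@dataclass
class Chunk:
    content: str
    metadata: dict[str, Any] = field(default_factory=dict)
    chunk_metadata: Optional[ChunkMetadata] = None


def extract_document_id(chunk: Chunk) -> Optional[str]:
    """Return the chunk's document_id without raising KeyError.

    Prefers chunk.metadata["document_id"], falls back to
    chunk.chunk_metadata.document_id, and returns None when neither exists.
    """
    doc_id = chunk.metadata.get("document_id")
    if doc_id is None:
        doc_id = getattr(chunk.chunk_metadata, "document_id", None)
    return doc_id


# Chunks with no document_id anywhere now resolve to None instead of crashing.
assert extract_document_id(Chunk(content="no id")) is None
assert extract_document_id(Chunk(content="a", metadata={"document_id": "doc-1"})) == "doc-1"
assert extract_document_id(
    Chunk(content="b", chunk_metadata=ChunkMetadata(document_id="doc-2"))
) == "doc-2"
```
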
agent fix: Fix list_sessions() (#3114) 2025-08-13 07:46:26 -07:00
agents fix: ensure assistant message is followed by tool call message as expected by openai (#3224) 2025-08-22 10:42:03 -07:00
batches feat(batches, completions): add /v1/completions support to /v1/batches (#3309) 2025-09-05 11:59:57 -07:00
files feat(files, s3, expiration): add expires_after support to S3 files provider (#3283) 2025-08-29 16:17:24 -07:00
inference fix(dev): fix vllm inference recording (await models.list) (#3524) 2025-09-23 12:56:33 -04:00
nvidia feat: create HTTP DELETE API endpoints to unregister ScoringFn and Benchmark resources in Llama Stack (#3371) 2025-09-15 12:43:38 -07:00
utils feat: include all models from provider's /v1/models (#3471) 2025-09-18 05:17:11 -04:00
vector_io fix(vector-io): handle missing document_id in insert_chunks 2025-09-25 13:59:10 -04:00
test_bedrock.py fix: AWS Bedrock inference profile ID conversion for region-specific endpoints (#3386) 2025-09-11 11:41:53 +02:00
test_configs.py chore(rename): move llama_stack.distribution to llama_stack.core (#2975) 2025-07-30 23:30:53 -07:00