llama-stack-mirror/llama_stack/distribution/routers
skamenan7 6634b21a76 Merge upstream/main and resolve conflicts
Resolved merge conflicts in:
- Documentation files: updated vector IO provider docs to include both kvstore fields and embedding model configuration
- Config files: merged kvstore requirements from upstream with embedding model fields
- Dependencies: updated to latest client versions while preserving llama-models dependency
- Regenerated lockfiles to ensure consistency

All embedding model configuration features preserved while incorporating upstream changes.
2025-07-16 19:57:02 -04:00
__init__.py feat: support auth attributes in inference/responses stores (#2389) 2025-06-20 10:24:45 -07:00
datasets.py chore: split routers into individual files (datasets) (#2249) 2025-05-24 22:11:43 -07:00
eval_scoring.py chore: split routers into individual files (inference, tool, vector_io, eval_scoring) (#2258) 2025-05-24 22:59:07 -07:00
inference.py chore: remove nested imports (#2515) 2025-06-26 08:01:05 +05:30
safety.py chore: split routers into individual files (safety) 2025-05-24 22:00:32 -07:00
tool_runtime.py fix(tools): do not index tools, only index toolgroups (#2261) 2025-05-25 13:27:52 -07:00
vector_io.py Merge upstream/main and resolve conflicts 2025-07-16 19:57:02 -04:00