Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-23 02:22:25 +00:00.
Resolved merge conflicts in:

- Documentation files: updated vector IO provider docs to include both kvstore fields and embedding model configuration
- Config files: merged kvstore requirements from upstream with the embedding model fields
- Dependencies: updated to the latest client versions while preserving the llama-models dependency
- Lockfiles: regenerated to ensure consistency

All embedding model configuration features were preserved while incorporating upstream changes.
Files in this directory:

- __init__.py
- embedding_mixin.py
- inference_store.py
- litellm_openai_mixin.py
- model_registry.py
- openai_compat.py
- prompt_adapter.py
- stream_utils.py