llama-stack-mirror/llama_stack/providers/utils/inference
skamenan7 6634b21a76 Merge upstream/main and resolve conflicts
Resolved merge conflicts in:
- Documentation files: updated vector IO provider docs to include both kvstore fields and embedding model configuration
- Config files: merged kvstore requirements from upstream with embedding model fields
- Dependencies: updated to the latest client versions while preserving the llama-models dependency
- Regenerated lockfiles to ensure consistency

All embedding model configuration features preserved while incorporating upstream changes.
2025-07-16 19:57:02 -04:00
__init__.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
embedding_mixin.py feat: New OpenAI compat embeddings API (#2314) (see the embeddings sketch after this listing) 2025-05-31 22:11:47 -07:00
inference_store.py feat: support auth attributes in inference/responses stores (#2389) 2025-06-20 10:24:45 -07:00
litellm_openai_mixin.py feat: create dynamic model registration for OpenAI and Llama compat remote inference providers (#2745) 2025-07-16 12:49:38 -04:00
model_registry.py feat: add infrastructure to allow inference model discovery (#2710) (see the discovery sketch after this listing) 2025-07-14 11:38:53 -07:00
openai_compat.py fix: Resolve Llama4 tool calling 500 errors (Issue #2584) 2025-07-15 11:47:05 -04:00
prompt_adapter.py fix: address reviewer feedback - improve conditional imports and remove provider alias logic (improve the conditional-import approach with better documentation; remove provider-specific alias logic from sku_list.py; conditional imports are necessary because llama4 requires torch, as sketched after this listing; addresses @ashwinb and @raghotham feedback while maintaining compatibility) 2025-07-15 13:21:33 -04:00
stream_utils.py feat: drop python 3.10 support (#2469) 2025-06-19 12:07:14 +05:30
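
The embedding_mixin.py row above adds an OpenAI-compatible embeddings API, so a standard OpenAI client should be able to call it. A minimal sketch, assuming a local Llama Stack server; the base URL, API key, and model id below are illustrative assumptions, not values confirmed by this listing:

    # Call an OpenAI-compatible embeddings endpoint with the official openai client.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8321/v1/openai/v1",  # assumed local Llama Stack endpoint
        api_key="none",  # placeholder; local servers often ignore the key
    )

    resp = client.embeddings.create(
        model="all-MiniLM-L6-v2",  # hypothetical embedding model id
        input=["first sentence to embed", "second sentence to embed"],
    )
    # One embedding vector per input string.
    print(len(resp.data), len(resp.data[0].embedding))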
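The model_registry.py and litellm_openai_mixin.py rows concern discovering and registering the models a remote OpenAI- or Llama-compatible provider offers. A hedged sketch of the underlying idea, querying a provider's OpenAI-compatible /v1/models endpoint; the function name and registry shape are illustrative, not the actual provider API:

    # List the model ids an OpenAI-compatible endpoint advertises, so they can
    # be registered dynamically instead of hardcoded.
    import httpx

    def discover_models(base_url: str, api_key: str | None = None) -> list[str]:
        """Return model ids from an OpenAI-compatible /v1/models endpoint."""
        headers = {"Authorization": f"Bearer {api_key}"} if api_key else {}
        resp = httpx.get(f"{base_url}/v1/models", headers=headers, timeout=10.0)
        resp.raise_for_status()
        return [entry["id"] for entry in resp.json()["data"]]

    # Hypothetical usage: map each discovered id to the provider that serves it.
    # registry = {mid: "remote::openai" for mid in discover_models("https://api.openai.com")}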
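The prompt_adapter.py commit explains that conditional imports are needed because the llama4 path requires torch. A sketch of that pattern, with assumed module and class names (the real import paths are not shown in this listing):

    # Import the torch-dependent llama4 chat format lazily, so environments
    # without torch can still load this module; fail with a clear error only
    # when the llama4 path is actually exercised.
    try:
        from llama_models.llama4.chat_format import ChatFormat as Llama4ChatFormat  # requires torch
    except ImportError:  # torch (or the llama4 code) is not installed
        Llama4ChatFormat = None

    def get_llama4_chat_format():
        if Llama4ChatFormat is None:
            raise RuntimeError(
                "llama4 prompt formatting requires torch; install it to use this model family"
            )
        return Llama4ChatFormat()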