llama-stack-mirror/llama_stack/providers/utils/inference
2025-07-14 14:42:54 -04:00
__init__.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
embedding_mixin.py feat: New OpenAI compat embeddings API (#2314) 2025-05-31 22:11:47 -07:00
inference_store.py feat: support auth attributes in inference/responses stores (#2389) 2025-06-20 10:24:45 -07:00
litellm_openai_mixin.py chore: standardize unsupported model error #2517 (#2518) 2025-06-27 14:26:58 -04:00
model_registry.py feat: add infrastructure to allow inference model discovery (#2710) 2025-07-14 11:38:53 -07:00
openai_compat.py fix: Resolve Llama4 tool calling 500 errors (Issue #2584) 2025-07-14 14:42:54 -04:00
prompt_adapter.py fix: address reviewer feedback - improve conditional imports and remove provider alias logic 2025-07-14 14:42:54 -04:00
stream_utils.py feat: drop python 3.10 support (#2469) 2025-06-19 12:07:14 +05:30