llama-stack-mirror/llama_stack/providers
Matthew Farrellee 639bc912d5 chore: create OpenAIMixin for inference providers with an OpenAI-compat API that need to implement openai_* methods
usage demonstrated by refactoring OpenAIInferenceAdapter, NVIDIAInferenceAdapter (adds embedding support), and LlamaCompatInferenceAdapter
2025-07-21 07:27:27 -04:00
inline        feat: enable auth for LocalFS Files Provider (#2773)                                                                 2025-07-18 19:11:01 -07:00
registry      chore: kill inline::vllm (#2824)                                                                                     2025-07-18 15:52:18 -07:00
remote        chore: create OpenAIMixin for inference providers with an OpenAI-compat API that need to implement openai_* methods  2025-07-21 07:27:27 -04:00
utils         chore: create OpenAIMixin for inference providers with an OpenAI-compat API that need to implement openai_* methods  2025-07-21 07:27:27 -04:00
__init__.py   API Updates (#73)                                                                                                    2024-09-17 19:51:35 -07:00
datatypes.py  docs: auto generated documentation for providers (#2543)                                                             2025-06-30 15:13:20 +02:00
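The featured commit (639bc912d5) pulls the shared openai_* endpoint methods into a reusable OpenAIMixin so adapters such as OpenAIInferenceAdapter, NVIDIAInferenceAdapter, and LlamaCompatInferenceAdapter only have to supply their connection details. A minimal sketch of that pattern is shown below; the hook names get_api_key/get_base_url and the ExampleCompatAdapter class are illustrative assumptions, not the actual llama_stack API.

```python
# Sketch of a mixin providing openai_* methods for providers that expose an
# OpenAI-compatible API. Hook names below are hypothetical, chosen only to
# illustrate the pattern named in commit 639bc912d5.
from abc import ABC, abstractmethod

from openai import AsyncOpenAI


class OpenAIMixin(ABC):
    """Shared openai_* implementations for OpenAI-compatible providers."""

    @abstractmethod
    def get_api_key(self) -> str:
        """Credential for the downstream endpoint (hypothetical hook)."""

    @abstractmethod
    def get_base_url(self) -> str:
        """Base URL of the OpenAI-compatible endpoint (hypothetical hook)."""

    @property
    def client(self) -> AsyncOpenAI:
        # Build the OpenAI client from the adapter-supplied settings.
        return AsyncOpenAI(base_url=self.get_base_url(), api_key=self.get_api_key())

    async def openai_chat_completion(self, model: str, messages: list, **kwargs):
        # Delegate straight to the OpenAI-compatible endpoint; adapters
        # that mix this in inherit the method unchanged.
        return await self.client.chat.completions.create(
            model=model, messages=messages, **kwargs
        )


class ExampleCompatAdapter(OpenAIMixin):
    """Toy adapter: supplies only connection details, inherits openai_* methods."""

    def get_api_key(self) -> str:
        return "sk-placeholder"  # placeholder credential

    def get_base_url(self) -> str:
        return "https://example.invalid/v1"  # placeholder endpoint
```

With this layout, each concrete adapter under remote/ stays small: it declares where its OpenAI-compatible endpoint lives and how to authenticate, while the request plumbing lives once in the mixin.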