llama-stack/llama_stack/providers/inline/inference/meta_reference
Latest commit: 2976b5d992 by ehhuang, 2025-04-17 11:16:04 -07:00
fix: OAI compat endpoint for meta reference inference provider (#1962)

Test plan:
python tests/verifications/generate_report.py --providers fireworks,together,llama_meta_ref,openai

Co-authored-by: Eric Huang <erichuang@fb.com>
__init__.py       | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00
common.py         | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00
config.py         | feat: add batch inference API to llama stack inference (#1945)              | 2025-04-12 11:41:12 -07:00
generators.py     | feat: add batch inference API to llama stack inference (#1945)              | 2025-04-12 11:41:12 -07:00
inference.py      | fix: OAI compat endpoint for meta reference inference provider (#1962)      | 2025-04-17 11:16:04 -07:00
model_parallel.py | feat: add batch inference API to llama stack inference (#1945)              | 2025-04-12 11:41:12 -07:00
parallel_utils.py | feat: add batch inference API to llama stack inference (#1945)              | 2025-04-12 11:41:12 -07:00