llama-stack-mirror/llama_stack/providers/inline/inference
(last updated 2024-11-21 08:23:32 -08:00)
meta_reference    precommit                                                                               2024-11-21 08:23:32 -08:00
vllm              Since we are pushing for HF repos, we should accept them in inference configs (#497)    2024-11-20 16:14:37 -08:00
__init__.py       precommit                                                                               2024-11-08 17:58:58 -08:00