llama-stack-mirror/llama_stack/providers/inline/inference
Last updated: 2024-11-20 22:03:44 -08:00
meta_reference    Since we are pushing for HF repos, we should accept them in inference configs (#497)    2024-11-20 16:14:37 -08:00
vllm              Update more distribution docs to be simpler and partially codegen'ed                     2024-11-20 22:03:44 -08:00
__init__.py       precommit                                                                               2024-11-08 17:58:58 -08:00
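The `meta_reference` commit above references PR #497, which lets inference configs accept Hugging Face repo IDs (e.g. `meta-llama/...`) rather than only bare model descriptors. A hypothetical sketch of what such a provider config fragment might look like; the field names and values here are assumptions for illustration, not verified against the actual llama-stack schema:

```yaml
# Hypothetical llama-stack run-config fragment (field names assumed).
providers:
  inference:
    - provider_id: meta-reference-inference
      provider_type: inline::meta-reference
      config:
        # Per #497, an HF repo id may be accepted here in place of a
        # bare model name.
        model: meta-llama/Llama-3.1-8B-Instruct
```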