Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-21 21:08:40 +00:00)
Allow users to specify the inference model through the INFERENCE_MODEL environment variable instead of hardcoding it, falling back to ollama/llama3.2:3b if it is not set.

Signed-off-by: Costa Shulyupin <costa.shul@redhat.com>
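The change described above can be sketched as follows. This is a minimal illustration of the env-var-with-fallback pattern, not the exact code from the commit; the variable name `INFERENCE_MODEL` and the default `ollama/llama3.2:3b` come from the commit message, while the surrounding structure is assumed.

```python
import os

# Read the model id from the INFERENCE_MODEL environment variable,
# falling back to the default if it is unset (sketch of the commit's intent).
DEFAULT_MODEL = "ollama/llama3.2:3b"
inference_model = os.environ.get("INFERENCE_MODEL", DEFAULT_MODEL)

print(inference_model)
```

Running with `INFERENCE_MODEL=ollama/llama3.1:8b python demo_script.py` would select that model; running without the variable uses the default.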
| File |
|---|
| demo_script.py |
| detailed_tutorial.mdx |
| libraries.mdx |
| quickstart.mdx |