llama-stack-mirror/llama_stack/providers/remote/inference/ollama
Latest commit: ef684ff178 Fix openai_completion tests for ollama
Ben Browning, 2025-04-09 15:47:02 -04:00

When called via the OpenAI API, ollama returns briefer responses than when
called via its native API. This commit adjusts the prompting for OpenAI
calls to ask the model to be more verbose.
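The commit message above describes appending a verbosity instruction to prompts sent through the OpenAI-compatible path. A minimal sketch of that idea follows; the function and constant names here are illustrative assumptions, not the actual llama-stack implementation in ollama.py:

```python
# Hypothetical sketch: nudge an OpenAI-style completion call toward fuller
# answers by appending a verbosity hint to the prompt. Illustrative only.

VERBOSITY_HINT = "Please respond in full sentences with a complete answer."


def adjust_prompt_for_openai(prompt: str) -> str:
    """Append a verbosity hint so OpenAI-path calls avoid terse replies."""
    if VERBOSITY_HINT in prompt:
        # Avoid stacking the hint when the prompt is re-adjusted.
        return prompt
    return f"{prompt}\n\n{VERBOSITY_HINT}"
```

Making the adjustment idempotent matters if the same prompt can pass through the adjustment layer more than once, e.g. on retries.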
__init__.py    Auto-generate distro yamls + docs (#468)                                      2024-11-18 14:57:06 -08:00
config.py      Remove os.getenv() from ollama config                                         2025-02-20 14:30:32 -08:00
models.py      refactor: move all llama code to models/llama out of meta reference (#1887)   2025-04-07 15:03:58 -07:00
ollama.py      Fix openai_completion tests for ollama                                        2025-04-09 15:47:02 -04:00