llama-stack-mirror/llama_stack/providers/adapters/inference/ollama
Last commit: 2024-10-21 22:26:33 -07:00
__init__.py   fix prompt guard (#177)              2024-10-03 11:07:53 -07:00
ollama.py     add completion() for ollama (#280)   2024-10-21 22:26:33 -07:00
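For context on what a completion path against an Ollama server involves, the sketch below is a minimal, hypothetical illustration (not the adapter code from ollama.py in this directory). It assumes the ollama Python client is installed, a local Ollama server is listening on the default port, and that a model tagged "llama3.2" has been pulled; the model name and prompt are illustrative only.

```python
# Hypothetical sketch of a plain (non-chat) completion call to Ollama.
# Assumes: `pip install ollama`, an Ollama server at localhost:11434,
# and a pulled model named "llama3.2" (assumption, not from the repo).
from ollama import Client

client = Client(host="http://localhost:11434")  # default Ollama endpoint

# Send a raw prompt and read back the generated text.
result = client.generate(model="llama3.2", prompt="Write a haiku about rivers.")
print(result["response"])  # generated text field of the /api/generate reply
```

An adapter layer such as the one in this directory would typically sit between a higher-level inference API and a client call like the one above, translating request and response formats in both directions; the exact mapping is defined in ollama.py itself.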