llama-stack-mirror/llama_stack
Ben Browning 497c97487f Pass Ollama config into inference adapter vs config attributes
This simplifies construction of the Ollama inference adapter: instead of
passing every attribute of the config as a separate parameter, we now pass
the config object itself.

Signed-off-by: Ben Browning <bbrownin@redhat.com>
2025-06-25 09:04:45 -04:00
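The refactor described above can be sketched as follows. This is an illustrative example only: the class and field names (`OllamaImplConfig`, `url`, `raise_on_connect_error`) are assumptions, not the actual definitions in `llama_stack/providers`.

```python
from dataclasses import dataclass


# Hypothetical config shape; the real Ollama config class in
# llama_stack may have different fields.
@dataclass
class OllamaImplConfig:
    url: str = "http://localhost:11434"
    raise_on_connect_error: bool = True


# Before: every config attribute is a separate constructor parameter,
# so adding a config field means changing this signature too.
class OllamaInferenceAdapterOld:
    def __init__(self, url: str, raise_on_connect_error: bool) -> None:
        self.url = url
        self.raise_on_connect_error = raise_on_connect_error


# After: the whole config object is passed in; new config fields no
# longer ripple through the adapter's constructor.
class OllamaInferenceAdapter:
    def __init__(self, config: OllamaImplConfig) -> None:
        self.config = config


adapter = OllamaInferenceAdapter(OllamaImplConfig())
print(adapter.config.url)
```

Passing the config object keeps the adapter's signature stable and avoids the parameter-list churn the commit message describes.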
Name             Last commit                                                                              Date
apis             feat: Add search_mode support to OpenAI vector store API (#2500)                         2025-06-24 20:38:47 -04:00
cli              fix: stack build (#2485)                                                                 2025-06-20 15:15:43 -07:00
distribution     fix: Ollama should be optional in starter distro                                         2025-06-25 09:04:45 -04:00
models           ci: add python package build test (#2457)                                                2025-06-19 18:57:32 +05:30
providers        Pass Ollama config into inference adapter vs config attributes                           2025-06-25 09:04:45 -04:00
strong_typing    chore: enable pyupgrade fixes (#1806)                                                    2025-05-01 14:23:50 -07:00
templates        fix: Ollama should be optional in starter distro                                         2025-06-25 09:04:45 -04:00
ui               fix(ui): ensure initial data fetch only happens once (#2486)                             2025-06-24 12:22:55 +02:00
__init__.py      export LibraryClient                                                                     2024-12-13 12:08:00 -08:00
env.py           refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401)    2025-03-04 14:53:47 -08:00
log.py           ci: fix external provider test (#2438)                                                   2025-06-12 16:14:32 +02:00
schema_utils.py  chore: enable pyupgrade fixes (#1806)                                                    2025-05-01 14:23:50 -07:00