Commit graph

7 commits

Author SHA1 Message Date
Dinesh Yeduguru
948f6ece6e fixes for all providers 2024-11-12 14:30:07 -08:00
Dinesh Yeduguru
71219b4937 ollama 2024-11-12 14:29:22 -08:00
Dinesh Yeduguru
5b2282afd4 ollama and databricks 2024-11-12 14:29:22 -08:00
Dinesh Yeduguru
d69f4f8635 fix model provider validation and inference params 2024-11-12 14:29:22 -08:00
Ashwin Bharambe
c1f7ba3aed
Split safety into (llama-guard, prompt-guard, code-scanner) (#400)
Splits the meta-reference safety implementation into three distinct providers:

- inline::llama-guard
- inline::prompt-guard
- inline::code-scanner

Note that this PR is a backward-incompatible change to the llama stack server. I have added a deprecation_error field to ProviderSpec -- the server reads it and fails immediately at startup, directing the user with a specific message on what action to perform. An automatic "config upgrade" is a bit too much work to implement right now :/

(Note that we will be gradually prefixing all inline providers with inline:: -- I am only doing this for this set of new providers because otherwise existing configuration files will break even more badly.)
2024-11-11 09:29:18 -08:00
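The deprecation_error mechanism described in the PR above could be sketched roughly as follows. This is a hypothetical illustration, not the actual llama-stack implementation: the names ProviderSpec and resolve_provider, and their shapes, are assumptions based only on the commit text.

```python
# Hypothetical sketch: a server honoring a deprecation_error field on a
# provider spec by refusing to start and pointing the user at the fix.
from dataclasses import dataclass
from typing import Optional


@dataclass
class ProviderSpec:
    provider_type: str
    deprecation_error: Optional[str] = None  # set for retired providers


def resolve_provider(spec: ProviderSpec) -> ProviderSpec:
    """Fail fast at startup if the configured provider is deprecated."""
    if spec.deprecation_error:
        raise ValueError(
            f"Provider '{spec.provider_type}' is deprecated: "
            f"{spec.deprecation_error}"
        )
    return spec


# A retired meta-reference safety provider would direct users to the
# three new inline providers named in the commit message.
old = ProviderSpec(
    provider_type="meta-reference",
    deprecation_error="Use inline::llama-guard, inline::prompt-guard, "
                      "or inline::code-scanner instead.",
)
try:
    resolve_provider(old)
except ValueError as e:
    print(e)
```

The point of the design, per the commit message, is that the server surfaces an actionable message rather than attempting an automatic config migration.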
Dinesh Yeduguru
ec644d3418
migrate model to Resource and new registration signature (#410)
* resource oriented object design for models

* add back llama_model field

* working tests

* register signature fix

* address feedback

---------

Co-authored-by: Dinesh Yeduguru <dineshyv@fb.com>
2024-11-08 16:12:57 -08:00
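The resource-oriented design hinted at by the bullets above (a Model built on a Resource base, with a llama_model field retained and a fixed registration signature) might look something like this. All class and field names here beyond those quoted in the commit are illustrative assumptions, not the project's actual API.

```python
# Hypothetical sketch: models as registrable resources. Only
# "llama_model" and the Resource/Model relationship come from the
# commit text; identifier/provider_id and the registry are assumed.
from dataclasses import dataclass, field


@dataclass
class Resource:
    identifier: str   # stable name the stack addresses the object by
    provider_id: str  # which provider backs this resource


@dataclass
class Model(Resource):
    llama_model: str = ""                       # field added back per the PR
    metadata: dict = field(default_factory=dict)


class ModelRegistry:
    """Registers Model resources under their identifiers."""

    def __init__(self) -> None:
        self._models: dict[str, Model] = {}

    def register_model(self, model: Model) -> Model:
        if model.identifier in self._models:
            raise ValueError(f"Model '{model.identifier}' already registered")
        self._models[model.identifier] = model
        return model


registry = ModelRegistry()
registry.register_model(
    Model(identifier="llama3-8b", provider_id="ollama",
          llama_model="Llama3.1-8B-Instruct")
)
```

Treating models as resources gives every registrable object the same identity fields, so one registration path can serve models, shields, and other resource types.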
Ashwin Bharambe
994732e2e0
impls -> inline, adapters -> remote (#381) 2024-11-06 14:54:05 -08:00
Renamed from llama_stack/providers/adapters/inference/ollama/ollama.py