llama-stack-mirror/llama_stack/providers
Commit 1bb1d9b2ba by Sajikumar JS: feat: Add watsonx inference adapter (#1895)
# What does this PR do?
Adds IBM watsonx.ai as an inference provider ([#1741](https://github.com/meta-llama/llama-stack/issues/1741)).


---------

Co-authored-by: Sajikumar JS <sajikumar.js@ibm.com>
2025-04-25 11:29:21 -07:00
| Name | Last commit | Date |
| --- | --- | --- |
| inline | fix: meta ref inference (#2022) | 2025-04-24 13:03:35 -07:00 |
| registry | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00 |
| remote | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00 |
| tests | refactor: move all llama code to models/llama out of meta reference (#1887) | 2025-04-07 15:03:58 -07:00 |
| utils | feat: new system prompt for llama4 (#2031) | 2025-04-25 11:29:08 -07:00 |
| __init__.py | API Updates (#73) | 2024-09-17 19:51:35 -07:00 |
| datatypes.py | feat: add health to all providers through providers endpoint (#1418) | 2025-04-14 11:59:36 +02:00 |