llama-stack-mirror/llama_stack/templates/watsonx
Latest commit 966b482b2e: "feat: allow the interface on which the server will listen to be configured" (Gordon Sim <gsim@redhat.com>, 2025-05-16 20:04:57 +01:00)
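That commit makes the interface on which the server listens configurable through the template's run.yaml. A minimal sketch of what such a server section could look like, assuming a `host` field alongside the existing port; the field name and values are illustrative assumptions, not taken from this listing:

```yaml
# Hypothetical run.yaml fragment; the host field name and defaults are assumptions.
server:
  host: 0.0.0.0   # interface to bind to (e.g. 127.0.0.1 to listen on loopback only)
  port: 8321      # port the llama-stack server listens on
```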
| File | Last commit | Date |
|------|-------------|------|
| `__init__.py` | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00 |
| `build.yaml` | fix: Adding Embedding model to watsonx inference (#2118) | 2025-05-12 10:58:22 -07:00 |
| `doc_template.md` | feat: Add watsonx inference adapter (#1895) | 2025-04-25 11:29:21 -07:00 |
| `run.yaml` | feat: allow the interface on which the server will listen to be configured | 2025-05-16 20:04:57 +01:00 |
| `watsonx.py` | fix: Adding Embedding model to watsonx inference (#2118) | 2025-05-12 10:58:22 -07:00 |
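In llama-stack template directories, build.yaml typically describes how the distribution is built, run.yaml is the generated runtime configuration, and watsonx.py is the generator that produces them. As a hedged sketch of the watsonx inference provider block that run.yaml would carry, where the provider type string, environment-variable names, and the embedding model entry are assumptions for illustration rather than facts from this listing:

```yaml
# Hypothetical run.yaml fragment; keys, env-var names, and model ids are assumptions.
providers:
  inference:
    - provider_id: watsonx
      provider_type: remote::watsonx
      config:
        url: ${env.WATSONX_BASE_URL:https://us-south.ml.cloud.ibm.com}
        api_key: ${env.WATSONX_API_KEY:}
        project_id: ${env.WATSONX_PROJECT_ID:}
models:
  - model_id: some-embedding-model   # placeholder id; #2118 added an embedding model entry of this kind
    provider_id: watsonx
    model_type: embedding
    metadata:
      embedding_dimension: 384       # illustrative value only
```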