llama-stack-mirror/llama_stack
Sajikumar JS 6cf6791de1
fix: updated watsonx inference chat apis with new repo changes (#2033)
# What does this PR do?
Recent changes in the repo require the watsonx inference provider to implement some additional functions; this PR adds them. It also adds one extra parameter so that additional arguments can be passed through to watsonx.ai (see the sketch below).
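As a rough sketch of that second change (illustrative only; the config field and helper names below are hypothetical, not the adapter's actual API), an extra-params dict on the provider config could be merged into each request before it is sent to watsonx.ai:

```python
# Hypothetical sketch, not the real llama-stack watsonx adapter code:
# shows how config-level extra arguments could be merged with per-request
# sampling options before forwarding a request to watsonx.ai.
from dataclasses import dataclass, field
from typing import Any


@dataclass
class WatsonXLikeConfig:
    # All field names are illustrative placeholders.
    url: str
    project_id: str
    extra_params: dict[str, Any] = field(default_factory=dict)


def build_request_params(config: WatsonXLikeConfig, sampling: dict[str, Any]) -> dict[str, Any]:
    """Merge config-level extras with per-request sampling options.

    Request-level values win over config-level defaults.
    """
    params: dict[str, Any] = dict(config.extra_params)
    params.update(sampling)
    return params


if __name__ == "__main__":
    cfg = WatsonXLikeConfig(
        url="https://us-south.ml.cloud.ibm.com",
        project_id="my-project-id",
        extra_params={"decoding_method": "greedy", "max_new_tokens": 512},
    )
    print(build_request_params(cfg, {"temperature": 0.7}))
```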


## Test Plan


---------

Co-authored-by: Sajikumar JS <sajikumar.js@ibm.com>
2025-04-26 10:17:52 -07:00
| Path | Last commit | Date |
|------|-------------|------|
| `apis` | feat(agents): add agent naming functionality (#1922) | 2025-04-17 07:02:47 -07:00 |
| `cli` | feat(cli): add interactive tab completion for image type selection (#2027) | 2025-04-25 16:57:42 +02:00 |
| `distribution` | fix: add endpoint route debugs | 2025-04-25 10:40:12 -07:00 |
| `models` | docs: update prompt_format.md for llama4 (#2035) | 2025-04-25 15:52:15 -07:00 |
| `providers` | fix: updated watsonx inference chat apis with new repo changes (#2033) | 2025-04-26 10:17:52 -07:00 |
| `strong_typing` | chore: more mypy checks (ollama, vllm, ...) (#1777) | 2025-04-01 17:12:39 +02:00 |
| `templates` | docs: Fix missing --gpu all flag in Docker run commands (#2026) | 2025-04-25 12:17:31 -07:00 |
| `__init__.py` | export LibraryClient | 2024-12-13 12:08:00 -08:00 |
| `env.py` | refactor(test): move tools, evals, datasetio, scoring and post training tests (#1401) | 2025-03-04 14:53:47 -08:00 |
| `log.py` | chore: Remove style tags from log formatter (#1808) | 2025-03-27 10:18:21 -04:00 |
| `schema_utils.py` | fix: dont check protocol compliance for experimental methods | 2025-04-12 16:26:32 -07:00 |