Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-30 19:23:52 +00:00)
When called via the OpenAI-compatible API, ollama responds with briefer answers than when called via its native API. This change adjusts the prompting for the OpenAI-compatible calls to ask the model to be more verbose.
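A minimal sketch of the kind of prompt adjustment described above. The function and constant names here are illustrative assumptions, not the actual llama-stack implementation: it prepends (or extends) a system message so that requests routed through ollama's OpenAI-compatible endpoint ask for fuller responses.

```python
# Hypothetical sketch -- names are illustrative, not llama-stack's real API.
VERBOSITY_HINT = (
    "Respond with detailed, thorough answers rather than brief summaries."
)


def adjust_messages_for_ollama(messages):
    """Prepend or extend a system message so ollama's OpenAI-compatible
    endpoint produces responses closer in length to its native API."""
    messages = list(messages)  # avoid mutating the caller's list
    if messages and messages[0].get("role") == "system":
        # Merge the hint into the existing system prompt.
        messages[0] = {
            "role": "system",
            "content": messages[0]["content"] + "\n" + VERBOSITY_HINT,
        }
    else:
        # No system prompt yet: insert one carrying only the hint.
        messages.insert(0, {"role": "system", "content": VERBOSITY_HINT})
    return messages
```

Merging into an existing system message (rather than always inserting a second one) keeps the request well-formed for backends that expect at most one system prompt.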
Directory listing:
- client-sdk/post_training
- external-provider/llama-stack-provider-ollama
- integration
- unit
- verifications
- __init__.py