llama-stack-mirror/llama_stack/providers
ef684ff178 Fix openai_completion tests for ollama (Ben Browning, 2025-04-09 15:47:02 -04:00)

When called via the OpenAI API, ollama returns briefer responses than
when called via its native API. This adjusts the prompting for its
OpenAI calls to ask it to be more verbose.
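A minimal sketch of the kind of prompt adjustment this commit describes, not the actual test change: it assumes ollama's OpenAI-compatible endpoint at its default port, and the model tag is a hypothetical placeholder.

```python
# Hypothetical illustration of the fix described above: when exercising
# ollama through its OpenAI-compatible endpoint, nudge the model toward
# longer output so assertions on the completion text still pass.
from openai import OpenAI

# ollama serves an OpenAI-compatible API under /v1; the api_key is unused.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

base_prompt = "Complete the sentence: Roses are red, violets are "
# Over the OpenAI route ollama tends to answer tersely, so the test
# prompt asks for a full sentence to get output worth asserting on.
verbose_prompt = base_prompt + "... Respond with a complete sentence."

resp = client.completions.create(
    model="llama3.2:3b",  # hypothetical model tag
    prompt=verbose_prompt,
    max_tokens=50,
)
print(resp.choices[0].text)
```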
Name          Last commit                                                                    Date
inline/       Mark inline vllm as OpenAI unsupported inference                               2025-04-09 15:47:02 -04:00
registry/     test: verification on provider's OAI endpoints (#1893)                         2025-04-07 23:06:28 -07:00
remote/       Fix openai_completion tests for ollama                                         2025-04-09 15:47:02 -04:00
tests/        refactor: move all llama code to models/llama out of meta reference (#1887)    2025-04-07 15:03:58 -07:00
utils/        OpenAI completion prompt can also include tokens (see sketch after listing)    2025-04-09 15:47:02 -04:00
__init__.py   API Updates (#73)                                                              2024-09-17 19:51:35 -07:00
datatypes.py  chore: more mypy checks (ollama, vllm, ...) (#1777)                            2025-04-01 17:12:39 +02:00
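The `utils/` entry above refers to the OpenAI completions API's `prompt` parameter, which per the OpenAI spec accepts pre-tokenized input (a list of token IDs, or a list of such lists) as well as plain strings. A minimal sketch of the two forms, using the OpenAI client directly; the model name is a real completions model, the token IDs are illustrative placeholders, and an `OPENAI_API_KEY` in the environment is assumed.

```python
# Sketch: the legacy /v1/completions `prompt` parameter takes either a
# text string or a list of token IDs, so an implementation handling it
# must accept both shapes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

text_prompt = "Hello, world"
# Illustrative token IDs, not guaranteed to be the real encoding above.
token_prompt = [9906, 11, 1917]

for prompt in (text_prompt, token_prompt):
    resp = client.completions.create(
        model="gpt-3.5-turbo-instruct",
        prompt=prompt,
        max_tokens=10,
    )
    print(resp.choices[0].text)
```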