We would like to run all OpenAI compatibility tests using only the openai client library. This is friendliest for contributors, since they can run tests without needing to update the client SDKs (which is getting easier, but is still a long pole). This is the first step toward that goal: stop using the "library client" for any of the Responses tests. This seems like a reasonable trade-off, since using an embeddable library client for Responses (or any OpenAI-compatible) behavior appears to be uncommon. To do this, we needed to enable the MCP tests (which previously only worked in library-client mode) for server mode.
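For illustration, here is a minimal sketch of what a server-mode Responses call looks like when only the openai client library is used, pointed at a running Llama Stack server. The base URL, API key, and model id below are assumptions for the sketch, not values taken from this change.

```python
# Minimal sketch: exercising the Responses API against a running
# Llama Stack server using only the `openai` client library.
# Assumptions: the server exposes an OpenAI-compatible API under
# http://localhost:8321/v1, and the model id is hypothetical.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8321/v1",  # assumed local server address
    api_key="none",  # local servers often ignore the key, but the client requires one
)

response = client.responses.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # hypothetical model id
    input="Say hello in one short sentence.",
)
print(response.output_text)
```

Because the test talks to the server over HTTP like any other OpenAI-compatible endpoint, a contributor only needs `pip install openai` and a running server, with no client-SDK regeneration in the loop.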
Files in this directory:

- github
- telemetry
- check-init-py.sh
- check-workflows-use-hashes.sh
- cleanup_recordings.py
- diagnose_recordings.py
- distro_codegen.py
- docker.sh
- gen-changelog.py
- gen-ci-docs.py
- generate_ci_matrix.py
- generate_prompt_format.py
- get_setup_env.py
- install.sh
- integration-tests.sh
- normalize_recordings.py
- provider_codegen.py
- run-ui-linter.sh
- unit-tests.sh
- uv-run-with-index.sh