Mirror of https://github.com/meta-llama/llama-stack.git, synced 2025-12-03 09:53:45 +00:00
- Fixed a broken import in the `openai_responses.py` validation code: changed `llama_stack.apis.agents.openai_responses` → `llama_stack_api.openai_responses`.
- Removed an unnecessary skip from `test_mcp_tools_in_inference`: the test already has a proper client type check (`LlamaStackAsLibraryClient`), and the library client does have a `register_tool_group()` method.
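
As a minimal sketch of the first fix, only the module paths below are taken from the commit message; everything else about the surrounding validation code is assumed:

```python
# Old import path, broken after the package reorganization:
# import llama_stack.apis.agents.openai_responses as openai_responses

# New import path, per the commit message:
import llama_stack_api.openai_responses as openai_responses
```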
| Name |
|---|
| cli |
| core |
| distributions |
| models |
| providers |
| testing |
| __init__.py |
| env.py |
| log.py |