llama-stack-mirror/tests/unit/providers/utils
Charlie Doern · 7574f147b6 · feat: implement OpenAI chat completion for meta_reference provider
- Add chat_completion() method to LlamaGenerator supporting OpenAI request format
- Implement openai_chat_completion() in MetaReferenceInferenceImpl
- Fix ModelRunner task dispatch to handle chat_completion tasks (dispatch routing is sketched further below)
- Add convert_openai_message_to_raw_message() utility for message conversion (see the sketch after this list)
- Add unit tests for message conversion and model-parallel dispatch
- Remove unused CompletionRequestWithRawContent references
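The message-conversion utility named above is the piece most easily shown in isolation. Below is a minimal, hypothetical sketch of what such a conversion can look like: RawMessage and RawTextItem are stand-in dataclasses defined here for illustration, not the actual llama-stack datatypes, and the real convert_openai_message_to_raw_message() signature may differ.

```python
# Hypothetical sketch of an OpenAI-message -> raw-message conversion.
# RawMessage / RawTextItem are stand-ins, NOT the llama-stack datatypes.
from dataclasses import dataclass


@dataclass
class RawTextItem:
    text: str


@dataclass
class RawMessage:
    role: str
    content: list[RawTextItem]


def convert_openai_message_to_raw_message(message: dict) -> RawMessage:
    """Convert an OpenAI-format chat message into a raw message.

    Handles both plain-string content and the list-of-parts form
    ({"type": "text", "text": ...}) used by the OpenAI chat API.
    """
    content = message.get("content") or ""
    if isinstance(content, str):
        items = [RawTextItem(text=content)]
    else:
        # List of content parts: keep only the text parts in this sketch.
        items = [
            RawTextItem(text=part["text"])
            for part in content
            if part.get("type") == "text"
        ]
    return RawMessage(role=message["role"], content=items)


# Example:
# convert_openai_message_to_raw_message({"role": "user", "content": "Hello"})
# -> RawMessage(role="user", content=[RawTextItem(text="Hello")])
```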

Signed-off-by: Charlie Doern <cdoern@redhat.com>
2025-11-08 14:33:19 -05:00
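For the task-dispatch fix, the following is an illustrative sketch of how a model-parallel worker might route chat_completion tasks to the matching generator method. TaskType, run_task, and the generator methods are assumptions made for this sketch, not the actual ModelRunner interface in llama-stack.

```python
# Hypothetical sketch of task dispatch in a model-parallel runner; names are
# illustrative, not the real llama-stack ModelRunner API.
from enum import Enum
from typing import Any, Iterator


class TaskType(Enum):
    COMPLETION = "completion"
    CHAT_COMPLETION = "chat_completion"


def run_task(generator: Any, task_type: TaskType, request: Any) -> Iterator[Any]:
    # Route each task to the generator method that handles it; the commit's
    # fix is the moral equivalent of adding the chat_completion branch.
    if task_type is TaskType.CHAT_COMPLETION:
        yield from generator.chat_completion(request)
    elif task_type is TaskType.COMPLETION:
        yield from generator.completion(request)
    else:
        raise ValueError(f"Unsupported task type: {task_type}")
```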
inference | feat: implement OpenAI chat completion for meta_reference provider | 2025-11-08 14:33:19 -05:00
memory | revert: "chore(cleanup)!: remove tool_runtime.rag_tool" (#3877) | 2025-10-21 11:22:06 -07:00
__init__.py | fix: add check for interleavedContent (#1973) | 2025-05-06 09:55:07 -07:00
test_form_data.py | fix(expires_after): make sure multipart/form-data is properly parsed (#3612) | 2025-09-30 16:14:03 -04:00
test_model_registry.py | feat: add static embedding metadata to dynamic model listings for providers using OpenAIMixin (#3547) | 2025-09-25 17:17:00 -04:00
test_openai_compat_conversion.py | feat(tools)!: substantial clean up of "Tool" related datatypes (#3627) | 2025-10-02 15:12:03 -07:00
test_scheduler.py | chore: default to pytest asyncio-mode=auto (#2730) | 2025-07-11 13:00:24 -07:00