llama-stack-mirror/tests
Ashwin Bharambe 4fb583b407
fix: check that llama stack client plain can be used as a subst for OpenAI client (#2032)
With https://github.com/meta-llama/llama-stack-client-python/pull/226, llama-stack-client can now be used as a drop-in (duck-typed) substitute for the OpenAI client, so downstream library code does not need to change.

<img width="1399" alt="image" src="https://github.com/user-attachments/assets/abab6bfd-e6ff-4a7d-a965-fd93e3c105d7" />
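A minimal sketch of what the duck-typed substitution enables: a downstream helper written against the OpenAI-style `chat.completions.create` surface, called with a `LlamaStackClient` instead of an OpenAI client. The model name and base URL below are illustrative placeholders, not values taken from this PR.

```python
# Sketch: downstream code written for the OpenAI client, exercised with
# LlamaStackClient instead. Assumes the OpenAI-compatible chat completions
# surface described in llama-stack-client-python PR #226.
from llama_stack_client import LlamaStackClient


def summarize(client, text: str) -> str:
    # Works with any client exposing OpenAI-style chat.completions.create.
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.3-70B-Instruct",  # placeholder model id
        messages=[{"role": "user", "content": f"Summarize: {text}"}],
    )
    return response.choices[0].message.content


# Placeholder base_url; point at your running Llama Stack server.
client = LlamaStackClient(base_url="http://localhost:8321")
print(summarize(client, "Llama Stack provides a unified API surface."))
```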
2025-04-25 12:23:33 -07:00
| Path | Last commit | Date |
|---|---|---|
| client-sdk/post_training | feat: Add nemo customizer (#1448) | 2025-03-25 11:01:10 -07:00 |
| external-provider/llama-stack-provider-ollama | feat: allow building distro with external providers (#1967) | 2025-04-18 17:18:28 +02:00 |
| integration | fix: check that llama stack client plain can be used as a subst for OpenAI client (#2032) | 2025-04-25 12:23:33 -07:00 |
| unit | feat: NVIDIA allow non-llama model registration (#1859) | 2025-04-24 17:13:33 -07:00 |
| verifications | fix: Return HTTP 400 for OpenAI API validation errors (#2002) | 2025-04-23 17:48:32 +02:00 |
| __init__.py | refactor(test): introduce --stack-config and simplify options (#1404) | 2025-03-05 17:02:02 -08:00 |