## What does this PR do?

We noticed that the passthrough inference provider doesn't work with agents because of a type mismatch between the llama-stack client and server types. This PR manually casts the llama-stack client types to the corresponding llama-stack server types to fix the issue.

Fixes https://github.com/meta-llama/llama-stack/issues/1560

## Test

Run `python -m examples.agents.hello localhost 8321` within llama-stack-apps:

<img width="1073" alt="Screenshot 2025-03-11 at 8 43 44 PM" src="https://github.com/user-attachments/assets/bd1bdd31-606a-420c-a249-95f6184cc0b1" />
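For reference, a minimal sketch of the kind of cast described above, assuming both the client and server types are Pydantic models with matching field names. The helper name `cast_to_server_type` is illustrative and not the identifier used in this PR:

```python
from typing import Type, TypeVar

from pydantic import BaseModel

T = TypeVar("T", bound=BaseModel)


def cast_to_server_type(client_obj: BaseModel, server_type: Type[T]) -> T:
    """Convert a client-side Pydantic object into its server-side counterpart.

    The two types share field names, so dumping the client object to a dict
    and re-validating it against the server model bridges the type mismatch
    the passthrough provider runs into when used with agents.
    """
    return server_type.model_validate(client_obj.model_dump())
```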
Files changed: `__init__.py`, `config.py`, `passthrough.py`