llama-stack/llama_stack/apis/inference

Latest commit 06db9213b1 by Russell Bryant, 2024-10-03 11:18:57 -07:00
inference: Add model option to client (#170)

    I was running this client for testing purposes, and being able to
    specify which model to use is a convenient addition. This change
    makes that possible.
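The contents of client.py are not shown in this listing, so the following is only a hypothetical sketch of what "Add model option to client" could look like: a test client gaining an optional `--model` flag via argparse. The argument names, defaults, and help strings are assumptions, not the actual code from PR #170.

```python
import argparse


def parse_args(argv=None):
    """Parse test-client arguments. Hypothetical sketch -- the real flag
    names and defaults in client.py are not shown in this listing."""
    parser = argparse.ArgumentParser(description="inference API test client")
    parser.add_argument("host", help="server host to connect to")
    parser.add_argument("port", type=int, help="server port")
    parser.add_argument(
        "--model",
        default=None,
        help="model to use for inference requests (server default if omitted)",
    )
    return parser.parse_args(argv)


# Example invocation: python client.py localhost 5000 --model <model-name>
```

Keeping `--model` optional with a `None` default preserves the client's previous behavior when the flag is omitted, which is the usual design for adding a convenience option to an existing tool.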
__init__.py      API Updates (#73)                             2024-09-17 19:51:35 -07:00
client.py        inference: Add model option to client (#170)  2024-10-03 11:18:57 -07:00
event_logger.py  pre-commit lint                               2024-09-28 16:04:41 -07:00
inference.py     Use inference APIs for running llama guard    2024-09-24 17:02:57 -07:00