llama-stack-mirror/llama_toolchain/inference
Last updated: 2024-08-03 20:58:00 -07:00
Name               Last commit message                                                      Last commit date
api                Distribution server now functioning                                      2024-08-02 13:37:40 -07:00
quantization       Distribution server now functioning                                      2024-08-02 13:37:40 -07:00
__init__.py        Initial commit                                                           2024-07-23 08:32:33 -07:00
adapters.py        Distribution server now functioning                                      2024-08-02 13:37:40 -07:00
api_instance.py    Distribution server now functioning                                      2024-08-02 13:37:40 -07:00
client.py          implement full-passthrough in the server                                 2024-08-03 14:15:20 -07:00
event_logger.py    Added Ollama as an inference impl (#20)                                  2024-07-31 22:08:37 -07:00
generation.py      Distribution server now functioning                                      2024-08-02 13:37:40 -07:00
inference.py       cleanup, moving stuff to common, nuke utils                              2024-08-03 20:58:00 -07:00
model_parallel.py  Distribution server now functioning                                      2024-08-02 13:37:40 -07:00
ollama.py          getting closer to a distro definition, distro install + configure works  2024-08-01 23:12:43 -07:00
parallel_utils.py  Initial commit                                                           2024-07-23 08:32:33 -07:00