Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-28 04:12:01 +00:00)
Implement a remote ramalama provider using AsyncOpenAI as the client, since ramalama does not ship its own async library. Ramalama is similar to ollama in that it is a lightweight local inference server; however, it runs in containerized mode by default. RAMALAMA_URL defaults to http://localhost:8080.

Signed-off-by: Charlie Doern <cdoern@redhat.com>
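Since ramalama exposes an OpenAI-compatible HTTP API, the provider can drive it with the stock `AsyncOpenAI` client. Below is a minimal sketch of that pattern, assuming the server serves the API under a `/v1` prefix; the `api_key` placeholder and the model tag are illustrative, not part of the actual provider.

```python
import asyncio
import os

from openai import AsyncOpenAI

# ramalama speaks the OpenAI wire protocol, so AsyncOpenAI works as the
# async client. RAMALAMA_URL defaults to http://localhost:8080 per the
# commit message; the "/v1" suffix and model name below are assumptions.
client = AsyncOpenAI(
    base_url=os.environ.get("RAMALAMA_URL", "http://localhost:8080") + "/v1",
    api_key="ramalama",  # placeholder; a local server typically ignores it
)


async def main() -> None:
    # Hypothetical model tag, purely for illustration.
    response = await client.chat.completions.create(
        model="llama3.2",
        messages=[{"role": "user", "content": "Hello from llama-stack!"}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```

This mirrors how the ollama-style providers work: no bespoke SDK is needed as long as the server's endpoints are OpenAI-compatible.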
| Name |
|---|
| apis |
| cli |
| distribution |
| models |
| providers |
| strong_typing |
| templates |
| __init__.py |
| env.py |
| log.py |
| schema_utils.py |