Add support for RamaLama
RamaLama is a fully open source AI model tool that facilitates local management of AI models: https://github.com/containers/ramalama It supports pulling models from HuggingFace, Ollama, and OCI images, as well as via file://, http://, and https:// URIs. It uses the llama.cpp and vllm AI engines for running models, and it defaults to running models inside containers.

Signed-off-by: Daniel J Walsh <dwalsh@redhat.com>
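As a quick sketch of the intended workflow (ramalama pull and ramalama serve are real RamaLama subcommands, but the model reference below is an illustrative assumption, not taken from this commit):

# Pull a model via one of the supported transports (huggingface://, ollama://, oci://).
ramalama pull ollama://smollm:135m

# Serve it over HTTP; by default RamaLama runs the model inside a container,
# backed by llama.cpp or vllm.
ramalama serve ollama://smollm:135m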
This commit is contained in:
parent
5f90be5388
commit
9120e07d9d
7 changed files with 665 additions and 0 deletions
llama_stack/templates/ramalama/build.yaml (new file, 31 lines)
@@ -0,0 +1,31 @@
+version: '2'
+distribution_spec:
+  description: Use (an external) RamaLama server for running LLM inference
+  providers:
+    inference:
+    - remote::ramalama
+    vector_io:
+    - inline::faiss
+    - remote::chromadb
+    - remote::pgvector
+    safety:
+    - inline::llama-guard
+    agents:
+    - inline::meta-reference
+    telemetry:
+    - inline::meta-reference
+    eval:
+    - inline::meta-reference
+    datasetio:
+    - remote::huggingface
+    - inline::localfs
+    scoring:
+    - inline::basic
+    - inline::llm-as-judge
+    - inline::braintrust
+    tool_runtime:
+    - remote::brave-search
+    - remote::tavily-search
+    - inline::code-interpreter
+    - inline::rag-runtime
+image_type: conda
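build.yaml only declares which providers the distribution is built with; wiring remote::ramalama to an actual server happens in the distribution's run-time config. A minimal sketch of what that inference stanza might look like in run.yaml, assuming the new provider follows the same shape as other remote inference providers such as remote::ollama (the url key, port, and environment variable name are assumptions, not shown in this diff):

providers:
  inference:
  - provider_id: ramalama
    provider_type: remote::ramalama
    config:
      # assumed: base URL of a running `ramalama serve` instance
      url: ${env.RAMALAMA_URL:http://localhost:8080}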