LM Studio inference integration

Co-authored-by: Rugved Somwanshi <rugved@lmstudio.ai>
Neil Mehta 2025-03-14 15:21:15 -04:00 committed by Matt Clayton
parent 1bb1d9b2ba
commit 461eec425d
16 changed files with 1096 additions and 0 deletions

@@ -0,0 +1,70 @@
<!-- This file was auto-generated by distro_codegen.py, please edit source -->

# LM Studio Distribution

The `llamastack/distribution-lmstudio` distribution consists of the following provider configurations.

| API | Provider(s) |
|-----|-------------|
| agents | `inline::meta-reference` |
| datasetio | `remote::huggingface`, `inline::localfs` |
| eval | `inline::meta-reference` |
| inference | `remote::lmstudio` |
| safety | `inline::llama-guard` |
| scoring | `inline::basic`, `inline::llm-as-judge`, `inline::braintrust` |
| telemetry | `inline::meta-reference` |
| tool_runtime | `remote::tavily-search`, `inline::code-interpreter`, `inline::rag-runtime` |
| vector_io | `inline::faiss`, `remote::chromadb`, `remote::pgvector` |

### Environment Variables

The following environment variables can be configured:

- `LLAMA_STACK_PORT`: Port for the Llama Stack distribution server (default: `5001`)

### Models

The following models are available by default:

- `meta-llama-3-8b-instruct`
- `meta-llama-3-70b-instruct`
- `meta-llama-3.1-8b-instruct`
- `meta-llama-3.1-70b-instruct`
- `llama-3.2-1b-instruct`
- `llama-3.2-3b-instruct`
- `llama-3.2-70b-instruct`
- `nomic-embed-text-v1.5`
- `all-minilm-l6-v2`

## Set up LM Studio

Download LM Studio from [https://lmstudio.ai/download](https://lmstudio.ai/download). Start the server either by opening LM Studio and navigating to the `Developer` tab, or by running the CLI command `lms server start`.

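Before starting the stack, download and load at least one of the models listed above in LM Studio. A minimal sketch using the `lms` CLI (subcommand names assume a recent CLI version, and the model key is illustrative); the final check assumes LM Studio's default OpenAI-compatible server on port `1234`:

```bash
# Start the local LM Studio server (listens on port 1234 by default)
lms server start

# Download and load a model; the model key below is illustrative
lms get meta-llama-3.1-8b-instruct
lms load meta-llama-3.1-8b-instruct

# Sanity check: list the models the server is exposing
curl http://localhost:1234/v1/models
```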

## Running Llama Stack with LM Studio

You can run this distribution either via Conda (building the code yourself) or via Docker, which uses a pre-built image.

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
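# Run the pre-built image, publishing the stack port and mounting a local run.yaml
# (your run.yaml should point the lmstudio provider at your LM Studio server)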
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  llamastack/distribution-lmstudio \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT
```
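
Once the container is up, you can verify the stack from the host. A quick check, assuming the `llama-stack-client` CLI is installed (`pip install llama-stack-client`):

```bash
# Point the client at the locally running distribution
llama-stack-client configure --endpoint http://localhost:5001

# List the models served through the LM Studio provider
llama-stack-client models list
```

Note that the container must be able to reach your LM Studio server: if LM Studio runs on the host, point the provider URL at `host.docker.internal` instead of `localhost` (on Linux, also pass `--add-host=host.docker.internal:host-gateway` to `docker run`).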

### Via Conda

```bash
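# Build the lmstudio template, then launch the stack with the generated run.yaml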
llama stack build --template lmstudio --image-type conda
llama stack run ./run.yaml \
--port 5001
```
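
With the server running under either setup, you can send a test request through the stack. A sketch using the `llama-stack-client` CLI, after configuring the endpoint as shown in the Docker section (the exact command shape may vary by client version; check `llama-stack-client inference --help`):

```bash
# Send a single-turn chat completion through the LM Studio provider
llama-stack-client inference chat-completion \
  --message "Hello, which model are you?"
```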