LM Studio inference integration

Co-authored-by: Rugved Somwanshi <rugved@lmstudio.ai>
Neil Mehta 2025-03-14 15:21:15 -04:00 committed by Matt Clayton
parent 1bb1d9b2ba
commit 461eec425d
16 changed files with 1096 additions and 0 deletions

@@ -0,0 +1,58 @@
# LM Studio Distribution
The `llamastack/distribution-{{ name }}` distribution consists of the following provider configurations.
{{ providers_table }}
{% if run_config_env_vars %}
### Environment Variables
The following environment variables can be configured:
{% for var, (default_value, description) in run_config_env_vars.items() %}
- `{{ var }}`: {{ description }} (default: `{{ default_value }}`)
{% endfor %}
{% endif %}
{% if default_models %}
### Models
The following models are available by default:
{% for model in default_models %}
- `{{ model.model_id }} {{ model.doc_string }}`
{% endfor %}
{% endif %}
## Set up LM Studio
Download LM Studio from [https://lmstudio.ai/download](https://lmstudio.ai/download). Start the server by opening LM Studio and navigating to the `Developer` tab, or by running the CLI command `lms server start`.
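If you use the CLI, a quick sanity check looks like the sketch below (it assumes LM Studio's default port of 1234):
```bash
# Start the LM Studio server headlessly via the bundled CLI
lms server start
# Verify it is reachable; LM Studio listens on port 1234 by default
curl http://localhost:1234/v1/models
```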
## Running Llama Stack with LM Studio
You can do this via Docker, which uses a pre-built image, or via Conda, which builds the distribution from source.
### Via Docker
This method allows you to get started quickly without having to build the distribution code.
```bash
LLAMA_STACK_PORT=5001
docker run \
-it \
-p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
-v ./run.yaml:/root/my-run.yaml \
llamastack/distribution-{{ name }} \
--yaml-config /root/my-run.yaml \
--port $LLAMA_STACK_PORT
```
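Because LM Studio runs on the host, the container may not be able to reach it at `localhost`. As a sketch, assuming the run config reads the server address from an `LMSTUDIO_URL` environment variable (the variable name here is illustrative; check the environment variables table above for the actual name), you could point the stack at the host like this:
```bash
# LMSTUDIO_URL is an assumed variable name. On Linux, also pass
# --add-host=host.docker.internal:host-gateway so the alias resolves.
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  llamastack/distribution-{{ name }} \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT \
  --env LMSTUDIO_URL=http://host.docker.internal:1234
```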
### Via Conda
```bash
llama stack build --template lmstudio --image-type conda
llama stack run ./run.yaml \
--port 5001
```
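Once the stack is running, you can sanity-check it with the `llama-stack-client` CLI. This is a minimal sketch; it assumes the client is installed (`pip install llama-stack-client`) and that the stack is listening on port 5001 as above:
```bash
# Point the client at the local stack, then list the registered models
llama-stack-client configure --endpoint http://localhost:5001
llama-stack-client models list
```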