# LM Studio Distribution

The `llamastack/distribution-{{ name }}` distribution consists of the following provider configurations.

{{ providers_table }}

{% if run_config_env_vars %}
## Environment Variables

The following environment variables can be configured:

{% for var, (default_value, description) in run_config_env_vars.items() %}
- `{{ var }}`: {{ description }} (default: `{{ default_value }}`)
{% endfor %}
{% endif %}

{% if default_models %}
## Models

The following models are available by default:

{% for model in default_models %}
- `{{ model.model_id }}` {{ model.doc_string }}
{% endfor %}
{% endif %}

## Set up LM Studio

Download LM Studio from https://lmstudio.ai/download. Start the server either by opening LM Studio and navigating to the Developer tab, or by running the CLI command `lms server start`.
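
As a quick sanity check, you can confirm the server is reachable before starting Llama Stack. A minimal sketch, assuming LM Studio's default server port of 1234:

```bash
# Start the LM Studio server (listens on port 1234 by default)
lms server start

# List the models you have downloaded locally
lms ls

# Verify the server is up via its OpenAI-compatible endpoint
curl http://localhost:1234/v1/models
```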

## Running Llama Stack with LM Studio

You can do this via Conda (build the distribution code yourself) or via Docker, which uses a pre-built image.

### Via Docker

This method allows you to get started quickly without having to build the distribution code.

```bash
LLAMA_STACK_PORT=5001
docker run \
  -it \
  -p $LLAMA_STACK_PORT:$LLAMA_STACK_PORT \
  -v ./run.yaml:/root/my-run.yaml \
  llamastack/distribution-{{ name }} \
  --yaml-config /root/my-run.yaml \
  --port $LLAMA_STACK_PORT
```
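
Note that the containerized server must be able to reach the LM Studio server on your host; depending on your Docker setup, the LM Studio URL in `run.yaml` may need to point at `host.docker.internal` rather than `localhost`. Once the container is up, you can point the `llama-stack-client` CLI at it to confirm it is serving. A minimal sketch, assuming the client is installed (`pip install llama-stack-client`):

```bash
# Register the endpoint with the client, then list the served models
llama-stack-client configure --endpoint http://localhost:$LLAMA_STACK_PORT
llama-stack-client models list
```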

### Via Conda

```bash
llama stack build --template lmstudio --image-type conda
llama stack run ./run.yaml \
  --port 5001
```
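
With the server running, you can send a quick test request. A minimal sketch using the `llama-stack-client` CLI; the invocation below follows the client's common usage but is an assumption, and which model answers depends on what your LM Studio install is serving:

```bash
# Send a test chat-completion request to the running stack
llama-stack-client --endpoint http://localhost:5001 \
  inference chat-completion \
  --message "Hello, what model are you?"
```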