Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-07-13 16:46:09 +00:00)
# What does this PR do?

- We are using `all-minilm:l6-v2`, but the model we download from Ollama is `all-minilm:latest`:
  - latest: https://ollama.com/library/all-minilm:latest (1b226e2802db)
  - l6-v2: https://ollama.com/library/all-minilm:l6-v2 (pin 1b226e2802db)
- Even though the two tags currently point to exactly the same model, if [all-minilm:l12-v2](https://ollama.com/library/all-minilm:l12-v2) is updated, `latest` might no longer match `l6-v2`.
- The only change in this PR is pinning the model id in the Ollama provider.
- Also update `detailed_tutorial` to use "starter" in place of the deprecated "ollama" template.

## Test Plan

```
>INFERENCE_MODEL="meta-llama/Llama-3.2-3B-Instruct"
>llama stack build --run --template ollama --image-type venv
...
Build Successful!
You can find the newly-built template here: /home/wenzhou/zdtsw-forking/lls/llama-stack/llama_stack/templates/ollama/run.yaml
....
- metadata:
    embedding_dimension: 384
  model_id: all-MiniLM-L6-v2
  model_type: !!python/object/apply:llama_stack.apis.models.models.ModelType
  - embedding
  provider_id: ollama
  provider_model_id: all-minilm:l6-v2
...
```

Test:

```
>llama-stack-client inference chat-completion --message "Write me a 2-sentence poem about the moon"
INFO:httpx:HTTP Request: GET http://localhost:8321/v1/models "HTTP/1.1 200 OK"
INFO:httpx:HTTP Request: POST http://localhost:8321/v1/openai/v1/chat/completions "HTTP/1.1 200 OK"
OpenAIChatCompletion(
    id='chatcmpl-04f99071-3da2-44ba-a19f-03b5b7fc70b7',
    choices=[
        OpenAIChatCompletionChoice(
            finish_reason='stop',
            index=0,
            message=OpenAIChatCompletionChoiceMessageOpenAIAssistantMessageParam(
                role='assistant',
                content="Here is a 2-sentence poem about the moon:\n\nSilver crescent in the midnight sky,\nLuna's gentle face, a beauty to the eye.",
                name=None,
                tool_calls=None,
                refusal=None,
                annotations=None,
                audio=None,
                function_call=None
            ),
            logprobs=None
        )
    ],
    created=1751644429,
    model='llama3.2:3b-instruct-fp16',
    object='chat.completion',
    service_tier=None,
    system_fingerprint='fp_ollama',
    usage={'completion_tokens': 33, 'prompt_tokens': 36, 'total_tokens': 69, 'completion_tokens_details': None, 'prompt_tokens_details': None}
)
```

---------

Signed-off-by: Wen Zhou <wenzhou@redhat.com>
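For reference, the chat-completion check above goes through the OpenAI-compatible endpoint shown in the logs (`/v1/openai/v1`). A minimal Python equivalent is sketched below, assuming a stack already serving on `localhost:8321` and the `openai` client package; the `api_key` value is a placeholder, and the model id is copied from the recorded output.

```python
# A minimal re-run of the test plan, assuming a running stack on
# localhost:8321 as in the logs above (api_key is a placeholder).
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8321/v1/openai/v1", api_key="none")

resp = client.chat.completions.create(
    model="llama3.2:3b-instruct-fp16",
    messages=[{"role": "user", "content": "Write me a 2-sentence poem about the moon"}],
)
print(resp.choices[0].message.content)
```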
103 lines · 3 KiB · Python
```python
# Copyright (c) Meta Platforms, Inc. and affiliates.
# All rights reserved.
#
# This source code is licensed under the terms described in the LICENSE file in
# the root directory of this source tree.

from llama_stack.apis.models import ModelType
from llama_stack.models.llama.sku_types import CoreModelId
from llama_stack.providers.utils.inference.model_registry import (
    ProviderModelEntry,
    build_hf_repo_model_entry,
    build_model_entry,
)

MODEL_ENTRIES = [
    build_hf_repo_model_entry(
        "llama3.1:8b-instruct-fp16",
        CoreModelId.llama3_1_8b_instruct.value,
    ),
    build_model_entry(
        "llama3.1:8b",
        CoreModelId.llama3_1_8b_instruct.value,
    ),
    build_hf_repo_model_entry(
        "llama3.1:70b-instruct-fp16",
        CoreModelId.llama3_1_70b_instruct.value,
    ),
    build_model_entry(
        "llama3.1:70b",
        CoreModelId.llama3_1_70b_instruct.value,
    ),
    build_hf_repo_model_entry(
        "llama3.1:405b-instruct-fp16",
        CoreModelId.llama3_1_405b_instruct.value,
    ),
    build_model_entry(
        "llama3.1:405b",
        CoreModelId.llama3_1_405b_instruct.value,
    ),
    build_hf_repo_model_entry(
        "llama3.2:1b-instruct-fp16",
        CoreModelId.llama3_2_1b_instruct.value,
    ),
    build_model_entry(
        "llama3.2:1b",
        CoreModelId.llama3_2_1b_instruct.value,
    ),
    build_hf_repo_model_entry(
        "llama3.2:3b-instruct-fp16",
        CoreModelId.llama3_2_3b_instruct.value,
    ),
    build_model_entry(
        "llama3.2:3b",
        CoreModelId.llama3_2_3b_instruct.value,
    ),
    build_hf_repo_model_entry(
        "llama3.2-vision:11b-instruct-fp16",
        CoreModelId.llama3_2_11b_vision_instruct.value,
    ),
    build_model_entry(
        "llama3.2-vision:latest",
        CoreModelId.llama3_2_11b_vision_instruct.value,
    ),
    build_hf_repo_model_entry(
        "llama3.2-vision:90b-instruct-fp16",
        CoreModelId.llama3_2_90b_vision_instruct.value,
    ),
    build_model_entry(
        "llama3.2-vision:90b",
        CoreModelId.llama3_2_90b_vision_instruct.value,
    ),
    build_hf_repo_model_entry(
        "llama3.3:70b",
        CoreModelId.llama3_3_70b_instruct.value,
    ),
    # The Llama Guard models don't have their full fp16 versions,
    # so we are going to alias their default version to the canonical SKU
    build_hf_repo_model_entry(
        "llama-guard3:8b",
        CoreModelId.llama_guard_3_8b.value,
    ),
    build_hf_repo_model_entry(
        "llama-guard3:1b",
        CoreModelId.llama_guard_3_1b.value,
    ),
    ProviderModelEntry(
        provider_model_id="all-minilm:l6-v2",
        aliases=["all-minilm"],
        model_type=ModelType.embedding,
        metadata={
            "embedding_dimension": 384,
            "context_length": 512,
        },
    ),
    ProviderModelEntry(
        provider_model_id="nomic-embed-text",
        model_type=ModelType.embedding,
        metadata={
            "embedding_dimension": 768,
            "context_length": 8192,
        },
    ),
]
```
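As a quick sanity check that the embedding pin landed, a small sketch along these lines could be run from a llama-stack checkout. It assumes the registry entries expose the `model_type`, `provider_model_id`, and `metadata` fields that `ProviderModelEntry` shows above; the `getattr` fallback hedges against the `build_*` helpers returning entries without an explicit `model_type`.

```python
# A hedged sanity check (assumes a llama-stack checkout where this module's
# imports resolve): list each embedding entry and its pinned Ollama tag.
from llama_stack.apis.models import ModelType

for entry in MODEL_ENTRIES:
    # Fall back to None in case an entry does not define model_type.
    if getattr(entry, "model_type", None) == ModelType.embedding:
        print(entry.provider_model_id, entry.metadata)

# Expected to include:
#   all-minilm:l6-v2 {'embedding_dimension': 384, 'context_length': 512}
#   nomic-embed-text {'embedding_dimension': 768, 'context_length': 8192}
```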