change multi-line import to single-line

Author: kimbwook
Date: 2025-10-15 11:10:06 +09:00
Parent: 3f66f55771
Commit: 53f09a7a65
Signature: no known key found in database (GPG key ID: 13B032C99CBD373A)
6 changed files with 29 additions and 35 deletions


@@ -1,7 +1,7 @@
---
description: "Agents
APIs for creating and interacting with agentic systems."
sidebar_label: Agents
title: Agents
---
@@ -12,6 +12,6 @@ title: Agents
Agents
APIs for creating and interacting with agentic systems.
This section contains documentation for all available providers for the **agents** API.


@@ -1,14 +1,14 @@
---
description: "The Batches API enables efficient processing of multiple requests in a single operation,
particularly useful for processing large datasets, batch evaluation workflows, and
cost-effective inference at scale.
The API is designed to allow use of openai client libraries for seamless integration.
This API provides the following extensions:
- idempotent batch creation
Note: This API is currently under active development and may undergo changes."
sidebar_label: Batches
title: Batches
---
@@ -18,14 +18,14 @@ title: Batches
## Overview
The Batches API enables efficient processing of multiple requests in a single operation,
particularly useful for processing large datasets, batch evaluation workflows, and
cost-effective inference at scale.
The API is designed to allow use of openai client libraries for seamless integration.
This API provides the following extensions:
- idempotent batch creation
Note: This API is currently under active development and may undergo changes.
This section contains documentation for all available providers for the **batches** API.
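Because the Batches API is OpenAI-compatible, batch input is typically a JSONL file in which each line is one self-contained request. A minimal sketch of building such input locally (the endpoint path and model id are illustrative placeholders, not taken from this commit):

```python
import json

# Each batch line pairs a caller-chosen custom_id with one request payload,
# following the OpenAI batch-input JSONL shape.
requests = [
    {
        "custom_id": f"req-{i}",
        "method": "POST",
        "url": "/v1/chat/completions",  # illustrative endpoint path
        "body": {
            "model": "example-model",  # placeholder model id
            "messages": [{"role": "user", "content": q}],
        },
    }
    for i, q in enumerate(["What is batching?", "Why use JSONL?"])
]

# Serialize to JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(r) for r in requests)
```

Reusing a stable `custom_id` per logical request is also what makes retries safe under the idempotent batch creation extension mentioned above.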


@@ -1,7 +1,7 @@
---
description: "Files
This API is used to upload documents that can be used with other Llama Stack APIs."
sidebar_label: Files
title: Files
---
@@ -12,6 +12,6 @@ title: Files
Files
This API is used to upload documents that can be used with other Llama Stack APIs.
This section contains documentation for all available providers for the **files** API.


@@ -1,11 +1,11 @@
---
description: "Inference
Llama Stack Inference API for generating completions, chat completions, and embeddings.
This API provides the raw interface to the underlying models. Two kinds of models are supported:
- LLM models: these models generate \"raw\" and \"chat\" (conversational) completions.
- Embedding models: these models generate embeddings to be used for semantic search."
sidebar_label: Inference
title: Inference
---
@@ -16,10 +16,10 @@ title: Inference
Inference
Llama Stack Inference API for generating completions, chat completions, and embeddings.
This API provides the raw interface to the underlying models. Two kinds of models are supported:
- LLM models: these models generate "raw" and "chat" (conversational) completions.
- Embedding models: these models generate embeddings to be used for semantic search.
This section contains documentation for all available providers for the **inference** API.
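The embedding-model half of this API exists to power semantic search: embed documents and a query, then rank documents by vector similarity. A toy sketch with hand-made 3-d vectors (real embeddings would come from an embedding model served through this API and be much higher-dimensional):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product divided by the product of magnitudes.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for two documents; keys are stand-in document ids.
docs = {
    "cats": [0.9, 0.1, 0.0],
    "finance": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # deliberately closer to "cats"

# Rank documents by similarity to the query and take the best match.
best = max(docs, key=lambda k: cosine(query, docs[k]))
print(best)  # cats
```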


@@ -1,7 +1,7 @@
---
description: "Safety
OpenAI-compatible Moderations API."
sidebar_label: Safety
title: Safety
---
@@ -12,6 +12,6 @@ title: Safety
Safety
OpenAI-compatible Moderations API.
This section contains documentation for all available providers for the **safety** API.
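An OpenAI-compatible Moderations response reports, per input, an overall `flagged` boolean plus per-category verdicts. A sketch of inspecting such a response, using a hand-built dict in that shape rather than a live call to a safety provider (the category names here are illustrative):

```python
# Hand-built response in the OpenAI Moderations shape; a real call to the
# safety provider would return an object with this structure.
response = {
    "results": [
        {
            "flagged": True,
            "categories": {"violence": True, "self-harm": False},
        }
    ]
}

def flagged_categories(result):
    # Keep only the category names whose verdict is True, sorted for stability.
    return sorted(k for k, v in result["categories"].items() if v)

result = response["results"][0]
if result["flagged"]:
    print(flagged_categories(result))  # ['violence']
```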


@@ -22,22 +22,16 @@ from llama_stack.apis.vector_io import (
 )
 from llama_stack.log import get_logger
 from llama_stack.providers.datatypes import Api, VectorDBsProtocolPrivate
-from llama_stack.providers.inline.vector_io.chroma import (
-    ChromaVectorIOConfig as InlineChromaVectorIOConfig,
-)
+from llama_stack.providers.inline.vector_io.chroma import ChromaVectorIOConfig as InlineChromaVectorIOConfig
 from llama_stack.providers.utils.kvstore import kvstore_impl
 from llama_stack.providers.utils.kvstore.api import KVStore
-from llama_stack.providers.utils.memory.openai_vector_store_mixin import (
-    OpenAIVectorStoreMixin,
-)
+from llama_stack.providers.utils.memory.openai_vector_store_mixin import OpenAIVectorStoreMixin
 from llama_stack.providers.utils.memory.vector_store import (
     ChunkForDeletion,
     EmbeddingIndex,
     VectorDBWithIndex,
 )
-from llama_stack.providers.utils.vector_io.vector_utils import (
-    WeightedInMemoryAggregator,
-)
+from llama_stack.providers.utils.vector_io.vector_utils import WeightedInMemoryAggregator
 from .config import ChromaVectorIOConfig as RemoteChromaVectorIOConfig
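The change above is purely stylistic: Python treats a parenthesized multi-line `from ... import` and its single-line form identically. A quick sketch verifying this with the standard `ast` module (using a stand-in stdlib module, since `llama_stack` itself isn't assumed to be installed):

```python
import ast

# The same import written in the two styles this commit converts between.
multi = """from os.path import (
    join as pjoin,
)
"""
single = "from os.path import join as pjoin\n"

def import_triples(src: str):
    # Parse the source and extract (module, name, alias) for each imported name.
    stmt = ast.parse(src).body[0]
    return [(stmt.module, a.name, a.asname) for a in stmt.names]

# Both forms parse to exactly the same import.
print(import_triples(multi) == import_triples(single))  # True
```

Since the two forms are semantically identical, collapsing imports that fit on one line is a readability choice, which is consistent with the net -6 lines this commit removes from chroma.py.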