Mirror of https://github.com/meta-llama/llama-stack.git (synced 2025-12-03 18:00:36 +00:00)
RAG (aka file search) is implemented via the Responses API by specifying the file-search tool. The backend implementation remains unchanged; this PR merely removes the directly exposed API surface that allowed users to perform searches directly from the client. That facility is now available via the OpenAI-compatible `client.vector_store.search()` API.
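As a rough illustration of the two paths described above, here is a minimal sketch of the request shapes involved. This follows the OpenAI-compatible conventions the PR description refers to; the model name, vector store ID, and parameter names are placeholders, not values taken from this repository.

```python
# Sketch of the two OpenAI-compatible search paths (assumed shapes;
# model name and vector store ID below are hypothetical placeholders).

# 1) File search via the Responses API: attach the file-search tool
#    to the request, pointing it at one or more vector stores.
responses_request = {
    "model": "llama-3.1-8b",  # placeholder model name
    "input": "What does the design doc say about retries?",
    "tools": [
        {
            "type": "file_search",
            "vector_store_ids": ["vs_123"],  # placeholder vector store ID
        }
    ],
}

# 2) Direct search via the vector-store API (the facility that replaces
#    the removed endpoint): conceptually a search call against a single
#    vector store with a query and a result limit.
vector_store_search_request = {
    "query": "retries",
    "max_num_results": 5,
}

print(responses_request["tools"][0]["type"])  # -> file_search
```

The difference in practice: the Responses API path lets the model decide when to search and weave results into its answer, while the vector-store search call returns raw matches to the caller.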
These are the source-of-truth configuration files that Stainless uses to generate the client SDKs.
- `openapi.yml`: the OpenAPI specification for the Llama Stack API.
- `openapi.stainless.yml`: the Stainless configuration that instructs Stainless how to generate the client SDKs.
A small side note: note the `.yml` suffix; Stainless typically uses that suffix for its configuration files.
These files go hand in hand. As of now, only `openapi.yml` is generated automatically, via the `run_openapi_generator.sh` script.