feat: Add static file import system for docs (#3882)

# What does this PR do?

Add static file import system for docs

- Use `remark-code-import` plugin to embed code at build time
- Support importing Python code with syntax highlighting using
`raw-loader` + `ReactMarkdown`

One caveat: when embedding markdown that itself contains code blocks, syntax
highlighting isn't applied correctly. I'll investigate that in a follow-up.
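For reference, wiring `remark-code-import` into a Docusaurus build looks roughly like the sketch below. The plugin name is real, but the preset layout and file path here are illustrative assumptions, not copied from this PR:

```javascript
// docusaurus.config.js (sketch — assumes a standard Docusaurus "classic" preset;
// depending on the package version you may need `codeImport.default` under CJS)
const codeImport = require('remark-code-import');

module.exports = {
  presets: [
    [
      'classic',
      {
        docs: {
          // Runs at build time: a fence such as
          //   ```python file=./demo_script.py title="demo_script.py"
          //   ```
          // is replaced with the contents of the referenced file.
          remarkPlugins: [codeImport],
        },
      },
    ],
  ],
};
```

Because the import happens at build time, the rendered page stays in sync with the source file without duplicating the code in the markdown.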

## Test Plan

Python Example:
<img width="1372" height="995" alt="Screenshot 2025-10-23 at 9 22 18 PM"
src="https://github.com/user-attachments/assets/656d2c78-4d9b-45a4-bd5e-3f8490352b85"
/>

Markdown example:
<img width="1496" height="1070" alt="Screenshot 2025-10-23 at 9 22 38 PM"
src="https://github.com/user-attachments/assets/6c0a07ec-ff7c-45aa-b05f-8c46acd4445c"
/>

---------

Signed-off-by: Francisco Javier Arceo <farceo@redhat.com>
Francisco Arceo 2025-10-24 14:01:33 -04:00 committed by GitHub
parent 8265d4efc8
commit 4566eebe05
9 changed files with 683 additions and 914 deletions


@@ -24,6 +24,9 @@ ollama run llama3.2:3b --keepalive 60m
#### Step 2: Run the Llama Stack server
```python file=./demo_script.py title="demo_script.py"
```
We will use `uv` to install dependencies and run the Llama Stack server.
```bash
# Install dependencies for the starter distribution
@@ -35,27 +38,6 @@ OLLAMA_URL=http://localhost:11434 uv run --with llama-stack llama stack run star
#### Step 3: Run the demo
Now open up a new terminal and copy the following script into a file named `demo_script.py`.
```python
import io

import requests
from openai import OpenAI

url = "https://www.paulgraham.com/greatwork.html"
client = OpenAI(base_url="http://localhost:8321/v1/", api_key="none")

vs = client.vector_stores.create()
response = requests.get(url)
pseudo_file = io.BytesIO(str(response.content).encode("utf-8"))
uploaded_file = client.files.create(file=(url, pseudo_file, "text/html"), purpose="assistants")
client.vector_stores.files.create(vector_store_id=vs.id, file_id=uploaded_file.id)

resp = client.responses.create(
    model="openai/gpt-4o",
    input="How do you do great work? Use the existing knowledge_search tool.",
    tools=[{"type": "file_search", "vector_store_ids": [vs.id]}],
    include=["file_search_call.results"],
)
```

We will use `uv` to run the script:

```bash
uv run --with llama-stack-client,fire,requests demo_script.py
```