# What does this PR do?

Fix a segfault when loading the model. The cc-vec integration crashed with a segfault when used with the default embedding model on macOS (`model_id: nomic-ai/nomic-embed-text-v1.5`, `provider_id: sentence-transformers`).

The crash report shows the segfault comes from torch's OpenMP settings. Constraining torch to 1 thread avoids the crash (a sketch of this constraint is shown after the test plan).

## Test Plan

Tested with the cc-vec integration:

1. Start the server: `llama stack run starter`
2. Do the setup in https://github.com/raghotham/cc-vec to set the environment variables, then run
   `uv run cc-vec index --url-patterns "%.github.io" --vector-store-name "ml-research" --limit 50 --chunk-size 800 --overlap 400`
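The idea behind the fix is to restrict torch's OpenMP parallelism to a single thread before the embedding model is loaded. Below is a minimal sketch of that technique; the exact code path in llama-stack may differ, and the model-loading call here is only for illustration.

```python
import os

# Limit OpenMP worker threads; torch's native libraries read this at import time.
os.environ.setdefault("OMP_NUM_THREADS", "1")

import torch
from sentence_transformers import SentenceTransformer

# Constrain intra-op parallelism inside torch itself to a single thread.
torch.set_num_threads(1)

# Illustrative usage (assumption): load the default embedding model that
# previously segfaulted on macOS and compute a test embedding.
model = SentenceTransformer("nomic-ai/nomic-embed-text-v1.5", trust_remote_code=True)
embeddings = model.encode(["hello world"])
print(embeddings.shape)
```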