Commit graph

3 commits

Author SHA1 Message Date
Sixian Yi
67450e4024
bug fixes on inference tests (#774)
# What does this PR do?

Fixes two issues in providers/tests/inference

- [ ] Addresses issue (#issue)


## Test Plan

### Before
```

===================================================================================== FAILURES =====================================================================================
__________________________________ TestVisionModelInference.test_vision_chat_completion_streaming[llama_vision-fireworks][llama_vision] ___________________________________
providers/tests/inference/test_vision_inference.py:145: in test_vision_chat_completion_streaming
    content = "".join(
E   TypeError: sequence item 0: expected str instance, TextDelta found
------------------------------------------------------------------------------ Captured log teardown -------------------------------------------------------------------------------
ERROR    asyncio:base_events.py:1858 Task was destroyed but it is pending!
task: <Task pending name='Task-5' coro=<<async_generator_athrow without __name__>()>>
============================================================================= short test summary info ==============================================================================
FAILED providers/tests/inference/test_vision_inference.py::TestVisionModelInference::test_vision_chat_completion_streaming[llama_vision-fireworks] - TypeError: sequence item 0: expected str instance, TextDelta found
============================================================== 1 failed, 2 passed, 33 deselected, 7 warnings in 3.59s ==============================================================
(base) sxyi@sxyi-mbp llama_stack % 
```
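The traceback shows the root cause: streaming chunks now carry `TextDelta` objects, so joining the raw deltas fails. Below is a minimal sketch of the shape of the fix, assuming `TextDelta` from `llama_stack.apis.common.content_types`; the helper name is illustrative, not the actual diff.

```
from llama_stack.apis.common.content_types import TextDelta

def join_streamed_text(chunks) -> str:
    """Illustrative helper: accumulate streamed text from chat-completion chunks."""
    # Joining chunk.event.delta directly raises the TypeError above, because
    # the delta is a TextDelta object, not a str. Join its .text payload
    # instead, skipping non-text deltas (e.g. tool calls).
    return "".join(
        chunk.event.delta.text
        for chunk in chunks
        if isinstance(chunk.event.delta, TextDelta)
    )
```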

### After 
```
(base) sxyi@sxyi-mbp llama_stack % pytest -k "fireworks"  /Users/sxyi/llama-stack/llama_stack/providers/tests/inference/test_vision_inference.py
/Library/Frameworks/Python.framework/Versions/3.13/lib/python3.13/site-packages/pytest_asyncio/plugin.py:208: PytestDeprecationWarning: The configuration option "asyncio_default_fixture_loop_scope" is unset.
The event loop scope for asynchronous fixtures will default to the fixture caching scope. Future versions of pytest-asyncio will default the loop scope for asynchronous fixtures to function scope. Set the default fixture loop scope explicitly in order to avoid unexpected behavior in the future. Valid fixture loop scopes are: "function", "class", "module", "package", "session"

  warnings.warn(PytestDeprecationWarning(_DEFAULT_FIXTURE_LOOP_SCOPE_UNSET))
=============================================================================== test session starts ================================================================================
platform darwin -- Python 3.13.0, pytest-8.3.3, pluggy-1.5.0
rootdir: /Users/sxyi/llama-stack
configfile: pyproject.toml
plugins: asyncio-0.24.0, html-4.1.1, metadata-3.1.1, dependency-0.6.0, anyio-4.6.2.post1
asyncio: mode=Mode.STRICT, default_loop_scope=None
collected 36 items / 33 deselected / 3 selected                                                                                                                                    

providers/tests/inference/test_vision_inference.py ...                                                                                                                       [100%]

=================================================================== 3 passed, 33 deselected, 7 warnings in 3.75s ===================================================================
(base) sxyi@sxyi-mbp llama_stack % 
```

## Before submitting

- [ ] This PR fixes a typo or improves the docs (you can dismiss the
other checks if that's the case).
- [ ] Ran pre-commit to handle lint / formatting issues.
- [ ] Read the [contributor
guideline](https://github.com/meta-llama/llama-stack/blob/main/CONTRIBUTING.md),
      Pull Request section?
- [ ] Updated relevant documentation.
- [ ] Wrote necessary unit or integration tests.
2025-01-15 15:39:05 -08:00
Dinesh Yeduguru
516e1a3e59
add embedding model by default to distribution templates (#617)
# What does this PR do?
Adds the sentence-transformers provider and the `all-MiniLM-L6-v2`
embedding model to the default set of models registered in the run.yaml of
every distribution template.

## Test Plan
```
llama stack build --template together --image-type conda
llama stack run ~/.llama/distributions/llamastack-together/together-run.yaml
```
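A minimal sketch of what such a template registers by default, assuming the `ModelInput` and `ModelType` datatypes from llama_stack and the `sentence-transformers` provider id; the field values are illustrative, except the 384-dimension metadata, which matches `all-MiniLM-L6-v2`.

```
from llama_stack.apis.models import ModelType
from llama_stack.distribution.datatypes import ModelInput

# Default embedding model registered alongside each template's LLMs.
# all-MiniLM-L6-v2 produces 384-dimensional vectors.
embedding_model = ModelInput(
    model_id="all-MiniLM-L6-v2",
    provider_id="sentence-transformers",
    model_type=ModelType.embedding,
    metadata={"embedding_dimension": 384},
)
```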
2024-12-13 12:48:00 -08:00
Dinesh Yeduguru
96e158eaac
Make embedding generation go through inference (#606)
This PR does the following:
1) Adds the ability to generate embeddings in all supported inference
providers.
2) Moves all the memory providers to use the inference API, and improves
the memory tests to set up the inference stack correctly and use the
embedding models.

This merges #589 and #598.
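With this change, a memory provider obtains vectors by calling the inference API rather than running its own embedding model. A minimal sketch of that call, assuming the inference API's `embeddings` endpoint; parameter names and the return shape may differ by version, and `inference` stands for any provider implementing the API.

```
async def embed_texts(inference, texts: list[str]) -> list[list[float]]:
    # Memory providers delegate embedding generation to the inference stack.
    response = await inference.embeddings(
        model_id="all-MiniLM-L6-v2",
        contents=texts,
    )
    return response.embeddings  # one vector per input string
```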
2024-12-12 11:47:50 -08:00