llama-stack-mirror/llama_stack/providers/utils/inference
mergify[bot] a6a600f845
fix(inference): respect table_name config in InferenceStore (backport #4371) (#4372)
# What does this PR do?

The InferenceStore class ignored the table_name field from
InferenceStoreReference and always used the hardcoded value
"chat_completions". As a result, any custom table_name configured in
the run config (e.g., "inference_store" in run-with-postgres-store.yaml)
was silently discarded.
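
For context, a minimal sketch of what such a reference might look like; aside from the table_name field and its old hardcoded default, the shape of this class is an assumption, not the actual definition:

```python
# Illustrative only: a pydantic-style reference carrying the configurable
# table name. Only table_name (and its old default, "chat_completions")
# comes from this PR; everything else here is assumed.
from pydantic import BaseModel


class InferenceStoreReference(BaseModel):
    table_name: str = "chat_completions"
```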

This change updates all SQL operations in InferenceStore to use
self.reference.table_name instead of the hardcoded string, ensuring the
configured table name is properly respected.
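
Conceptually, the fix swaps the string literal for the configured field. A minimal sketch, assuming a sqlstore-style async API (the insert/fetch_one method names are illustrative, not the store's exact interface):

```python
class InferenceStore:
    def __init__(self, sql_store, reference: InferenceStoreReference):
        self.sql_store = sql_store
        self.reference = reference  # carries the configured table_name

    async def store_chat_completion(self, row: dict) -> None:
        # Before: await self.sql_store.insert("chat_completions", row)
        # After: the configured table name is used instead of the literal.
        await self.sql_store.insert(self.reference.table_name, row)

    async def get_chat_completion(self, completion_id: str) -> dict | None:
        return await self.sql_store.fetch_one(
            self.reference.table_name, where={"id": completion_id}
        )
```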

A new test has been added to verify that custom table names work
correctly for storing, retrieving, and listing chat completions.
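
A hedged sketch of the kind of test this adds, reusing the classes sketched above with an in-memory fake in place of the real SQL backend; everything except the configured table name is illustrative:

```python
class FakeSqlStore:
    """In-memory stand-in for the SQL backend (illustration only)."""

    def __init__(self):
        self.tables: dict[str, dict[str, dict]] = {}

    async def insert(self, table: str, row: dict) -> None:
        self.tables.setdefault(table, {})[row["id"]] = row

    async def fetch_one(self, table: str, where: dict) -> dict | None:
        return self.tables.get(table, {}).get(where["id"])


async def test_custom_table_name_is_respected():
    reference = InferenceStoreReference(table_name="inference_store")
    store = InferenceStore(FakeSqlStore(), reference)
    await store.store_chat_completion({"id": "c1", "model": "m"})
    # The row must land in the configured table, not the old hardcoded one.
    assert "inference_store" in store.sql_store.tables
    assert "chat_completions" not in store.sql_store.tables
    assert await store.get_chat_completion("c1") == {"id": "c1", "model": "m"}
```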

## Test Plan

CI

---

This is an automatic backport of pull request #4371 done by
[Mergify](https://mergify.com).

Signed-off-by: Sébastien Han <seb@redhat.com>
Co-authored-by: Sébastien Han <seb@redhat.com>
2025-12-11 15:07:35 +01:00
__init__.py chore: enable pyupgrade fixes (#1806) 2025-05-01 14:23:50 -07:00
embedding_mixin.py fix(inference): enable routing of models with provider_data alone (backport #3928) (#4142) 2025-11-12 13:41:27 -08:00
inference_store.py fix(inference): respect table_name config in InferenceStore (backport #4371) (#4372) 2025-12-11 15:07:35 +01:00
litellm_openai_mixin.py feat(api)!: support extra_body to embeddings and vector_stores APIs (#3794) 2025-10-12 19:01:52 -07:00
model_registry.py fix: allowed_models config did not filter models (backport #4030) (#4223) 2025-11-24 11:29:53 -08:00
openai_compat.py fix: Update watsonx.ai provider to use LiteLLM mixin and list all models (#3674) 2025-10-08 07:29:43 -04:00
openai_mixin.py fix: enforce allowed_models during inference requests (backport #4197) (#4228) 2025-11-24 11:31:36 -08:00
prompt_adapter.py chore!: Safety api refactoring to use OpenAIMessageParam (#3796) 2025-10-12 08:01:00 -07:00