forked from phoenix/litellm-mirror
* fix(vertex_endpoints.py): fix vertex ai pass through endpoints
* test(test_streaming.py): skip model due to end of life
* feat(custom_logger.py): add special callback for model hitting tpm/rpm limits

Closes https://github.com/BerriAI/litellm/issues/4096
4 lines
93 B
YAML
model_list:
  - model_name: "gpt-3.5-turbo"
    litellm_params:
      model: "gpt-3.5-turbo"
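For context, each `model_list` entry in a litellm config maps a public model alias (`model_name`) to the provider parameters litellm uses for the actual call (`litellm_params`). A slightly fuller sketch under stated assumptions: the `api_key: os.environ/...` form follows litellm's documented convention for reading credentials from environment variables, and the exact extra keys your deployment needs may differ.

```yaml
model_list:
  - model_name: "gpt-3.5-turbo"
    litellm_params:
      model: "gpt-3.5-turbo"
      # assumption: credential pulled from the environment via
      # litellm's os.environ/ reference syntax
      api_key: os.environ/OPENAI_API_KEY
```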