Krrish Dholakia
a2a5884df1
fix(utils.py): allow passing in custom pricing to completion_cost as params
2024-05-16 16:24:44 -07:00
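A minimal sketch of what the change above enables: overriding the built-in model pricing when computing cost. The `custom_cost_per_token` parameter name and the `mock_response` shortcut are assumptions inferred from the commit title and common litellm usage, not confirmed by this log.
```
import litellm

# Generate a response without hitting a real API (mock_response assumed here
# purely to keep the sketch self-contained).
response = litellm.completion(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hey, how's it going?"}],
    mock_response="Good, thanks!",
)

# Pass custom pricing as params instead of relying on litellm's model cost map.
# The parameter name below is an assumption based on the commit title.
cost = litellm.completion_cost(
    completion_response=response,
    custom_cost_per_token={
        "input_cost_per_token": 0.000001,
        "output_cost_per_token": 0.000002,
    },
)
print(f"cost: ${cost:.6f}")
```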
Ishaan Jaff
57d1efec1b
fix round team spend to 2 decimals
2024-05-16 16:21:21 -07:00
Ishaan Jaff
a0c5c402ae
Merge pull request #3698 from BerriAI/litellm_ui_fix_bug_default_selected_model
...
[Fix] Polish Models Page - set max width per column, fix bug with selecting models
2024-05-16 16:13:01 -07:00
Ishaan Jaff
a65c1f8340
fix models page
2024-05-16 16:10:53 -07:00
Ishaan Jaff
907ee2acbd
Merge pull request #3699 from msabramo/msabramo/add_commented_out_set_verbose_line_to_proxy_server_config
...
Add commented `set_verbose` line to proxy_config
2024-05-16 16:08:28 -07:00
Marc Abramowitz
83c242bbb3
Add commented set_verbose line to proxy_config
...
because I've wanted to do this a couple of times and couldn't remember
the exact syntax.
2024-05-16 15:59:37 -07:00
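For context on the commented `set_verbose` line mentioned above, a minimal sketch of the SDK-level equivalent, assuming it maps to litellm's module-level debug flag:
```
import litellm

# Turn on verbose debug logging; the commented proxy_config line is the
# YAML counterpart of this flag, per the commit above.
litellm.set_verbose = True
```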
Ishaan Jaff
b170259a88
Merge pull request #3696 from BerriAI/litellm_ui_clean_up_models_page
...
[Feat] Admin UI - show model prices as Per 1M tokens
2024-05-16 15:48:37 -07:00
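The conversion behind the UI change above is simple arithmetic; a sketch with an illustrative per-token price (not taken from the real model cost map):
```
# litellm stores prices per token; the Admin UI change displays them per 1M tokens.
input_cost_per_token = 0.000005  # illustrative value
price_per_1m_tokens = input_cost_per_token * 1_000_000
print(f"${price_per_1m_tokens:.2f} / 1M input tokens")  # -> $5.00 / 1M input tokens
```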
Ishaan Jaff
97324800ec
Merge pull request #3694 from BerriAI/litellm_allow_setting_anthropic_beta
...
[Feat] Support Anthropic `tools-2024-05-16` - Set Custom Anthropic Headers
2024-05-16 15:48:26 -07:00
Krrish Dholakia
bc23365acc
fix(utils.py): update completion_cost docstring
2024-05-16 15:47:40 -07:00
Krrish Dholakia
ce4dffb7cb
fix(utils.py): fix logging level of error message
2024-05-16 15:44:08 -07:00
Krish Dholakia
acf95e978d
Merge pull request #3575 from BerriAI/litellm_end_user_obj
...
fix(proxy_server.py): check + get end-user obj even for master key calls
2024-05-16 15:34:06 -07:00
Ishaan Jaff
a1eff57ded
ui - fix bug with default selected model
2024-05-16 15:31:27 -07:00
Krrish Dholakia
b696d47442
docs(billing.md): update lago screenshot
2024-05-16 15:30:33 -07:00
Krrish Dholakia
53ddc9fdbe
docs(billing.md): improve proxy billing tutorial
2024-05-16 15:27:23 -07:00
Ishaan Jaff
1dabf424f3
ui - show model prices as /1M tokens
2024-05-16 15:10:39 -07:00
Krrish Dholakia
48714805bd
fix(proxy_server.py): fix code
2024-05-16 15:02:39 -07:00
Ishaan Jaff
aa0863dd76
docs - Setting anthropic-beta Header in Requests
2024-05-16 14:55:29 -07:00
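A minimal sketch of the feature documented here and implemented in the commits below: forwarding an `anthropic-beta` header through the OpenAI-style `extra_headers` parameter. The model name is illustrative; the beta value comes from the PR title above.
```
import litellm

response = litellm.completion(
    model="anthropic/claude-3-opus-20240229",  # illustrative model name
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    # Custom Anthropic header enabling the tools beta named in the PR title.
    extra_headers={"anthropic-beta": "tools-2024-05-16"},
)
print(response.choices[0].message.content)
```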
Ishaan Jaff
e19e475c9f
test - setting extra headers for anthropic tool use
2024-05-16 14:41:26 -07:00
Ishaan Jaff
23bcd03904
feat: Anthropic allow users to set anthropic-beta in headers
2024-05-16 14:40:31 -07:00
Ishaan Jaff
1fc9bcb184
feat use OpenAI extra_headers param
2024-05-16 14:38:17 -07:00
Ishaan Jaff
2179598d1d
Merge pull request #3693 from BerriAI/litellm_fix_gemini_responses
...
[Fix] AI Studio (Gemini API) returns an invalid index of 1 instead of 0 when `"stream": false`
2024-05-16 14:21:55 -07:00
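A minimal, hypothetical sketch of the normalization this fix describes: re-indexing choices so the first choice reports index 0, whatever the upstream Gemini response returned (field names mirror the OpenAI response shape and are assumptions, not litellm internals).
```
def normalize_choice_indices(choices: list) -> list:
    # Re-number choices sequentially from 0 so clients that read
    # response.choices[0].index keep working.
    for i, choice in enumerate(choices):
        choice["index"] = i
    return choices

# Gemini (non-streaming) reported index 1 for the only choice; after
# normalization it becomes 0.
choices = [{"index": 1, "message": {"role": "assistant", "content": "hi"}}]
assert normalize_choice_indices(choices)[0]["index"] == 0
```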
Krish Dholakia
0a775821db
Merge branch 'main' into litellm_end_user_obj
2024-05-16 14:16:09 -07:00
Krish Dholakia
92729478c3
Merge pull request #3645 from paneru-rajan/issue-3627-timeout-support
...
Timeout param: custom_llm_provider needs to be set before setting timeout
2024-05-16 14:15:34 -07:00
Krrish Dholakia
a7b9a03991
docs(billing.md): add tutorial on billing with litellm + lago to docs
2024-05-16 14:13:39 -07:00
Ishaan Jaff
a2ef089667
fix - choices index for gemini/ provider
2024-05-16 13:52:46 -07:00
Ishaan Jaff
e9358684fb
feat add gemini-1.5-flash-latest
2024-05-16 13:48:51 -07:00
Ishaan Jaff
0a816b2c45
Merge pull request #3682 from BerriAI/litellm_token_counter_endpoint
...
[Feat] `token_counter` endpoint
2024-05-16 13:39:23 -07:00
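A sketch of calling the new endpoint, which a later commit in this log mounts at `/utils/token_counter`. The proxy base URL, API key, and request body shape are assumptions for illustration only.
```
import requests

resp = requests.post(
    "http://localhost:4000/utils/token_counter",  # assumed local proxy address
    headers={"Authorization": "Bearer sk-1234"},  # assumed proxy key
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "hello"}],
    },
)
print(resp.json())  # expected to include a token count for the request
```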
Ishaan Jaff
c397114591
Merge pull request #3692 from BerriAI/ui_fix_start_end_time
...
[UI] End User Spend - Fix Timezone diff bug
2024-05-16 13:39:01 -07:00
Ishaan Jaff
0fbe12ef3d
fix - don't let tag spend logs raise an error on the usage tab
2024-05-16 13:37:38 -07:00
Ishaan Jaff
7b2e210c7b
ui - fix end user timezone diff
2024-05-16 13:29:17 -07:00
Krrish Dholakia
3acb31fa49
docs(lago.md): add lago usage-based billing quick-start to docs
2024-05-16 13:24:04 -07:00
Ishaan Jaff
4a5e6aa43c
test - token count response
2024-05-16 13:20:01 -07:00
Krish Dholakia
d43f75150a
Merge pull request #3685 from BerriAI/litellm_lago_integration
...
feat(lago.py): Enable Usage-based billing with lago
2024-05-16 13:09:48 -07:00
Ishaan Jaff
1664a06de4
Merge pull request #3690 from BerriAI/litellm_fix_cooldown_errors
...
[Fix] - include model name in cool down alerts
2024-05-16 12:55:13 -07:00
Ishaan Jaff
d16a6c03a2
feat - include model name in cool down alerts
2024-05-16 12:52:15 -07:00
Ishaan Jaff
3351c5f11d
add gpt-4o to openai vision docs
2024-05-16 12:43:40 -07:00
Ishaan Jaff
8c3657bad0
Merge pull request #3686 from msabramo/msabramo/fix-datetime-utcnow-deprecation-warnings
...
Fix `datetime.datetime.utcnow` `DeprecationWarning`
2024-05-16 12:19:06 -07:00
Krish Dholakia
ea976d8c30
Merge pull request #3663 from msabramo/msabramo/allow-non-admins-to-use-openai-routes
...
Allow non-admins to use `/engines/{model}/chat/completions`
2024-05-16 12:17:50 -07:00
Marc Abramowitz
4af6638be6
Fix datetime.datetime.utcnow DeprecationWarning
...
Eliminates these warnings when running tests:
```
$ cd litellm/tests
pytest test_key_generate_prisma.py -x -vv
...
====================================================================== warnings summary =======================================================================
...
test_key_generate_prisma.py::test_generate_and_call_with_expired_key
test_key_generate_prisma.py::test_key_with_no_permissions
/Users/abramowi/Code/OpenSource/litellm/litellm/proxy/proxy_server.py:2934: DeprecationWarning: datetime.datetime.utcnow() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.now(datetime.UTC).
expires = datetime.utcnow() + timedelta(seconds=duration_s)
...
```
2024-05-16 11:56:02 -07:00
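The replacement the warning itself suggests, as a small before/after sketch (`datetime.UTC` is Python 3.11+; `timezone.utc` works on older versions):
```
from datetime import datetime, timedelta, timezone

duration_s = 3600

# Deprecated: returns a naive datetime
# expires = datetime.utcnow() + timedelta(seconds=duration_s)

# Timezone-aware replacement, as suggested by the DeprecationWarning
expires = datetime.now(timezone.utc) + timedelta(seconds=duration_s)
```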
Marc Abramowitz
4194bafae0
Add nicer test ids when using pytest -v
...
Replace:
```
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route0] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route10] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route11] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route12] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route13] PASSED
test_key_generate_prisma.py::test_generate_and_call_with_valid_key[api_route14] PASSED
```
with:
```
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'audio_transcriptions', 'path': '/audio/transcriptions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'audio_transcriptions', 'path': '/v1/audio/transcriptions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/engines/{model}/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/openai/deployments/{model}/chat/completions'}] PASSED
litellm/tests/test_key_generate_prisma.py::test_generate_and_call_with_valid_key[{'route': 'chat_completion', 'path': '/v1/chat/completions'}] PASSED
```
2024-05-16 11:34:22 -07:00
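A minimal sketch that reproduces the readable test ids shown above, assuming the change passes `ids` to `pytest.mark.parametrize` (the route list below is a small illustrative subset):
```
import pytest

API_ROUTES = [
    {"route": "chat_completion", "path": "/chat/completions"},
    {"route": "chat_completion", "path": "/engines/{model}/chat/completions"},
    {"route": "audio_transcriptions", "path": "/v1/audio/transcriptions"},
]

# ids=[str(r) ...] turns the generic api_route0/api_route1 labels into the
# dict reprs seen in the `pytest -v` output above.
@pytest.mark.parametrize("api_route", API_ROUTES, ids=[str(r) for r in API_ROUTES])
def test_generate_and_call_with_valid_key(api_route):
    assert "route" in api_route and "path" in api_route
```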
Ishaan Jaff
22ba5fa186
feat - try using hf tokenizer
2024-05-16 10:59:29 -07:00
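A sketch of the approach the commit above experiments with: counting tokens with a Hugging Face tokenizer (the checkpoint here is arbitrary, not necessarily what litellm uses):
```
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # arbitrary example tokenizer
text = "Hello, how are you?"
print(len(tokenizer.encode(text)))  # token count for the text
```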
Krrish Dholakia
e273e66618
feat(lago.py): adding support for usage-based billing with lago
...
Closes https://github.com/BerriAI/litellm/issues/3639
2024-05-16 10:54:18 -07:00
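A hedged sketch of enabling the integration from the commit above, assuming it is wired up as a litellm success callback and that the Lago endpoint and credentials come from environment variables (the variable names below are illustrative, not confirmed by this log):
```
import os
import litellm

# Illustrative environment configuration for the Lago client (names assumed).
os.environ["LAGO_API_BASE"] = "https://api.getlago.com"
os.environ["LAGO_API_KEY"] = "your-lago-api-key"

# Register the callback so each successful completion emits a billing/usage event.
litellm.success_callback = ["lago"]
```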
Marc Abramowitz
cf71857354
Add more routes to test_generate_and_call_with_valid_key
2024-05-16 10:44:36 -07:00
Marc Abramowitz
dc52c83b88
Add more routes to test_generate_and_call_with_valid_key
2024-05-16 10:05:35 -07:00
Marc Abramowitz
c427ea3781
Add "/engines/{model}/chat/completions" to openai_routes
...
I don't think that this helps with the issue I'm seeing, but it might be nice to
have this route listed in the openai_routes list so that it's documented as a
valid chat_completion route.
2024-05-16 10:03:23 -07:00
Ishaan Jaff
c646b809a6
fix token counter endpoint
2024-05-16 10:03:21 -07:00
Ishaan Jaff
b790d65d28
fix - make token counter a /utils/token_counter endpoint
2024-05-16 10:00:34 -07:00
Ishaan Jaff
d42e5fcbd5
working token counter endpoint
2024-05-16 09:58:22 -07:00
Marc Abramowitz
d5b2e8e7e8
Make test_generate_and_call_with_valid_key parametrized
...
This allows us to test the same code with different routes.
For example, it lets us test the `/engines/{model}/chat/completions`
route, which https://github.com/BerriAI/litellm/pull/3663 fixes.
2024-05-16 09:54:10 -07:00
Ishaan Jaff
e50284bc72
dev - token_counter endpoint
2024-05-16 09:47:07 -07:00