Description
What happened?
We issue many requests daily through both the proxy and the pass-through endpoints while testing an application we're building. After a few days LiteLLM's RAM usage becomes significant, and it never seems to release the memory until a restart. Eventually our monitoring software triggers a RAM usage threshold alert and I go in and restart the container.
Here's a graph over a week - you can clearly see where I restarted it ;-)
We're running in Docker on AlmaLinux 9, and I generally update whenever a new stable release is pushed out, but the same thing seems to happen every time.
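In case it helps anyone hitting the same thing in the meantime: capping the container's memory and relying on the restart policy at least keeps the host-level alert from firing. A rough compose sketch, where the service name, image tag, and limits are illustrative rather than our exact config:

services:
  litellm:
    image: ghcr.io/berriai/litellm:main-stable   # illustrative tag
    restart: unless-stopped
    # When the cgroup limit is hit, the OOM killer stops the container
    # and the restart policy brings it back, instead of RAM creeping up
    # on the host until a manual restart. Values here are illustrative.
    mem_limit: 6g
    memswap_limit: 6g

This is only a workaround for the leak, not a fix.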
Relevant log output
Nothing especially telling in the logs. I see a number of these repeating all the time:
litellm-1 | Traceback (most recent call last):
litellm-1 |   File "/usr/lib/python3.13/site-packages/litellm/proxy/hooks/proxy_track_cost_callback.py", line 178, in _PROXY_track_cost_callback
litellm-1 |     raise Exception(
litellm-1 |         "User API key and team id and user id missing from custom callback."
litellm-1 |     )
litellm-1 | Exception: User API key and team id and user id missing from custom callback.
litellm-1 | 10:34:50 - LiteLLM Proxy:ERROR: proxy_track_cost_callback.py:215 - Error in tracking cost callback - User API key and team id and user id missing from custom callback.
Are you a ML Ops Team?
No
What LiteLLM version are you on ?
v1.73.6-stable.patch.1
Twitter / LinkedIn details
No response