NH-124861: redo token calculation #238
base: main
Conversation
Pull request overview
This PR refactors the token bucket implementation to use a time-based calculation approach instead of a timer-thread model. The main change eliminates the background thread that periodically replenished tokens, replacing it with on-demand calculations based on elapsed time since last usage. This improves thread safety and simplifies the implementation.
Key Changes:
- Replaced timer-based token replenishment with time-elapsed calculations
- Changed rate parameter from "tokens per interval" to "tokens per second"
- Added proper thread safety with mutex locks for all token bucket operations
- Deprecated the `sample_rate` and `sampling_rate` configuration options
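The time-elapsed approach described above can be sketched as follows. This is a minimal illustration under assumed names (`TokenBucket`, `#consume`), not the actual solarwinds_apm implementation:

```ruby
# Hypothetical sketch of a time-based token bucket: tokens are replenished
# on demand from elapsed time instead of by a background timer thread.
class TokenBucket
  def initialize(capacity:, rate:) # rate is tokens per second
    @capacity = capacity.to_f
    @rate = rate.to_f
    @tokens = @capacity
    @last_used = Time.now.to_f
    @lock = Mutex.new
  end

  # Replenish based on elapsed time (capped at capacity), then try to
  # take `amount` tokens. Returns true if the request is admitted.
  def consume(amount = 1)
    @lock.synchronize do
      now = Time.now.to_f
      @tokens = [@tokens + (now - @last_used) * @rate, @capacity].min
      @last_used = now
      return false if @tokens < amount

      @tokens -= amount
      true
    end
  end
end
```

Because every read and write of `@tokens` happens inside the mutex, concurrent `consume` calls cannot interleave a stale replenishment calculation, which is the thread-safety property the PR description claims.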
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| lib/solarwinds_apm/sampling/token_bucket.rb | Complete refactoring to use time-based token calculation, removed timer thread, added mutex locking for thread safety |
| lib/solarwinds_apm/sampling/oboe_sampler.rb | Removed timer start calls and reorganized mutex synchronization in update_settings and get_settings methods |
| lib/solarwinds_apm/config.rb | Deprecated sample_rate and sampling_rate configuration options with simplified warning messages |
| test/sampling/token_bucket_test.rb | Updated tests to reflect new token bucket behavior with time-based replenishment and added thread safety tests |
| test/sampling/oboe_sampler_test.rb | Changed bucket initialization from TokenBucket objects to hash-based settings |
| test/Dockerfile | Updated Ruby version from 3.1.0 to 3.2.6 and removed SWIG installation |
now = Time.now.to_f
elapsed = now - @last_used
@last_used = now
@tokens += elapsed * @rate
Copilot (AI), Jan 5, 2026
The token calculation happens on every consume call and on every tokens getter call. For high-frequency operations, this could lead to redundant calculations. Consider memoizing the result within a short time window or only recalculating when necessary.
Copilot suggests the following to avoid the repeated calculation:
# Skip recalculation if insufficient time has elapsed
RECALCULATION_THRESHOLD = 0.001 # 1ms

elapsed = now - @last_update_time
return if elapsed < RECALCULATION_THRESHOLD
The following JSON is a settings payload from production:
{
"value": 1000000,
"flags": "SAMPLE_START,SAMPLE_THROUGH_ALWAYS,SAMPLE_BUCKET_ENABLED,TRIGGER_TRACE",
"timestamp": 1767640328,
"ttl": 120,
"arguments": {
"BucketCapacity": 2,
"BucketRate": 1,
"TriggerRelaxedBucketCapacity": 20,
"TriggerRelaxedBucketRate": 1,
"TriggerStrictBucketCapacity": 6,
"TriggerStrictBucketRate": 0.1,
"SignatureKey": "<key>"
}
}
(BucketRate, TriggerRelaxedBucketRate, TriggerStrictBucketRate) are (1, 1, 0.1) tokens per second. If we skip the calculation within a 1 ms window, the accuracy of the rate calculation degrades from (1/1000, 1/1000, 0.1/1000; we are using Linux time in ms) to (1/500, 1/500, 0.1/500).
I think the token calculation shouldn't cost much, since it is just arithmetic operations and it only runs for local root spans. I don't think it is worth sacrificing the accuracy of the token calculation for this.
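For concreteness, the production rates above work out to very small per-millisecond increments, which is why fine timing resolution matters here. This snippet is just arithmetic for illustration, not code from the PR:

```ruby
# Tokens accrued per millisecond at the production rates quoted above.
rates = {
  bucket: 1.0,          # BucketRate, tokens/second
  trigger_relaxed: 1.0, # TriggerRelaxedBucketRate
  trigger_strict: 0.1   # TriggerStrictBucketRate
}

# At 0.1 tokens/s, one millisecond of elapsed time is worth only
# 0.0001 tokens, so coarsening update granularity directly coarsens
# how precisely the bucket tracks its configured rate.
per_ms = rates.transform_values { |r| r / 1000.0 }
```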
@cheempz @raphael-theriault-swi Do you have other opinions? I recall @raphael-theriault-swi suggested using nanoseconds to calculate the tokens in a standup.
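The nanosecond suggestion could look like the sketch below. Using `Process.clock_gettime` with `CLOCK_MONOTONIC` has the added benefit of being immune to wall-clock jumps that `Time.now` is exposed to; the variable names are illustrative only.

```ruby
# Elapsed time measured with a monotonic clock in integer nanoseconds.
t0 = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)
sleep 0.05 # stand-in for work happening between token-bucket calls
t1 = Process.clock_gettime(Process::CLOCK_MONOTONIC, :nanosecond)

# Convert to seconds only at the point of the rate multiplication.
elapsed_seconds = (t1 - t0) / 1_000_000_000.0
tokens_gained = elapsed_seconds * 0.1 # TriggerStrictBucketRate from the settings above
```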