Fix supervisor: report vault execution so stuck-scan order isn't fixed#187
Conversation
added autobalancer callback to find potentially stuck vaults
Force-pushed from a6cce3f to b2af175
Failing tests are due to rounding issues and should be fixed with:
Just for info:
        return
    }
    var i = 0
    while i < self.stuckScanOrder.length {
You can use firstIndex(of: T): Int? to find the element here
https://cadence-lang.org/docs/language/values-and-types/arrays#array-fields-and-functions
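A minimal sketch of that suggestion, assuming `stuckScanOrder` is a `[UInt64]` field as shown in the diff:

```cadence
// Replace the manual index scan with the built-in array lookup.
// firstIndex(of:) returns Int?, so optional binding covers the
// "not found" case without a sentinel index.
if let index = self.stuckScanOrder.firstIndex(of: yieldVaultID) {
    self.stuckScanOrder.remove(at: index)
}
```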
    let pending = self.pendingQueue.remove(key: yieldVaultID)
    var i = 0
    while i < self.stuckScanOrder.length {
        if self.stuckScanOrder[i] == yieldVaultID {
    access(all) view fun getPendingYieldVaultIDsPaginated(page: Int, size: UInt?): [UInt64] {
        let pageSize = size ?? Int(self.MAX_BATCH_SIZE)
Suggested change:
- access(all) view fun getPendingYieldVaultIDsPaginated(page: Int, size: UInt?): [UInt64] {
-     let pageSize = size ?? Int(self.MAX_BATCH_SIZE)
+ access(all) view fun getPendingYieldVaultIDsPaginated(page: Int, size: UInt): [UInt64] {
+     let pageSize = size == 0 ? Int(self.MAX_BATCH_SIZE) : Int(size)
main no longer replaces 0 with nil, so the behaviour of defaulting to MAX_BATCH_SIZE on 0 should either be moved here (updating its other occurrence and the doc comment above as well) or added back to main.
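If the default moves into the contract, the getter might look like this sketch. `pendingQueue` and `MAX_BATCH_SIZE` are the fields referenced in this PR; the pagination body itself is assumed, not taken from the diff:

```cadence
access(all) view fun getPendingYieldVaultIDsPaginated(page: Int, size: UInt): [UInt64] {
    // 0 means "use the contract default" instead of nil.
    let pageSize = size == 0 ? Int(self.MAX_BATCH_SIZE) : Int(size)
    let ids = self.pendingQueue.keys
    let start = page * pageSize
    if start >= ids.length {
        return []
    }
    let end = start + pageSize > ids.length ? ids.length : start + pageSize
    return ids.slice(from: start, upTo: end)
}
```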
    let pageSize: Int? = size > 0 ? size : nil
    return FlowYieldVaultsSchedulerRegistry.getPendingYieldVaultIDsPaginated(page: page, size: pageSize)

    access(all) fun main(page: Int, size: UInt): [UInt64] {
        return FlowYieldVaultsSchedulerRegistry.getPendingYieldVaultIDsPaginated(page: page, size: size)
Either update getPendingYieldVaultIDsPaginated or add back the default check here
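A sketch of the second option, re-adding the 0-means-default check in the script; this assumes the registry keeps its `size: UInt?` parameter as shown above:

```cadence
import "FlowYieldVaultsSchedulerRegistry"

// Script-side default: a size of 0 is translated back to nil so the
// registry falls through to MAX_BATCH_SIZE via its `??` default.
access(all) fun main(page: Int, size: UInt): [UInt64] {
    let pageSize: UInt? = size > 0 ? size : nil
    return FlowYieldVaultsSchedulerRegistry.getPendingYieldVaultIDsPaginated(page: page, size: pageSize)
}
```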
Closes: #177
Description
The supervisor “check the first N vaults” logic was fixed: vault executions are now reported to the registry, which keeps an ordered list of “least recently executed” vaults. The supervisor then scans only those first N (e.g. 5) and recovers the ones that are actually stuck, instead of always the same fixed set.
What was implemented
Execution callback
Each AutoBalancer now has an execution callback that runs after a scheduled rebalance. The callback calls the registry with that vault’s id so the registry can update its internal order (remove id from the list, append to the end).
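The callback side can be sketched roughly as below; the resource and interface names follow this description, but `reportExecution` is an assumed registry entry point, not confirmed by the diff:

```cadence
// Shared per-account callback resource; every AutoBalancer holds a
// capability to one instance of this.
access(all) resource RegistryReportCallback: DeFiActions.AutoBalancerExecutionCallback {
    access(all) fun onExecuted(balancerUUID: UInt64) {
        // Report the vault that just rebalanced so the registry can
        // move its id to the back of the stuck-scan order.
        FlowYieldVaultsSchedulerRegistry.reportExecution(yieldVaultID: balancerUUID)
    }
}
```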
Shared callback resource
In `FlowYieldVaultsAutoBalancers`, a single `RegistryReportCallback` resource per account implements `DeFiActions.AutoBalancerExecutionCallback`. Its `onExecuted(balancerUUID)` calls the registry so the vault that just ran is reported by id. Every new AutoBalancer gets a capability to this shared callback and passes it to `setExecutionCallback(cap)`.
Context (from discussion)
The supervisor was limited to processing a small batch (e.g. first 5 vaults) per run. The agreed short-term approach was to order the vault list by “last executed” so the supervisor always checks the oldest / least recently executed vaults first (most likely stuck).
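The agreed ordering can be sketched registry-side like this (`reportExecution` is an assumed name; `stuckScanOrder` is the field shown in the diffs above):

```cadence
// Move the reported id to the end of stuckScanOrder, so the front of
// the list is always the least recently executed (most likely stuck).
access(contract) fun reportExecution(yieldVaultID: UInt64) {
    if let index = self.stuckScanOrder.firstIndex(of: yieldVaultID) {
        self.stuckScanOrder.remove(at: index)
    }
    self.stuckScanOrder.append(yieldVaultID)
}
```

The supervisor then only needs to inspect the first N entries of `stuckScanOrder` each run, e.g. `self.stuckScanOrder.slice(from: 0, upTo: 5)` for a batch of 5.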