Conversation
@microsoft-github-policy-service agree company="Microsoft"
    def reset(self):
        """Reset monitor state for a new problem without reloading the model."""
        self.entropy = []
        self.ema_means = []
        self.ema_vars = []
        self.exit_point = None
        gc.collect()
        try:
            torch.cuda.empty_cache()
        except Exception as e:
            print("Error while emptying cuda cache: ", e)
This function is not being used anymore, right? If not, we can remove it.
It can be useful in some cases. For instance, consider the case where a user evaluates n samples. Monitors like EAT and DEER can be initialized just once instead of separately for each of the n samples, and reset after each iteration. This avoids the overhead of repeated initialization for these monitors.
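The initialize-once, reset-per-sample pattern described above can be sketched roughly as follows. `EntropyMonitor` here is a hypothetical stand-in for a monitor like EAT or DEER (the real classes, their setup cost, and their update logic are not shown in this thread), and the torch cache-clearing step is omitted so the sketch has no GPU dependency:

```python
import gc

class EntropyMonitor:
    """Hypothetical stand-in for a monitor such as EAT or DEER."""

    def __init__(self):
        # Expensive one-time setup (e.g. loading a model) would happen here.
        self.entropy = []
        self.exit_point = None

    def update(self, value):
        # Record one per-step statistic for the current problem.
        self.entropy.append(value)

    def reset(self):
        """Clear per-problem state without repeating the expensive setup."""
        self.entropy = []
        self.exit_point = None
        gc.collect()

# Evaluate n samples with a single monitor instance.
samples = [[0.9, 0.4], [0.7, 0.2, 0.1]]
monitor = EntropyMonitor()  # initialized once
for sample in samples:
    for v in sample:
        monitor.update(v)
    # ... run early-exit logic against monitor.entropy here ...
    monitor.reset()  # reused for the next sample instead of re-initializing
```

The expensive `__init__` runs once, while `reset()` only clears the cheap per-problem state between samples.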
    def reset(self):
        """Reset monitor state for a new problem."""
        self.confidence = []
        gc.collect()
        try:
            torch.cuda.empty_cache()
        except Exception as e:
            print("Error while emptying cuda cache: ", e)
Addressed in an earlier comment
What is the latency difference across all the monitors after adding the lock?
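One way to answer the latency question is a small microbenchmark comparing a monitor-style update with and without a `threading.Lock`. This is only a sketch of the measurement approach, not the PR's actual benchmark; the update functions here are trivial placeholders, so real numbers would depend on the monitors' actual work per step:

```python
import threading
import time

lock = threading.Lock()
N = 100_000  # number of simulated monitor updates

def update_unlocked(state):
    # Placeholder for a monitor update without synchronization.
    state.append(1)

def update_locked(state):
    # Same update, but serialized through the shared lock.
    with lock:
        state.append(1)

def time_updates(fn):
    state = []
    t0 = time.perf_counter()
    for _ in range(N):
        fn(state)
    return time.perf_counter() - t0

t_plain = time_updates(update_unlocked)
t_lock = time_updates(update_locked)
print(f"unlocked: {t_plain:.4f}s  locked: {t_lock:.4f}s  "
      f"overhead/update: {(t_lock - t_plain) / N * 1e9:.1f} ns")
```

With no contention, an uncontended `Lock` acquire/release typically adds only tens of nanoseconds per update; contention across threads is what would change the picture.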