When running DASP on a convolutional neural network (MNIST dataset), I noticed something strange: if I use an `ImagePlayerIterator` with a large `window_shape` (about 1/4 of the image size), then `y1` and `y2` come back full of NaN values.
```python
y1, y2 = self.dasp_model.predict(inputs, batch_size=batch_size)
y1 = y1.reshape(len(ks), x.shape[0], -1, 2)
y2 = y2.reshape(len(ks), x.shape[0], -1, 2)
```
I checked whether a small stabilizing constant (e.g. a delta of 0.0001) might be missing from some denominator or divisor, but I could not find a problem in any of the LPDN or DASP layers.
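For reference, the kind of guard I was looking for is sketched below. This is not code from the DASP repository, just a minimal NumPy illustration of how a zero variance (which a large, constant window region could plausibly produce) turns a division into NaN, and how an epsilon in the divisor prevents it; the `mean`/`variance` names are hypothetical:

```python
import numpy as np

def safe_normalize(mean, variance, eps=1e-4):
    # Without eps, a zero variance makes sqrt(variance) == 0
    # and the division yields inf/NaN; eps keeps it finite.
    return mean / np.sqrt(variance + eps)

mean = np.array([1.0, 2.0, 3.0])
variance = np.array([0.0, 1.0, 4.0])  # first entry would blow up unguarded

out = safe_normalize(mean, variance)
assert np.all(np.isfinite(out))
```

Since every place I found in the layer code already seems to apply a guard like this, I suspect the NaNs come from somewhere else.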
Do you have any idea what might be causing this? Thanks in advance.