FBP testing function with different variations of parameters. #1712
base: master
Conversation
leftaroundabout left a comment
Looks pretty good overall. A couple of details could perhaps be improved.
    ['par2d', 'par3d', 'cone2d', 'cone3d']
)

projectors = []
There should be some comment here regarding what all the numbers mean and where they come from.
As I understand it, these are measured values.
And I don't see any reason to store these numbers to 4-digit accuracy; it's mostly the order of magnitude of the errors that is interesting.
Yes, I agree. I'll add explanations.
Basically, all L2 values under 10 still correspond to good reconstructions, and as the value approaches 10 they get more blurred. For helical geometries, the reconstructions were just bad, with a lot of streaking artifacts.
We can change those to, for example, 1- or 2-digit accuracy.
raise ValueError('geom not valid')

# Ray transform
return (odl.applications.tomo.RayTransform(reco_space, geom, impl=astra_impl, use_cache=False), filter, L2_value, max_dist)
Can we make this a dict, so it's clear what each entry represents?
Yes, we can do that.
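For illustration, a minimal sketch of what the dict-based return could look like, reusing the names from the tuple in the quoted code above (the key names are hypothetical, just to show the idea):

    # Hypothetical dict-based return; same values as the tuple above,
    # but each entry is labelled by what it represents.
    return {
        'ray_trafo': odl.applications.tomo.RayTransform(
            reco_space, geom, impl=astra_impl, use_cache=False),
        'filter': filter,
        'L2_value': L2_value,
        'max_dist': max_dist,
    }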
backproj = fbp(proj)

L2Norm = odl.functionals.default_functionals.L2Norm(fbp.range)
assert L2Norm(phantom - backproj) == pytest.approx(float(projector[2]), rel=float(projector[2]) * rtol)
I'd rather make this a < check, as the goal is mostly to put an upper bound on the error. But that's debatable.
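For reference, a sketch of how the upper-bound variant could look, keeping the names from the quoted assert (projector[2] is the stored L2 threshold and rtol the relative tolerance):

    # Hypothetical upper-bound check: only require that the reconstruction
    # error does not exceed the stored threshold (plus the tolerance).
    error = L2Norm(phantom - backproj)
    assert error < float(projector[2]) * (1 + rtol)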
'cone3d astra_cuda pytorch cuda:0 Cosine 8.0977 0.2606',
'cone3d astra_cuda pytorch cuda:0 Hamming 9.5953 0.3016',
'cone3d astra_cuda pytorch cuda:0 Hann 10.0030 0.3154',
'helical astra_cuda pytorch cuda:0 Ram-Lak 42.6803 0.7171',
Why is helical only tested with pytorch?
Here I have followed the ray transform testing parameters and added the filters and threshold values.
It can of course be tested with different combinations; we can do that.
Right... that should probably be changed in ray_trafo_test.py as well.
How long do the tests take to run? It might be good to put most of them, especially the 3D ones, in the "slow" category.
A couple of minutes; I haven't timed it. The 3D ones certainly slow down the process a lot.
Good! Thank you for designing these tests :) Can you also include the fix to the Ram-Lak filter implementation in this PR?
Oh yes, that is fixed on my local branch, but I did not think to push it. I will do that. I will have a chat tomorrow with either of you to ask about the "slow" category that Justus mentioned. I can also push the ray transform test with the helical geometry with numpy.
…rection to filtered_back_projection, added helical numpy to ray_trafo_test
I also modified ray_transform_slow_test, which was erroring for pytorch+skimage, and added a skiptest.
@leftaroundabout could you please review the changes and merge if everything fits? Also, should we rebase on master before merging? Antti's new tests are much more accurate than the existing ones.
filt = filter_type(norm_freq)
elif filter_type == 'ram-lak':
    pass
    filt = norm_freq
This line was also changed in #1713.
A problem with filt = norm_freq is that it will cause norm_freq to be modified in the subsequent filt *= indicator statement. While this does not currently seem to cause any problems, it would be quite confusing and might cause hard-to-debug issues in the future.
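A small self-contained illustration of that aliasing behaviour with NumPy (the array values and the indicator are made up for the example):

    import numpy as np

    norm_freq = np.linspace(0.0, 1.0, 5)
    indicator = norm_freq <= 0.8

    filt = norm_freq      # filt and norm_freq are the same array object
    filt *= indicator     # in-place multiply silently changes norm_freq too

    # Safer alternative: operate on a copy so norm_freq stays untouched
    norm_freq = np.linspace(0.0, 1.0, 5)
    filt = norm_freq.copy()
    filt *= indicator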
I suggest you rebase on top of master, which now includes the safer alternative from #1713.
git fetch origin
git rebase -X ours origin/master

Then push to update the pull request (might require -f force-pushing).
This script runs tests for Filtered Back-Projection with respect to different parameters, such as geometries, backends, and filters.
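As an illustration of how such parameter strings might be consumed, a hedged sketch (the fixture and variable names are assumptions, not the actual ones in the PR; the case string is taken from the quoted list above):

    import pytest

    # One of the parameter strings quoted earlier; each field presumably
    # encodes geometry, ASTRA impl, array backend, device, filter type,
    # expected L2 error, and max_dist.
    CASES = ['cone3d astra_cuda pytorch cuda:0 Hann 10.0030 0.3154']

    @pytest.mark.parametrize('case', CASES)
    def test_fbp_reconstruction(case):
        geom, impl, backend, device, filt, l2_value, max_dist = case.split()
        # ... build the projector and run the FBP reconstruction as in the PR ...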