Added implementation to calculate Gram and Row Gram matrices #7378
Motivation and Context
Gram and row Gram matrix computations (i.e., A.T @ A and A @ A.T) are fairly common in linear algebra workloads (e.g., least squares, linear-independence tests, ML kernel methods).

If you execute A.T().Matmul(A), at least one of the two operands of the matmul will be non-contiguous, so a copy is performed before the matmul runs, which can be a noticeable performance loss.

Gram() and RowGram() are implemented with a single matrix operand, with the transposition handled inside the gemm call via its transpose flags. As a result, if A is contiguous, no copy is performed, which is not true for A.T().Matmul(A).
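The contiguity issue can be illustrated with NumPy (a hypothetical stand-in here, not the Open3D C++ API): transposing yields a strided view, which naive kernels must copy before a matmul, whereas a gemm that accepts a transpose flag can compute both products directly from the contiguous A.

```python
import numpy as np

# A is a row-major (C-contiguous) 4x3 matrix.
A = np.arange(12, dtype=np.float64).reshape(4, 3)

assert A.flags["C_CONTIGUOUS"]        # original storage is contiguous
assert not A.T.flags["C_CONTIGUOUS"]  # transpose is only a strided view

# Gram matrix A^T A (3x3) and row Gram matrix A A^T (4x4). A gemm taking
# transpose flags can compute both from the single contiguous buffer of A
# without materializing A.T.
gram = A.T @ A
row_gram = A @ A.T
print(gram.shape, row_gram.shape)  # (3, 3) (4, 4)
```

NumPy itself dispatches this to BLAS with the appropriate transpose arguments, which is the same copy-avoiding idea the PR applies inside Open3D's gemm wrappers.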
Checklist:
- I have run `python util/check_style.py --apply` to apply Open3D code style to my code.
- Documentation has been updated accordingly.
- Test results (e.g. screenshots or numbers) are included here.
Description
Added Gram() and RowGram() functions to Tensor. They are intended for tensors of rank <= 2, mirroring how T() is implemented.
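The intended semantics can be sketched as a NumPy reference (the names `gram`/`row_gram` below only mirror the PR's method names; this is not the Open3D binding). Both results are symmetric positive semi-definite by construction.

```python
import numpy as np

def gram(a: np.ndarray) -> np.ndarray:
    """Return A.T @ A, taking a single 2-D operand."""
    return a.T @ a

def row_gram(a: np.ndarray) -> np.ndarray:
    """Return A @ A.T, taking a single 2-D operand."""
    return a @ a.T

A = np.random.default_rng(0).standard_normal((5, 3))
G = gram(A)      # (3, 3)
R = row_gram(A)  # (5, 5)

# Sanity checks: both Gram matrices are symmetric.
assert np.allclose(G, G.T)
assert np.allclose(R, R.T)
```

A single-operand API like this also lets the backend pick the faster symmetric-rank-update path (e.g. a syrk-style kernel) if desired, since it knows both operands are the same matrix.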