Conversation

@termi-official (Member) commented May 23, 2024

This is commonly used to compute the push-forward of surface area elements from the reference to the current configuration in continuum mechanics (Nanson's formula).

using Tensors, BenchmarkTools

F = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0 ,0.25,0.0,1.2))

# Master version
cof2(F) = det(F)*transpose(inv(F))
@btime cof2($F); # 16.583 ns
@btime gradient(cof2, $F); # 233.892 ns
@btime gradient(cof2, $F, :all); # 236.603 ns

# This PR
@btime cof($F); # 5.890 ns
@btime gradient(cof, $F); # 78.131 ns
@btime gradient(cof, $F, :all); # 92.096 ns
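
For context, Nanson's formula reads n da = cof(F) ⋅ N dA, i.e. the cofactor maps a reference area vector N dA to the current-configuration area vector n da. A minimal usage sketch with this PR's cof (the reference normal N below is made up purely for illustration):

N = Vec{3}((0.0, 0.0, 1.0))  # hypothetical reference unit normal
dA = 1.0                     # reference area element
da_n = cof(F) ⋅ N * dA       # current area vector n*da, per Nanson's formula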

@termi-official (Member Author) commented:

The nightly failure is unrelated to this PR; the behavior of the built-in eigenvalue reconstruction seems to have changed.

@fredrikekre (Member) commented:

Can cof not be re-used in inv? Also, can you add a note in the changelog for 1.17.0 (and update the version number in the project file).

src/Tensors.jl Outdated
using StaticArrays
# re-exports from LinearAlgebra
export ⋅, ×, dot, diagm, tr, det, norm, eigvals, eigvecs, eigen
export ⋅, ×, dot, diagm, tr, det, cof, norm, eigvals, eigvecs, eigen
Review comment (Member):

I don't think this comes from LinearAlgebra, so maybe list it separately. Is cof a standard name for this operation, btw?

Review comment (Member Author):

Oh, right. In the continuum mechanics books I have read, this operation is usually abbreviated cof (Holzapfel, Wriggers). The German Wikipedia also uses it.

https://de.wikipedia.org/wiki/Minor_(Lineare_Algebra)#Eigenschaften

But it is possible that this abbreviation is a German thing. If you want a different name, we can use that instead.

Review comment (Member):

I don't mind cof, just wondering if it was called that, or something else, in other software. I can't find anything from a quick search though.

Review comment (Member Author):

My only worry here is that some standard library might include cofactors in the future. :D

@termi-official (Member Author) commented:

> Can cof not be re-used in inv? Also, can you add a note in the changelog for 1.17.0 (and update the version number in the project file).

We could, but I am not sure about the performance implications. I will add a benchmark.

@termi-official (Member Author) commented:

If I understood you correctly, you propose something along these lines:

@generated function inv_new(t::Tensor{2, dim}) where {dim}
    return quote
        $(Expr(:meta, :inline))
        dinv = 1 / det(t)
        @inbounds return transpose(cof(t)) * dinv
    end
end

Seems like this is slightly faster on my machine in 3D.

julia> F3 = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0 ,0.25,0.0,1.2))
3×3 Tensor{2, 3, Float64, 9}:
  1.1  -0.1  0.25
 -0.1   0.9  0.0
  0.1   0.0  1.2

julia> F2 = Tensor{2,2,Float64}((1.1,-0.1, -0.1,0.9))
2×2 Tensor{2, 2, Float64, 4}:
  1.1  -0.1
 -0.1   0.9

julia> @btime Tensors.inv_new($F2)
  6.240 ns (0 allocations: 0 bytes)
2×2 Tensor{2, 2, Float64, 4}:
 0.918367  0.102041
 0.102041  1.12245

julia> @btime inv_new($F3)
  11.331 ns (0 allocations: 0 bytes)
3×3 Tensor{2, 3, Float64, 9}:
  0.936281    0.104031    -0.195059
  0.104031    1.12267     -0.0216732
 -0.0780234  -0.00866927   0.849588

julia> @btime Tensors.inv($F2)
  6.240 ns (0 allocations: 0 bytes)
2×2 Tensor{2, 2, Float64, 4}:
 0.918367  0.102041
 0.102041  1.12245

julia> @btime Tensors.inv($F3)
  11.612 ns (0 allocations: 0 bytes)
3×3 Tensor{2, 3, Float64, 9}:
  0.936281    0.104031    -0.195059
  0.104031    1.12267     -0.0216732
 -0.0780234  -0.00866927   0.849588

Reproducer

using Tensors, BenchmarkTools

F2 = Tensor{2,2,Float64}((1.1,-0.1, -0.1,0.9))
F3 = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0 ,0.25,0.0,1.2))

@btime Tensors.inv_new($F2)
@btime Tensors.inv_new($F3)
@btime Tensors.inv($F2)
@btime Tensors.inv($F3)

Should I include the change?

@KnutAM (Member) commented May 23, 2024

I'm getting

julia> @btime inv_new($F2);
  2.100 ns (0 allocations: 0 bytes)

julia> @btime inv_new($F3);
  6.800 ns (0 allocations: 0 bytes)

julia> @btime Tensors.inv($F2);
  1.900 ns (0 allocations: 0 bytes)

julia> @btime Tensors.inv($F3);
  4.300 ns (0 allocations: 0 bytes)

julia> @btime myinv($F2);
  1.900 ns (0 allocations: 0 bytes)

julia> @btime myinv($F3);
  7.307 ns (0 allocations: 0 bytes)

with

@generated function inv_new(t::Tensor{2, dim}) where {dim}
    return quote
        $(Expr(:meta, :inline))
        dinv = 1 / det(t)
        @inbounds return transpose(cof(t)) * dinv
    end
end

myinv(F) = cof(F)'/det(F);

@termi-official (Member Author) commented:

I guess the optimizer has some trouble with the transpose statement here. It is also questionable how reliable the measurements are in this regime.
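
One way to check would be to compare the emitted IR of the two variants, e.g.:

using InteractiveUtils  # provides @code_llvm outside the REPL
@code_llvm debuginfo=:none inv_new(F3)
@code_llvm debuginfo=:none myinv(F3)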

@KnutAM (Member) commented May 23, 2024

Could do

cof(t::SecondOrderTensor) = _cof(t, Val(false))
inv(t::SecondOrderTensor) = _cof(t, Val(true)) / det(t)
@generated function _cof(t::Tensor{2, dim, T}, ::Val{Transpose}) where {dim, T, Transpose}
    Tt = get_base(t)
    idx(i,j) = Transpose ? compute_index(Tt, j, i) : compute_index(Tt, i, j)
    ...

but not sure if that is worth the added complexity compared to just duplicating the code...
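
For reference, a minimal non-generated sketch of that Val-dispatch idea, written out for dim == 2 only (the name _cof and the column-major unpacking follow the snippet above; illustrative, not benchmarked):

# Val(true) builds the transposed cofactor directly, so inv would need
# no runtime transpose; the branch constant-folds on the type parameter.
@inline function _cof(t::Tensor{2, 2}, ::Val{Transpose}) where {Transpose}
    t11, t21, t12, t22 = Tensors.get_data(t)
    return Transpose ? Tensor{2, 2}((t22, -t12, -t21, t11)) :
                       Tensor{2, 2}((t22, -t21, -t12, t11))
end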

Comment on lines +100 to +131
@generated function cof(t::Tensor{2, dim, T}) where {dim, T}
    Tt = get_base(t)
    idx(i,j) = compute_index(Tt, i, j)
    if dim == 1
        ex = :($Tt((T(1.0), )))
    elseif dim == 2
        ex = quote
            v = get_data(t)
            $Tt((v[$(idx(2,2))], -v[$(idx(1,2))],
                 -v[$(idx(2,1))], v[$(idx(1,1))]))
        end
    else # dim == 3
        ex = quote
            v = get_data(t)
            $Tt(((v[$(idx(2,2))]*v[$(idx(3,3))] - v[$(idx(2,3))]*v[$(idx(3,2))]), #11
                 -(v[$(idx(1,2))]*v[$(idx(3,3))] - v[$(idx(1,3))]*v[$(idx(3,2))]), #21
                  (v[$(idx(1,2))]*v[$(idx(2,3))] - v[$(idx(1,3))]*v[$(idx(2,2))]), #31

                 -(v[$(idx(2,1))]*v[$(idx(3,3))] - v[$(idx(2,3))]*v[$(idx(3,1))]), #12
                  (v[$(idx(1,1))]*v[$(idx(3,3))] - v[$(idx(1,3))]*v[$(idx(3,1))]), #22
                 -(v[$(idx(1,1))]*v[$(idx(2,3))] - v[$(idx(1,3))]*v[$(idx(2,1))]), #32

                  (v[$(idx(2,1))]*v[$(idx(3,2))] - v[$(idx(2,2))]*v[$(idx(3,1))]), #13
                 -(v[$(idx(1,1))]*v[$(idx(3,2))] - v[$(idx(1,2))]*v[$(idx(3,1))]), #23
                  (v[$(idx(1,1))]*v[$(idx(2,2))] - v[$(idx(1,2))]*v[$(idx(2,1))])))#33
        end
    end
    return quote
        $(Expr(:meta, :inline))
        @inbounds return $ex
    end
end
Review comment (Member):

Why is a generated function required? This seems easier (and equivalent for symmetric tensors), and I get similar or slightly better performance with it.

Suggested change
@generated function cof(t::Tensor{2, dim, T}) where {dim, T}
    Tt = get_base(t)
    idx(i,j) = compute_index(Tt, i, j)
    if dim == 1
        ex = :($Tt((T(1.0), )))
    elseif dim == 2
        ex = quote
            v = get_data(t)
            $Tt((v[$(idx(2,2))], -v[$(idx(1,2))],
                 -v[$(idx(2,1))], v[$(idx(1,1))]))
        end
    else # dim == 3
        ex = quote
            v = get_data(t)
            $Tt(((v[$(idx(2,2))]*v[$(idx(3,3))] - v[$(idx(2,3))]*v[$(idx(3,2))]), #11
                 -(v[$(idx(1,2))]*v[$(idx(3,3))] - v[$(idx(1,3))]*v[$(idx(3,2))]), #21
                  (v[$(idx(1,2))]*v[$(idx(2,3))] - v[$(idx(1,3))]*v[$(idx(2,2))]), #31
                 -(v[$(idx(2,1))]*v[$(idx(3,3))] - v[$(idx(2,3))]*v[$(idx(3,1))]), #12
                  (v[$(idx(1,1))]*v[$(idx(3,3))] - v[$(idx(1,3))]*v[$(idx(3,1))]), #22
                 -(v[$(idx(1,1))]*v[$(idx(2,3))] - v[$(idx(1,3))]*v[$(idx(2,1))]), #32
                  (v[$(idx(2,1))]*v[$(idx(3,2))] - v[$(idx(2,2))]*v[$(idx(3,1))]), #13
                 -(v[$(idx(1,1))]*v[$(idx(3,2))] - v[$(idx(1,2))]*v[$(idx(3,1))]), #23
                  (v[$(idx(1,1))]*v[$(idx(2,2))] - v[$(idx(1,2))]*v[$(idx(2,1))])))#33
        end
    end
    return quote
        $(Expr(:meta, :inline))
        @inbounds return $ex
    end
end
@inline cof(t::AbstractTensor{2, 1}) = one(t)
@inline function cof(t::Tensor{2, 2})
    t11, t21, t12, t22 = get_data(t)
    return Tensor{2,2}((t22, -t21, -t12, t11))
end
@inline function cof(t::Tensor{2, 3})
    t11, t21, t31, t12, t22, t32, t13, t23, t33 = get_data(t)
    c11 = t22 * t33 - t23 * t32
    c21 = t13 * t32 - t12 * t33
    c31 = t12 * t23 - t13 * t22
    c12 = t23 * t31 - t21 * t33
    c22 = t11 * t33 - t13 * t31
    c32 = t13 * t21 - t11 * t23
    c13 = t21 * t32 - t22 * t31
    c23 = t12 * t31 - t11 * t32
    c33 = t11 * t22 - t12 * t21
    return Tensor{2, 3}((c11, c21, c31, c12, c22, c32, c13, c23, c33))
end
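
A quick sanity check that either variant should pass (not part of the suggestion; F is the matrix from the top of the thread, and the identity cof(F) = det(F) inv(F)' holds up to floating-point roundoff):

using Tensors
F = Tensor{2,3,Float64}((1.1,-0.1,0.1, -0.1,0.9,0.0, 0.25,0.0,1.2))
@assert cof(F) ≈ det(F) * transpose(inv(F))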

Review comment (Member):

Tensor{2,3} benchmark tested, hopefully no typos when transferring here :)

Review comment (Member Author):

I used @generated because the other variants above were generated. I do not have a strong opinion about it either way.

Review comment by @KnutAM (Member), Nov 24, 2025:

Was there any reason why, @KristofferC or @fredrikekre?
(IMO we should avoid @generated when possible)
