
Commit a7c9cae

typo fix for Euclidean
1 parent 2270120 · commit a7c9cae


12 files changed: +28 -28 lines changed


README.md

Lines changed: 1 addition & 1 deletion
@@ -95,7 +95,7 @@ mtt(A, G) == Z # matrix times tensor
 
 The output variable `stats` is a `DataFrame` that records the requested stats every iteration. You can pass a list of supported stats, or custom stats. See [`Iteration Stats`](@ref) for more details.
 
-By default, the iteration number, objective value (L2 norm between the input and the model in this case), and the Euclidian norm of the gradient (of the loss function at the current iteration) are recorded. The following would reproduce the default stats in our running example.
+By default, the iteration number, objective value (L2 norm between the input and the model in this case), and the Euclidean norm of the gradient (of the loss function at the current iteration) are recorded. The following would reproduce the default stats in our running example.
 
 ```julia
 X, stats, kwargs = factorize(Y;
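
Side note on the truncated call above: the hunk cuts off at `factorize(Y;`, and the rest of the argument list lives in the repo, not this diff. A hedged sketch of a call that reproduces the documented defaults, assuming the exported `GradientNorm` stat is the one behind "Euclidean norm of the gradient":

```julia
# Sketch only — the stat names come from this commit's export list;
# that these three are the exact defaults is an assumption.
X, stats, kwargs = factorize(Y;
    stats=[Iteration, ObjectiveValue, GradientNorm],
)
```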

docs/src/index.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -5,7 +5,7 @@
55

66
(Coming Soon) The package also supports user defined models and constraints provided the operations for combining factor into a tensor, and projecting/applying the constraint are given. It is also a longer term goal to support other optimization objective beyond minimizing the least-squares (Frobenius norm) between the input tensor and model.
77

8-
The general scheme for computing the decomposition is a generalization of Xu and Yin's Block Coordinate Descent Method (2013) that cyclicaly updates each factor in a model with a proximal gradient descent step. Note for convex constraints, the proximal operation would be a Euclidian projection onto the constraint set, but we find some improvment with a hybrid approach of a partial Euclidian projection followed by a rescaling step. In the case of a simplex constraint on one factor, this looks like: dividing the constrained factor by the sum of entries, and multiplying another factor by this sum to preserve the product.
8+
The general scheme for computing the decomposition is a generalization of Xu and Yin's Block Coordinate Descent Method (2013) that cyclicaly updates each factor in a model with a proximal gradient descent step. Note for convex constraints, the proximal operation would be a Euclidean projection onto the constraint set, but we find some improvment with a hybrid approach of a partial Euclidean projection followed by a rescaling step. In the case of a simplex constraint on one factor, this looks like: dividing the constrained factor by the sum of entries, and multiplying another factor by this sum to preserve the product.
99

1010

1111
```@contents
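
The rescaling step described in the updated paragraph is easy to illustrate. A minimal standalone sketch for a hypothetical two-factor model `Y ≈ A * B` (not the package's internal implementation):

```julia
# Hypothetical factors with a simplex-style constraint on A.
A = rand(4, 3)
B = rand(3, 5)
Y = A * B             # model product before rescaling

s = sum(A)            # sum of entries of the constrained factor
A ./= s               # divide the constrained factor by that sum
B .*= s               # multiply the other factor by the same sum
@assert sum(A) ≈ 1    # A is now normalized to total mass one
@assert A * B ≈ Y     # the product is preserved
```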

docs/src/quickguide.md

Lines changed: 1 addition & 1 deletion
@@ -66,7 +66,7 @@ mtt(A, G) == Z # matrix times tensor
 
 The output variable `stats` is a `DataFrame` that records the requested stats every iteration. You can pass a list of supported stats, or custom stats. See [Iteration Stats](@ref) for more details.
 
-By default, the iteration number, objective value (L2 norm between the input and the model in this case), and the Euclidian norm of the gradient (of the loss function at the current iteration) are recorded. The following would reproduce the default stats in our running example.
+By default, the iteration number, objective value (L2 norm between the input and the model in this case), and the Euclidean norm of the gradient (of the loss function at the current iteration) are recorded. The following would reproduce the default stats in our running example.
 
 ```julia
 X, stats, kwargs = factorize(Y;

docs/src/tutorial/iterationstats.md

Lines changed: 4 additions & 4 deletions
@@ -15,8 +15,8 @@ ObjectiveRatio
 RelativeError
 IterateNormDiff
 IterateRelativeDiff
-EuclidianStepSize
-EuclidianLipschitz
+EuclideanStepSize
+EuclideanLipschitz
 FactorNorms
 PrintStats
 DisplayDecomposition
@@ -33,8 +33,8 @@ ObjectiveRatio
 RelativeError
 IterateNormDiff
 IterateRelativeDiff
-EuclidianStepSize
-EuclidianLipschitz
+EuclideanStepSize
+EuclideanLipschitz
 FactorNorms
 ```
 

example/constrainedCP.jl

Lines changed: 1 addition & 1 deletion
@@ -28,7 +28,7 @@ options = (
     momentum=true,
     final_constraints = l1scale_cols!,
     stats=[
-        Iteration, ObjectiveValue, GradientNNCone, RelativeError, FactorNorms, EuclidianLipschitz
+        Iteration, ObjectiveValue, GradientNNCone, RelativeError, FactorNorms, EuclideanLipschitz
     ],
 )
 

example/constrainedTucker.jl

Lines changed: 2 additions & 2 deletions
@@ -17,7 +17,7 @@ decomposition, stats_data, kwargs = fact(Y;
     converged=(GradientNNCone, RelativeError),
     constrain_init=true,
     constraints=nonnegative!,
-    stats=[Iteration, ObjectiveValue, GradientNNCone, RelativeError, EuclidianLipschitz, EuclidianStepSize]
+    stats=[Iteration, ObjectiveValue, GradientNNCone, RelativeError, EuclideanLipschitz, EuclideanStepSize]
 );
 
 display(stats_data)
@@ -27,4 +27,4 @@ display(kwargs[:update])
 # using Pkg
 # Pkg.add("Plots")
 # using Plots
-# plot(stats_data[2:end, :EuclidianLipschitz], stats_data[2:end, :EuclidianStepSize])
+# plot(stats_data[2:end, :EuclideanLipschitz], stats_data[2:end, :EuclideanStepSize])

example/constrainedTucker1.jl

Lines changed: 2 additions & 2 deletions
@@ -14,7 +14,7 @@ decomposition, stats_data, kwargs = fact(Y;
     converged=(GradientNNCone, RelativeError),
     constrain_init=true,
     constraints=nonnegative!,
-    stats=[Iteration, ObjectiveValue, GradientNNCone, RelativeError, EuclidianLipschitz, EuclidianStepSize]
+    stats=[Iteration, ObjectiveValue, GradientNNCone, RelativeError, EuclideanLipschitz, EuclideanStepSize]
 );
 
 display(stats_data)
@@ -24,4 +24,4 @@ display(kwargs[:update])
 # using Pkg
 # Pkg.add("Plots")
 # using Plots
-# plot(stats_data[2:end, :EuclidianLipschitz], stats_data[2:end, :EuclidianStepSize])
+# plot(stats_data[2:end, :EuclideanLipschitz], stats_data[2:end, :EuclideanStepSize])

src/BlockTensorFactorization.jl

Lines changed: 1 addition & 1 deletion
@@ -62,7 +62,7 @@ export LinearConstraint
 
 #include("./stats.jl")
 export AbstractStat
-export DisplayDecomposition, EuclidianLipschitz, EuclidianStepSize, FactorNorms, GradientNorm, GradientNNCone
+export DisplayDecomposition, EuclideanLipschitz, EuclideanStepSize, FactorNorms, GradientNorm, GradientNNCone
 export IterateNormDiff, IterateRelativeDiff, Iteration, ObjectiveValue, ObjectiveRatio, PrintStats, RelativeError
 
 #include("./blockupdates.jl")

src/Core/Core.jl

Lines changed: 1 addition & 1 deletion
@@ -72,7 +72,7 @@ export LinearConstraint
 
 include("./stats.jl")
 export AbstractStat
-export DisplayDecomposition, EuclidianLipschitz, EuclidianStepSize, FactorNorms, GradientNorm, GradientNNCone
+export DisplayDecomposition, EuclideanLipschitz, EuclideanStepSize, FactorNorms, GradientNorm, GradientNNCone
 export IterateNormDiff, IterateRelativeDiff, Iteration, ObjectiveValue, ObjectiveRatio, PrintStats, RelativeError
 
 include("./blockupdates.jl")

src/Core/stats.jl

Lines changed: 9 additions & 9 deletions
@@ -133,30 +133,30 @@ The 2-norm of the stepsizes that would be taken for all blocks.
 For example, if there are two blocks, and we would take a stepsize of A to update one block
 and B to update the other, this would return sqrt(A^2 + B^2).
 """
-struct EuclidianStepSize{T} <: AbstractStat
+struct EuclideanStepSize{T} <: AbstractStat
     steps::T
-    function EuclidianStepSize{T}(steps) where T
+    function EuclideanStepSize{T}(steps) where T
         @assert all(x -> typeof(x) <: AbstractStep, steps)
         new{T}(steps)
     end
 end
 
-EuclidianStepSize(; steps, kwargs...) = EuclidianStepSize{typeof(steps)}(steps)
+EuclideanStepSize(; steps, kwargs...) = EuclideanStepSize{typeof(steps)}(steps)
 
 """
 The 2-norm of the Lipschitz constants for all blocks.
 
-Needs the stepsizes to be Lipschitz steps since it is calculated similarly to EuclidianStepSize.
+Needs the stepsizes to be Lipschitz steps since it is calculated similarly to EuclideanStepSize.
 """
-struct EuclidianLipschitz{T} <: AbstractStat
+struct EuclideanLipschitz{T} <: AbstractStat
     steps::T
-    function EuclidianLipschitz{T}(steps) where T
+    function EuclideanLipschitz{T}(steps) where T
         @assert all(x -> typeof(x) <: AbstractStep, steps)
         new{T}(steps)
     end
 end
 
-EuclidianLipschitz(; steps, kwargs...) = EuclidianLipschitz{typeof(steps)}(steps)
+EuclideanLipschitz(; steps, kwargs...) = EuclideanLipschitz{typeof(steps)}(steps)
 
 """
     FactorNorms(; norm, kwargs...)
@@ -189,8 +189,8 @@ end
 
 (S::IterateNormDiff)(X, _, previous, _, _) = S.norm(X - previous[begin])
 (S::IterateRelativeDiff)(X, _, previous, _, _) = S.norm(X - previous[begin]) / S.norm(previous[begin])
-(S::EuclidianStepSize)(X, _, _, _, _) = sqrt.(sum(calcstep -> calcstep(X)^2, S.steps))
-(S::EuclidianLipschitz)(X, _, _, _, _) = sqrt.(sum(calcstep -> calcstep(X)^(-2), S.steps))
+(S::EuclideanStepSize)(X, _, _, _, _) = sqrt.(sum(calcstep -> calcstep(X)^2, S.steps))
+(S::EuclideanLipschitz)(X, _, _, _, _) = sqrt.(sum(calcstep -> calcstep(X)^(-2), S.steps))
 (S::FactorNorms)(X, _, _, _, _) = S.norm.(factors(X))
 (S::PrintStats)(_, _, _, parameters, stats) = if parameters[:iteration] > 0; println(last(stats)); end
 function (S::DisplayDecomposition)(X, _, _, parameters, _)
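
A quick sanity check on the two renamed stats: `EuclideanStepSize` is the 2-norm of the per-block stepsizes, while `EuclideanLipschitz` raises each stepsize to the power -2 before summing, since a Lipschitz stepsize is the reciprocal 1/L of that block's Lipschitz constant L. Standalone arithmetic with plain numbers standing in for the package's `AbstractStep` callables:

```julia
# Two blocks with Lipschitz constants L1 = 2 and L2 = 4,
# hence stepsizes 1/L1 = 0.5 and 1/L2 = 0.25.
steps = [0.5, 0.25]

stepsize_norm  = sqrt(sum(s -> s^2, steps))     # sqrt(0.5^2 + 0.25^2) ≈ 0.559
lipschitz_norm = sqrt(sum(s -> s^(-2), steps))  # sqrt(2.0^2 + 4.0^2)  ≈ 4.472
```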
