Commit 0015a26

updated readme related packages and typos
1 parent: 62dd529

9 files changed, +29 -29 lines changed

README.md

Lines changed: 10 additions & 10 deletions
@@ -33,15 +33,15 @@ Here is an example of an order-3 Tucker-1 factorization.
 
 <img width="500" height="526" alt="tucker_1_example" src="https://github.com/user-attachments/assets/120224d1-f7a9-4566-b416-73917a614ff1" />
 
-Say you've collected your data into an order-3 tensor `Y`. You can use `randn` from `Random.jl` to simulate this.
+Say you've collected your data into an order-3 tensor `Y`. (We'll use `randn` from `Random.jl` to simulate this.)
 
 ```julia
 using Random
 
 Y = randn(100,100,100)
 ```
 
-Then you can call `factorize` with a number of keywords. The main keywords you many want to specify are the `model` and `rank`. This lets `factorize` know they type and size of the decomposition. See [Decomposition Models](@ref) for a complete list of avalible models, and how to define your own custom decomposition.
+Then you can call `factorize` with a number of [keywords](https://mpf-optimization-laboratory.github.io/BlockTensorFactorization.jl/dev/reference/functions/#BlockTensorFactorization.Core.default_kwargs-Tuple{Any}). The main keywords you may want to specify are the `model` and `rank`. This lets `factorize` know the type and size of the decomposition. See [Decomposition Models](https://mpf-optimization-laboratory.github.io/BlockTensorFactorization.jl/dev/tutorial/decompositionmodels/) for a complete list of available models, and how to define your own custom decomposition.
 
 ```julia
 using BlockTensorFactorization
@@ -112,21 +112,21 @@ stats[end, :ObjectiveValue] # Final objective value
 stats[:, :ObjectiveValue] # Objective value at every iteration
 ```
 
-You may also want to see every stat at a particular iteration which can be accessed in the following way. Note that the initilization is stored in the first row, so the nth row stores the stats right *before* the nth iteration, not after.
+You may also want to see every stat at a particular iteration, which can be accessed in the following way. Note that the initialization is stored in the first row, so the nth row stores the stats right *before* the nth iteration, not after.
 
 ```julia
 stats[begin, :] # Every stat at the initialization
 stats[4, :] # Every stat right *before* the 4th iteration
 stats[end, :] # Every stat at the final iteration
 ```
 
-See the `DataFrames.jl` package for more data handeling.
+See the [`DataFrames.jl`](https://dataframes.juliadata.org/stable/) package for more data handling.
 
 ## Output keyword arguments
 
-Since there are many options and a complicated handeling of defaults arguments, the `factorize` function also outputs all the keyword arguments as a `NamedTuple`. This allows you to check what keywords you set, along with the default values that were substituted for the keywords you did not provide.
+Since there are many options and complicated handling of default arguments, the `factorize` function also outputs all the keyword arguments as a `NamedTuple`. This allows you to check what keywords you set, along with the default values that were substituted for the keywords you did not provide.
 
-You can access the values by getting the relevent field, or index (as a `Symbol`). In our running example, this would look like the following.
+You can access the values by getting the relevant field, or index (as a `Symbol`). In our running example, this would look like the following.
 
 ```julia
 kwargs.rank == 5
@@ -148,12 +148,12 @@ Naomi Graham, Nicholas Richardson, Michael P. Friedlander, and Joel Saylor. Trac
 
 # Related Packages
 
-## For decomposing tensors
+## For decomposing/factorizing tensors
 
-- [TensorDecompositions.jl](https://github.com/yunjhongwu/TensorDecompositions.jl): Supports the decompositions; high-order SVD, CP & Tucker (and nonnegative version), symmetric rank-1, and Tensor-CUR. Most models support one or two algorithms (usually alternating methods). No customizability of constraints.
+- [TensorDecompositions.jl](https://github.com/yunjhongwu/TensorDecompositions.jl): Supports the following decompositions: high-order SVD, CP & Tucker (and nonnegative versions), symmetric rank-1, and Tensor-CUR. Most models support one or two algorithms (usually alternating methods). No customizability of constraints.
 - [NTFk.jl](https://github.com/SmartTensors/NTFk.jl): Only nonnegative Tucker and CP decompositions supported
-- [GCPDecompositions.jl](https://github.com/dahong67/GCPDecompositions.jl): Only LBFGSB or ALS algorithms for CPDecompositions
-- [NMF.jl](https://github.com/JuliaStats/NMF.jl): Multiple algorithms supported for nonnegative matrix factorizations
+- [GCPDecompositions.jl](https://github.com/dahong67/GCPDecompositions.jl): Only LBFGSB or ALS algorithms for CP decompositions
+- [NMF.jl](https://github.com/JuliaStats/NMF.jl): Multiple algorithms supported, but only for nonnegative matrix factorizations
 - [TensorFactorizations.jl](https://github.com/mhauru/TensorFactorizations.jl): Eigenvalue and singular value decompositions of tensors
 
 ## For working with tensors and some basic decompositions

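A side note on the `stats` output described in the README hunk above: since `stats` is a `DataFrames.jl` `DataFrame`, the usual DataFrames tools apply. The snippet below is only an illustration with a fabricated table (the real column set beyond `:ObjectiveValue` is not shown in this diff); it sketches the kind of handling the linked package enables.

```julia
using DataFrames

# Fabricated stand-in for the DataFrame returned by `factorize`;
# only the :ObjectiveValue column is known from the README, the rest is invented.
stats = DataFrame(Iteration = 0:5, ObjectiveValue = [10.0, 4.2, 2.1, 1.3, 1.1, 1.05])

names(stats)                         # column names
nrow(stats)                          # number of rows (the first row is the initialization)
first(stats, 3)                      # initialization plus the first two iterations
stats[stats.ObjectiveValue .< 2, :]  # rows where the objective dropped below a threshold
```
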
docs/src/index.md

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 
 (Coming Soon) The package also supports user defined models and constraints provided the operations for combining factor into a tensor, and projecting/applying the constraint are given. It is also a longer term goal to support other optimization objective beyond minimizing the least-squares (Frobenius norm) between the input tensor and model.
 
-The general scheme for computing the decomposition is a generalization of Xu and Yin's Block Coordinate Descent Method (2013) that cyclicaly updates each factor in a model with a proximal gradient descent step. Note for convex constraints, the proximal operation would be a Euclidean projection onto the constraint set, but we find some improvment with a hybrid approach of a partial Euclidean projection followed by a rescaling step. In the case of a simplex constraint on one factor, this looks like: dividing the constrained factor by the sum of entries, and multiplying another factor by this sum to preserve the product.
+The general scheme for computing the decomposition is a generalization of Xu and Yin's Block Coordinate Descent Method (2013) that cyclically updates each factor in a model with a proximal gradient descent step. Note that for convex constraints, the proximal operation would be a Euclidean projection onto the constraint set, but we find some improvement with a hybrid approach of a partial Euclidean projection followed by a rescaling step. In the case of a simplex constraint on one factor, this looks like: dividing the constrained factor by the sum of its entries, and multiplying another factor by this sum to preserve the product.
 
 
 ```@contents

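The paragraph edited above describes the hybrid update for a simplex-constrained factor: a partial Euclidean projection, then dividing that factor by the sum of its entries and multiplying another factor by the same sum so the product is preserved. Below is a minimal standalone sketch of that rescaling on two toy matrix factors; it is not the package's internal code, and the clamp used for the partial projection is only a stand-in.

```julia
# Toy two-factor model Y ≈ A * B, with a simplex-style constraint wanted on B.
A = rand(4, 3)
B = rand(3, 5) .- 0.1   # may contain a few negative entries

B .= max.(B, 0)         # partial Euclidean step: clamp onto the nonnegative orthant
s = sum(B)              # total mass of the constrained factor
B ./= s                 # rescale so the entries of B sum to one
A .*= s                 # absorb the scale into the other factor, so A * B is unchanged by the rescaling
```
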
docs/src/quickguide.md

Lines changed: 6 additions & 6 deletions
@@ -4,15 +4,15 @@
 
 The main feature of this package is to factorize an array. This is accomplished with the `factorize` function.
 
-Say you've collected your data into an order-$3$ tensor `Y`. You can use `randn` from `Random.jl` to simulate this.
+Say you've collected your data into an order-$3$ tensor `Y`. (We will use `randn` from `Random.jl` to simulate this.)
 
 ```julia
 using Random
 
 Y = randn(100,100,100)
 ```
 
-Then you can call `factorize` with a number of keywords. The main keywords you many want to specify are the `model` and `rank`. This lets `factorize` know they type and size of the decomposition. See [Decomposition Models](@ref) for a complete list of avalible models, and how to define your own custom decomposition.
+Then you can call `factorize` with a number of [keywords](https://mpf-optimization-laboratory.github.io/BlockTensorFactorization.jl/dev/reference/functions/#BlockTensorFactorization.Core.default_kwargs-Tuple{Any}). The main keywords you may want to specify are the `model` and `rank`. This lets `factorize` know the type and size of the decomposition. See [Decomposition Models](@ref) for a complete list of available models, and how to define your own custom decomposition.
 
 ```julia
 using BlockTensorFactorization
@@ -83,21 +83,21 @@ stats[end, :ObjectiveValue] # Final objective value
 stats[:, :ObjectiveValue] # Objective value at every iteration
 ```
 
-You may also want to see every stat at a particular iteration which can be accessed in the following way. Note that the initilization is stored in the first row, so the nth row stores the stats right *before* the nth iteration, not after.
+You may also want to see every stat at a particular iteration, which can be accessed in the following way. Note that the initialization is stored in the first row, so the nth row stores the stats right *before* the nth iteration, not after.
 
 ```julia
 stats[begin, :] # Every stat at the initialization
 stats[4, :] # Every stat right *before* the 4th iteration
 stats[end, :] # Every stat at the final iteration
 ```
 
-See the `DataFrames.jl` package for more data handeling.
+See the `DataFrames.jl` package for more data handling.
 
 ## Output keyword arguments
 
-Since there are many options and a complicated handeling of defaults arguments, the `factorize` function also outputs all the keyword arguments as a `NamedTuple`. This allows you to check what keywords you set, along with the default values that were substituted for the keywords you did not provide.
+Since there are many options and complicated handling of default arguments, the `factorize` function also outputs all the keyword arguments as a `NamedTuple`. This allows you to check what keywords you set, along with the default values that were substituted for the keywords you did not provide.
 
-You can access the values by getting the relevent field, or index (as a `Symbol`). In our running example, this would look like the following.
+You can access the values by getting the relevant field, or index (as a `Symbol`). In our running example, this would look like the following.
 
 ```julia
 kwargs.rank == 5

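The quick-guide hunk above stops at `kwargs.rank == 5`, the point where the guide shows that `factorize` also returns the resolved keyword arguments as a `NamedTuple`. A small sketch of how such a `NamedTuple` can be inspected follows; the entries are invented stand-ins, not the package's actual keywords or defaults.

```julia
# Invented stand-in for the NamedTuple of resolved keywords returned by `factorize`.
kwargs = (model = :Tucker1, rank = 5, maxiter = 200)

kwargs.rank == 5            # access by field, as in the guide
kwargs[:rank] == 5          # or index with a Symbol
keys(kwargs)                # every keyword that was set or filled in with a default
haskey(kwargs, :tolerance)  # check whether a particular keyword was resolved at all
```
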
src/Core/constraint.jl

Lines changed: 4 additions & 4 deletions
@@ -75,7 +75,7 @@ end
 
 # Need to separate inner and outer into two lines
 # since (C.outer ∘ C.inner) would apply f=C.outer to the output of g=C.inner.
-# This is not nessesarily the mutated A.
+# This is not necessarily the mutated A.
 # For example, g = l1scale! would divide A by the 1-norm of A,
 # and return the 1-norm of A, not A itself.
 function (C::ComposedConstraint)(A::AbstractArray)
@@ -91,7 +91,7 @@ Base.:∘(f::AbstractConstraint, g::AbstractConstraint) = ComposedConstraint(f,
 # function Base.:∘(f::AbstractConstraint, g::AbstractConstraint)
 # # Need to separate f.apply and g.apply into two lines
 # # since (f.apply ∘ g.apply) would apply f to the output of g.
-# # This is not nessesarily the result of g.apply.
+# # This is not necessarily the result of g.apply.
 # # For example, g = l1scale! would divide X by the 1-norm of X,
 # # and return the 1-norm of X, not X itself.
 # function composition(X::AbstractArray)
@@ -119,7 +119,7 @@ Base.:∘(f::AbstractConstraint, g::AbstractConstraint) = ComposedConstraint(f,
 # g::Function
 # end
 
-# (F::BoolFunctionAnd)(x) = F.f(x) & F.g(x) # No short curcit to ensure any warnings are shown from both checks
+# (F::BoolFunctionAnd)(x) = F.f(x) & F.g(x) # No short circuit to ensure any warnings are shown from both checks
 
 # bool_function_and(f::Function, g::Function) = BoolFunctionAnd(f, g)
 
@@ -327,7 +327,7 @@ normalizations_to_whats_normalized = [
 # Generate the constraints
 
 """List of symbols of the built-in constraint functions."""
-const BUILT_IN_CONSTRAINTS = Symbol[] # const means constant type, not an unmutable
+const BUILT_IN_CONSTRAINTS = Symbol[] # const means constant type, not an immutable
 
 for (type, pattern) in constraint_types
     for c in constraint_set

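The comment corrected in the first hunk above explains why a composed constraint applies its inner and outer pieces on separate lines: a mutating normalization such as `l1scale!` updates the array in place but returns the scale it removed, so the outer constraint must be handed the mutated array rather than the inner call's return value. A toy illustration of that behaviour follows (these helpers are invented for the example and are not the package's `l1scale!`).

```julia
# Invented stand-in for a scaling constraint: mutates A in place, returns the removed scale.
function toy_l1scale!(A)
    s = sum(abs, A)
    A ./= s
    return s                 # the return value is the 1-norm, not A
end

# Invented stand-in for an outer constraint.
toy_nonneg!(A) = (A .= max.(A, 0); A)

A = [1.0 -2.0; 3.0 4.0]
toy_l1scale!(A)              # returns 10.0; A itself is now scaled in place
toy_nonneg!(A)               # must be applied to A on its own line, not to the inner call's return value
```
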
src/Core/decomposition.jl

Lines changed: 2 additions & 2 deletions
@@ -1,5 +1,5 @@
 """
-Low level code defining the verious decomposition types like Tucker and CP
+Low level code defining the various decomposition types like Tucker and CP
 """
 
 """
@@ -378,7 +378,7 @@ contractions(_::Tucker1) = ((×₁),)
 rankof(T::Tucker) = map(x -> size(x, 2), matrix_factors(T))
 rankof(T::Tucker1) = size(core(T), 1)
 
-# Essentialy zero index tucker factors so the core is the 0th factor, and the nth factor
+# Essentially zero index tucker factors so the core is the 0th factor, and the nth factor
 # is the matrix factor in the nth dimension
 function factor(D::AbstractTucker, n::Integer)
     if n == 0

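The comment fixed in the second hunk above documents the indexing convention for `factor`: index 0 returns the core, and index n returns the matrix factor for the nth dimension. The sketch below mimics that convention on a hypothetical container type; it only illustrates the zero-indexing idea, not the package's `AbstractTucker` implementation.

```julia
# Hypothetical, minimal Tucker-like container used only for this illustration.
struct ToyTucker
    core::Array{Float64}
    matrix_factors::Vector{Matrix{Float64}}
end

# Zero-indexed accessor: 0 -> core, n -> matrix factor along dimension n.
function toy_factor(D::ToyTucker, n::Integer)
    n == 0 && return D.core
    1 <= n <= length(D.matrix_factors) || throw(ArgumentError("no factor for dimension $n"))
    return D.matrix_factors[n]
end

D = ToyTucker(randn(2, 2, 2), [randn(10, 2), randn(11, 2), randn(12, 2)])
size(toy_factor(D, 0))   # (2, 2, 2), the core
size(toy_factor(D, 2))   # (11, 2), the matrix factor for the second dimension
```
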
src/Core/factorize.jl

Lines changed: 1 addition & 1 deletion
@@ -192,7 +192,7 @@ function default_kwargs(Y; kwargs...)
     get!(kwargs, :constrain_output) do
         isnothing(kwargs[:final_constraints]) ? false : true
     end
-    # the rest of the constraint parsing is handled later, once the decomposition is initalized
+    # the rest of the constraint parsing is handled later, once the decomposition is initialized
 
     # Stats
     get!(kwargs, :stats) do

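The hunk above sits inside `default_kwargs`, which fills in missing keywords using `get!` with a `do` block. For readers unfamiliar with that Base Julia pattern, here is a standalone sketch; the keyword names and default values are invented for the example.

```julia
# `get!` with a `do` block stores the computed default only if the key is missing.
kwargs = Dict{Symbol,Any}(:rank => 5)

get!(kwargs, :maxiter) do
    100                    # used: :maxiter was not supplied
end

get!(kwargs, :rank) do
    1                      # not used: :rank was already set to 5
end

kwargs[:maxiter] == 100 && kwargs[:rank] == 5   # true
```
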
src/Core/tensorproducts.jl

Lines changed: 1 addition & 1 deletion
@@ -56,7 +56,7 @@ function mtt(A::AbstractMatrix, B::AbstractArray)
     #Cmat = A * Bmat
     #C = reshape(Cmat, size(A, 1), sizeB[2:end]...)
 
-    # Slightly faster implimentation
+    # Slightly faster implementation
     C = zeros(size(A, 1), sizeB[2:end]...)
     Cmat = reshape(C, size(A, 1), prod(sizeB[2:end]))
     mul!(Cmat, A, Bmat)

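For context on the hunk above: `mtt` multiplies a matrix into the first mode of a tensor by unfolding the tensor, preallocating the result, and filling it with an in-place `mul!` through a reshaped view. The standalone check below mirrors that idea outside the package; `Bmat` is computed locally here since its definition is not part of the visible hunk.

```julia
using LinearAlgebra

A = randn(4, 3)
B = randn(3, 5, 6)
sizeB = size(B)

Bmat = reshape(B, sizeB[1], prod(sizeB[2:end]))    # mode-1 unfolding of B
C = zeros(size(A, 1), sizeB[2:end]...)             # preallocate the result tensor
Cmat = reshape(C, size(A, 1), prod(sizeB[2:end]))  # matrix view sharing C's memory
mul!(Cmat, A, Bmat)                                # in-place multiply fills C through Cmat

# Sanity check of one entry against the elementwise definition of the mode-1 product.
C[2, 3, 4] ≈ sum(A[2, r] * B[r, 3, 4] for r in 1:3)
```
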
src/Core/utils.jl

Lines changed: 2 additions & 2 deletions
@@ -148,7 +148,7 @@ function mat(A::AbstractArray, n::Integer)
     N = ndims(A)
     1 <= n && n <= N || throw(ArgumentError("n=$n is not a valid dimension to matricize"))
     dims = ((1:n-1)..., (n+1:N)...) # all axis, skipping n
-    return cat(eachslice(A; dims)...; dims=2) # TODO Idealy return a view/SubArray
+    return cat(eachslice(A; dims)...; dims=2) # TODO Ideally return a view/SubArray
 end
 
 #fullouter(v...) = reshape(kron(reverse(vec.(v))...),tuple(vcat(collect.(size.(v))...)...))
@@ -260,7 +260,7 @@ end
 """
     abs_randn(x...)
 
-Folded normal or more specificly the half-normal initialization.
+Folded normal or more specifically the half-normal initialization.
 """
 abs_randn(x...) = abs.(randn(x...))
 
test/runtests.jl
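Two quick notes on the hunks above. `mat(A, n)` matricizes a tensor along dimension `n` by slicing away the other dimensions and concatenating the mode-n fibers as columns, and `abs_randn` draws half-normal entries. The sketch below re-creates both behaviours locally so they can be tried without the package; the local helpers are approximations rather than the exported functions, and multi-dimension `eachslice` needs Julia 1.9 or newer.

```julia
# Local copy of the matricization one-liner shown in the hunk above (approximate, for illustration).
function mat_local(A::AbstractArray, n::Integer)
    N = ndims(A)
    dims = ((1:n-1)..., (n+1:N)...)            # every dimension except n
    return cat(eachslice(A; dims)...; dims=2)  # each mode-n fiber becomes a column
end

A = randn(2, 3, 4)
size(mat_local(A, 2)) == (3, 8)    # 3 rows from the kept dimension, 2*4 = 8 fiber columns

# Half-normal ("folded normal") initialization, as in the `abs_randn` docstring.
abs_randn_local(x...) = abs.(randn(x...))
all(>=(0), abs_randn_local(5, 5))  # every entry is nonnegative
```
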

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -248,7 +248,7 @@ const VERBOSE = true
248248
@test_broken check(c!, v) # Only nonnegativity is satisfied. Entries do not sum to 1
249249
end
250250

251-
@testset "Convertion" begin
251+
@testset "Conversion" begin
252252
@test ProjectedNormalization(l1scale!) == l1normalize!
253253
@test ScaledNormalization(l1normalize!) == l1scale!
254254
end
@@ -343,7 +343,7 @@ end
343343
@test_throws ArgumentError Tucker((G, A)) # Can handle auto conversion to TuckerN in the future??
344344

345345
G = Tucker1((10,11,12), 5);
346-
Y = Tucker1((10,11,12), 5; init=abs_randn); # check if other initilizations work
346+
Y = Tucker1((10,11,12), 5; init=abs_randn); # check if other initializations work
347347

348348
@test isfrozen(G, 0) == false # the core is not frozen
349349
@test isfrozen(G, 1) == false # the matrix factor A is not frozen
