The main feature of this package is to factorize an array. This is accomplished with the `factorize` function.

Say you've collected your data into an order-3 tensor `Y`. (We'll use `randn` from `Random.jl` to simulate this.)
```julia
using Random

Y = randn(100, 100, 100)
```
Then you can call `factorize` with a number of [keywords](https://mpf-optimization-laboratory.github.io/BlockTensorFactorization.jl/dev/reference/functions/#BlockTensorFactorization.Core.default_kwargs-Tuple{Any}). The main keywords you may want to specify are the `model` and `rank`. This lets `factorize` know the type and size of the decomposition. See [Decomposition Models](https://mpf-optimization-laboratory.github.io/BlockTensorFactorization.jl/dev/tutorial/decompositionmodels/) for a complete list of available models, and how to define your own custom decomposition.
```julia
using BlockTensorFactorization
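
# A call of roughly the following shape is elided in this excerpt. It returns
# the decomposition, a DataFrame of per-iteration stats, and the keyword
# arguments that were used. The model name below is illustrative; rank=5
# matches the kwargs shown at the end of this guide.
# decomposition, stats, kwargs = factorize(Y; model=Tucker1, rank=5)
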
stats[end, :ObjectiveValue] # Final objective value
stats[:, :ObjectiveValue] # Objective value at every iteration
```
You may also want to see every stat at a particular iteration, which can be accessed in the following way. Note that the initialization is stored in the first row, so the nth row stores the stats right *before* the nth iteration, not after.
```julia
stats[begin, :] # Every stat at the initialization
stats[4, :] # Every stat right *before* the 4th iteration
stats[end, :] # Every stat at the final iteration
```
See the [`DataFrames.jl`](https://dataframes.juliadata.org/stable/) package for more data handling.
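For example, standard `DataFrames.jl` operations apply directly to `stats`. A minimal sketch, using only the `ObjectiveValue` column shown above (the tolerance is arbitrary):

```julia
using DataFrames

describe(stats) # summary statistics (mean, min, max, ...) for every recorded column

# Rows (iterations) where the objective has dropped below a tolerance
filter(:ObjectiveValue => v -> v < 1e-3, stats)
```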
## Output keyword arguments
Since there are many options and a complicated handling of default arguments, the `factorize` function also outputs all the keyword arguments as a `NamedTuple`. This allows you to check what keywords you set, along with the default values that were substituted for the keywords you did not provide.

You can access the values by getting the relevant field, or by indexing with a `Symbol`. In our running example, this would look like the following.
```julia
kwargs.rank == 5    # access as a field
kwargs[:rank] == 5  # or index with a Symbol
```
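Since the output is an ordinary `NamedTuple`, the usual Julia functions can be used to inspect the whole configuration at once; a small sketch:

```julia
keys(kwargs)  # the name of every keyword argument, including substituted defaults
pairs(kwargs) # name => value pairs, convenient for logging the full configuration
```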
# Related Packages
## For decomposing/factorizing tensors

- [TensorDecompositions.jl](https://github.com/yunjhongwu/TensorDecompositions.jl): Supports higher-order SVD, CP & Tucker (and nonnegative versions), symmetric rank-1, and Tensor-CUR decompositions. Most models support one or two algorithms (usually alternating methods). No customizability of constraints.
- [NTFk.jl](https://github.com/SmartTensors/NTFk.jl): Only nonnegative Tucker and CP decompositions supported.
- [GCPDecompositions.jl](https://github.com/dahong67/GCPDecompositions.jl): Only LBFGSB or ALS algorithms for CP decompositions.
- [NMF.jl](https://github.com/JuliaStats/NMF.jl): Multiple algorithms supported, but only for nonnegative matrix factorizations.
- [TensorFactorizations.jl](https://github.com/mhauru/TensorFactorizations.jl): Eigenvalue and singular value decompositions of tensors.
## For working with tensors and some basic decompositions
**`docs/src/index.md`**
(Coming Soon) The package also supports user-defined models and constraints, provided the operations for combining the factors into a tensor and for projecting/applying the constraint are given. It is also a longer-term goal to support optimization objectives beyond minimizing the least-squares (Frobenius-norm) distance between the input tensor and the model.
The general scheme for computing the decomposition is a generalization of Xu and Yin's Block Coordinate Descent Method (2013) that cyclically updates each factor in a model with a proximal gradient descent step. Note that for convex constraints the proximal operation would be a Euclidean projection onto the constraint set, but we find some improvement with a hybrid approach of a partial Euclidean projection followed by a rescaling step. In the case of a simplex constraint on one factor, this looks like dividing the constrained factor by the sum of its entries, and multiplying another factor by this sum to preserve the product.
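As a minimal sketch of that rescaling step, assuming a matrix factorization `A*B` with a simplex constraint on `A` (illustrative code only, not the package's internals):

```julia
# Illustrative only: the hybrid projection/rescale step described above,
# for a simplex constraint on the factor A in a product A*B.
function simplex_rescale!(A::AbstractMatrix, B::AbstractMatrix)
    s = sum(A)   # sum of the entries of the constrained factor
    A ./= s      # entries of A now sum to 1 (on the simplex when A is nonnegative)
    B .*= s      # scale the partner factor so the product A*B is unchanged
    return A, B
end
```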