"""This is a wrapper that adds functionality to sklearn logistic regression.

Contrary to sklearn, this class produces well-calibrated likelihood ratios.
Thus, it is suitable for score calibration.

Attributes:
  A: scaling coefficients, shape (num_feats, 1).
  b: bias, shape (1,).
  penalty: str, 'l1' or 'l2', default: 'l2'.
    Used to specify the norm used in the penalization. The 'newton-cg',
    'sag' and 'lbfgs' solvers support only l2 penalties.
    New in version 0.19: l1 penalty with SAGA solver (allowing
    'multinomial' + L1).
  lambda_reg: float, default: 1e-5.
    Regularization strength; must be a positive float.
  use_bias: bool, default: True.
    Specifies if a constant (a.k.a. bias or intercept) should be added to
    the decision function.
  bias_scaling: float, default: 1.
    Useful only when the solver 'liblinear' is used and use_bias is set to
    True. In this case, x becomes [x, bias_scaling], i.e. a "synthetic"
    feature with constant value equal to bias_scaling is appended to the
    instance vector. The intercept becomes
    bias_scaling * synthetic_feature_weight.
    Note: the synthetic feature weight is subject to l1/l2 regularization
    like all other features. To lessen the effect of regularization on the
    synthetic feature weight (and therefore on the intercept), bias_scaling
    has to be increased.
  priors: prior probability of a positive sample.
  random_state: RandomState instance or None, optional, default: None.
  solver: str, default: 'liblinear'.
    Algorithm to use in the optimization problem. For small datasets,
    'liblinear' is a good choice, whereas 'sag' and 'saga' are faster for
    large ones. 'newton-cg', 'lbfgs' and 'sag' only handle L2 penalty,
    whereas 'liblinear' and 'saga' handle L1 penalty. Note that fast
    convergence of 'sag' and 'saga' is only guaranteed on features with
    approximately the same scale.
    New in version 0.17: Stochastic Average Gradient descent solver.
    New in version 0.19: SAGA solver.
  max_iter: int, default: 100.
    Useful only for the newton-cg, sag and lbfgs solvers. Maximum number of
    iterations taken for the solvers to converge.
  dual: bool, default: False.
    Dual or primal formulation. Dual formulation is only implemented for l2
    penalty with liblinear solver. Prefer dual=False when
    n_samples > n_features.
  tol: float, default: 1e-4.
    Tolerance for stopping criteria.
  verbose: int, default: 0.
    For the liblinear and lbfgs solvers, set verbose to any positive number
    for verbosity.
  warm_start: bool, default: False.
    When set to True, reuse the solution of the previous call to fit as
    initialization; otherwise, just erase the previous solution. Useless
    for the liblinear solver.
    New in version 0.17: warm_start to support lbfgs, newton-cg, sag, saga
    solvers.
"""
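The calibration recipe described above can be sketched with plain sklearn: fit a logistic regression on raw scores, then subtract the logit of the effective prior from the intercept so the output behaves as a prior-independent log-likelihood ratio. This is a minimal illustration of the idea, not hyperion's actual implementation; the synthetic data and all variable names besides `A` and `b` are invented here.

```python
# Minimal sketch: calibrating raw scores into log-likelihood ratios with a
# weakly regularized logistic regression (assumed recipe, not hyperion's code).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic raw scores: positive trials score higher on average.
tar = rng.normal(2.0, 1.0, 500)       # scores for positive trials
non = rng.normal(-2.0, 1.0, 500)      # scores for negative trials
x = np.concatenate([tar, non])[:, None]
y = np.concatenate([np.ones(500), np.zeros(500)])

prior = 0.5                            # assumed effective prior of a positive
clf = LogisticRegression(C=1e5, solver="liblinear")  # large C = weak penalty
clf.fit(x, y)

A = clf.coef_.ravel()                  # scaling coefficient, shape (num_feats,)
b = clf.intercept_ - np.log(prior / (1 - prior))  # remove prior log-odds
llr = x @ A + b                        # calibrated log-likelihood ratios
```

With the prior removed from the intercept, `llr` can be thresholded at the log-odds of any application prior rather than at 0.5 probability.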
hyperion/classifiers/greedy_fusion.py
class GreedyFusionBinaryLR(HypModel):
    """Greedy score fusion based on binary logistic regression.

    It computes ``max_systems`` fusions: the best single system, the best
    fusion of two, the best fusion of three, ...

    The system selection procedure is as follows:
      * Choose the best system.
      * Fix the best system and choose the system that fuses best with it.
      * Fix the best two and choose the system that fuses best with those
        two.
      * ...

    Attributes:
      weights: fusion weights; a list with ``max_systems`` elements with
        shapes (1, 1), (2, 1), (3, 1), ..., (max_systems, 1).
      bias: fusion biases; a list with ``max_systems`` elements with shape
        (1,).
      system_idx: list of index vectors indicating which systems are used in
        the fusion of 1 system, the fusion of 2, ...
      system_names: list of strings containing descriptive names for the
        systems.
      max_systems: maximum number of systems to fuse; if None,
        ``max_systems=total_systems``.
      penalty: str, 'l1' or 'l2', default: 'l2'.
        Used to specify the norm used in the penalization. The 'newton-cg',
        'sag' and 'lbfgs' solvers support only l2 penalties.
        New in version 0.19: l1 penalty with SAGA solver (allowing
        'multinomial' + L1).
      lambda_reg: float, default: 1e-5.
        Regularization strength; must be a positive float.
      use_bias: bool, default: True.
        Specifies if a constant (a.k.a. bias or intercept) should be added
        to the decision function.
      bias_scaling: float, default: 1.
        Useful only when the solver 'liblinear' is used and use_bias is set
        to True. In this case, x becomes [x, bias_scaling], i.e. a
        "synthetic" feature with constant value equal to bias_scaling is
        appended to the instance vector. The intercept becomes
        bias_scaling * synthetic_feature_weight.
        Note: the synthetic feature weight is subject to l1/l2
        regularization like all other features. To lessen the effect of
        regularization on the synthetic feature weight (and therefore on the
        intercept), bias_scaling has to be increased.
      priors: prior probability of a positive sample.
      random_state: int, RandomState instance or None, optional,
        default: None.
        The seed of the pseudo random number generator to use when shuffling
        the data. If int, random_state is the seed used by the random number
        generator; if RandomState instance, random_state is the random
        number generator. Used when solver == 'sag' or 'liblinear'.
      solver: str, default: 'liblinear'.
        Algorithm to use in the optimization problem. For small datasets,
        'liblinear' is a good choice, whereas 'sag' and 'saga' are faster
        for large ones. 'newton-cg', 'lbfgs' and 'sag' only handle L2
        penalty, whereas 'liblinear' and 'saga' handle L1 penalty. Note that
        fast convergence of 'sag' and 'saga' is only guaranteed on features
        with approximately the same scale.
      max_iter: int, default: 100.
        Useful only for the newton-cg, sag and lbfgs solvers. Maximum number
        of iterations taken for the solvers to converge.
      dual: bool, default: False.
        Dual or primal formulation. Dual formulation is only implemented for
        l2 penalty with liblinear solver. Prefer dual=False when
        n_samples > n_features.
      tol: float, default: 1e-4.
        Tolerance for stopping criteria.
      verbose: int, default: 0.
        For the liblinear and lbfgs solvers, set verbose to any positive
        number for verbosity.
      warm_start: bool, default: False.
        When set to True, reuse the solution of the previous call to fit as
        initialization; otherwise, just erase the previous solution. Useless
        for the liblinear solver.
        New in version 0.17: warm_start to support lbfgs, newton-cg, sag,
        saga solvers.
    """
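The greedy selection loop described in the docstring can be sketched as follows. This is an illustrative reimplementation that assumes log-loss on the training data as the selection criterion and plain sklearn as the per-step fuser; the actual class additionally stores per-step weights and biases and may use a different criterion. The function name and synthetic data are made up for the example.

```python
# Sketch of greedy system selection for score fusion (assumed criterion:
# training log-loss of a logistic-regression fusion of the candidate subset).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

def greedy_fusion(scores, y, max_systems):
    """scores: (n_trials, n_systems) matrix of per-system scores."""
    n_systems = scores.shape[1]
    selected, system_idx = [], []
    for _ in range(max_systems):
        best_loss, best_j = np.inf, None
        for j in range(n_systems):
            if j in selected:
                continue                       # already fixed in the fusion
            cols = selected + [j]
            clf = LogisticRegression(solver="liblinear")
            clf.fit(scores[:, cols], y)
            loss = log_loss(y, clf.predict_proba(scores[:, cols])[:, 1])
            if loss < best_loss:
                best_loss, best_j = loss, j
        selected.append(best_j)                # fix the best addition
        system_idx.append(list(selected))      # fusion of 1, of 2, ...
    return system_idx

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 400)
good = y * 2.0 + rng.normal(0, 1, 400)         # informative system
noise = rng.normal(0, 1, 400)                  # uninformative system
idx = greedy_fusion(np.column_stack([noise, good]), y, max_systems=2)
```

At each step the already-selected systems stay fixed, so the search is O(max_systems * total_systems) fits rather than exponential over all subsets, which is the point of the greedy procedure.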