>> [parameters, criteria, message, output] = minimizer(objective, guess, options)

where the input arguments are, for instance:

>> [parameters, fval, exitval, output] = fminimfil(function, starting_parameters, options, {constraints})

where the options and output arguments are structures.
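As a minimal sketch of this calling convention, the snippet below minimizes the 2D Rosenbrock function with fminimfil. The objective and starting point are illustrative, and the options fields shown (TolFun, MaxIter, in the usual optimset naming) are assumptions about which tolerance fields the optimizer honours:

```matlab
% Illustrative objective: 2D Rosenbrock function, global minimum at p = [1 1]
objective = @(p) 100*(p(2) - p(1)^2)^2 + (1 - p(1))^2;
guess     = [ -1.2 1 ];            % starting parameters

options = struct();                % options is a plain structure
options.TolFun  = 1e-6;            % assumed: tolerance on the criteria value
options.MaxIter = 1000;            % assumed: iteration limit

[parameters, fval, exitval, output] = fminimfil(objective, guess, options);
```

The same call works with any of the minimizers listed below, since they all share this signature.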
All of these methods support constraints on parameters during the optimization process.

| Optimizer | Description (continuous problems) | Mean success ratio (%) | Mean execution time (s) |
|---|---|---|---|
| fminpso [8] | Particle Swarm Optimization | 97.0 | 18.0 |
| fminpowell [13] | Powell with Coggins line search | 96.7 | 0.7 |
| fminsimpsa [1] | simplex/simulated annealing | 94.7 | 26.5 |
| fminimfil [3] | Unconstrained implicit filtering | 92.8 | 9.8 |
| fminralg [4] | Shor R-algorithm | 88.4 | 0.04 |
| fminnewton [21] | Newton gradient search | 79.1 | 0.02 |
| Optimizer | Description (noisy problems) | Mean success ratio (%) | Mean execution time (s) |
|---|---|---|---|
| fminpso [8] | Particle Swarm Optimization | 69.7 | 13.8 |
| fminsimpsa [1] | simplex/simulated annealing | 67.7 | 24.2 |
| fminsce [10] | Shuffled Complex Evolution | 65.7 | 18.7 |
| fmincmaes [9] | Evolution Strategy with Covariance Matrix Adaptation | 59.6 | 9.8 |
| fminimfil [3] | Unconstrained implicit filtering | 40.5 | 6.0 |
| fminhooke [12] | Hooke-Jeeves direct search | 38.9 | 8.1 |
>> fits(iData)
| Function Name | Description | Efficiency (continuous, %) | Efficiency (noisy, %) | Comments |
|---|---|---|---|---|
| fminanneal [18] | Simulated annealing | 53.6 | 5.3 | |
| fminbfgs [19] | Broyden-Fletcher-Goldfarb-Shanno | 83.9 | 2.5 | fast |
| fmincgtrust [5] | Steihaug Newton-CG-Trust | 87.4 | 4.1 | fast |
| fmincmaes [9] | Evolution Strategy with Covariance Matrix Adaptation | 86.3 | 59.5 | |
| fminga [15] | Genetic Algorithm (real coding) | 84.1 | 55.5 | |
| fmingradrand [6] | Random Gradient | 62.6 | 13.1 | rather slow for a gradient method |
| fminhooke [12] | Hooke-Jeeves direct search | 94.6 | 38.8 | fast and efficient |
| fminimfil [3] | Implicit filtering | 92.7 | 40.5 | the slowest of the gradient methods |
| fminkalman [20] | Unscented Kalman filter | 63.6 | 7.6 | |
| fminlm [7] | Levenberg-Marquardt | 14.2 | 1.8 | Performs very well when the objective is a vector, e.g. from least-square errors, on non-noisy problems. Rather slow when not used properly. |
| fminnewton [21] | Newton gradient search | 79.1 | 1.6 | fast |
| fminpowell [13] | Powell search with Coggins line search | 96.6 | 30.7 | very fast, especially for d>16 |
| | Powell search with Golden rule line search | | | slower than with Coggins |
| fminpso [8] | Particle Swarm Optimization | 97.0 | 69.7 | |
| fminralg [4] | Shor R-algorithm | 88.3 | 3.3 | fast |
| fminrand [22] | Adaptive random search | 60.7 | 44.7 | |
| fminsce [10] | Shuffled Complex Evolution | 88.0 | 65.7 | |
| fminsearchbnd [2] | Nelder-Mead simplex (fminsearch) | 55.3 | 5.4 | Matlab default |
| fminsimplex [14] | Nelder-Mead simplex (alternate implementation to fminsearch) | 73.3 | 30.0 | |
| fminsimpsa [1] | simplex/simulated annealing | 94.7 | 67.7 | |
| fminswarm [11] | Particle Swarm Optimizer (alternate implementation to fminpso) | 78.4 | 50.2 | |
| fminswarmhybrid [11] | Hybrid Particle Swarm Optimizer | 80.5 | 24.1 | |
>> [parameters, criteria, message, output] = fminimfil(model, initial_parameters, options, constraints)
>> p = fminimfil(objective, starting, options, [ 1 0 0 0 ])

will fix the first model parameter. A similar behavior is obtained when setting constraints as a structure with a fixed member:

>> constraints.fixed = [ 1 0 0 0 ];

The constraints vector should have the same length as the model parameter vector.
>> p = fminimfil(objective, starting, options, [ 0.5 0.8 0 0 ], [ 1 1.2 1 1 ])

A similar behavior is obtained by setting constraints as a structure with members min and max:

>> constraints.min = [ 0.5 0.8 0 0 ];
>> constraints.max = [ 1 1.2 1 1 ];

The constraints vectors should have the same length as the model parameter vector, and NaN values can be used to leave specified parameters unconstrained by the min/max bounds.
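Putting the constraint variants together, the sketch below combines the fixed, min, and max members documented above in a single constraints structure. The objective, starting point, and bound values are illustrative, and passing an empty options argument to select defaults is an assumption:

```matlab
% Illustrative quadratic objective with a known minimum
objective = @(p) sum((p - [0.7 1.0 0.2 0.5]).^2);
starting  = [ 0.6 0.9 0.1 0.4 ];

constraints.fixed = [ 1 0 0 0 ];       % keep the first parameter fixed
constraints.min   = [ NaN 0.8 0 0 ];   % NaN: no lower bound on p(1)
constraints.max   = [ NaN 1.2 1 1 ];   % NaN: no upper bound on p(1)

% assumed: an empty options argument selects the optimizer defaults
p = fminimfil(objective, starting, [], constraints);
```

Because the first parameter is fixed, p(1) should return unchanged at 0.6, while the remaining parameters are searched within their bounds.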