Calibration Algorithms and Parameter Sensitivity

Material calibration relies on numerical minimization algorithms that minimize the error between the test data and the computed material response.

Material calibration is performed by minimizing an objective function that measures the error between the test data and the predicted material response. The app constructs the objective function from the user-selected error measure, the test data sets selected for calibration, and the assigned weights. You can assign per-test data set weights as well as point-wise weights. During the calibration procedure, the app minimizes the objective function using the selected minimization algorithm.
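As an illustration, a weighted sum-of-squares objective over multiple test data sets might be assembled as follows. This is a sketch only: the function and data names are hypothetical, and the app's actual error measures may differ.

```python
import numpy as np

def weighted_objective(params, model, tests):
    """Sum-of-squares error over all test data sets, combining per-test
    weights and point-wise weights (illustrative sketch only)."""
    total = 0.0
    for t in tests:
        pred = model(params, t["x"])                      # computed response
        residual = pred - t["y"]                          # error vs. test data
        pw = t.get("point_weights", np.ones_like(residual))
        total += t.get("weight", 1.0) * np.sum(pw * residual**2)
    return total

# Hypothetical two-parameter "material model" and two test data sets
model = lambda p, x: p[0] * x + p[1] * x**2
tests = [
    {"x": np.linspace(0, 1, 20), "y": 2.0 * np.linspace(0, 1, 20), "weight": 2.0},
    {"x": np.linspace(0, 1, 20), "y": np.linspace(0, 1, 20) ** 2, "weight": 0.5},
]
err = weighted_objective([2.0, 0.0], model, tests)
```

A minimization algorithm then searches for the parameter values that make this scalar error as small as possible.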

Minimization Algorithms

The following numerical minimization algorithms are available for material calibration:

Derivative-free
Algorithms that optimize the set of fitted model parameters without using derivatives of the objective function. These algorithms are best suited for viscoelastic problems or, more generally, rate-dependent problems.

Algorithms: Nelder-Mead, Hooke-Jeeves, and Powell's Method

Gradient-based
Algorithms that use gradients of the function that describes the error between the fitted model and the data. The gradients identify local minima of the error, where the fit is most accurate. These algorithms are best suited for linear elastic or hyperelastic problems, where the model fits are phenomenological and driven by a single equation or set of equations.

Algorithms: Broyden-Fletcher-Goldfarb-Shanno (BFGS) and Conjugate Gradient

Heuristic (evolutionary)
Stochastic algorithms that use information gathered over successive iterations to decide which direction to follow toward a smaller error. These algorithms work best on extremely large or complicated data sets, where many variables or evolving variables are present.

Algorithms: Particle Swarm, Hybrid, and Differential Evolution

Nelder-Mead method
The downhill simplex method of Nelder and Mead (derivative-free method).

Hooke-Jeeves method
The direct search method of Hooke and Jeeves (derivative-free method).

Linear Least Squares Fit
The linear least squares fit method, available for certain hyperelastic material models.

Powell's method
Powell's conjugate directions method (line search-based, derivative-free method).

BFGS
Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method (line search, gradient-based method).

Conjugate Gradient
Non-linear conjugate gradient method (line search, gradient-based method).

Particle Swarm
Particle swarm optimization method (stochastic, global, derivative-free method). This method requires all active material parameters to have lower and upper bounds set.

Hybrid Search
Hybrid search minimization method (stochastic, global, derivative-free method) that employs both the particle swarm and downhill simplex methods. This method requires all active material parameters to have lower and upper bounds set.

Differential Evolution
Differential evolution minimization method (stochastic, global, derivative-free method). This method requires all active material parameters to have lower and upper bounds set.

Nelder-Mead Algorithm

The downhill simplex algorithm of Nelder and Mead is a derivative-free minimization algorithm. It uses a simplex with N + 1 vertices to find the minimum of a function of N variables (in this case, the active parameters). For example, a function of two variables has a simplex that is a triangle, and a function of three variables has a simplex that is a tetrahedron. The algorithm iteratively alters the locations of the vertices through a series of reflections, expansions, and contractions. These changes enable the simplex to advance toward and shrink around the location of a local minimum.

The algorithm creates the initial simplex such that one of the vertices corresponds to the user-provided initial guess. The app automatically computes the locations of the other vertices. The initial simplex size can affect the convergence of the algorithm, so it is sometimes beneficial to modify it by changing the simplex edge length scaling factor.

The algorithm is assumed to have “converged” if the characteristic size of the simplex is less than the “Solution tolerance” value or if the maximum difference in the objective function values at the simplex vertices is less than the “Function tolerance” value.
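As a rough analogue, SciPy's Nelder-Mead implementation exposes similar simplex-size and function-value tolerances (`xatol` and `fatol`). This sketch is illustrative only and is not the app's implementation; the objective here is a standard test function standing in for a calibration error measure.

```python
from scipy.optimize import minimize

# Rosenbrock-style objective standing in for a calibration error measure
f = lambda p: (1 - p[0])**2 + 100 * (p[1] - p[0]**2)**2

res = minimize(f, x0=[-1.2, 1.0], method="Nelder-Mead",
               options={"xatol": 1e-6,    # analogous to "Solution tolerance"
                        "fatol": 1e-6,    # analogous to "Function tolerance"
                        "maxfev": 2000})  # cap on objective evaluations
```

Tightening these tolerances increases accuracy at the cost of additional objective function evaluations.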

You can also modify the simplex reflection, expansion, contraction, and shrink factors, although in practice you should almost never need to do so.

The Nelder-Mead algorithm might stop at non-stationary points. This undesirable situation is more prevalent for cases with an increased number of active parameters. A possible workaround is to restart the algorithm. You can indicate the maximum number of times the algorithm should be automatically restarted or you can manually restart the algorithm by resubmitting the calibration to be executed from the provided solution.

Hooke-Jeeves Algorithm

The direct search method of Hooke and Jeeves belongs to the family of derivative-free pattern search algorithms. Starting from the initial guess, this method advances toward a local minimum through a repeated sequence of local exploratory moves followed by pattern moves. During each local exploratory move, the algorithm attempts to find a better location in the vicinity of the current best-so-far location by evaluating the objective function along each coordinate direction using a certain step size. If the exploratory search is successful (i.e. a better location is found), a pattern move is performed along the vector defined by the current location and the better location found by the exploratory search. If the exploratory search is not successful, or if the subsequent pattern move fails to find a better location, the step size for the exploratory search is decreased repeatedly until a better location is found.

The algorithm stops when the step size for the exploratory search becomes less than a cutoff value, which you can control by modifying the “Solution tolerance” value. You can modify the ratio by which the step size of the exploratory search is decreased by changing the “Perturbation factor” value. The initial step size is based on the coordinates of the initial guess; you can also control it by modifying the “Coordinates fraction” value.

Although fairly robust, this algorithm commonly requires more objective function evaluations than the other local algorithms provided. You might need to restart it manually if the maximum number of function evaluations is exhausted before the desired convergence accuracy is reached.
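A minimal sketch of the exploratory-move/pattern-move logic described above (illustrative only; the `tol` and `shrink` arguments play roles similar to the “Solution tolerance” and “Perturbation factor” controls, and the app's implementation may differ):

```python
import numpy as np

def hooke_jeeves(f, x0, step=0.5, tol=1e-6, shrink=0.5):
    """Minimal Hooke-Jeeves pattern search sketch (illustrative only)."""
    def explore(base, s):
        # Try a step of +/- s along each coordinate, keeping improvements.
        x, fx = base.copy(), f(base)
        for i in range(len(x)):
            for d in (s, -s):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx = trial, ft
                    break
        return x

    x = np.asarray(x0, dtype=float)
    while step > tol:
        new = explore(x, step)
        if f(new) < f(x):
            # Pattern move along the improving direction, then re-explore.
            pattern = explore(new + (new - x), step)
            x = pattern if f(pattern) < f(new) else new
        else:
            step *= shrink   # no improvement: reduce the exploratory step
    return x

x_opt = hooke_jeeves(lambda p: (p[0] - 3.0)**2 + (p[1] + 1.0)**2, [0.0, 0.0])
```

The nested loops make the high evaluation count noted above easy to see: every exploratory move costs up to 2N objective evaluations.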

The Linear Least Squares Fit Method

When using the analytical calibration mode, you can use a built-in linear least squares approach for calibrating certain hyperelastic material models that depend linearly on the material parameters. This method employs a specialized implementation derived from a relative error norm that is currently not available with the other algorithms. The method does not use the initial guess and cannot enforce bounds on the parameters.

Non-linear least squares methods are not currently available in the app.
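For a model that depends linearly on its parameters, the fit reduces to solving an overdetermined linear system. The following sketch uses a hypothetical two-parameter model and synthetic data with a plain least squares norm (the app's implementation uses a relative error norm):

```python
import numpy as np

# Hypothetical model linear in its parameters: sigma = c1*g1(x) + c2*g2(x)
strain = np.linspace(0.05, 1.0, 50)
g1, g2 = strain, strain**2                  # basis responses
true_c = np.array([3.0, -1.2])
stress = true_c[0] * g1 + true_c[1] * g2    # synthetic "test data"

A = np.column_stack([g1, g2])               # design matrix, one row per point
c, *_ = np.linalg.lstsq(A, stress, rcond=None)
```

Because the solution is obtained in a single linear solve, no initial guess is needed, which mirrors the behavior described above.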

Powell’s Conjugate Directions Method

This method is a derivative-free approach that attempts to locate a local minimum through repeated line searches (i.e. repeated one-dimensional directional minimizations). For an objective function of N variables, starting from an initial guess, the algorithm searches for better locations along a set of N directions during each iteration. The initial directions correspond to the basis vectors. The algorithm might repeatedly alter the set of directions (i.e., replace existing directions with new ones) to improve convergence.

The algorithm stops if, during an iteration, either the decrease in the objective function value or the incremental change in solution is smaller than a cutoff value. You can control each of these two convergence criteria by modifying the “Function tolerance” and “Solution tolerance” values, respectively.

In addition, you can control the line search accuracy by modifying the value of the “Line search convergence criterion” (relative convergence tolerance). Increasing the accuracy of the line search might not necessarily be beneficial (and might result in an increased number of function evaluations) because many of the line searches might occur away from the actual minimum location.

When you use this method, parameter bounds are enforced using a penalty function. You can modify the value of penalty stiffness used to enforce the bound constraints.
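A quadratic penalty for bound violations can be sketched as follows, here wrapped around SciPy's Powell implementation. The `stiffness` value and function names are illustrative assumptions, not the app's formulation:

```python
import numpy as np
from scipy.optimize import minimize

def penalized(f, lb, ub, stiffness=1e6):
    """Wrap an objective with a quadratic penalty on bound violations
    (sketch of a penalty-function approach; not the app's formulation)."""
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    def wrapped(p):
        p = np.asarray(p, float)
        viol = np.maximum(lb - p, 0.0) + np.maximum(p - ub, 0.0)
        return f(p) + stiffness * np.sum(viol**2)
    return wrapped

# Unconstrained minimum at (2, -3), but bounds restrict to [0,1] x [-1,1]
f = lambda p: (p[0] - 2.0)**2 + (p[1] + 3.0)**2
res = minimize(penalized(f, [0.0, -1.0], [1.0, 1.0]),
               x0=[0.5, 0.0], method="Powell")
```

With a large penalty stiffness, the minimizer lands essentially on the nearest bound; a stiffness that is too small lets the solution drift outside the bounds.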

The Non-linear Conjugate Gradient Method

The non-linear conjugate gradient method belongs to the family of gradient-based line search algorithms. Starting from the initial guess, during each iteration this algorithm attempts to improve the solution by performing a line search (i.e. one-dimensional directional minimization) along a direction conjugate to the gradient of the objective function at the best location found in the prior iteration. There are several approaches in the literature for computing the conjugate direction; the implementation available with the app uses the Polak-Ribière variant. The algorithm computes the gradient numerically. This algorithm might sometimes perform poorly for badly scaled problems.

In certain cases, the algorithm might benefit from increasing the line search accuracy. You can control the line search accuracy by modifying the value of the “Line search convergence criterion” (relative convergence tolerance).

The algorithm stops if, during an iteration, either the decrease in the objective function value or the incremental change in solution is smaller than a cutoff value. You can control each of these two convergence criteria by modifying the “Function tolerance” and “Solution tolerance” values, respectively.

When you use this method, parameter bounds are enforced using a penalty function. You can modify the value of penalty stiffness used to enforce the bound constraints.
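Since the gradient is evaluated numerically, a central-difference sketch supplied to SciPy's conjugate gradient method (which also uses a Polak-Ribière update) illustrates the idea; this is not the app's implementation, and the step size `h` is an illustrative choice:

```python
import numpy as np
from scipy.optimize import minimize

def num_grad(f, p, h=1e-6):
    """Central-difference numerical gradient (illustrative sketch)."""
    p = np.asarray(p, float)
    g = np.zeros_like(p)
    for i in range(p.size):
        e = np.zeros_like(p)
        e[i] = h
        g[i] = (f(p + e) - f(p - e)) / (2 * h)   # central difference
    return g

# Mildly scaled quadratic standing in for a calibration error measure
f = lambda p: (p[0] - 1.0)**2 + 10 * (p[1] - 2.0)**2
res = minimize(f, x0=[0.0, 0.0], method="CG", jac=lambda p: num_grad(f, p))
```

Each numerical gradient costs 2N extra objective evaluations, which is one reason gradient-based methods can become expensive as the number of active parameters grows.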

The BFGS Method

The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is perhaps the most widespread quasi-Newton minimization algorithm. Quasi-Newton methods attempt to find a local minimum by performing line searches in the Newton direction, but they approximate the Hessian matrix from gradient evaluations.

The algorithm stops if, during an iteration, either the decrease in the objective function value or the incremental change in solution is smaller than a cutoff value. You can control each of these two convergence criteria by modifying the “Function tolerance” and “Solution tolerance” values, respectively.

In the case of this algorithm, using too accurate line searches might result in wasted objective function evaluations. In other words, this algorithm tends to work more efficiently with less exact line searches. You can control the line search accuracy by modifying the value of the “Line search convergence criterion” (relative convergence tolerance).

When you use this method, parameter bounds are enforced using a penalty function. You can modify the value of penalty stiffness used to enforce the bound constraints.

The Particle Swarm Algorithm

The app includes two variants of the particle swarm minimization algorithm, one that uses a global-best swarm topology and a variant that uses a local-best topology.

Global-best Topology
The so-called global-best swarm topology is the default algorithm variant. When you use it, the algorithm communicates the information about the best achieved location to all particles in the swarm. The global-best topology tends to have faster convergence than the local-best variant, but it sometimes requires the use of multiple independent swarms to avoid premature convergence to a local minimum.
Local-best Topology
When you use the local topology variant, the algorithm propagates information about objective function improvements from a particle only to the particles in its neighborhood. For this variant, the convergence rate depends on the user-specified particle neighborhood size.

Particle swarm requires you to specify the bounds for the active parameters. The bounds are used to populate the initial locations of the particles using a uniform probability distribution. One of the particles is placed at the location corresponding to the user-specified initial guess. Therefore, this algorithm is less sensitive to a poor initial guess than the other algorithms.

In general, when you select the number of particles, you should include at least 4–5 particles for each active parameter. For the global-best variant, if multiple swarms are used, the algorithm randomly assigns the total number of specified particles to each of the swarms.

Because of the stochastic nature of the algorithm, repeating the same calibration run with the particle swarm algorithm can yield a different solution unless you use a fixed seed for the random number generator.

Particle swarm tends to be more computationally expensive than the other algorithms. By default, this algorithm attempts to run the specified number of particle swarm iterations. The algorithm can terminate before reaching this specified number of iterations if it detects that no progress is being made for a certain number of consecutive iterations. You can control the number of consecutive iterations without any significant change in the objective value and the number of consecutive iterations without any significant change in the solution.

At the end of the particle swarm iterations, if the number of allowed objective evaluations has not been exhausted, the algorithm then automatically runs a final minimization using any of the other available local minimization algorithms, and it specifies the solution at the end of the particle swarm iterations as the initial guess. You can select the algorithm used for this final minimization, and you can skip the final minimization completely.

Sometimes, you might find it useful to run the particle swarm algorithm as a domain exploration tool by setting a small number of iterations and turning the final minimization off. You can specify bounds that span several orders of magnitude and possibly increase the total number of particles.
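A minimal global-best particle swarm sketch, with a fixed random seed for reproducibility as noted above. This is illustrative only: `w`, `c1`, and `c2` are conventional PSO constants, not app controls, and the app's implementation may differ.

```python
import numpy as np

def particle_swarm(f, lb, ub, n_particles=20, iters=200, seed=0,
                   w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best particle swarm sketch (not the app's code)."""
    rng = np.random.default_rng(seed)          # fixed seed -> repeatable runs
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    x = rng.uniform(lb, ub, size=(n_particles, dim))   # initial positions
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()               # global-best location
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Velocity update: inertia + pull toward personal and global bests
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lb, ub)                     # keep particles in bounds
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, f(g)

best, fbest = particle_swarm(lambda p: float(np.sum((p - 0.5)**2)),
                             [-2.0, -2.0], [2.0, 2.0])
```

Changing `seed` changes the trajectory of the swarm, which is why repeated runs without a fixed seed can yield different solutions.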

The Differential Evolution Algorithm

The app provides a variant of the differential evolution methods. This algorithm randomly populates the parameter space with one or more population groups, with each group composed of a certain number of "members" or "agents," and each member in turn possessing a "genome" (or location) that represents a candidate solution. The algorithm uses your initial guess as the genome of one of the members.

You can use multiple population groups that explore the parameter space independently. Using more than one population group can sometimes help you avoid premature convergence to a local minimum. You can control the size of the population groups by modifying the group size factor. The default size of the population group is 10 times the number of active parameters.

During each computational iteration, the algorithm generates a new population through a sequence of mutation, crossover, and selection steps:

  • The algorithm promotes the best individual in each population group to the next generation, using it during the mutation step. If desired, you can instruct the algorithm to use the second best individual as well during the mutation step.
  • The crossover step employs the binomial distribution. You can control the differential weight (mutation scaling factor) and the crossover probability; both of these values range from 0.0 to 1.0. Smaller differential weight values can reduce the convergence rate of the algorithm, and larger values can favor exploration of the parameter space but with the risk of missing regions of local minima when the value is too large. A differential weight value in the range 0.4-0.5 typically represents a good compromise. When you increase the population size, it can sometimes be beneficial to decrease the differential weight value. Decreasing the crossover probability improves the search robustness, whereas increasing the value of this control parameter favors a faster convergence.
  • The algorithm uses a selection step to maintain a constant population size.

You can specify the allowed number of objective function evaluations and a total number of iterations to run. The algorithm stops before reaching the specified number of iterations if it detects that no improvement in the objective value or change in solution has been made for a certain number of consecutive iterations. You can control the number of consecutive iterations without any significant change in the objective value and the number of consecutive iterations without any significant change in the solution.

Similarly to the particle swarm and the hybrid search algorithms, you can automatically run a final minimization using the solution of the differential evolution algorithm as the initial guess for any of the local minimization algorithms. The algorithm performs the local minimization if the allowed number of objective function evaluations was not exhausted.
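SciPy's `differential_evolution` exposes controls that correspond closely to the ones described above: `mutation` is the differential weight, `recombination` the crossover probability, and `polish` triggers a final local minimization. A sketch, not the app's implementation:

```python
from scipy.optimize import differential_evolution

# Simple bowl standing in for a calibration error measure
f = lambda p: (p[0] - 0.3)**2 + (p[1] + 1.5)**2

res = differential_evolution(
    f, bounds=[(-2, 2), (-2, 2)],   # bounds required for all parameters
    strategy="best1bin",            # mutate around the best member, binomial crossover
    mutation=0.5,                   # differential weight
    recombination=0.7,              # crossover probability
    seed=1,                         # reproducible runs
    polish=True)                    # final local minimization (L-BFGS-B in SciPy)
```

The `best1bin` default strategy mirrors the description above: the best individual drives the mutation step, and crossover follows a binomial distribution.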

Convergence and Termination Criteria

All minimization algorithms in the app are iterative except for the linear least squares fit algorithm. During each iteration, a minimization algorithm performs one or more objective function evaluations. These evaluations are attempts to move the set of parameters to be calibrated (that is, the minimization solution) toward a location corresponding to a smaller objective function value. The minimization algorithms use certain convergence criteria to terminate the execution if they detect that no progress has been made during one or more consecutive iterations. Although the exact formulation depends somewhat on the algorithm, all algorithms assess the progress of the iterative minimization process based on two convergence criteria:

  1. No "significant decrease" has occurred in the value of the objective function.
  2. No "significant change" has occurred in the values of the calibrated parameters.

You can adjust the definition of a "significant decrease" in the objective function and that of a "significant change" for the convergence criterion based on the change in parameter values. Both options are available in the Optimization Controls dialog box.

In addition to convergence criteria, you can also control the maximum number of iterations and maximum number of allowed objective function evaluations for the selected algorithm. The minimization algorithm stops when it exceeds either of these values.

Importance of Initial Values

Except for the particle swarm and hybrid search algorithms, the supported minimization algorithms are all local minimization algorithms. Their convergence performance depends on the set of initial values that you specify. If the objective function has multiple local minima, the same algorithm can converge to different local minima for different initial values. In addition, two different algorithms can converge to different local minima even when they start from the same set of initial values.

Based on the material behavior selected, the app attempts to provide a set of reasonable initial values that you can further modify before running the calibration. The app also verifies that the initial parameter values are consistent with material behavior-specific requirements before allowing you to start the calibration.
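The dependence on initial values can be demonstrated on a simple double-well function, where the same local algorithm converges to different local minima from different starting points:

```python
from scipy.optimize import minimize

# Double-well objective with local minima near x = -1 and x = +1
f = lambda p: p[0]**4 - 2 * p[0]**2

a = minimize(f, x0=[-0.8], method="Nelder-Mead").x[0]   # converges near -1
b = minimize(f, x0=[0.8], method="Nelder-Mead").x[0]    # converges near +1
```

Both runs terminate successfully, yet they report different "calibrated" parameter values; only the starting point differs.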

Bounds Specification

The particle swarm and hybrid search algorithms require minimum and maximum bounds for all active parameters; that is, the parameters you select for calibration. For all other algorithms, bounds specification is optional. Depending on the selected material behavior, the app might automatically set bounds on certain material parameters. The app does not enforce bounds when you use the linear least squares fit.

Weights

For the construction of the objective function, two types of weights can be used: (i) per-test data set weights and (ii) point-wise weights. By default, equal weights are assigned to each test data set, and point-wise weights are not used.

When using multiple test data sets, it is possible that the calibrated material behavior fits individual test data sets with different levels of accuracy. You can influence how well a particular test data set is fit (as a whole) by modifying the default weight for that test data set.

In certain cases, different portions of one test data set might be fit to different levels of accuracy. Such a difference can occur if the test data points are very dense in one region of the data set but scarce in another. In other cases, the test data set might contain outliers, inaccurate data, or noisy data. You might be able to account for such data sets by assigning increased point-wise weights to test data points in the regions of interest. These weights can force the minimization algorithm to provide a better fit in those regions.

Parameter Sensitivity

At the end of a calibration run, the app enables you to compute parameter sensitivity indicators for each active design variable. These indicators provide measures of the relative sensitivity of the objective function to perturbations of the active design variables about the final converged solution. The app computes the normalized sensitivities $S_i|_x$ as

$$ S_i|_x = \frac{|\Delta F_i|}{\max_{k=1,\dots,N} |\Delta F_k|}, $$

where $i$ denotes the $i$-th design variable, $x$ is the converged solution, and $\Delta F_i$ is the change in the objective function caused by a perturbation of the $i$-th design variable. The sensitivity values range between 0 and 1: values near 1 indicate a high sensitivity, while values near 0 indicate a low sensitivity.

The parameter sensitivity indicators can help you gain a better understanding of the quality of the results you see during a calibration. For example, a design variable with a low sensitivity can show a large amount of variability during calibrations because its effect on the objective function is relatively small. You might conclude that this variable is not required in the material model, or you might choose not to include it as a design variable.
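The sensitivity formula above can be sketched as follows; the perturbation size and the example objective are illustrative assumptions, and the app's perturbation scheme may differ:

```python
import numpy as np

def sensitivities(f, x, rel_perturb=0.01):
    """Normalized sensitivity indicators at a converged solution x
    (sketch of the formula above; perturbation scheme is illustrative)."""
    x = np.asarray(x, float)
    f0 = f(x)
    dF = np.zeros(x.size)
    for i in range(x.size):
        xp = x.copy()
        # Perturb the i-th variable by a small relative amount
        xp[i] += rel_perturb * (abs(xp[i]) if xp[i] != 0 else 1.0)
        dF[i] = abs(f(xp) - f0)
    return dF / dF.max()   # normalize by the largest change

# Objective far more sensitive to the first parameter than the second
f = lambda p: 100 * p[0]**2 + 0.01 * p[1]**2
s = sensitivities(f, [1.0, 1.0])
```

Here the first parameter receives a sensitivity of 1 while the second is near 0, flagging it as a candidate for removal from the set of design variables.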