Each time you execute a calibration job, you can observe the following
general changes in the app:
- The response lines get closer to the test data in the plot.
- The material model parameters change.
- The error measures listed for the test data sets change, typically
approaching zero or one (depending on your choice of error measurement
algorithm) for all data sets, or for the data sets you choose to emphasize;
a sketch of both kinds of measure follows this list.
- The Calibration History panel appears, providing a
graphical representation of the reduction of error by increment.
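The exact formulas depend on the error measure you select. As a rough
illustration only (these are standard textbook measures, not necessarily the
app's internal algorithms), a normalized root-mean-square error shrinks toward
zero as the fit improves, while a coefficient-of-determination measure grows
toward one:

```python
import numpy as np

def normalized_rms_error(test, response):
    """Root-mean-square deviation, normalized by the test data range.
    Approaches 0 as the response curve matches the test data."""
    return np.sqrt(np.mean((response - test) ** 2)) / (test.max() - test.min())

def r_squared(test, response):
    """Coefficient of determination. Approaches 1 as the fit improves."""
    ss_res = np.sum((test - response) ** 2)
    ss_tot = np.sum((test - test.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Hypothetical stress values at matching strain points.
test = np.array([0.0, 1.2, 2.1, 2.9, 3.6])
response = np.array([0.0, 1.1, 2.0, 3.0, 3.7])
print(normalized_rms_error(test, response))  # small value -> good fit
print(r_squared(test, response))             # near 1 -> good fit
```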
Calibration Strategies
You can fine-tune the calibration and achieve a better fit for the
response using the following strategies:
- After you import data sets and specify a material model, the app
provides initial values for the material parameters and adds the resulting
response curve to the plot. You can adjust these default values manually in the
Calibration options panel to achieve a better
fit; each change you make to a parameter is reflected in the response curve and
in the error measurements. The robustness of any nonlinear optimization method
can be improved significantly by choosing a good set of initial parameters.
- For calibrations with multiple test data sets, you can increase the
relative weights of the data sets or even disable selected data sets. Data in
inactive data sets are disregarded by the calibration job, and data sets with
higher weights have a greater influence on the calibration calculations (the
weighted-error sketch after this list shows both controls). You
might want to disable or de-emphasize test data sets when your primary goal is
to achieve a close fit for one of the sets; for example, if you want the
response to match your uniaxial test data very closely.
- You can change the best-fit error measure so that the deviation between
the response and the test data is calculated with a different method; the
weighted-error sketch after this list also shows switching measures.
- You can use a different optimization method to perform the
calibration. For most optimization methods you can also adjust the solution and
function tolerance, specify a maximum number of function evaluations, and set a
limit on the number of iterations; the optimizer-options sketch after this
list shows these controls.
- The flexibility in the app enables you to perform a
wide range of sequential calibration workflows. You can make any of the
following changes between calibration runs: modify the material model, activate
and deactivate test data, modify your selection of material constant design
variables, or choose different optimization algorithms and error measures.
For example, assume you have test data that is well suited for a
hyperelastic Mullins material model. The app enables you to start by defining a
hyperelastic-only material model and calibrating it using some or all of the
available test data. Once your calibration is complete, you can modify the
material model by adding Mullins. If you keep the same hyperelastic potential,
you can see that the calibrated hyperelastic material constants from your
previous calibration run persist in the app, which gives you a good initial
solution for the next calibration run that includes Mullins; the two-stage
sketch below mirrors this workflow.
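The strategies above can be tried outside the app. As a minimal sketch of why
initial parameters matter, the following fits the single constant of an
incompressible neo-Hookean model (nominal uniaxial stress 2*C10*(L - 1/L**2),
an assumption for illustration, not necessarily the app's formulation) to
hypothetical test data:

```python
import numpy as np
from scipy.optimize import least_squares

def neo_hookean_uniaxial(c10, stretch):
    """Nominal uniaxial stress, incompressible neo-Hookean: 2*C10*(L - L**-2)."""
    return 2.0 * c10 * (stretch - stretch ** -2)

# Hypothetical uniaxial test data (stretch vs. nominal stress) with noise.
rng = np.random.default_rng(0)
stretch = np.linspace(1.0, 2.0, 9)
stress_test = neo_hookean_uniaxial(0.8, stretch) + 0.02 * rng.normal(size=9)

def residuals(params):
    """Pointwise deviation of the model response from the test data."""
    return neo_hookean_uniaxial(params[0], stretch) - stress_test

# The initial guess x0 plays the role of the default (or manually adjusted)
# parameter values in the app; a guess near the answer converges quickly.
fit = least_squares(residuals, x0=[1.0])
print(fit.x)  # calibrated C10, close to the 0.8 used to generate the data
```

With one parameter the fit is forgiving; for models with many constants
(Ogden, for example), a poor starting point can stall the optimizer or trap it
in a local minimum, which is why adjusting the defaults manually before
calibrating can pay off.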
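The per-data-set weight and enable controls amount to a weighted objective
over the active sets. The dictionary layout, values, and the two deviation
measures in this weighted-error sketch are illustrative assumptions, not the
app's data model:

```python
import numpy as np

# Hypothetical per-data-set weights and active flags, mirroring the app's
# controls; the data values and structure here are purely illustrative.
datasets = {
    "uniaxial": {"weight": 2.0, "active": True,
                 "test": np.array([1.0, 2.0, 3.0]),
                 "response": np.array([1.1, 1.9, 3.1])},
    "biaxial":  {"weight": 1.0, "active": True,
                 "test": np.array([1.5, 2.5]),
                 "response": np.array([1.4, 2.7])},
    "planar":   {"weight": 1.0, "active": False,   # disabled: ignored below
                 "test": np.array([1.2]),
                 "response": np.array([2.0])},
}

def deviation(test, response, measure="mse"):
    """Deviation between response and test data under a chosen measure."""
    if measure == "mse":
        return float(np.mean((response - test) ** 2))
    if measure == "mae":
        return float(np.mean(np.abs(response - test)))
    raise ValueError(f"unknown measure: {measure}")

def total_error(datasets, measure="mse"):
    """Weighted sum over active data sets; inactive sets contribute nothing."""
    return sum(d["weight"] * deviation(d["test"], d["response"], measure)
               for d in datasets.values() if d["active"])

print(total_error(datasets))          # uniaxial counts twice as much as biaxial
print(total_error(datasets, "mae"))   # same weighting, different error measure
```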
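The optimizer controls map naturally onto a generic library call. This
optimizer-options sketch uses scipy's Nelder-Mead method with an explicit
solution tolerance (xatol), function tolerance (fatol), and iteration and
function-evaluation limits; the objective is a stand-in, and the app's actual
optimizers and option names may differ:

```python
from scipy.optimize import minimize

# Stand-in scalar objective: in a real calibration this would be the total
# weighted error as a function of the material parameters.
def objective(params):
    c10 = params[0]
    return (c10 - 0.8) ** 2

# Swapping the method and tightening or loosening the tolerances and the
# iteration/function-evaluation caps changes how hard the calibration works.
result = minimize(objective, x0=[1.0], method="Nelder-Mead",
                  options={"xatol": 1e-6, "fatol": 1e-6,
                           "maxiter": 200, "maxfev": 400})
print(result.x, result.nit, result.nfev)
```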
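A two-stage script can mimic the sequential workflow described above. The
softening factor in this two-stage sketch is a deliberately crude stand-in for
a Mullins-type model (not the Ogden-Roxburgh form the app may use); the point
is that the stage-1 constant is reused as the warm start for stage 2:

```python
import numpy as np
from scipy.optimize import least_squares

def neo_hookean(c10, stretch):
    """Nominal uniaxial stress for an incompressible neo-Hookean solid."""
    return 2.0 * c10 * (stretch - stretch ** -2)

stretch = np.linspace(1.0, 2.0, 9)
loading_stress = neo_hookean(0.8, stretch)       # stand-in primary-loading data

# Stage 1: calibrate the hyperelastic constant alone.
stage1 = least_squares(lambda p: neo_hookean(p[0], stretch) - loading_stress,
                       x0=[1.0])

# Stage 2: add a damage parameter and reuse the stage-1 constant as the
# initial value, just as the app retains calibrated constants between runs.
def softened(params, stretch):
    c10, d = params                              # d: crude softening factor
    return (1.0 - d) * neo_hookean(c10, stretch)

unloading_stress = 0.9 * loading_stress          # stand-in unloading data
stage2 = least_squares(lambda p: softened(p, stretch) - unloading_stress,
                       x0=[stage1.x[0], 0.05],   # warm start from stage 1
                       bounds=([0.0, 0.0], [np.inf, 1.0]))
print(stage1.x, stage2.x)
```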
Calibration Notes
The following list provides some insights and tips about the app:
- The analytical and numerical execution modes are based on driving
the material's kinematics and calculating the stress response. Consequently,
for tests in which the stress is specified (such as creep tests), the
evaluation is performed by imposing the strain measured in the test and
predicting the stress; that is, the controlled variable is switched from the
one originally held constant in the test itself (see the sketch after this
list).
- Meaningful calibration of the Mullins effect requires a cyclic test
or a test with unloading, because the effect appears as stress softening upon
unloading and reloading; loading-only data cannot identify its parameters.
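As a sketch of the switched control variable for a creep test (the
Kelvin-Voigt toy model and all numbers are assumptions for illustration): the
measured creep strain history is imposed on the model, the stress is
predicted, and the deviation from the stress actually held constant in the
test is what the calibration minimizes.

```python
import numpy as np

def model_stress(strain, strain_rate, E=100.0, eta=300.0):
    """Toy Kelvin-Voigt response: sigma = E*eps + eta*deps/dt."""
    return E * strain + eta * strain_rate

time = np.linspace(0.0, 10.0, 11)
measured_strain = 0.05 * (1.0 - np.exp(-time / 3.0))  # hypothetical creep data
strain_rate = np.gradient(measured_strain, time)

held_stress = 5.0                        # the constant stress from the test
predicted = model_stress(measured_strain, strain_rate)

# The calibration adjusts E and eta to minimize this deviation; for the
# values above the prediction already recovers the held stress closely.
print(np.mean((predicted - held_stress) ** 2))
```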