About Test Data Processing

Test data processing enables you to clean your test data before you start the calibration process. Processing your data properly can help you complete the calibration more efficiently and with greater accuracy.

After you edit your test data, you can restore it to its original state, even if you save the data or close the app.

Repair

You can repair test data sets after import to correct problems such as repeated time values. When you run the repair utility, you can specify the minimum time increment between data points. The app removes all data points that have a time increment less than the specified minimum.
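
One way to read this pass is as a single sweep that drops any point arriving sooner than the minimum increment after the last retained point. The Python sketch below illustrates that reading; the function name, array layout, and min_dt parameter are illustrative, not the app's actual API.

```python
import numpy as np

def repair_time_series(time, values, min_dt):
    """Drop points whose time increment from the last retained point
    is smaller than min_dt (collapses repeated time values)."""
    keep = [0]                      # always retain the first point
    last_t = time[0]
    for i in range(1, len(time)):
        if time[i] - last_t >= min_dt:
            keep.append(i)
            last_t = time[i]
    keep = np.asarray(keep)
    return time[keep], values[keep]

# Example: the repeated time value at t = 1.0 is collapsed to one point.
t = np.array([0.0, 0.5, 1.0, 1.0, 1.5, 2.0])
s = np.array([0.0, 0.1, 0.2, 0.2, 0.3, 0.4])
t_fixed, s_fixed = repair_time_series(t, s, min_dt=1e-6)
print(t_fixed)   # [0.  0.5 1.  1.5 2. ]
```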

Decimate

Decimation removes data points from a test data set. Many test data sets contain far more points than the app needs to describe the material response accurately. Therefore, including unneeded data points can lead to an unnecessarily high number of evaluations during the calibration.

Two decimation algorithms are available: uniform and log-based.

  • Uniform decimation enables you to specify the total number of points that the app retains in the data set.
  • Log-based decimation retains a greater number of points at the beginning of the selected test data and fewer points later in the set. As a result, the spacing of the points is approximately uniform when plotted logarithmically in time. This decimation type is useful for creep or stress relaxation data, where it is important to accurately capture the change in strain or stress that occurs at very short time scales.

    The algorithm for log-based decimation first examines the test data series to determine the initial temporal resolution and the total amount of time in the decimation interval. It then decimates the data, retaining the specified number of points in each decade of time within the selected interval. The algorithm also caps the maximum time interval between retained points at twice the interval that uniform decimation of the same data would produce. A sketch of both schemes appears below.
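
The app does not expose these algorithms directly, so the following Python sketch is only a plausible reading of the two schemes, assuming time/value arrays sorted by time. The function names and the n_keep and pts_per_decade parameters are illustrative, and the cap on the maximum decimation interval is omitted for brevity.

```python
import numpy as np

def decimate_uniform(time, values, n_keep):
    """Keep n_keep points at approximately evenly spaced indices,
    always including the first and last points."""
    idx = np.unique(np.linspace(0, len(time) - 1, n_keep).round().astype(int))
    return time[idx], values[idx]

def decimate_log(time, values, pts_per_decade):
    """Keep roughly pts_per_decade points per decade of time, so the
    retained points are about evenly spaced on a logarithmic time axis.
    Assumes time[0] > 0; data starting at t = 0 would need an offset."""
    lo, hi = np.log10(time[0]), np.log10(time[-1])
    n_target = int(np.ceil((hi - lo) * pts_per_decade)) + 1
    targets = np.logspace(lo, hi, n_target)
    # Map each target time to the first data point at or after it.
    idx = np.unique(np.searchsorted(time, targets).clip(0, len(time) - 1))
    return time[idx], values[idx]
```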

You can decimate the entire set of test data or restrict the decimation to a specific range of the data along the selected x-axis. The app does not decimate any data points outside the specified range, and it does not include the data points outside the range in the calculation of the number of points to retain. This approach enables you to focus on reducing points in particular areas of the test data set.
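
A range-restricted decimation can be sketched as a wrapper that decimates only the points inside the range and splices the untouched points back in. The helper below is hypothetical and composes with either function from the previous sketch.

```python
import numpy as np

def decimate_in_range(time, values, x_lo, x_hi, n_keep, decimate):
    """Decimate only the points whose x-values fall in [x_lo, x_hi];
    points outside the range are kept verbatim and are not counted
    toward n_keep."""
    inside = (time >= x_lo) & (time <= x_hi)
    t_in, v_in = decimate(time[inside], values[inside], n_keep)
    t_all = np.concatenate([time[~inside], t_in])
    v_all = np.concatenate([values[~inside], v_in])
    order = np.argsort(t_all)
    return t_all[order], v_all[order]

# e.g. thin only the first 10 seconds of data down to 50 points:
# t2, v2 = decimate_in_range(t, v, 0.0, 10.0, 50, decimate_uniform)
```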

You can also retain reversal points in the test data, which can be useful when you decimate cyclic test data. The app defines reversal points as points that are larger or smaller than both the preceding and following points, or points for which exactly one of the preceding and following points is equal. When you perform logarithmic decimation with the option to retain reversals, the app resets the logarithmic behavior each time it encounters a reversal in the test data.
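
The reversal definition above translates directly into a point-by-point test, as in the sketch below; the function name is illustrative, not part of the app.

```python
import numpy as np

def reversal_indices(values):
    """Indices of reversal points: interior points that are strictly
    larger or smaller than both neighbors, or equal to exactly one
    neighbor while differing from the other."""
    idx = []
    for i in range(1, len(values) - 1):
        prev, cur, nxt = values[i - 1], values[i], values[i + 1]
        local_extremum = (cur > prev and cur > nxt) or (cur < prev and cur < nxt)
        one_equal = (cur == prev) != (cur == nxt)   # exactly one neighbor equal
        if local_extremum or one_equal:
            idx.append(i)
    return np.asarray(idx, dtype=int)

# Example with a cyclic signal: peaks, valleys, and plateau edges are flagged.
y = np.array([0.0, 1.0, 2.0, 1.0, 0.0, 1.0, 1.0, 2.0])
print(reversal_indices(y))   # [2 4 5 6]
```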

Zero Shift

Zero shift correction enables you to correct test data that does not start at the origin because of slack or a small initial preload in the test specimen. You must select two or more points from the test data that exhibit a linear relationship between stress and strain and lie on a line that should pass through the origin. In practice, these are the first points in the test data that are unaffected by the initial slack or preload of the specimen.

The app then passes a best-fit line through the selected points and applies offsets to the time, strain, and/or volume ratio series so that the values are appropriate (zero for strain and time, unity for volume ratio) where the best-fit line crosses zero stress or pressure. The data points preceding the selected points are discarded. The app then adds points on the best-fit line, at even intervals between the zero point and the first of the selected points, with spacing that approximates the spacing of the selected points. This approach prevents large increments at the beginning of the shifted test data.
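
The following sketch illustrates the stress-strain case of this procedure (the app applies analogous offsets to time and volume ratio). The function name, the sel index array, and the choice of fill spacing are assumptions based on the description above.

```python
import numpy as np

def zero_shift(strain, stress, sel):
    """Fit a line through the selected points (sorted indices in sel),
    shift the strain axis so the line passes through the origin, discard
    the points before the first selected point, and fill the gap up to
    that point with evenly spaced points on the fitted line."""
    slope, intercept = np.polyfit(strain[sel], stress[sel], 1)
    e0 = -intercept / slope            # strain where the fit crosses zero stress
    e = strain[sel[0]:] - e0           # shift, discarding the leading points
    s = stress[sel[0]:]
    # Fill between the new origin and the first selected point, spaced
    # roughly like the selected points themselves.
    de = np.mean(np.diff(strain[sel]))
    n_fill = max(int(round(e[0] / de)) - 1, 0)
    e_fill = np.linspace(0.0, e[0], n_fill + 2)[1:-1]
    s_fill = slope * e_fill            # the shifted line passes through (0, 0)
    return np.concatenate([[0.0], e_fill, e]), np.concatenate([[0.0], s_fill, s])
```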

Smoothing

Experimental test data often contain noise: the underlying test variable varies slowly, but the measurements are corrupted by random fluctuations. This noise can affect the quality of the strain energy potential that Abaqus derives. It is particularly problematic with the Marlow form, which computes a strain energy potential that exactly reproduces the test data used to calibrate the model; it is less of a concern with the other forms, because they fit smooth functions through the test data.

The app provides a smoothing technique, based on the Savitzky-Golay method, to remove noise from the test data. The idea is to replace each data point with a value computed from a local polynomial fit of its surrounding data points, reducing the noise level without biasing the dominant trend of the test data. You can control the following aspects of the smoothing technique (a sketch follows the list):

  • The order of the fitted polynomial.
  • The number of times the app repeats the smoothing process.
  • The range of test data to which you want to apply smoothing. You can also apply smoothing to the entire data set.
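
A minimal sketch of such a smoothing pass, using SciPy's savgol_filter implementation of the Savitzky-Golay method, appears below. The window length is an assumption; the description above does not state how the app chooses the local window.

```python
import numpy as np
from scipy.signal import savgol_filter

def smooth(values, polyorder=3, passes=1, window=11, lo=None, hi=None):
    """Savitzky-Golay smoothing, optionally repeated and optionally
    restricted to the index range [lo, hi)."""
    out = np.asarray(values, dtype=float).copy()
    sl = slice(lo, hi)                 # (None, None) covers the whole data set
    for _ in range(passes):
        out[sl] = savgol_filter(out[sl], window, polyorder)
    return out

# Example: a noisy ramp is smoothed without flattening the trend.
rng = np.random.default_rng(0)
y = np.linspace(0, 1, 200) + rng.normal(scale=0.02, size=200)
y_smooth = smooth(y, polyorder=3, passes=2)
```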

Outlier Removal

When a data point lies too far from the rest of the test data, it can skew the results of a calibration. You can manually remove such outliers from the test data.

Regularization

Regularization lets you resample a range of the test data, either adding or removing points. Before you regularize test data, you can flag individual points as critical points in the data set; these points are always preserved when you perform the regularization.
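
A regularization pass might be sketched as follows: resample onto an even grid with linear interpolation, then merge the flagged critical points back in. The function name and the merge rule are assumptions; restricting the resampling to a range would follow the same pattern as the decimation-range sketch above.

```python
import numpy as np

def regularize(time, values, n_points, critical_idx=()):
    """Resample the series onto n_points evenly spaced times, then merge
    back the flagged critical points so they are preserved exactly."""
    t_new = np.linspace(time[0], time[-1], n_points)
    v_new = np.interp(t_new, time, values)          # linear resampling
    t_crit = time[list(critical_idx)]
    v_crit = values[list(critical_idx)]
    # Critical points go first so that, after a stable sort, they win
    # over any resampled point that lands on the same time value.
    t_all = np.concatenate([t_crit, t_new])
    v_all = np.concatenate([v_crit, v_new])
    order = np.argsort(t_all, kind="stable")
    t_all, v_all = t_all[order], v_all[order]
    keep = np.concatenate([[True], np.diff(t_all) > 0])   # drop duplicate times
    return t_all[keep], v_all[keep]
```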