These methods are referred to as “zeroth-order methods” because they require only evaluation of the function, f(X), at each iterative step. Examples of zeroth-order methods include the Bracketing Method and the Golden Section Search Method. Some population-based methods can also be categorized as zeroth-order methods[1].
The Bracketing Method is a zeroth-order method that uses progressively smaller intervals to converge to an optimal solution. The interval is set up such that the x value corresponding to the optimal value of f lies within it. The interval is then divided into a chosen number of sub-intervals, and the value of f is calculated at each dividing point. The sub-interval containing the best value of f is chosen as the next interval. This process iterates until a convergence criterion is met[1].
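As an illustration, the Python sketch below implements this interval-reduction idea for a one-variable function; the test function, bounds, tolerance, and number of sub-intervals are illustrative assumptions, not part of any specific published routine.

```python
# A minimal sketch of the bracketing idea, assuming a one-dimensional
# function with a single minimum inside the starting interval [a, b].
def bracketing_minimize(f, a, b, n_sub=4, tol=1e-6):
    """Shrink [a, b] around the minimizer of f by repeated subdivision."""
    while (b - a) > tol:
        # Divide the current interval into n_sub equal sub-intervals and
        # evaluate f at each dividing point.
        xs = [a + i * (b - a) / n_sub for i in range(n_sub + 1)]
        fs = [f(x) for x in xs]
        i_best = fs.index(min(fs))
        # Keep the sub-interval(s) surrounding the best dividing point.
        a = xs[max(i_best - 1, 0)]
        b = xs[min(i_best + 1, n_sub)]
    return 0.5 * (a + b)

# Example: minimum of (x - 2)^2 on [0, 5]
x_star = bracketing_minimize(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(x_star)  # ~2.0
```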
In addition to evaluation of f(X), first-order methods require the calculation of the gradient vector ∇f(X) in each iterative step. Some examples of first-order methods are the Steepest Descent or Cauchy Method and the Conjugate Gradient Method.
The Steepest Descent Method searches in the negative direction of the gradient. The negative of the gradient gives the direction of maximum local decrease, hence “steepest descent.” The step size along the search direction can be determined through zeroth-order methods or by direct calculation, i.e., setting the derivative of f with respect to the step size equal to zero and solving. This method is guaranteed to converge to a local minimum, but convergence may be slow because previous iterations are not considered in determining the search direction of subsequent iterations. The rate of convergence can be estimated using the condition number of the Hessian matrix; if the condition number is large, convergence will be slow[1].
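The Python sketch below illustrates the iteration, pairing the negative-gradient direction with a simple backtracking line search (one zeroth-order way of choosing the step size); the ill-conditioned quadratic test function is an illustrative assumption chosen to show the slow convergence discussed above.

```python
# A minimal steepest-descent sketch, assuming the gradient is available.
import numpy as np

def steepest_descent(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # converged to a stationary point
            break
        d = -g                        # direction of maximum local decrease
        alpha = 1.0
        # Backtracking: shrink alpha until f decreases sufficiently.
        while f(x + alpha * d) > f(x) - 1e-4 * alpha * (g @ g):
            alpha *= 0.5
        x = x + alpha * d
    return x

# Ill-conditioned quadratic: large Hessian condition number -> slow progress.
f = lambda x: 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)
grad = lambda x: np.array([x[0], 100.0 * x[1]])
print(steepest_descent(f, grad, [1.0, 1.0]))  # ~[0, 0]
```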
The Conjugate Gradient Method is similar to the Steepest Descent Method except that it takes previous iterations into consideration when choosing search directions. The conjugate direction is determined by adding the steepest descent direction of the previous iteration, scaled by some value, to the steepest descent direction of the current iteration. The constant used to scale the previous search direction can be determined using either the Fletcher-Reeves formula or the Polak-Ribiere formula[1].
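A minimal Python sketch of a nonlinear conjugate gradient iteration using the Fletcher-Reeves formula follows; the test function, backtracking line search, and restart safeguard are illustrative choices rather than a specific published implementation. The Polak-Ribiere formula would replace beta with g_new @ (g_new - g) / (g @ g).

```python
# A minimal conjugate-gradient sketch with Fletcher-Reeves scaling.
import numpy as np

def conjugate_gradient(f, grad, x0, tol=1e-6, max_iter=1000):
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                 # first step: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = 1.0
        # Backtracking line search along the conjugate direction.
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d):
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves scaling constant
        d = -g_new + beta * d              # previous direction carried over
        if g_new @ d >= 0:                 # safeguard: restart if not descent
            d = -g_new
        g = g_new
    return x

f = lambda x: 0.5 * (x[0] ** 2 + 100.0 * x[1] ** 2)
grad = lambda x: np.array([x[0], 100.0 * x[1]])
print(conjugate_gradient(f, grad, [1.0, 1.0]))  # ~[0, 0]
```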
Second-order methods take advantage of the Hessian matrix, the matrix of second partial derivatives of the function, to improve the search direction and rate of convergence. Some examples of second-order methods are Newton's Method, the Davidon-Fletcher-Powell (DFP) method, and the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method[1].
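As a minimal illustration of how second-order information is used, the Python sketch below implements a plain Newton iteration, assuming the Hessian can be formed and is positive definite near the solution; quasi-Newton methods such as DFP and BFGS instead build an approximate (inverse) Hessian from gradient differences.

```python
# A minimal Newton's method sketch: solve H p = -g for the step.
import numpy as np

def newton_method(grad, hess, x0, tol=1e-8, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        # Solve the linear system rather than inverting the Hessian.
        p = np.linalg.solve(hess(x), -g)
        x = x + p
    return x

grad = lambda x: np.array([x[0], 100.0 * x[1]])
hess = lambda x: np.array([[1.0, 0.0], [0.0, 100.0]])
print(newton_method(grad, hess, [1.0, 1.0]))  # one step for a quadratic
```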
Population-based methods generate a population of points throughout the design space. Some then narrow the bounds around the best points and generate a new population within them, continuing until convergence is reached (Monte-Carlo Method). Others generate a population and then "evolve" the points: the weakest members of the new population are eliminated and the remainder evolved again until convergence is reached (Genetic Algorithm).
See Monte-Carlo
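A minimal Python sketch of the range-shrinking random search described above follows; the population size, number of retained points, and test function are illustrative assumptions.

```python
# A minimal range-shrinking random search: sample a population, keep the
# best points, and shrink the bounds around them.
import numpy as np

def shrinking_random_search(f, lower, upper, pop=50, keep=5, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    for _ in range(iters):
        pts = rng.uniform(lower, upper, size=(pop, lower.size))
        best = pts[np.argsort([f(p) for p in pts])[:keep]]
        # New bounds: the range spanned by the best points.
        lower, upper = best.min(axis=0), best.max(axis=0)
    return best[0]

f = lambda x: (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2
print(shrinking_random_search(f, [-5.0, -5.0], [5.0, 5.0]))  # ~[1, -2]
```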
Genetic algorithms are based on the principles of natural selection and natural genetics, meaning reproduction, crossover, and mutation are involved in the search procedure. The design variables are represented as strings of binary numbers, which mirror chromosomes in genetics. These strings allow the individual binary digits, or bits, to be adjusted during the reproduction, crossover, and mutation stages[2]. A population of points is used, and the number of initial points is typically two to four times the number of design variables. These points are evaluated to provide a fitness value, and above-average points are selected and added to a new population of points. Points in this new population undergo the second stage in the algorithm, known as crossover, in which information from two "parent" points, or strings, is combined to produce a new "child" point. The mutation operator is optional; it selects points based on a user-defined probability and alters a bit in the point's binary string, thereby maintaining diversity in the population. The process is iterated until convergence is reached. GAs differ from other optimization techniques in that they work with a coding of the parameter set rather than the parameters themselves, search a population of points instead of a single point, and use objective function knowledge instead of derivatives or other auxiliary knowledge[3].
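The Python sketch below illustrates these stages (binary encoding, fitness-based selection, single-point crossover, and bit-flip mutation) for a one-variable problem; the encoding bounds, population size, and mutation probability are illustrative assumptions.

```python
# A minimal binary-encoded GA sketch following the stages described above.
import random

def decode(bits, lo=-5.0, hi=5.0):
    # Map an n-bit string to a design variable on [lo, hi].
    return lo + int("".join(map(str, bits)), 2) * (hi - lo) / (2 ** len(bits) - 1)

def ga_minimize(f, n_bits=16, pop_size=40, gens=100, p_mut=0.01, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        fit = [f(decode(ind)) for ind in pop]
        # Selection: keep the better half as parents (minimization).
        parents = [ind for _, ind in sorted(zip(fit, pop))[: pop_size // 2]]
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            cut = random.randrange(1, n_bits)       # single-point crossover
            child = p1[:cut] + p2[cut:]
            # Mutation: flip each bit with probability p_mut.
            child = [b ^ 1 if random.random() < p_mut else b for b in child]
            children.append(child)
        pop = children
    return decode(min(pop, key=lambda ind: f(decode(ind))))

print(ga_minimize(lambda x: (x - 2.0) ** 2))  # ~2.0
```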
One of the most prevalent uses of optimization occurs in the design of structures. Common applications in this area include reducing weight subject to strength requirements, maximizing energy absorption in crashworthiness scenarios, and topology optimization. In a simple case, analytical objective functions are solved while altering design variables subject to constraints; this optimization can be performed in software such as MATLAB. In a more complex example, objective functions can be evaluated by finite element software such as Abaqus or LS-DYNA. Because the optimization algorithm sometimes requires hundreds or even thousands of iterations, optimization can become infeasible when directly coupled to a computationally expensive finite element simulation. A suitable alternative is the use of metamodels.
Metamodels offer a fast analytical approximation of a complex, expensive objective function, approximating its response over a predefined design space. To create a metamodel, objective function values must be calculated using a full-scale simulation at "training points" sampled throughout the design space. These training points can be found using a Design of Experiments (DoE). Some tutorials for DoE can be found at the following links: 1, 2, and 3. Once the DoE points and their objective function values are found, the data is used to "train" the metamodel. After training, the metamodel can be used directly in place of the full-scale simulations to calculate objective functions much faster. Metamodels can also be used in Monte Carlo simulations to quantify uncertainty when many calculations are necessary.
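As a minimal illustration, the Python sketch below fits a quadratic response surface, one common metamodel type, by least squares to objective values sampled at DoE points; the "expensive" function here is a cheap stand-in for a full-scale simulation, and the random sampling is an illustrative placeholder for a proper DoE.

```python
# A minimal metamodel sketch: quadratic response surface fit by least squares.
import numpy as np

def expensive_f(x):            # placeholder for a full-scale FE simulation
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2 + x[0] * x[1]

def features(x):               # quadratic basis: 1, x, x^2, cross term
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x2 * x2, x1 * x2])

rng = np.random.default_rng(0)
X_train = rng.uniform(-5.0, 5.0, size=(30, 2))         # "training points"
y_train = np.array([expensive_f(x) for x in X_train])  # full-scale evaluations
A = np.array([features(x) for x in X_train])
coeffs, *_ = np.linalg.lstsq(A, y_train, rcond=None)   # "train" the metamodel

def metamodel(x):              # fast analytical approximation
    return features(x) @ coeffs

print(metamodel([0.5, -1.0]), expensive_f([0.5, -1.0]))
```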
A comparative study of metamodeling methods for multiobjective crashworthiness optimization
Authors: Howie Fang, Masoud Rais-Rohani, Z. Liu, and Mark Horstemeyer
Analytical Model for Axial Crushing of Multi-cell Multi-corner Tubes (Multi-CRUSH)
Contributors: Ali Najafi and Masoud Rais-Rohani
Topology Optimization of Continuum Structures Using Element Exchange Method
Authors: Mohammad Rouhi and Masoud Rais-Rohani
Element Exchange Method for Topology Optimization
Authors: Mohammad Rouhi, Masoud Rais-Rohani and Thomas Neil Williams
A Goal-Oriented, Inverse Decision-Based Design Method for Multi-Component Product Design
Authors: Tate R. Fonville, Anand B. Nellippallil, Mark F. Horstemeyer, Janet K. Allen, Farrokh Mistree
Product Design Optimization with Microstructure-property Modeling and Associated Uncertainties
Authors: Kiran N. Solanki
Optimization algorithms can be used for model calibration. For example, the DMGfit (plasticity-damage, metals), TP (polymers), and MSFfit (fatigue) routines employ optimization algorithms to automatically fit their respective material models. The constants of interest are selected, and a Monte Carlo optimization routine is performed to generate candidate constants. A single-element simulation then produces the model stress-strain curve, which is compared against the input data for fit quality. This process is repeated until a satisfactory fit is achieved or a maximum number of iterations is reached, and the resulting optimized constants are then output.
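The Python sketch below illustrates this kind of Monte Carlo calibration loop in generic form; the power-law material model and synthetic "experimental" data are illustrative stand-ins and do not represent the actual DMGfit, TP, or MSFfit implementations.

```python
# A hedged sketch of a Monte Carlo calibration loop; the model and data
# here are illustrative stand-ins, not the actual calibration routines.
import numpy as np

def model_curve(strain, params):      # stand-in for a single-element simulation
    k, n = params
    return k * strain ** n            # simple power-law hardening (assumption)

strain = np.linspace(0.001, 0.2, 50)
data = model_curve(strain, (500.0, 0.25))   # synthetic "experimental" data

rng = np.random.default_rng(0)
best_params, best_err = None, np.inf
for _ in range(5000):                 # generate candidate constants
    params = (rng.uniform(100.0, 1000.0), rng.uniform(0.05, 0.5))
    err = np.sum((model_curve(strain, params) - data) ** 2)  # fit comparison
    if err < best_err:
        best_params, best_err = params, err
    if best_err < 1e-6:               # satisfactory fit achieved
        break

print(best_params, best_err)          # output the optimized constants
```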
The Embedded Atom Method (EAM) and Modified Embedded Atom Method (MEAM) potentials can be optimized based on electronic-scale calculation results and experimental data. See MEAM Potential Calibration
This is an emerging topic at CAVS. The pages describing the progress are currently available only to members of the research team.