Geometry and beyond: Optimization taxonomy and methods

The power of numerical optimization in engineering design is its ability to search rationally and rapidly through alternatives for the best possible design(s). The parameters of a design that can be varied in the search for a “best” design are called design variables. Given these, design can be structured as the process of finding the minimum or maximum of some attribute, termed the objective function. For a design to be acceptable, it must also satisfy certain requirements, or design constraints. Optimization is the process of automatically varying the design variables to identify the minimum or maximum of the objective function while satisfying all the required design constraints. [This survey of design optimization methods complements our recent review of design exploration techniques. See also Design exploration vs. design optimization.]

Geometry optimization—Optimization of a structure’s shape, size, topology, topography or topometry to satisfy operating limits imposed on the response of the structure, together with further limits on the values that the structural parameters can assume. See Optimization fundamentals.

Parameter optimization—For any design parameter(s), selection of values that are optimal in some defined respect, such as minimization of a specified objective function over a defined data set.

Multidisciplinary optimization (MDO)—Incorporates all relevant disciplines—structural (linear or nonlinear, static or dynamic, bulk materials or composites), fluid, thermal, acoustic, NVH, multibody dynamics, or any combination—simultaneously in an optimization problem. By exploiting interactions among disciplines, MDO can identify design solutions that are superior to those arrived at by optimizing each discipline sequentially, with substantially less time and labor expenditure.

Multi-objective or Pareto optimization—Real-world optimization problems usually have more than one objective. In optimizing a structural design, for example, the desired design would be both lightweight and rigid. Because these two objectives conflict, a tradeoff must be made: there will be one lightest design, one stiffest design, and an infinite number of designs that are some compromise of weight and stiffness. The set of tradeoff designs that cannot be improved on according to one criterion without harming another criterion is called the Pareto set. See Optimization fundamentals.
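
To make the Pareto set concrete, here is a minimal sketch that filters a finite list of candidate designs down to the non-dominated ones, i.e., designs that no other candidate beats in every objective. The two objectives and all numbers are hypothetical, and both objectives are treated as quantities to minimize:

```python
# Minimal sketch: extract the Pareto set (non-dominated designs) from a
# finite list of candidates, assuming both objectives are to be minimized.
# The objectives here, (weight, flexibility), are illustrative only.

def dominates(a, b):
    """Design a dominates b if a is no worse in every objective
    and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_set(designs):
    """Return the designs not dominated by any other design."""
    return [d for i, d in enumerate(designs)
            if not any(dominates(o, d)
                       for j, o in enumerate(designs) if j != i)]

# Each tuple is (weight, flexibility): lighter and stiffer are both better.
candidates = [(1.0, 9.0), (2.0, 5.0), (3.0, 4.0), (2.5, 6.0), (5.0, 1.0)]
print(pareto_set(candidates))  # (2.5, 6.0) is dominated by (2.0, 5.0)
```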

Robustness and reliability optimization—Product designs are nominal, while manufacturing and operating conditions are real-world. Finite geometric tolerances, variations in material properties, uncertainty in loading conditions and other variances encountered in production or use can cause a product to perform slightly differently from its nominal as-designed values. Robustness and reliability are therefore often desirable design objectives beyond the nominal design. The performance of robust and reliable designs is less affected by these expected variations, and remains at or above specified acceptable levels in all conditions. To evaluate the robustness and reliability of a design during simulation, its variables and system inputs are made stochastic by being defined in terms of both a mean value and a statistical distribution function. The resulting system performance characteristics are then measured in terms of a mean value and its variance. See Anatomy of design space exploration.
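
As a sketch of the mean-and-variance idea, the code below draws Monte Carlo samples of two stochastic inputs and reports the mean and standard deviation of a performance metric. The performance function, nominal values and spreads are all illustrative stand-ins, not values from any real simulation model:

```python
# Minimal sketch: evaluate robustness of a design by Monte Carlo sampling.
import random
import statistics

def performance(thickness, load):
    """Toy performance metric, larger is better (illustrative only)."""
    return thickness * 100.0 - load ** 2 / 50.0

random.seed(0)
samples = []
for _ in range(10_000):
    # Inputs defined by a mean value and a statistical distribution:
    t = random.gauss(2.0, 0.05)   # thickness: nominal 2.0, tolerance spread
    p = random.gauss(30.0, 3.0)   # load: nominal 30.0, operating variation
    samples.append(performance(t, p))

print(f"mean performance: {statistics.mean(samples):.2f}")
print(f"std deviation:    {statistics.stdev(samples):.2f}")
```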

Gradient-based vs. deterministic vs. heuristic methods

Algorithmic solution methods used by design optimization software can be broadly categorized as gradient-based, deterministic or heuristic.

[Figure. Source: ESTECO]

Gradient-based optimization—The gradient generalizes the derivative of a function of one variable to a function of several variables, and represents the slope of the tangent of the graph of the function. Specifically, the gradient points in the direction of the greatest rate of increase of the function, and its magnitude is the slope of the graph in that direction; the direction of greatest decrease is opposite the gradient. For example, if the temperature in a room is given by a scalar field T, so that at each point (x, y, z) the temperature is T(x, y, z), then the gradient of T at each point shows the direction in which the temperature rises most quickly, while the magnitude of the gradient gives how fast the temperature rises in that direction. Gradient-based methods exploit this: to minimize an objective function, they repeatedly step in the direction opposite the gradient until no direction of further improvement remains.
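
A minimal sketch of the simplest gradient-based method, steepest descent, on a two-dimensional quadratic objective. The objective, step size and iteration count are illustrative choices; production codes use line searches or trust regions rather than a fixed step:

```python
# Minimal sketch: steepest descent on a 2-D quadratic bowl.
# f(x, y) = (x - 3)**2 + 2 * (y + 1)**2, with its minimum at (3, -1).

def grad(x, y):
    """Analytic gradient of f: points in the direction of steepest increase."""
    return 2 * (x - 3), 4 * (y + 1)

x, y = 0.0, 0.0          # starting guess
step = 0.1               # fixed step size (real methods adapt this)
for _ in range(200):
    gx, gy = grad(x, y)
    x, y = x - step * gx, y - step * gy   # move against the gradient

print(f"x = {x:.4f}, y = {y:.4f}")  # approaches (3, -1)
```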

Sequential quadratic programming is a class of gradient-based iterative methods for nonlinear optimization used on problems for which the objective function and the constraints are twice continuously differentiable. SQP methods solve a sequence of optimization sub-problems, each of which optimizes a quadratic model of the objective subject to a linearization of the constraints.
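
As a concrete example, SciPy’s SLSQP solver is one widely available SQP-family implementation (this sketch assumes SciPy is installed). The toy problem below minimizes a quadratic objective subject to a single linear inequality constraint:

```python
# Minimal sketch using SciPy's SLSQP, an SQP-family method.
# Minimize x0**2 + x1**2 subject to x0 + x1 >= 1.
from scipy.optimize import minimize

objective = lambda x: x[0] ** 2 + x[1] ** 2

# SciPy inequality constraints are expressed in the form fun(x) >= 0.
constraints = [{"type": "ineq", "fun": lambda x: x[0] + x[1] - 1.0}]

result = minimize(objective, x0=[2.0, 0.0], method="SLSQP",
                  constraints=constraints)
print(result.x)  # close to [0.5, 0.5], the constrained optimum
```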

Deterministic optimization—“Deterministic approaches take advantage of the analytical properties of the problem to generate a sequence of points that converge to a global optimal solution. Heuristic approaches have been found to be more flexible and efficient than deterministic approaches; however, the quality of the obtained solution cannot be guaranteed. Moreover, the probability of finding the global solution decreases when the problem size increases. Deterministic approaches (e.g., linear programming, nonlinear programming, and mixed-integer nonlinear programming, etc.) can provide general tools for solving optimization problems to obtain a global or an approximately global optimum.” Ming-Hua Lin, Jung-Fa Tsai, and Chian-Son Yu, “A Review of Deterministic Optimization Methods in Engineering and Management,” Mathematical Problems in Engineering, Volume 2012 (2012)

Heuristic and metaheuristic optimization methods make few or no assumptions about the problem being optimized. While heuristics do not guarantee that an optimal solution will be found, they are used to find good approximate solutions for many complex classes of optimization problems.

A genetic algorithm is a search heuristic that mimics the process of natural selection. An example appeared in our most recent case study: “Our preliminary investigation of the design space indicated that it was highly non-linear; meaning small changes in variable values sometimes resulted in large changes in performance. This observation, combined with the optimization formulation being comprised of only discrete variables, led us to choose a genetic algorithm to perform the structural steel optimization study. Genetic algorithms utilize processes analogous to natural selection to stochastically search for the best designs. Since they do not require objective or constraint gradient information, genetic algorithms are able to search discontinuous and ‘noisy’ design spaces effectively. Compared to gradient-based optimization algorithms, we concluded genetic optimizers are much more likely to find globally optimal designs for this problem.” Flager F, Welle B, Bansal P, Soremekun G, Haymaker J (2009), Multidisciplinary process integration and design optimization of a classroom building, Journal of Information Technology in Construction (ITcon), Vol. 14, pg. 595-612, http://www.itcon.org/2009/38.

Genetic algorithms are one kind of evolutionary algorithm (EA). EAs generate solutions to optimization problems using techniques inspired by natural evolution such as inheritance, mutation, selection and crossover. Candidate solutions to the optimization problem play the role of individuals in a population, and the fitness function determines the quality of the solutions. Evolution of the population takes place through repeated application of these operators. Evolutionary algorithms often perform well across many types of problems because, ideally, they make no assumptions about the underlying fitness landscape.
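
The sketch below is a minimal genetic algorithm on the “OneMax” toy problem (maximize the count of 1 bits in a fixed-length bit string), showing selection, crossover and mutation operating on a population. Population size, mutation rate and generation count are illustrative, untuned choices:

```python
# Minimal sketch of a genetic algorithm on the OneMax toy problem.
import random

random.seed(1)
N_BITS, POP, GENS = 30, 40, 60

def fitness(ind):                      # quality of a candidate solution
    return sum(ind)

def tournament(pop):                   # selection: best of 3 random picks
    return max(random.sample(pop, 3), key=fitness)

def crossover(a, b):                   # one-point crossover (inheritance)
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def mutate(ind, rate=0.02):            # flip bits with small probability
    return [bit ^ 1 if random.random() < rate else bit for bit in ind]

pop = [[random.randint(0, 1) for _ in range(N_BITS)] for _ in range(POP)]
for _ in range(GENS):
    pop = [mutate(crossover(tournament(pop), tournament(pop)))
           for _ in range(POP)]

best = max(pop, key=fitness)
print(fitness(best), "of", N_BITS)
```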

Simulated annealing is a probabilistic metaheuristic for the global optimization problem of locating a good approximation to the global optimum of a given function in a large search space. It is often used when the search space is discrete rather than continuous. The name and concept are taken from the technique of annealing in metallurgy, which involves heating and controlled cooling of a material to increase the size of its crystals and reduce their defects.
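
A minimal simulated-annealing sketch on a discrete search space, here a toy traveling-salesman tour. The move set (reversing a segment of the tour) and the geometric cooling schedule are illustrative choices, not recommendations:

```python
# Minimal sketch: simulated annealing on a toy traveling-salesman problem.
import math
import random

random.seed(2)
cities = [(random.random(), random.random()) for _ in range(15)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
temp = 1.0
while temp > 1e-3:
    i, j = sorted(random.sample(range(len(order)), 2))
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]  # reverse a segment
    delta = tour_length(candidate) - tour_length(order)
    # Accept improvements always; accept worse moves with a probability
    # that shrinks as the "temperature" cools (the annealing analogy).
    if delta < 0 or random.random() < math.exp(-delta / temp):
        order = candidate
    temp *= 0.999

print(f"tour length: {tour_length(order):.3f}")
```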

Particle swarm optimization (PSO) optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Starting with a population of candidate solutions or “particles,” PSO moves the particles around in the search space according to simple mathematical formulae governing the particles’ position and velocity. Each particle’s movement is influenced by its local best known position but is also guided toward the best known positions in the search space, which are updated as better positions are found by other particles. This process is designed to move the entire swarm toward the best solutions.
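
A minimal PSO sketch minimizing the two-dimensional sphere function. The inertia and attraction coefficients are common textbook values, not tuned settings:

```python
# Minimal sketch: particle swarm optimization of f(x) = x0**2 + x1**2.
import random

random.seed(3)
f = lambda p: p[0] ** 2 + p[1] ** 2

W, C1, C2 = 0.7, 1.5, 1.5              # inertia, personal pull, social pull
swarm = [[random.uniform(-5, 5), random.uniform(-5, 5)] for _ in range(20)]
vel = [[0.0, 0.0] for _ in swarm]
pbest = [p[:] for p in swarm]          # each particle's best known position
gbest = min(pbest, key=f)[:]           # best position found by any particle

for _ in range(100):
    for i, p in enumerate(swarm):
        for d in range(2):
            # Velocity blends momentum with pulls toward the personal and
            # global best positions, each scaled by a random factor.
            vel[i][d] = (W * vel[i][d]
                         + C1 * random.random() * (pbest[i][d] - p[d])
                         + C2 * random.random() * (gbest[d] - p[d]))
            p[d] += vel[i][d]
        if f(p) < f(pbest[i]):
            pbest[i] = p[:]
            if f(p) < f(gbest):
                gbest = p[:]

print(f"best found: {gbest}, f = {f(gbest):.6f}")
```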

Local vs. global optimum

Optimization methods can search for either a local or a global optimum. Local optimization methods search for an optimum based on local information about the problem, for example gradient and geometric information. Global optimization methods search for the optimum based on global information about the problem, and are usually probability-based. Hybrid optimization methods combine local and global approaches, generally using response surface approximation to seek the global optimum with least effort.

Local optimization algorithms require the definition of an objective function (minimize, maximize, or reach a target value), boundaries for the input parameters, and, if needed, a set of equality or inequality constraints on the outputs. Local optimization algorithms are useful for solving general constrained optimization problems. Sensitivity-based algorithms use the sensitivities or gradients of the objective function and the constraints to find a local optimum efficiently: principal types of local optimization include the gradient-based and sequential quadratic programming families of methods.

[Figure: Local optimization. Source: Noesis Solutions]

Global optimization algorithms have a high probability, though not certainty, of finding the global optimum by looking around the design space at many locations simultaneously. Principal types of global optimization include the heuristic methods of genetic and general evolutionary algorithms, simulated annealing and particle swarm optimization.

[Figure: Global optimization. Source: Noesis Solutions]


See also Martins, J.R.R.A., and Lambe, A.B., “Multidisciplinary Design Optimization: A Survey of Architectures,” AIAA Journal, 51(9), 2013.