Nonlinear Model Predictive Control using Derivative-Free Optimization
Today it is common to solve the Nonlinear Programming (NLP) problem arising in NMPC using gradient-based optimization. However, these techniques may be unsuitable if the prediction model, and thus the optimization problem, is not differentiable. Such cases arise when the prediction model contains logic operators or lookup tables, or when a variable-step ODE solver is used to simulate it. Although the model may be made differentiable by alterations or by switching to another ODE solver, this can compromise the accuracy of the prediction, and thus the performance of the NMPC can suffer. It is therefore desirable to investigate techniques that can solve an NLP without requiring the gradient of the objective function or its constraints.

Derivative-Free Optimization (DFO) has been the subject of substantial research, as it is common for gradient information to be unavailable in optimization. This class of algorithms has frequently been applied to simulation-based optimization and, to some extent, to NMPC; for the latter, however, studies have mainly been limited to smaller and simpler systems. This thesis investigates some theoretical fundamentals of DFO and presents some common practical algorithms. A case study is then performed in which an NMPC is developed for a particular industrial crude-oil separator. This controller is simulated using the presented algorithms, as well as a gradient-based SQP method. The experience gained is then used to modify an existing algorithm to improve real-time performance.

The findings of the thesis are that DFO is significantly more robust than the tested SQP against the numerical issues exhibited by the case-study model. Further, one of the DFO algorithms is comparable with the particular SQP with respect to both computational cost and accuracy of the solutions. Another algorithm is also very robust against numerical issues.
Distributing the computational load over several processing cores can improve real-time performance significantly, as can reducing the number of model predictions required at each time step. This makes DFO a promising alternative to gradient-based optimization in NMPC.
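To illustrate the core idea, the following is a minimal sketch (not taken from the thesis) of a derivative-free solve of a single NMPC-style open-loop problem. The cost function `nmpc_cost` and the piecewise "valve" characteristic inside it are hypothetical stand-ins for the kind of non-smooth prediction model described above; the point is only that a DFO method such as Nelder–Mead needs cost evaluations alone, no gradients.

```python
import numpy as np
from scipy.optimize import minimize

def nmpc_cost(u):
    """Hypothetical one-step-ahead NMPC cost over a horizon of control moves.

    The np.where branch mimics a logic operator / lookup-table element in the
    prediction model, which makes the cost non-differentiable at u = 0.5.
    """
    flow = np.where(u > 0.5, 1.2 * u, 0.3 * u)  # piecewise valve characteristic
    setpoint = 0.9
    tracking = np.sum((flow - setpoint) ** 2)   # deviation from setpoint
    effort = 0.01 * np.sum(u ** 2)              # small control-effort penalty
    return float(tracking + effort)

# Derivative-free solve: Nelder-Mead uses only function values, so the
# non-smooth branch above poses no problem for it.
u0 = np.full(3, 0.6)  # initial guess: 3 control moves
res = minimize(nmpc_cost, u0, method="Nelder-Mead",
               options={"xatol": 1e-6, "fatol": 1e-8})

print(res.x)    # optimized control moves
print(res.fun)  # achieved cost, lower than nmpc_cost(u0)
```

A gradient-based solver applied to the same cost would rely on finite-difference or symbolic derivatives that are undefined (or misleading) at the switching point, which is the robustness gap the abstract refers to.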