Joint work with Alexander Shapiro
We consider the multistage stochastic programming problem where uncertainty enters the right-hand sides of the problem. Stochastic Dual Dynamic Programming (SDDP) is a popular method for solving such problems under the assumption that the random data process is stagewise independent. There are two approaches to incorporating dependence into SDDP-type algorithms. One approach is to model the data process as an autoregressive time series and to reformulate the problem in stagewise independent terms by adding state variables to the model. We refer to this approach as the TS-SDDP method. The other approach is to use a Markov chain discretization of the random data process. We refer to this approach as the Markov Chain SDDP (MC-SDDP) method. Although the MC-SDDP method can handle any Markovian data process, some advantages of a statistical analysis of the true process are lost. In a computational study based on the long-term operational planning problem of the Brazilian interconnected power system, we find that the optimality bounds computed by the MC-SDDP method close faster than the corresponding TS-SDDP bounds and that MC-SDDP produces slightly better policies. When implementing the optimized policies on real data, we observe that not only the choice of method but also the accuracy of the stochastic model has a significant impact on the cost, and that using an AV@R formulation is effective in making the policy robust against a misspecified stochastic model.
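To make the Markov chain route concrete, below is a minimal Python sketch (not taken from the paper; the function name and parameters are illustrative) of Tauchen's classic method for discretizing an AR(1) process into a finite Markov chain, i.e., the kind of discretized data process that MC-SDDP consumes. TS-SDDP would instead keep the AR(1) model itself and add the lagged value as a state variable to recover stagewise independence.

```python
# Minimal sketch: discretize an AR(1) process into a finite Markov chain via
# Tauchen's (1986) method. Illustrative only -- the paper's discretization of
# the Brazilian inflow data may differ. TS-SDDP would instead keep the model
# y_t = rho * y_{t-1} + eps_t and add y_{t-1} as a state variable.
import numpy as np
from scipy.stats import norm

def tauchen(n_states, rho, sigma, width=3.0):
    """Return (grid, P): grid points and a row-stochastic transition matrix
    approximating y_t = rho * y_{t-1} + eps_t, with eps_t ~ N(0, sigma^2)."""
    std_y = sigma / np.sqrt(1.0 - rho**2)        # stationary std. dev. of y
    grid = np.linspace(-width * std_y, width * std_y, n_states)
    half = (grid[1] - grid[0]) / 2.0             # half the grid spacing
    P = np.empty((n_states, n_states))
    for i in range(n_states):
        mean = rho * grid[i]                     # conditional mean in state i
        P[i, :] = (norm.cdf(grid + half, mean, sigma)
                   - norm.cdf(grid - half, mean, sigma))
        P[i, 0] = norm.cdf(grid[0] + half, mean, sigma)           # left tail
        P[i, -1] = 1.0 - norm.cdf(grid[-1] - half, mean, sigma)   # right tail
    return grid, P

# Example: a 10-state chain for a persistent process (rho = 0.8).
grid, P = tauchen(10, rho=0.8, sigma=0.2)
assert np.allclose(P.sum(axis=1), 1.0)           # each row is a distribution
```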
Working paper: http://www.optimization-online.org/DB_HTML/2017/09/6201.html