Abstract:
Discrete dynamic programming, widely used to address optimization problems over time, suffers from the so-called curse of dimensionality: the exponential growth in problem size as the number of system variables increases. One way to partially address this problem is to avoid the use of state transition probability matrices, which grow with the square of the size of the state space. This can be done through the use of expected value (EV) functions, which compute the expectation of functions of the future state variables conditioned on the current variables. This approach leads to potential gains in two situations: when the state transition can be broken into separate phases, and when the transitions for different state variables are conditionally independent. Both of these situations arise in models used in natural resource management and are illustrated with several examples.
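As an illustration of the conditional-independence case, the sketch below contrasts an expected-value computation done with the full joint transition matrix against one done factor by factor, never forming the joint matrix. It is a minimal sketch in Python/NumPy under assumed sizes and randomly generated transition factors; the names P1, P2, V, and n are illustrative, not from the paper.

import numpy as np

# Illustrative setup: two state variables, each taking n values, whose
# transitions are conditionally independent given the current state.
n = 30
nstates = n * n                      # joint state space size
rng = np.random.default_rng(0)

# Each factor P_i[x1, x2, xi'] gives Pr(xi' | x1, x2).
P1 = rng.random((n, n, n)); P1 /= P1.sum(axis=2, keepdims=True)
P2 = rng.random((n, n, n)); P2 /= P2.sum(axis=2, keepdims=True)

V = rng.random((n, n))               # value function on the future state

# Full-matrix approach (for comparison): the joint transition matrix
# has (n^2)^2 = n^4 entries -- 810,000 here.
Pfull = np.einsum('abi,abj->abij', P1, P2).reshape(nstates, nstates)
EV_full = (Pfull @ V.reshape(nstates)).reshape(n, n)

# EV-function approach: take expectations one factor at a time,
# never storing more than O(n^3) numbers.
T = np.einsum('abj,ij->abi', P2, V)  # expectation over x2' for each x1'
EV = np.einsum('abi,abi->ab', P1, T) # expectation over x1'

assert np.allclose(EV, EV_full)

The savings compound with dimension: with d conditionally independent state variables the joint transition matrix has n^(2d) entries, while the d factors together require only on the order of d*n^(d+1).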
Suggested citation: Fackler, Paul L. (2018). Stochastic dynamic programming without transition matrices. (CEnREP Working Paper No. 18-018). Raleigh, NC: Center for Environmental and Resource Economic Policy.