Communications on Stochastic Analysis, Vol. 8, No. 1, March 2014, pp. 1-15.
We give a short introduction to stochastic calculus for Itô-Lévy processes and briefly review the two main methods for optimal control of systems described by such processes:
(i) Dynamic programming and the Hamilton-Jacobi-Bellman (HJB) equation;
(ii) The stochastic maximum principle and its associated backward stochastic differential equation (BSDE).
The two methods are illustrated by application to the classical portfolio optimization problem in finance. A second application is the problem of risk minimization in a financial market. Using a dual representation of risk, we arrive at a stochastic differential game, which we solve by means of the Hamilton-Jacobi-Bellman-Isaacs (HJBI) equation, an extension of the HJB equation to stochastic differential games.
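As a small illustration of the classical portfolio optimization problem mentioned above, the following sketch computes the well-known Merton solution in the pure-diffusion case (no jumps) with CRRA power utility. The closed-form fraction pi* = (mu - r) / ((1 - gamma) * sigma^2) is standard; it is not taken from this paper, and all parameter values below are illustrative assumptions.

```python
# Sketch: the Merton portfolio problem, pure-diffusion case (no jumps),
# with power utility U(x) = x**gamma / gamma, 0 < gamma < 1.
# The first-order condition of the HJB equation gives the constant
# optimal fraction of wealth held in the risky asset:
#     pi* = (mu - r) / ((1 - gamma) * sigma**2).
# Parameter values are illustrative assumptions, not from the paper.

def merton_fraction(mu: float, r: float, sigma: float, gamma: float) -> float:
    """Optimal risky-asset fraction under CRRA utility with exponent gamma."""
    if not 0.0 < gamma < 1.0:
        raise ValueError("gamma must lie in (0, 1) for this power-utility form")
    return (mu - r) / ((1.0 - gamma) * sigma ** 2)

# Example: drift 8%, risk-free rate 2%, volatility 20%, gamma = 0.5.
pi_star = merton_fraction(mu=0.08, r=0.02, sigma=0.20, gamma=0.5)
print(f"Optimal fraction in the risky asset: {pi_star:.2f}")  # prints 3.00
```

Note that the optimal fraction can exceed 1 (here 3.00), meaning the investor borrows at the risk-free rate to leverage the risky position; the jump-diffusion version treated in the paper modifies this first-order condition.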
Available at https://www.math.lsu.edu/cosa/. Posted here with permission.