We consider optimal control problems where the state X(t) of the system at time t is given by a stochastic differential delay equation. The growth rate at time t depends not only on the present value X(t), but also on the delayed value X(t-d) and on some sliding average of past values; moreover, this dependence may be nonlinear. Using the dynamic programming principle, we derive an associated (finite dimensional) Hamilton-Jacobi-Bellman (HJB) equation for the value function of such problems. This finite dimensional HJB equation has solutions if and only if the coefficients satisfy a particular system of first order PDEs. We introduce viscosity solutions for the type of HJB equations we consider, and prove that, under certain conditions, the value function is the unique viscosity solution of its HJB equation. We also give numerical examples for two cases in which the HJB equation reduces to a finite dimensional one.
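To make the class of state dynamics concrete, the following is a minimal illustrative sketch (not taken from the paper) of an Euler-Maruyama simulation of a stochastic differential delay equation whose drift depends on the present value X(t), the delayed value X(t-d), and an exponentially weighted sliding average of the path over [t-d, t]. All coefficient choices (the tanh delay term, the weights, the noise intensity) are hypothetical examples, chosen only to exhibit the structure described above.

```python
import numpy as np

def simulate_sdde(T=2.0, d=0.5, lam=1.0, dt=1e-3, x0=1.0, seed=0):
    """Euler-Maruyama for a hypothetical SDDE
        dX(t) = [-X(t) + 0.5*tanh(X(t-d)) + 0.1*Y(t)] dt + 0.2*X(t) dW(t),
    where Y(t) = int_{-d}^{0} exp(lam*s) X(t+s) ds is a sliding average.
    The initial segment on [-d, 0] is held constant at x0.
    """
    rng = np.random.default_rng(seed)
    n_delay = int(round(d / dt))          # number of grid points in the delay window
    n_steps = int(round(T / dt))
    x = np.full(n_delay + n_steps + 1, x0)  # path including the initial segment
    # quadrature weights exp(lam*s) ds on the grid s = -d, ..., -dt
    weights = np.exp(lam * np.arange(-n_delay, 0) * dt) * dt
    for k in range(n_steps):
        i = n_delay + k                   # index corresponding to time t
        window = x[i - n_delay:i]         # path values on [t-d, t)
        y = np.dot(weights, window)       # sliding average Y(t)
        drift = -x[i] + 0.5 * np.tanh(x[i - n_delay]) + 0.1 * y
        diffusion = 0.2 * x[i]
        x[i + 1] = x[i] + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal()
    return x[n_delay:]                    # path on [0, T]

path = simulate_sdde()
```

The dependence of the drift on the whole segment {X(t+s) : s in [-d, 0]} (through both the point delay and the average) is what makes the state infinite dimensional in general, and hence what makes the finite dimensional reduction of the HJB equation a nontrivial question.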