Deep Reinforcement Learning (DRL) has recently spread into a range of domains within physics and engineering, with several remarkable achievements. Still, much remains to be explored before the capabilities of these methods are well understood. In this paper, we present the first application of DRL to direct shape optimization. We show that, given an adequate reward, an artificial neural network trained through DRL is able to generate optimal shapes on its own, without any prior knowledge and within a constrained time. While we choose here to apply this methodology to aerodynamics, the optimization process itself is agnostic to the details of the use case, and our work thus paves the way for new generic shape optimization strategies, both in fluid mechanics and, more generally, in any domain where a relevant reward function can be defined.
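The core idea stated above — a policy improved purely from a scalar reward, with no prior knowledge of what a good shape looks like — can be sketched with a minimal REINFORCE-style loop in plain Python. Everything in this sketch is hypothetical and much simpler than the paper's setup: the `reward` function is a toy stand-in for what would, in practice, be a flow solver returning e.g. negative drag, and the policy is a Gaussian over a handful of abstract shape parameters rather than a deep network.

```python
import random

def reward(shape):
    # Toy stand-in for an expensive objective (e.g. negative drag from a
    # CFD solver): peaks when the parameters match a fixed hidden optimum.
    target = [0.5, -0.3, 0.8, 0.1]
    return -sum((s - t) ** 2 for s, t in zip(shape, target))

def train(n_params=4, sigma=0.2, lr=0.02, episodes=2000, seed=0):
    rng = random.Random(seed)
    mean = [0.0] * n_params      # Gaussian policy: learnable mean, fixed sigma
    baseline = None              # moving-average reward baseline (variance reduction)
    for _ in range(episodes):
        # Sample a candidate shape from the current policy.
        action = [m + sigma * rng.gauss(0.0, 1.0) for m in mean]
        r = reward(action)
        baseline = r if baseline is None else 0.9 * baseline + 0.1 * r
        adv = r - baseline
        # REINFORCE update: grad of log-prob w.r.t. mean is (a - mean) / sigma^2.
        mean = [m + lr * adv * (a - m) / sigma ** 2
                for m, a in zip(mean, action)]
    return mean

best = train()
print("optimized shape parameters:", [round(m, 2) for m in best])
```

The policy starts with no information about the optimum; only the reward signal drives the parameters toward it, which is the sense in which the learning proceeds "without any prior knowledge". In the paper's actual setting, the action space and reward evaluation are of course far richer.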
This item's license is: Attribution-NonCommercial-NoDerivatives 4.0 International