
Using Deep Reinforcement Learning for Active Flow Control

Holm, Marius
Master thesis
View/Open
main.pdf (10.58 MB)
Year
2020
Permanent link
http://urn.nb.no/URN:NBN:no-82314

Appears in the following Collection
  • Fysisk institutt [2360]
Abstract
We apply deep reinforcement learning (DRL) to reduce and increase the drag of a two-dimensional wake flow around a cluster of three equidistantly spaced cylinders, known as the fluidic pinball. The flow is controlled by rotating the cylinders according to input provided by a DRL agent or a pre-determined control function. Simulations are carried out for two Reynolds numbers, Re = 100 and Re = 150, corresponding to a periodic asymmetric flow and a chaotic symmetric flow, respectively. At Re = 100, DRL agents are able to reduce drag by up to ≈ 28% and increase drag by ≈ 45% compared to the baseline flow with no applied control. For the chaotic flow at Re = 150, DRL agents are able to reduce drag by up to ≈ 32% and increase drag by up to ≈ 65% compared to the baseline flow. Deep reinforcement learning combines artificial neural networks (ANNs) with a reinforcement learning (RL) architecture that enables an agent to learn the best actions by interacting with an environment. Reinforcement learning refers to goal-oriented algorithms that learn to achieve a complex goal by interacting with an environment, i.e. by trial and error. Artificial neural networks are used as function approximators for the reinforcement learning policy and/or value function. The ANN is trained to be the best possible approximation to the target function by a gradient descent (GD) optimization algorithm. This is especially effective for complex systems where the sets of possible states and actions are too large to be known completely. In this thesis we implement a DRL agent based on the proximal policy optimization (PPO) algorithm, together with a fully connected neural network (FCNN), to control the rotations of the cylinders. We also compare the DRL strategies with simpler strategies such as constant rotations and pre-determined sinusoidal control functions.
 
Responsible for this website 
University of Oslo Library


Contact Us 
duo-hjelp@ub.uio.no


Privacy policy