Stability-certified deep reinforcement learning strategy for UAV and Lagrangian floating platform

Bibliographic Details
Main Authors: Muslim, M. S. M., Ismail, Z. H.
Format: Conference or Workshop Item
Published: 2021
Online Access: http://eprints.utm.my/id/eprint/95750/
http://dx.doi.org/10.1109/ECTI-CON51831.2021.9454688
Description
Summary: This paper presents a robust technique that enables an Unmanned Aerial Vehicle (UAV) to fly above a moving platform autonomously. The study investigates the problem of certifying the stability of a reinforcement learning policy interconnected with a nonlinear dynamical system, since conventional control methods often fail to account properly for such complex effects. To this end, deep reinforcement learning algorithms are designed to maintain the robust stability of the UAV's position in three-dimensional space (altitude and longitude-latitude location) so that the UAV can track the moving platform in a stable manner. In addition, the input-output policy gradient method is regularized so that a large set of stabilizing controllers can be certified, achieving robust stability by exploiting problem-specific structure. Within this stability-certified parameter space, the reinforcement learning agent attains high performance while exhibiting consistent learning behavior over time, as demonstrated by an evaluation on a decentralized control task involving flight formation.
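
The certification idea summarized above, a policy-gradient update kept inside a stability-certified region of parameter space, can be illustrated with a short sketch. The Python below is not the authors' method: it assumes a toy double-integrator hover model, a Frobenius-norm ball as the certified set, and a plain REINFORCE-style gradient estimate, purely to show how a projection step keeps the learned gain inside a certificate.

# Minimal, self-contained sketch (not the authors' implementation) of a
# policy-gradient learner whose parameters are projected back into a
# stability-certified set after every update. The toy hover dynamics, the
# norm-ball certificate of radius RHO, and all hyperparameters are
# illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy vertical-hover dynamics: state x = [altitude error, vertical velocity].
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])

RHO = 5.0                  # assumed radius of the certified ball ||K||_F <= RHO
SIGMA = 0.1                # exploration noise on the thrust command
THETA = np.zeros((1, 2))   # linear state-feedback policy: u = THETA @ x + noise

def project(theta, rho=RHO):
    # Project the policy gain back onto the certified set (Frobenius-norm ball).
    norm = np.linalg.norm(theta)
    return theta if norm <= rho else theta * (rho / norm)

def rollout(theta, horizon=50):
    # One noisy episode; returns a REINFORCE-style gradient estimate and the cost.
    x = np.array([[1.0], [0.0]])            # start 1 m away from the hover point
    grad_logp, cost = np.zeros_like(theta), 0.0
    for _ in range(horizon):
        mean_u = theta @ x
        u = mean_u + SIGMA * rng.standard_normal((1, 1))
        grad_logp += ((u - mean_u) / SIGMA**2) @ x.T   # grad of log N(u; mean_u, SIGMA^2)
        cost += float(x.T @ x + 0.1 * u.T @ u)         # quadratic tracking cost
        x = A @ x + B @ u
    return grad_logp * cost, cost

for step in range(500):
    grad, cost = rollout(THETA)
    THETA = project(THETA - 1e-4 * grad)   # descend on the cost, then re-certify
    if step % 100 == 0:
        print(f"step {step:3d}  episode cost {cost:8.2f}  ||K||_F {np.linalg.norm(THETA):.2f}")

In the approach described in the summary, the certified region would come from the input-output analysis of the closed loop rather than a fixed norm ball; the projection step here merely stands in for that certificate.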