Publication detail

Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models

BRABLC, M.; ŽEGKLITZ, J.; GREPL, R.; BABUŠKA, R.

English title

Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models

Type

Journal article indexed in Web of Science (Jimp)

Language

en

Original abstract

Reinforcement learning (RL) agents can learn to control a nonlinear system without using a model of the system. However, having a model brings benefits, mainly in terms of a reduced number of unsuccessful trials before achieving acceptable control performance. Several modelling approaches have been used in the RL domain, such as neural networks, local linear regression, or Gaussian processes. In this article, we focus on techniques that have not been used much so far: symbolic regression (SR), based on genetic programming and local modelling. Using measured data, symbolic regression yields a nonlinear, continuous-time analytic model. We benchmark two state-of-the-art methods, SNGP (single-node genetic programming) and MGGP (multigene genetic programming), against a standard incremental local regression method called RFWR (receptive field weighted regression). We have introduced modifications to the RFWR algorithm to better suit the low-dimensional continuous-time systems we are mostly dealing with. The benchmark is a nonlinear, dynamic magnetic manipulation system. The results show that using the RL framework and a suitable approximation method, it is possible to design a stable controller of such a complex system without the necessity of any haphazard learning. While all of the approximation methods were successful, MGGP achieved the best results at the cost of higher computational complexity.
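The abstract names RFWR (receptive field weighted regression) as the local-modelling baseline. As a rough illustration of the idea behind that family of methods, the sketch below blends several local linear models, each gated by a Gaussian receptive field, into one global prediction. It is not the authors' implementation; all function names, the toy receptive fields, and the numeric parameters are hypothetical.

```python
import numpy as np

# Illustrative sketch of the prediction step in receptive-field-weighted
# regression: each receptive field k has a centre c_k, a positive-definite
# distance metric D_k, and a local linear model beta_k; the global output
# is the activation-weighted average of the local predictions.
# (Hypothetical example, not the paper's implementation.)

def rfwr_predict(x, centres, metrics, betas):
    """Predict y(x) as a weighted blend of local linear models.

    x       : (d,) query point
    centres : (K, d) receptive field centres c_k
    metrics : (K, d, d) distance metrics D_k
    betas   : (K, d + 1) local parameters [offset, slope...] per field
    """
    preds = np.empty(len(centres))
    weights = np.empty(len(centres))
    for k, (c, D, beta) in enumerate(zip(centres, metrics, betas)):
        diff = x - c
        # Gaussian activation: w_k = exp(-0.5 (x - c)^T D (x - c))
        weights[k] = np.exp(-0.5 * diff @ D @ diff)
        # Local linear prediction around the centre: y_k = b0 + b^T (x - c)
        preds[k] = beta[0] + beta[1:] @ diff
    return weights @ preds / weights.sum()

# Two toy 1-D receptive fields locally approximating y = x^2
centres = np.array([[-1.0], [1.0]])
metrics = np.array([[[4.0]], [[4.0]]])
betas = np.array([[1.0, -2.0],   # around x = -1: y ~ 1 - 2(x + 1)
                  [1.0,  2.0]])  # around x = +1: y ~ 1 + 2(x - 1)

y = rfwr_predict(np.array([1.0]), centres, metrics, betas)
```

In full RFWR the centres, metrics, and local parameters are also adapted incrementally from streaming data; the sketch shows only the prediction blend.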

Keywords (English)

Approximation theory; Complex networks; Continuous time systems; Genetic algorithms; Genetic programming; Magnetism; Manipulators; Nonlinear systems; Approximation methods; Local linear models; Local linear regression; Magnetic manipulation; Magnetic manipulators; Multi-gene genetic programming; Receptive fields; Reinforcement learning; Symbolic regression; Weighted regression;

Published

20.12.2021

Publisher

WILEY-HINDAWI

Place

LONDON

ISSN

1076-2787

Volume

2021

Issue

1

Pages from–to

1–12

Number of pages

12

BibTeX

@article{BUT178291,
  author="Martin {Brablc} and Jan {Žegklitz} and Robert {Grepl} and Robert {Babuška}",
  title="Control of Magnetic Manipulator Using Reinforcement Learning Based on Incrementally Adapted Local Linear Models",
  year="2021",
  volume="2021",
  number="1",
  month="December",
  pages="1--12",
  publisher="WILEY-HINDAWI",
  address="LONDON",
  issn="1076-2787"
}