Publication detail
DETERMINATION OF Q-FUNCTION OPTIMUM GRID APPLIED ON ACTIVE MAGNETIC BEARING CONTROL TASK
BŘEZINA, T.; KREJSA, J.
English title
DETERMINATION OF Q-FUNCTION OPTIMUM GRID APPLIED ON ACTIVE MAGNETIC BEARING CONTROL TASK
Type
conference paper
Language
en
Original abstract
The active magnetic bearing control task can be successfully solved using a reinforcement-learning-based method called Q-learning. The main problem to solve is the convergence speed. Two-phase Q-learning can be used to speed up the learning process [2]: an efficient pre-learning phase uses a mathematical model, and the following tutorage phase runs on the real system and uses conventional Q-learning. This method can increase the learning speed significantly; however, certain issues still remain to be solved in order to improve the overall performance of controllers based on Q-learning. When a table is used as the Q-function approximation, the learning speed and the precision of the found controllers depend highly on the properties of the Q-function table grid. The paper is devoted to the determination of the optimum grid with respect to the properties of the controllers found by the given method. A comparison of the results with the performance of a referential PID controller is included. The obtained results indicate that using a nonlinear grid for the Q-function table approximation improves the performance in terms of quadratic control quality criterion values. However, regarding robustness against state-variable observation error and action delay, the effect of the nonlinear grid is questionable: it improves the robustness for reduced state definitions only and degrades it for the common state definition, which considers rotor deflection, velocity and acceleration as the system state variables.
English abstract
The active magnetic bearing control task can be successfully solved using a reinforcement-learning-based method called Q-learning. The main problem to solve is the convergence speed. Two-phase Q-learning can be used to speed up the learning process [2]: an efficient pre-learning phase uses a mathematical model, and the following tutorage phase runs on the real system and uses conventional Q-learning. This method can increase the learning speed significantly; however, certain issues still remain to be solved in order to improve the overall performance of controllers based on Q-learning. When a table is used as the Q-function approximation, the learning speed and the precision of the found controllers depend highly on the properties of the Q-function table grid. The paper is devoted to the determination of the optimum grid with respect to the properties of the controllers found by the given method. A comparison of the results with the performance of a referential PID controller is included. The obtained results indicate that using a nonlinear grid for the Q-function table approximation improves the performance in terms of quadratic control quality criterion values. However, regarding robustness against state-variable observation error and action delay, the effect of the nonlinear grid is questionable: it improves the robustness for reduced state definitions only and degrades it for the common state definition, which considers rotor deflection, velocity and acceleration as the system state variables.
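Since the abstract describes tabular Q-learning over a discretized grid of rotor deflection, velocity and acceleration, the following minimal Python sketch illustrates the general idea of a nonlinear (denser near zero) Q-function table grid with a conventional Q-learning update. It is not the authors' implementation: the grid sizes, physical ranges, coil-current action set and quadratic cell spacing are illustrative assumptions only.

# Minimal sketch (assumed, not the paper's code): tabular Q-learning with a
# nonlinear state-space grid for an active-magnetic-bearing-like control task.
import numpy as np

def nonlinear_grid(limit, n):
    """Symmetric grid on [-limit, limit] with cells concentrated around zero."""
    u = np.linspace(-1.0, 1.0, n)
    return limit * np.sign(u) * u**2      # quadratic spacing: finer near 0

# Illustrative ranges and resolutions (assumptions, not from the paper).
deflection_grid   = nonlinear_grid(1e-3, 21)   # rotor deflection [m]
velocity_grid     = nonlinear_grid(1e-1, 21)   # rotor velocity [m/s]
acceleration_grid = nonlinear_grid(1e+1, 21)   # rotor acceleration [m/s^2]
actions           = np.linspace(-5.0, 5.0, 11) # coil-current actions [A]

def discretize(x, grid):
    """Index of the nearest grid point."""
    return int(np.argmin(np.abs(grid - x)))

# Q-function table over (deflection, velocity, acceleration, action) cells.
Q = np.zeros((len(deflection_grid), len(velocity_grid),
              len(acceleration_grid), len(actions)))

alpha, gamma = 0.1, 0.95   # learning rate and discount factor (assumed values)

def q_update(state, action_idx, reward, next_state):
    """One conventional Q-learning step on the tabular approximation."""
    grids = (deflection_grid, velocity_grid, acceleration_grid)
    s  = tuple(discretize(x, g) for x, g in zip(state, grids))
    s2 = tuple(discretize(x, g) for x, g in zip(next_state, grids))
    target = reward + gamma * np.max(Q[s2])
    Q[s + (action_idx,)] += alpha * (target - Q[s + (action_idx,)])

In the two-phase scheme mentioned in the abstract, the same q_update step would first be driven by a simulated bearing model (pre-learning phase) and then by measurements from the real system (tutorage phase); the nonlinear grid above is one possible way the cells of the Q-function table could be concentrated where the rotor spends most of its time.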
Keywords in English
Reinforcement learning, Q-learning, Active Magnetic Bearing, Control
RIV year
2003
Released
24.03.2003
Publisher
Institute of Mechanics of Solids, Faculty of Mechanical Engineering, Brno University of Technology
Location
Brno
ISBN
80-214-2312-9
Book
Mechatronics, Robotics and Biomechanics 2003
Edition number
1
Pages count
2
BIBTEX
@inproceedings{BUT9733,
author="Tomáš {Březina} and Jiří {Krejsa}",
title="DETERMINATION OF Q-FUNCTION OPTIMUM GRID APPLIED ON ACTIVE MAGNETIC BEARING CONTROL TASK",
booktitle="Mechatronics, Robotics and Biomechanics 2003",
year="2003",
month="March",
publisher="Institute of Mechanics of Solids, Faculty of Mechanical Engineering, Brno University of Technology",
address="Brno",
isbn="80-214-2312-9"
}