
A New Optimal Design of Stable Feedback Control of Two-Wheel System Based on Reinforcement Learning

Journal Article
13-05-01-0004
ISSN: 2640-642X, e-ISSN: 2640-6438
Published April 26, 2023 by SAE International in United States
Citation: Yu, Z. and Zhu, X., "A New Optimal Design of Stable Feedback Control of Two-Wheel System Based on Reinforcement Learning," SAE J. STEEP 5(1):39-50, 2024, https://doi.org/10.4271/13-05-01-0004.
Language: English

Abstract:

The two-wheel system is widely used in mobile platforms such as remote-control vehicles and robots because of its simplicity and stability. However, real-world wheel and body dynamics can be complex, and the control accuracy of existing algorithms may not meet practical requirements. To address this issue, we propose a double inverted pendulum on a mobile device (DIPM) model that improves control performance while reducing computation. The model is derived from the kinetic and potential energy of the DIPM system via the Euler-Lagrange equations and consists of three second-order nonlinear differential equations. We also propose a stable feedback control method for mobile-device drive systems. Our experiments compare several mainstream reinforcement learning (RL) methods, namely Q-learning, SARSA, Deep Q Network (DQN), and actor-critic (AC), against classical optimal controllers, the linear quadratic regulator (LQR) and the iterative linear quadratic regulator (ILQR). The simulation results demonstrate that DQN and AC are superior to ILQR on our designed nonlinear system. Across all tests, Q-learning and SARSA perform comparably to ILQR, with slight improvements, although ILQR retains an advantage at 10 deg and 20 deg. For small deflections (5–10 deg), DQN and AC perform 2% better than conventional ILQR, and for large deflections (10–30 deg), they perform 15% better. Overall, RL not only offers strong versatility, a wide application range, and customizable parameters, but also greatly reduces the design effort and human investment required for control systems, making it a promising direction for future research.
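For context, the following is a minimal sketch of the LQR baseline named in the abstract, not the authors' implementation: it uses an illustrative linearized cart-and-pendulum state-space model (the matrices A, B, Q, and R below are placeholder values, and a full DIPM model would have a larger state), solves the continuous-time algebraic Riccati equation with SciPy, and forms the stabilizing state-feedback gain.

```python
# Illustrative LQR baseline sketch (A, B, Q, R are placeholders,
# not the DIPM model identified in the paper).
import numpy as np
from scipy.linalg import solve_continuous_are

# Linearized dynamics x_dot = A x + B u around the upright equilibrium,
# with state x = [position, velocity, angle, angular rate].
A = np.array([[0.0, 1.0,  0.0, 0.0],
              [0.0, 0.0, -1.0, 0.0],
              [0.0, 0.0,  0.0, 1.0],
              [0.0, 0.0, 21.0, 0.0]])
B = np.array([[0.0], [1.0], [0.0], [-1.0]])

Q = np.diag([1.0, 1.0, 10.0, 1.0])  # state cost weights
R = np.array([[0.1]])               # input cost weight

# Solve the continuous-time algebraic Riccati equation and form the gain.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)     # feedback law u = -K x

print("LQR gain K:", K)
```

In a comparison like the one described in the abstract, this gain would stabilize the system only near the upright equilibrium where the linearization holds, which is why the nonlinear controllers (ILQR and the RL methods) are evaluated at larger deflection angles.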