What is it about?
This article presents the synthesis and experimental comparison of four state-of-the-art online self-tuning mechanisms, each retrofitted onto a conventional state-feedback controller to indirectly self-tune its gains. The goal is to enhance the closed-loop robustness of under-actuated mechatronic systems against bounded exogenous disturbances and parametric variations. The ubiquitous Linear-Quadratic Regulator (LQR) serves as the baseline controller. The proposed scheme adaptively modulates the state and control-input weighting factors of the LQR's quadratic performance index using state-error-driven hierarchical composite adaptation mechanisms. These mechanisms are formulated via pre-calibrated hyperbolic scaling functions driven by real-time variations in the control input and state-error variables. The adjusted weighting factors are fed to an online Riccati equation solver, which updates the state-compensator gains in real time. The efficacy of each adaptive control scheme is analyzed through credible hardware-in-the-loop experiments on the Quanser rotary inverted pendulum setup.
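The adaptation loop described above can be sketched in a few lines of Python. This is a minimal illustration only: the plant matrices, the `tanh`-based scaling functions, and the gains `alpha`/`beta` below are hypothetical stand-ins, not the paper's pre-calibrated hyperbolic functions or the Quanser pendulum model.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Illustrative unstable 2nd-order plant (NOT the Quanser pendulum model)
A = np.array([[0.0, 1.0],
              [2.0, -0.5]])
B = np.array([[0.0],
              [1.0]])

def adapted_weights(x_err, u_prev, q_nom=(10.0, 1.0), r_nom=1.0,
                    alpha=0.5, beta=0.2):
    """State-error-driven modulation of the LQR weighting factors via
    hyperbolic scaling functions (alpha, beta are hypothetical gains)."""
    q_diag = [q * (1.0 + alpha * np.tanh(abs(e)))
              for q, e in zip(q_nom, x_err)]
    r = r_nom * (1.0 + beta * np.tanh(abs(u_prev)))
    return np.diag(q_diag), np.array([[r]])

def self_tuning_gain(x_err, u_prev):
    """Solve the Riccati equation online with the adapted Q, R and
    return the updated state-feedback gain K = R^{-1} B^T P."""
    Q, R = adapted_weights(x_err, u_prev)
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

# One adaptation step: update K from the current state error and last input
x_err = np.array([0.3, -0.1])
K = self_tuning_gain(x_err, u_prev=0.2)
u = float(-K @ x_err)  # state-feedback control law u = -K x
```

Each control cycle re-solves the algebraic Riccati equation with the freshly adapted weights, so larger state errors or control efforts automatically reshape the closed-loop gains.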
Why is it important?
The proposed contribution is innovative and significant because it formulates and experimentally validates four distinct robust-adaptive stabilization control strategies for inverted-pendulum-type robotic systems. The balancing control principles of inverted pendulums are essential for developing robust stabilization and regulation strategies for under-actuated mechatronic systems, such as self-balancing robots, rotorcraft, and aerospace systems. This control task becomes even more challenging under the influence of identification errors, model variations, and bounded exogenous disturbances.
Read the Original
This page is a summary of: An experimental comparison of different hierarchical self-tuning regulatory control procedures for under-actuated mechatronic systems, PLoS ONE, August 2021, PLOS, DOI: 10.1371/journal.pone.0256750.