dc.description.abstract |
Many researchers are now prioritizing renewable energy sources as the cost of conventional
fuels continues to rise, and in order to mitigate environmental impacts. Wind energy stands out
as one of the fastest-growing energy sources, known for its cost efficiency and widespread
availability. This thesis focuses on an adaptive optimal controller that uses reinforcement
learning to maximize the power extracted from the available wind. Harnessing energy from the
wind is difficult because wind speed is highly unpredictable. The proposed control system,
based on reinforcement learning (Q-learning), provides adaptive control and achieves higher
performance than dynamic programming: its model-free nature and its ability to learn from the
environment make the algorithm well suited to adaptive optimal control. Coupling a Kalman
filter estimator with Q-learning makes the control system more robust and more accurate in
estimating and predicting the control state, which is then used by the Q-learning agent.
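As a rough illustration of how a Q-learning agent can act on a Kalman-filtered state estimate, the following minimal Python sketch pairs a scalar Kalman filter for rotor-speed estimation with a tabular Q-learning update over discretized torque adjustments; all names, discretizations, and parameter values are illustrative assumptions and not taken from the thesis.

# Minimal sketch (not the thesis implementation): tabular Q-learning acting on
# a rotor-speed estimate from a scalar Kalman filter. All grids and constants
# below are illustrative assumptions.
import numpy as np

N_STATES, N_ACTIONS = 20, 5            # discretized rotor-speed bins, torque increments
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1 # learning rate, discount factor, exploration rate

Q = np.zeros((N_STATES, N_ACTIONS))               # action-value table
speed_bins = np.linspace(0.5, 3.0, N_STATES)      # rad/s, assumed Region 2 range
torque_steps = np.linspace(-0.2, 0.2, N_ACTIONS)  # assumed torque adjustments, kN*m

def kalman_update(x_est, p_est, z, q=1e-4, r=1e-2):
    """One predict/correct step of a scalar Kalman filter on rotor speed."""
    x_pred, p_pred = x_est, p_est + q    # random-walk process model
    k = p_pred / (p_pred + r)            # Kalman gain
    return x_pred + k * (z - x_pred), (1.0 - k) * p_pred

def choose_action(state):
    """Epsilon-greedy selection over torque-adjustment actions."""
    if np.random.rand() < EPSILON:
        return np.random.randint(N_ACTIONS)
    return int(np.argmax(Q[state]))

def q_learning_step(state, action, reward, next_state):
    """Standard model-free temporal-difference (Q-learning) update."""
    td_target = reward + GAMMA * np.max(Q[next_state])
    Q[state, action] += ALPHA * (td_target - Q[state, action])

In a full simulation loop the filtered rotor speed would be binned against speed_bins to index the table, and the reward would be the power captured over each control interval; that plant coupling is omitted here.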
In this thesis, the Q-learning algorithm is employed for adaptive optimal control and the
dynamic programming algorithm is used for optimal control, in place of traditional torque
control, to maximize wind energy capture under fluctuating wind in wind turbine systems
operating in Region 2.
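For context, the standard torque control baseline commonly used in Region 2 sets the generator torque proportional to the square of the rotor speed. The short sketch below shows that well-known law together with the tip speed ratio definition; the turbine parameters are assumed for illustration only, apart from the 0.411 power coefficient reported below, and are not necessarily those used in the thesis.

# Illustrative Region 2 standard torque control law: tau = k * omega^2,
# with k = 0.5 * rho * pi * R**5 * Cp_max / lambda_opt**3 (well-known form,
# assumed here as the STC baseline; parameter values are hypothetical).
import math

rho = 1.225          # air density, kg/m^3
R = 21.0             # rotor radius in m (assumed)
Cp_max = 0.411       # peak power coefficient (value reported in this abstract)
lambda_opt = 7.0     # optimal tip speed ratio (assumed)

k_stc = 0.5 * rho * math.pi * R**5 * Cp_max / lambda_opt**3

def stc_torque(omega):
    """Generator torque command (N*m) for rotor speed omega (rad/s)."""
    return k_stc * omega**2

def tip_speed_ratio(omega, wind_speed):
    """Tip speed ratio: lambda = omega * R / v."""
    return omega * R / wind_speed

Operating near lambda_opt keeps the power coefficient close to its peak, which is the quantity the adaptive controller tries to maintain as wind speed fluctuates.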
MATLAB is used to analyze the effectiveness of this control system. The results indicate tip
speed ratios of 2.0 to 7.95 with Q-learning and 6.173 to 9.1787 with dynamic programming, at an
aerodynamic efficiency of 0.411. Furthermore, with a piecewise-step wind speed input, the
adaptive Q-learning controller captured 2.08% and 10.17% more power than the optimal dynamic
programming controller and standard torque control (STC), respectively. With a sinusoidal wind
speed input, the adaptive optimal Q-learning controller collected 3.59% and 30.61% more power
than the optimal dynamic programming controller and STC, respectively. These results show that
the proposed adaptive control system based on reinforcement learning (Q-learning) performs
better than dynamic programming (DP) and standard torque control. The modest percentage gains
under these conditions are due to the narrow range of wind speeds available in the simulation. |
en_US |