Turkish Journal of Electrical Engineering and Computer Sciences




This paper introduces a novel adaptive controller based on the actor-critic method. The proposed approach employs the ink drop spread (IDS) method as its main engine. IDS is a recent soft-computing technique: a universal fuzzy modeling method that has also been used as a supervised controller, and whose processing resembles that of the human brain. The proposed actor-critic method uses an IDS structure as the actor and a 2-dimensional plane, representing the states of the control variables, as the critic, which estimates the lifetime goodness of each state. The method is fast, simple, and free of mathematical complexity. Both the actor and the critic are updated with the temporal difference (TD) method. Our system: 1) learns to produce real-valued control actions in a continuous space without relying on a Markov decision process formulation, 2) adaptively improves its performance over its lifetime, and 3) scales well to high-dimensional problems. To show the effectiveness of the method, we conduct experiments on 3 systems: an inverted pendulum, a ball and beam, and a 2-wheeled balancing robot. In each case, the method converges to a suitable fuzzy system with a significant improvement in rise time and overshoot compared to other fuzzy controllers.
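The actor-critic scheme summarized above can be illustrated with a minimal sketch. The code below is not the paper's IDS-based actor; it is a generic hedged example assuming a tabular critic over a discretized 2-dimensional state plane (mirroring the paper's plane-shaped critic) and an actor that outputs real-valued actions, both updated from the same TD error. All class and parameter names are illustrative.

```python
import numpy as np

class TabularActorCritic:
    """Illustrative actor-critic with a 2-D grid critic and real-valued actions.

    Assumed names/parameters (not from the paper): bins, alpha, beta, gamma.
    """

    def __init__(self, bins=10, alpha=0.1, beta=0.05, gamma=0.95):
        self.V = np.zeros((bins, bins))      # critic: estimated value of each state cell
        self.theta = np.zeros((bins, bins))  # actor: mean action for each state cell
        self.bins, self.alpha, self.beta, self.gamma = bins, alpha, beta, gamma

    def _cell(self, s):
        # Map a continuous state s = (x, y) in [0, 1)^2 to a grid cell.
        i = min(int(s[0] * self.bins), self.bins - 1)
        j = min(int(s[1] * self.bins), self.bins - 1)
        return i, j

    def act(self, s, noise=0.1):
        # Real-valued action: cell's mean action plus Gaussian exploration noise.
        i, j = self._cell(s)
        return self.theta[i, j] + np.random.randn() * noise

    def update(self, s, a, r, s_next):
        # One TD step: the same TD error drives both critic and actor updates.
        i, j = self._cell(s)
        ni, nj = self._cell(s_next)
        td = r + self.gamma * self.V[ni, nj] - self.V[i, j]   # TD error
        self.V[i, j] += self.alpha * td                        # critic update
        self.theta[i, j] += self.beta * td * (a - self.theta[i, j])  # actor update
        return td
```

When the TD error is positive, the actor's mean action is pulled toward the explored action that produced the better-than-expected outcome; a negative TD error pushes it away. The IDS method in the paper replaces the tabular actor with a fuzzy model, but the update logic follows this pattern.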


Keywords: Active learning method, actor-critic method, adaptive control, fuzzy inference system, ink drop spread, reinforcement learning
