Turkish Journal of Electrical Engineering and Computer Sciences

DOI

10.3906/elk-1202-113

Abstract

The attribute reduction problem is the process of removing unimportant attributes from a decision system to decrease the difficulty of data mining or knowledge discovery tasks. Many algorithms have been used to optimize this problem in rough set theory. The genetic algorithm (GA) is one of the algorithms that has already been applied to it. This paper proposes 2 memetic algorithms, each a hybridization of the GA with one of 2 versions (linear and nonlinear) of the great deluge (GD) algorithm. The purpose of this hybridization is to investigate the ability of this local search algorithm to improve the performance of the GA. In both methods, the local search (the GD algorithm) is applied to each generation of the GA. The only difference between the methods is the rate at which the `level' is increased in the GD algorithm. The level is increased by a fixed value in the linear GD algorithm, while the nonlinear GD algorithm uses the quality of the current solution to calculate the increase rate of the level in each iteration. Thirteen datasets taken from the University of California, Irvine machine learning repository are used to test the methods, and the results are compared with existing results in the literature, especially those of the original GA. The classification accuracy obtained on each dataset using the resulting reducts is examined and compared with other approaches using the ROSETTA software. The promising results show the potential of the proposed algorithms to solve the attribute reduction problem.
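The abstract only describes the level-update rule, so the following is a minimal sketch of a great deluge local search for a maximization problem, contrasting a fixed (linear) level increment with a quality-dependent (nonlinear) one. The parameter name `beta` (the rain speed), the acceptance rule, and the toy attribute-subset quality function are assumptions for illustration and not the authors' implementation.

```python
import random

def great_deluge(initial, quality, neighbor, iterations=1000, beta=0.01, nonlinear=False):
    """Illustrative great deluge local search for a maximization problem.

    `beta` is a hypothetical rain-speed parameter: the linear variant raises
    the water level by `beta` each iteration, while the nonlinear variant
    scales the increase by the quality of the current solution, as described
    in the abstract. The paper's exact acceptance and update rules may differ.
    """
    current = initial
    level = quality(current)              # initial water level = initial quality
    for _ in range(iterations):
        candidate = neighbor(current)     # small perturbation of the current solution
        # accept improvements, or any candidate that stays above the water level
        if quality(candidate) >= quality(current) or quality(candidate) >= level:
            current = candidate
        # raise the level: fixed step (linear) or quality-dependent step (nonlinear)
        level += beta * quality(current) if nonlinear else beta
    return current


# Toy usage: attribute subsets encoded as bit vectors; the quality function below
# merely rewards smaller subsets, standing in for a rough-set reduct criterion.
if __name__ == "__main__":
    n = 10
    quality = lambda s: n - sum(s)        # fewer selected attributes -> higher quality
    def neighbor(s):
        s = list(s)
        i = random.randrange(n)
        s[i] = 1 - s[i]                   # flip one attribute in or out of the subset
        return s
    start = [1] * n
    print(great_deluge(start, quality, neighbor, iterations=500, nonlinear=True))
```

In the memetic setting described, such a local search would be applied to the individuals of each GA generation; in a real attribute reduction task the quality function would also have to preserve the dependency (discernibility) of the decision system, not just minimize subset size.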

Keywords

Great deluge algorithm, genetic algorithm, rough set theory, attribute reduction, classification

First Page

1737

Last Page

1750
