Turkish Journal of Electrical Engineering and Computer Sciences

DOI

10.55730/1300-0632.3998

Abstract

With the rapid development of 5G and the Internet of Things (IoT), the traditional cloud computing architecture struggles to support the booming computation-intensive and latency-sensitive applications. Mobile edge computing (MEC) has emerged as a solution that enables abundant IoT tasks to be offloaded to edge servers. However, task offloading and resource allocation remain challenging in the MEC framework. In this paper, we add the total number of offloaded tasks to the optimization objective and apply an algorithm called Deep Learning Trained by Genetic Algorithm (DL-GA) to maximize the value function, which is defined as a weighted sum of energy consumption, latency, and the number of offloaded tasks. First, we use the GA to optimize the task offloading scheme and store the states and labels of each scenario. Each state consists of five parameters: the IDs of all tasks generated in the scenario, the cost of each task, whether each task is offloaded, the bandwidth occupied by offloaded tasks, and the remaining bandwidth of the edge server. The labels are the tasks currently selected for offloading. These states and labels are then used to train a neural network. Finally, the trained neural network can quickly produce optimization solutions. Simulation results show that DL-GA executes 75 to 450 times faster than the GA without losing much optimization power, while offering stronger optimization capability than the Deep Q-Learning Network (DQN).
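The sketch below is a minimal, illustrative reading of the offloading step described in the abstract, not the authors' implementation: a plain genetic algorithm searches binary offloading decisions that maximize a weighted value of energy, latency, and offloaded-task count under an edge-bandwidth budget. The resulting best decision would serve as the supervised label for the scenario's state when training the neural network. All weights, cost models, and parameter names here are assumptions.

import numpy as np

rng = np.random.default_rng(0)

N_TASKS = 10
task_cost = rng.uniform(1.0, 5.0, N_TASKS)      # per-task computation cost (assumed units)
task_bw = rng.uniform(0.5, 2.0, N_TASKS)        # bandwidth needed if a task is offloaded
BW_BUDGET = 8.0                                  # assumed edge-server bandwidth budget
W_ENERGY, W_LATENCY, W_COUNT = 1.0, 1.0, 0.5     # assumed weights of the value function

def value(mask: np.ndarray) -> float:
    """Weighted sum of (negative) energy, (negative) latency, and offloaded count."""
    if task_bw[mask == 1].sum() > BW_BUDGET:     # infeasible: exceeds edge bandwidth
        return -np.inf
    local = task_cost[mask == 0].sum()
    offloaded = task_cost[mask == 1].sum()
    energy = local * 1.0 + offloaded * 0.3       # offloading assumed cheaper in energy
    latency = local * 1.0 + offloaded * 0.5      # and faster, ignoring queueing effects
    return -W_ENERGY * energy - W_LATENCY * latency + W_COUNT * mask.sum()

def ga(pop_size=40, generations=60, p_mut=0.05):
    """Plain GA: tournament selection, one-point crossover, bit-flip mutation."""
    pop = rng.integers(0, 2, (pop_size, N_TASKS))
    for _ in range(generations):
        fit = np.array([value(ind) for ind in pop])
        new = []
        for _ in range(pop_size):
            i, j = rng.integers(0, pop_size, 2)
            a = pop[i] if fit[i] >= fit[j] else pop[j]   # tournament parent 1
            i, j = rng.integers(0, pop_size, 2)
            b = pop[i] if fit[i] >= fit[j] else pop[j]   # tournament parent 2
            cut = rng.integers(1, N_TASKS)               # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(N_TASKS) < p_mut           # bit-flip mutation
            child[flip] ^= 1
            new.append(child)
        pop = np.array(new)
    fit = np.array([value(ind) for ind in pop])
    return pop[fit.argmax()], fit.max()

best_mask, best_val = ga()
print("GA offloading decision:", best_mask, "value:", round(best_val, 2))
# In the DL-GA pipeline, the scenario state (task costs, bandwidths, remaining budget)
# paired with best_mask as the label would form one training sample for the network,
# which then predicts offloading decisions directly at inference time.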

Keywords

Mobile edge computing, task offloading, resource allocation, genetic algorithm, deep learning

First Page

498

Last Page

515
