Abstract

Resource-constrained edge devices cannot efficiently handle the explosive growth of mobile data and the increasing computational demands of modern user applications. Task offloading allows the migration of complex tasks from user devices to remote edge-cloud servers, thereby reducing their computational burden and energy consumption while also improving the efficiency of task processing. However, obtaining the optimal offloading strategy in a multi-task offloading decision-making process is an NP-hard problem. Existing deep learning techniques, with their slow learning rates and weak adaptability, are not suitable for dynamic multi-user scenarios. In this article, we propose a novel deep meta-reinforcement learning-based approach to the multi-task offloading problem using a combination of first-order meta-learning and deep Q-learning methods. We establish meta-generalization bounds for the proposed algorithm and demonstrate that it can reduce the time and energy consumption of IoT applications by up to 15%. Through rigorous simulations, we show that our method achieves near-optimal offloading solutions while also being able to adapt to dynamic edge-cloud environments.
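The record gives no implementation details, so the following Python sketch only illustrates the general idea the abstract names: a first-order meta-learning outer loop (Reptile-style) wrapped around deep Q-learning inner updates, so that a meta-initialization adapts quickly to new offloading environments. The network architecture, hyperparameters, helper names (QNetwork, dqn_inner_update, reptile_meta_step), and the synthetic transition batches are all illustrative assumptions, not the authors' design.

# Hedged sketch: Reptile-style first-order meta-learning around DQN inner updates.
# Everything below is an assumed, minimal stand-in for the approach described
# in the abstract, not the paper's actual implementation.
import copy
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    # Maps a task/state feature vector to Q-values over offloading actions
    # (e.g., execute locally vs. offload to one of several edge servers).
    def __init__(self, state_dim=8, n_actions=4):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))

    def forward(self, s):
        return self.net(s)

def dqn_inner_update(model, batch, gamma=0.99, lr=1e-3, steps=5):
    # A few steps of standard deep Q-learning (TD targets) on transitions
    # drawn from one offloading environment.
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    s, a, r, s2 = batch
    for _ in range(steps):
        q = model(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * model(s2).max(dim=1).values
        loss = nn.functional.mse_loss(q, target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def reptile_meta_step(meta_model, task_batches, meta_lr=0.1):
    # First-order meta-update: adapt a copy per sampled environment, then move
    # the meta parameters toward the average adapted parameters. No
    # second-order gradients are needed, which keeps adaptation cheap.
    adapted = []
    for batch in task_batches:
        m = dqn_inner_update(copy.deepcopy(meta_model), batch)
        adapted.append(m.state_dict())
    state = meta_model.state_dict()
    for k in state:
        avg = torch.stack([a[k] for a in adapted]).mean(dim=0)
        state[k] = state[k] + meta_lr * (avg - state[k])  # theta += eps * (mean - theta)
    meta_model.load_state_dict(state)

def fake_batch(n=32, state_dim=8, n_actions=4):
    # Synthetic (state, action, reward, next_state) transitions standing in
    # for per-environment replay buffers.
    return (torch.randn(n, state_dim), torch.randint(n_actions, (n,)),
            torch.randn(n), torch.randn(n, state_dim))

meta_q = QNetwork()
for _ in range(3):  # a few meta-iterations over sampled environments
    reptile_meta_step(meta_q, [fake_batch() for _ in range(4)])

In a deployment matching the abstract's setting, each "task batch" would come from a distinct user/environment, and the meta-trained Q-network would be fine-tuned with a handful of inner updates when a new edge-cloud environment is encountered.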

Department(s)

Computer Science

Keywords and Phrases

Computational modeling; Deep Q-learning; Directed Acyclic Graph; Edge-Cloud Computing; Energy consumption; Heuristic algorithms; Internet of Things; Meta-Learning; Multi-Task Offloading; Multitasking; Servers; Task analysis

International Standard Serial Number (ISSN)

1558-0660 (electronic); 1536-1233 (print)

Document Type

Article - Journal

Document Version

Citation

File Type

text

Language(s)

English

Rights

© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.

Publication Date

01 Jan 2023
