Resource-constrained edge devices cannot efficiently handle the explosive growth of mobile data and the increasing computational demands of modern user applications. Task offloading allows complex tasks to be migrated from user devices to remote edge-cloud servers, reducing the devices' computational burden and energy consumption while improving the efficiency of task processing. However, obtaining the optimal offloading strategy in a multi-task offloading decision-making process is an NP-hard problem, and existing deep learning techniques, with their slow learning rates and weak adaptability, are not suitable for dynamic multi-user scenarios. In this article, we propose a novel deep meta-reinforcement learning-based approach to the multi-task offloading problem that combines first-order meta-learning with deep Q-learning. We establish meta-generalization bounds for the proposed algorithm and demonstrate that it can reduce the time and energy consumption of IoT applications by up to 15%. Through rigorous simulations, we show that our method achieves near-optimal offloading solutions while also adapting to dynamic edge-cloud environments.
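To make the abstract's algorithmic combination concrete, the following is a minimal, hypothetical sketch of how a first-order (Reptile-style) meta-update can wrap an inner Q-learning adaptation loop. The toy single-state "offloading" environment, all function names, and the hyperparameters are assumptions for illustration only; they are not the paper's actual code or system model.

```python
import random

N_ACTIONS = 2          # 0 = execute task locally, 1 = offload to edge server
ALPHA, GAMMA = 0.1, 0.9
META_LR = 0.5          # Reptile outer-loop step size

def sample_task():
    """A toy task: random per-action costs (lower cost is better)."""
    return [random.uniform(0.0, 1.0) for _ in range(N_ACTIONS)]

def inner_q_learning(q, costs, steps=50):
    """Adapt Q-values to one sampled task with epsilon-greedy Q-learning."""
    q = list(q)
    for _ in range(steps):
        if random.random() < 0.1:                        # explore
            a = random.randrange(N_ACTIONS)
        else:                                            # exploit
            a = max(range(N_ACTIONS), key=lambda i: q[i])
        reward = -costs[a]       # minimizing cost = maximizing reward
        # single-state toy problem: the TD target is just the reward
        q[a] += ALPHA * (reward - q[a])
    return q

def meta_train(meta_q, iterations=200):
    """First-order meta-update: move meta-parameters toward task-adapted ones."""
    for _ in range(iterations):
        task = sample_task()
        adapted = inner_q_learning(meta_q, task)
        meta_q = [m + META_LR * (a - m) for m, a in zip(meta_q, adapted)]
    return meta_q

random.seed(0)
meta_q = meta_train([0.0] * N_ACTIONS)
print(meta_q)
```

The outer loop never computes second-order gradients; it simply interpolates the meta-parameters toward the task-adapted parameters, which is what makes the meta-learning "first-order" and cheap enough for resource-constrained settings.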
N. Sharma et al., "Deep Meta Q-Learning based Multi-Task Offloading in Edge-Cloud Systems," IEEE Transactions on Mobile Computing, Institute of Electrical and Electronics Engineers, Jan 2023.
The definitive version is available at https://doi.org/10.1109/TMC.2023.3264901
Keywords and Phrases
Computational modeling; Deep Q-learning; Directed Acyclic Graph; Edge-Cloud Computing; Energy consumption; Heuristic algorithms; Internet of Things; Meta-Learning; Multi-Task Offloading; Multitasking; Servers; Task analysis
International Standard Serial Number (ISSN)
Article - Journal
© 2023 Institute of Electrical and Electronics Engineers, All rights reserved.
01 Jan 2023