Abstract:
Cloud computing has transformed data management through its scale and flexibility, but cloud resources are transient and heterogeneous, which makes task scheduling difficult. This paper proposes a Double Deep Q-Network (DDQN) reinforcement learning model to solve the cloud computing task scheduling problem. DDQN improves on the Deep Q-Network (DQN) by maintaining two distinct neural networks: an online network and a target network. The target network is updated periodically to mirror the online network's Q-value estimates, yielding a more stable and less volatile learning process. This dual-network architecture also mitigates the overestimation bias from which traditional DQN can suffer. By iteratively refining its Q-value estimates, DDQN learns effective scheduling policies and provides a robust framework for addressing the challenges inherent in cloud task scheduling. Its dual-network architecture and iterative learning process offer a promising avenue for improving the efficiency and effectiveness of resource allocation in cloud environments, making DDQN a valuable tool for modern data management within cloud infrastructures.
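As a minimal sketch of the mechanism the abstract describes, the DDQN update decouples action selection (online network) from action evaluation (target network). The function and values below are illustrative assumptions, not the paper's implementation; actions here stand in for candidate scheduling decisions.

```python
import numpy as np

def ddqn_targets(q_online_next, q_target_next, rewards, dones, gamma=0.99):
    """Compute DDQN learning targets for a batch of transitions.

    The online network selects the greedy next action, while the
    periodically synced target network evaluates it. This decoupling
    is what mitigates DQN's overestimation bias.
    """
    # Action selection: argmax over the online network's next-state Q-values
    best_actions = np.argmax(q_online_next, axis=1)
    # Action evaluation: read those actions' values from the target network
    next_values = q_target_next[np.arange(len(best_actions)), best_actions]
    # Terminal transitions (dones == 1) bootstrap no future value
    return rewards + gamma * (1.0 - dones) * next_values

# Illustrative batch of 2 transitions with 3 candidate actions each
q_online_next = np.array([[1.0, 2.0, 0.5],
                          [0.2, 0.1, 0.9]])
q_target_next = np.array([[0.8, 1.5, 0.4],
                          [0.3, 0.2, 1.1]])
rewards = np.array([1.0, 0.5])
dones = np.array([0.0, 1.0])

targets = ddqn_targets(q_online_next, q_target_next, rewards, dones)
# targets[0] = 1.0 + 0.99 * 1.5 = 2.485; targets[1] = 0.5 (terminal)
```

In training, the online network is regressed toward these targets, and its weights are copied to the target network at a fixed interval, which is the periodic update the abstract refers to.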