Abstract:
Dynamic path planning involves finding the most efficient path between a start point and a destination in an unfamiliar, constantly changing environment while avoiding both fixed and moving obstacles. Equipped with advanced sensors, mobile robots can traverse their environment without human intervention, ensuring safety and autonomy. More efficient algorithms are needed to overcome inadequate robot performance in such environments and to achieve intelligent path planning that accounts for factors such as time, energy, and distance. Recently, reinforcement learning and deep neural network techniques have been applied to these problems. Using a trial-and-error methodology to interact with its surroundings, a reinforcement learning agent acquires an optimal behavioral policy based on reward signals from previous interactions. The agent's learning process resembles the way humans and animals learn. One of reinforcement learning's most advantageous features is its applicability to diverse scientific and engineering domains. In recent years it has proven an effective approach for managing difficult sequential decisions, and it presents an excellent opportunity to explore new technological horizons in areas where system models are non-existent or too complex, costly, or time-consuming to develop. This review article examines path planning strategies that utilize neural networks, particularly deep reinforcement learning: its fundamental concepts, the components of a system that uses it, and the main algorithm families, including policy gradient, model-free learning, model-based learning, and actor-critic techniques.
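The trial-and-error loop described above can be illustrated with a minimal sketch: tabular Q-learning on a hypothetical toy grid world, where an agent learns a path from a start cell to a goal cell purely from reward signals. The grid size, reward values, and hyperparameters here are illustrative assumptions, not taken from any specific system in the review.

```python
import random

def q_learning_grid(size=4, episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy grid (illustrative example).
    Start at (0, 0), goal at (size-1, size-1); reward -1 per step, +10 at goal."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right
    Q = {}  # maps (state, action_index) -> estimated action value
    goal = (size - 1, size - 1)
    rng = random.Random(seed)
    for _ in range(episodes):
        s, steps = (0, 0), 0
        while s != goal and steps < 200:
            steps += 1
            # epsilon-greedy: occasionally explore, otherwise exploit (trial and error)
            if rng.random() < eps:
                a = rng.randrange(4)
            else:
                a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
            dr, dc = actions[a]
            nxt = (min(max(s[0] + dr, 0), size - 1),
                   min(max(s[1] + dc, 0), size - 1))
            r = 10.0 if nxt == goal else -1.0  # reward signal from the environment
            best_next = max(Q.get((nxt, i), 0.0) for i in range(4))
            old = Q.get((s, a), 0.0)
            # temporal-difference update: the reward drives the value estimate
            Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
            s = nxt
    return Q

def greedy_path(Q, size=4, max_steps=50):
    """Follow the learned policy greedily from start toward the goal."""
    actions = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    s, goal, path = (0, 0), (size - 1, size - 1), [(0, 0)]
    while s != goal and len(path) < max_steps:
        a = max(range(4), key=lambda i: Q.get((s, i), 0.0))
        dr, dc = actions[a]
        s = (min(max(s[0] + dr, 0), size - 1),
             min(max(s[1] + dc, 0), size - 1))
        path.append(s)
    return path
```

Deep reinforcement learning methods surveyed in this review replace the table `Q` with a neural network, which lets the same reward-driven update scale to the large, continuous state spaces that real robot navigation involves.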