In this research, path planning for mobile robot (MR) navigation in specific scenarios is improved by extending the deep Q-network (DQN) algorithm into an enhanced deep Q-network (EDQN). The approach pursues multiple objectives, including minimizing path distance and energy consumption while avoiding obstacles. The proposed algorithm was adapted to operate MRs in 10 × 10 and 15 × 15 grid-mapped environments, covering both static and dynamic settings. Its main objective is to determine the most efficient path to the target destination. A learning-based MR was
utilized to experimentally validate the EDQN methodology, confirming its
effectiveness. For robot trajectory tasks, this research demonstrates that the
EDQN approach enables collision avoidance, optimizes path efficiency, and
achieves practical applicability. Training was conducted over 3000 episodes. In comparison with traditional algorithms such as A*, the genetic algorithm (GA), and ant colony optimization (ACO), as
well as deep learning algorithms (IDQN and D3QN), the simulation and real-time
experimental results showed improved performance in both static and dynamic
environments. The results indicated a travel time reduction to 9 s, a 14.6%
decrease in total path distance, and a training duration reduction of 1657
iterations compared to IDQN and D3QN.
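
To make the setup concrete, the sketch below shows a minimal DQN-style path planner on a 10 × 10 grid map with a multi-objective reward combining a per-step energy cost, progress toward the goal, and a collision penalty. This is an illustrative reconstruction under stated assumptions, not the paper's EDQN: the network architecture, reward weights, obstacle layout, and all hyperparameters here are assumptions, since the abstract does not detail them.

```python
# Minimal DQN-style grid path planner (illustrative sketch; not the paper's
# EDQN). Reward weights, network sizes, and hyperparameters are assumptions.
import random
from collections import deque

import torch
import torch.nn as nn

GRID = 10                                     # 10 x 10 map, as in the paper
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: normalized (robot_x, robot_y, goal_x, goal_y)
        self.net = nn.Sequential(
            nn.Linear(4, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, len(ACTIONS)),
        )

    def forward(self, x):
        return self.net(x)

def encode(pos, goal):
    # Scale coordinates to [0, 1]
    return torch.tensor([pos[0], pos[1], goal[0], goal[1]],
                        dtype=torch.float32) / (GRID - 1)

def step(pos, action, goal, obstacles):
    """Multi-objective reward (assumed weights): collision penalty,
    per-step energy cost, and Manhattan-distance progress to the goal."""
    nxt = (pos[0] + ACTIONS[action][0], pos[1] + ACTIONS[action][1])
    if nxt in obstacles or not (0 <= nxt[0] < GRID and 0 <= nxt[1] < GRID):
        return pos, -10.0, False              # collision: penalize, stay put
    if nxt == goal:
        return nxt, 20.0, True                # target reached
    progress = (abs(pos[0] - goal[0]) + abs(pos[1] - goal[1])
                - abs(nxt[0] - goal[0]) - abs(nxt[1] - goal[1]))
    return nxt, -0.1 + 0.5 * progress, False  # energy cost + shaping

q, q_target = QNet(), QNet()
q_target.load_state_dict(q.state_dict())
opt = torch.optim.Adam(q.parameters(), lr=1e-3)
buf = deque(maxlen=10_000)                    # experience replay buffer
gamma, eps = 0.95, 1.0
goal, obstacles = (9, 9), {(3, 3), (3, 4), (6, 2)}  # assumed layout

for episode in range(3000):                   # 3000 training episodes
    pos = (0, 0)
    for _ in range(4 * GRID):                 # cap episode length
        s = encode(pos, goal)
        if random.random() < eps:             # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            with torch.no_grad():
                a = int(q(s).argmax())
        nxt, r, done = step(pos, a, goal, obstacles)
        buf.append((s, a, r, encode(nxt, goal), done))
        pos = nxt
        if len(buf) >= 64:                    # one replay update per move
            batch = random.sample(buf, 64)
            ss = torch.stack([b[0] for b in batch])
            aa = torch.tensor([b[1] for b in batch])
            rr = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])
            dd = torch.tensor([b[4] for b in batch])
            with torch.no_grad():             # bootstrapped TD target
                tgt = rr + gamma * q_target(s2).max(1).values * (~dd)
            pred = q(ss).gather(1, aa.unsqueeze(1)).squeeze(1)
            loss = nn.functional.mse_loss(pred, tgt)
            opt.zero_grad(); loss.backward(); opt.step()
        if done:
            break
    eps = max(0.05, eps * 0.995)              # decay exploration rate
    if episode % 50 == 0:                     # periodic target-network sync
        q_target.load_state_dict(q.state_dict())
```

In this toy setting, the shaping term rewards Manhattan-distance progress toward the goal while the constant step cost stands in for energy consumption; the EDQN enhancements the paper evaluates against IDQN and D3QN are not specified in the abstract and are not modeled here.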