University of Bahrain
Scientific Journals

Speeding Up the Learning in A Robot Simulator

dc.contributor.author Al-Emran, Mostafa
dc.date.accessioned 2018-07-31T08:44:59Z
dc.date.available 2018-07-31T08:44:59Z
dc.date.issued 2015
dc.identifier.issn 2210-1519
dc.description.abstract Q-learning is one of the best-known reinforcement learning algorithms and has been widely applied to a variety of problems. The main contribution of this work is to speed up learning in a single-agent environment (e.g., a robot). To this end, an attempt was made to optimize the traditional Q-learning algorithm by using the Repeated Update Q-learning (RUQL) algorithm (the recent state of the art) in a robot simulator. The robot simulator must learn how to move from one state to another in order to reach the end of the screen as quickly as possible. An experiment was conducted to test the effectiveness of the RUQL algorithm against the traditional Q-learning algorithm by running both algorithms with identical parameter values over several trials. The experimental results revealed that the RUQL algorithm outperformed the traditional Q-learning algorithm in all trials. en_US
dc.language.iso en en_US
dc.publisher University of Bahrain en_US
dc.rights Attribution-NonCommercial-ShareAlike 4.0 International *
dc.rights.uri *
dc.subject Robot
dc.subject Simulator
dc.subject Q-Learning
dc.title Speeding Up the Learning in A Robot Simulator en_US
dc.type Article en_US
dc.volume 03
dc.issue 03
dc.source.title International Journal of Computing and Network Technology
dc.abbreviatedsourcetitle IJCNT
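The abstract above contrasts the traditional Q-learning update with the Repeated Update Q-learning (RUQL) rule. The following is a minimal, hedged sketch of that contrast, not the paper's actual simulator: a one-dimensional "robot" must travel from state 0 to the rightmost state (the "end of the screen"), and RUQL is implemented in its closed form, which repeats the standard update 1/pi(a|s) times in a single step. The environment, parameter values, and helper names here are illustrative assumptions.

```python
import random

# Illustrative sketch (assumed setup, not the paper's code): a 1-D world of
# N_STATES cells; the agent starts at state 0 and is rewarded only on
# reaching the rightmost state.

N_STATES = 10                 # states 0..9; state 9 is the goal
ACTIONS = (-1, 1)             # move left / move right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # assumed learning parameters

def step(state, action):
    """Deterministic environment: reward 1.0 only on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def pi(q, s, a):
    """Probability of choosing action index a in state s under
    epsilon-greedy with random tie-breaking (needed by the RUQL rule)."""
    best = max(q[s])
    ties = [i for i, v in enumerate(q[s]) if v == best]
    p = EPS / len(ACTIONS)
    if a in ties:
        p += (1.0 - EPS) / len(ties)
    return p

def run(ruql, episodes=200, seed=0):
    """Run one learner; return the length of each episode."""
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    lengths = []
    for _ in range(episodes):
        s, steps, done = 0, 0, False
        while not done and steps < 500:
            if rng.random() < EPS:
                a = rng.randrange(len(ACTIONS))
            else:                              # greedy, ties broken at random
                best = max(q[s])
                a = rng.choice([i for i, v in enumerate(q[s]) if v == best])
            s2, r, done = step(s, ACTIONS[a])
            target = r + (0.0 if done else GAMMA * max(q[s2]))
            if ruql:
                # RUQL: repeat the standard update 1/pi(a|s) times; the
                # repetition telescopes into this single closed-form step.
                w = (1.0 - ALPHA) ** (1.0 / pi(q, s, a))
                q[s][a] = w * q[s][a] + (1.0 - w) * target
            else:                              # standard Q-learning update
                q[s][a] += ALPHA * (target - q[s][a])
            s, steps = s2, steps + 1
        lengths.append(steps)
    return lengths
```

One way to use the sketch is to compare late-episode path lengths, e.g. `sum(run(True)[-50:])` against `sum(run(False)[-50:])`; whether RUQL wins here depends on the environment and parameters, so this toy world should not be read as reproducing the paper's experiment.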
