
Research Article

Exploring Deep Recurrent Q-Learning for Navigation in a 3D Environment

@ARTICLE{10.4108/eai.16-1-2018.153641,
    author={Rasmus Kongsmar Brejl and Henrik Purwins and Henrik Schoenau-Fog},
    title={Exploring Deep Recurrent Q-Learning for Navigation in a 3D Environment},
    journal={EAI Endorsed Transactions on Creative Technologies},
    volume={5},
    number={14},
    publisher={EAI},
    journal_a={CT},
    year={2018},
    month={1},
    keywords={Reinforcement Learning ∙ Deep Learning ∙ Q-Learning ∙ Deep Recurrent Q-Learning ∙ Artificial Intelligence ∙ Navigation ∙ Game Intelligence},
    doi={10.4108/eai.16-1-2018.153641}
}

Rasmus Kongsmar Brejl1,2,*, Henrik Purwins1,2, Henrik Schoenau-Fog1
  • 1: The Center for Applied Game Research, Department of Architecture, Design, and Media Technology, Technical Faculty of IT and Design, Aalborg University Copenhagen, Denmark
  • 2: Audio Analysis Lab, Department of Architecture, Design, and Media Technology, Technical Faculty of IT and Design, Aalborg University Copenhagen, Denmark
*Contact email: rasmuskbrejl@gmail.com

Abstract

Learning to navigate in 3D environments from raw sensory input is an important step towards bridging the gap between human players and artificial intelligence in digital games. Recent advances in deep reinforcement learning have succeeded in teaching agents to play Atari 2600 games from raw pixel information, where the environment is always fully observable by the agent. This does not hold for first-person 3D navigation tasks: the agent's view is restricted to its field of view, so it must make decisions under partial observability. This paper explores a Deep Recurrent Q-Network (DRQN) implementation with a long short-term memory (LSTM) layer for such tasks, allowing the agent to integrate information across recent frames and build a memory of the environment. An agent was trained in a first-person, labyrinth-like 3D environment for 2 million frames. Informal observations indicate that the trained agent navigated in the right direction but was unable to find the target in the environment.
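
As a rough, illustrative sketch of the kind of architecture described above, the following PyTorch code outlines a Deep Recurrent Q-Network: a convolutional feature extractor over raw frames, an LSTM layer that integrates information across recent frames, and a linear head producing one Q-value per action. All hyperparameters here (frame size, channel counts, hidden size, number of actions) are assumptions chosen for illustration, not values taken from the paper.

import torch
import torch.nn as nn

class DRQN(nn.Module):
    """Illustrative DRQN: conv encoder -> LSTM -> per-action Q-values.
    Hyperparameters are assumed for illustration, not the paper's values."""

    def __init__(self, num_actions: int = 4, hidden_size: int = 256):
        super().__init__()
        # Convolutional stack over 84x84 grayscale frames, in the style
        # of the DQN/DRQN literature.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.feature_size = 64 * 7 * 7  # conv output size for 84x84 input
        # The LSTM gives the agent a memory across frames, compensating
        # for the limited field of view in a first-person 3D environment.
        self.lstm = nn.LSTM(self.feature_size, hidden_size, batch_first=True)
        self.q_head = nn.Linear(hidden_size, num_actions)

    def forward(self, frames, hidden=None):
        # frames: (batch, time, channels=1, height=84, width=84)
        b, t = frames.shape[:2]
        x = self.conv(frames.reshape(b * t, *frames.shape[2:]))
        x = x.reshape(b, t, self.feature_size)
        out, hidden = self.lstm(x, hidden)
        return self.q_head(out), hidden  # Q-values at every time step

# Hypothetical usage: greedy action selection for one new frame, carrying
# the recurrent hidden state forward between environment steps.
net = DRQN()
frame = torch.zeros(1, 1, 1, 84, 84)  # a single frame for a single agent
q_values, state = net(frame)
action = q_values[0, -1].argmax().item()

During play, passing the returned hidden state back into the next forward call is what lets the agent accumulate a memory of parts of the labyrinth it can no longer see.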