My First Imitation Learning Model

Note: this article is still a work in progress.

I decided to write about this because, boy, it was difficult to do. I'd also like to improve on it soon, so writing this up should help clear the clutter in my head and give me an easier ride when I pick it back up.

I recently completed my first imitation learning model: a behavioral cloning model built for the Atari Breakout game. It took a lot of research and reading.

First of all, here is the GitHub link

Imitation learning is an ML technique in which an agent learns from the recorded behavior of an expert (often a human) and tries to replicate it. Behavioral cloning is a type of imitation learning that achieves this with plain supervised learning: the agent is trained to predict the action the expert took for each observation.
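To make that concrete, here is a minimal sketch of a single behavioral cloning training step in PyTorch. The tiny network, the tensor shapes, and the random "expert" data are placeholders purely for illustration; the point is that the loss is ordinary cross-entropy between the model's predicted action and the expert's action.

    import torch
    import torch.nn as nn

    # A toy policy: maps a flattened observation to a score for each discrete action.
    # The sizes here are placeholders, not the ones used in the Breakout project.
    policy = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 4))

    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    # Stand-in expert data: observations plus the action the expert took for each one.
    expert_obs = torch.randn(32, 128)
    expert_actions = torch.randint(0, 4, (32,))

    # One behavioral-cloning step: plain supervised learning on (observation, action) pairs.
    logits = policy(expert_obs)
    loss = loss_fn(logits, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()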

There were three key parts to this implementation:

  1. the expert data
  2. the environment
  3. the model


This might look a bit like reinforcement learning. Maybe, but also, not really: there is no reward signal and no exploration here, just supervised learning on the expert's data.

The data

The data for this project was obtained following the process described in this article.
It consists of images of the game being played by the expert, along with the action the expert took for each image. We do not need the reward in our case because we are performing behavioral cloning, a supervised learning approach to imitation learning.
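As a rough sketch, the expert data can be wrapped in a standard PyTorch Dataset of (frame, action) pairs. The file names, the 84x84 grayscale frame shape, and the normalization below are assumptions for illustration, not necessarily what the project uses.

    import numpy as np
    import torch
    from torch.utils.data import Dataset, DataLoader

    class ExpertDataset(Dataset):
        """Pairs of (game frame, expert action). File names and shapes are assumptions."""

        def __init__(self, frames_path="expert_frames.npy", actions_path="expert_actions.npy"):
            # frames: (N, 84, 84) grayscale screens; actions: (N,) integer action ids
            self.frames = np.load(frames_path)
            self.actions = np.load(actions_path)

        def __len__(self):
            return len(self.actions)

        def __getitem__(self, idx):
            frame = torch.from_numpy(self.frames[idx]).float().unsqueeze(0) / 255.0
            action = int(self.actions[idx])
            return frame, action

    # loader = DataLoader(ExpertDataset(), batch_size=32, shuffle=True)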

The environment

The environment here is the OpenAI Gym environment, specifically its Atari Breakout game. The expert data was obtained by playing the game and recording each action together with the corresponding image of the game screen.
After the model has been trained, the agent carries out what it has learnt in this same environment.
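For reference, creating the Breakout environment and stepping through one episode looks roughly like this. The env id and the classic Gym reset/step signatures are assumptions (newer gymnasium releases return extra values), and the random actions are just a stand-in for the trained agent.

    import gym

    # Create the Breakout environment the agent plays in.
    env = gym.make("Breakout-v4")
    print(env.observation_space)  # Box(0, 255, (210, 160, 3), uint8): raw RGB screen
    print(env.action_space)       # Discrete(4): NOOP, FIRE, RIGHT, LEFT

    obs = env.reset()
    done = False
    while not done:
        # Random actions stand in for the trained policy in this sketch.
        obs, reward, done, info = env.step(env.action_space.sample())
    env.close()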

The model

I decided to use the architecture described in DeepMind's Nature paper: 3 convolutional layers, 1 fully connected layer, and 1 output layer.
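Here is a sketch of that architecture used as a behavioral cloning policy in PyTorch. The 4-frame 84x84 input stack and the 4-action output follow the Nature paper and Breakout's action space; whether the project preprocesses frames exactly this way is an assumption.

    import torch
    import torch.nn as nn

    class NatureCNN(nn.Module):
        """Network from the DeepMind Nature paper, used here as a behavioral-cloning policy.
        The 4-frame 84x84 input and the action count are assumptions."""

        def __init__(self, num_actions=4, in_channels=4):
            super().__init__()
            self.conv = nn.Sequential(
                nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
                nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
            )
            self.fc = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # fully connected layer
                nn.Linear(512, num_actions),            # output layer: action logits
            )

        def forward(self, x):
            return self.fc(self.conv(x))

    # Example: a batch of one 4-frame 84x84 stack -> logits over the 4 Breakout actions
    logits = NatureCNN()(torch.zeros(1, 4, 84, 84))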
