flappybird-qlearning-bot

Flappy Bird Bot using Reinforcement Learning

Flappy Bird Bot using Reinforcement Learning in Python

4000+ scored

A Flappy Bird bot in Python that learns from each game played via Q-Learning.

YouTube Link


Running

The only dependency of the project is pygame.

  • src/flappy.py - Run to see the actual visual gameplay.
  • src/learn.py - Run for faster learning/training. This runs without any pygame visualization, so it's much faster.
    • The following command-line args are available:
      • --verbose prints the (iteration, score) pair at each iteration. (Iteration = a bird playing from start until death.)
      • --iter sets the number of iterations to run (a rough sketch of this loop follows the list below).
  • src/initialize_qvalues.py - Run if you want to reset the q-values, so you can observe how the bird learns to play over time.
  • src/bot.py - This file contains the Bot class that applies the Q-Learning logic to the game.
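
For orientation, here is a rough, self-contained sketch of the kind of headless loop src/learn.py runs. The game itself is replaced by a stub, and everything except the two flags is an assumption for illustration, not the project's actual code.

```python
# Sketch of a headless training loop with --verbose and --iter flags.
# play_one_game() is a stub standing in for the real pygame-free simulation.
import argparse
import random


def play_one_game():
    """Stand-in for one full game (start until death); returns the score."""
    return random.randint(0, 10)


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--verbose", action="store_true",
                        help="print the (iteration, score) pair at each iteration")
    parser.add_argument("--iter", type=int, default=1000,
                        help="number of iterations (games) to run")
    args = parser.parse_args()

    for iteration in range(args.iter):
        score = play_one_game()
        if args.verbose:
            print(iteration, score)


if __name__ == "__main__":
    main()
```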

How it works

With every game played, the bird observes the states it has been in and the actions it took. Based on their outcomes, it punishes or rewards the state-action pairs. After playing the game numerous times, the bird is able to consistently obtain high scores.

A reinforcement learning algorithm called Q-learning is utilized. This project is heavily influenced by the awesome work of sarvagyavaish, but I changed the state space and the algorithm to some extent. The bot is built to operate on a modified version of sourabhv's Flappy Bird pygame clone.


We define the state space and action set, and the bird uses its experiences to give rewards to various state-action pairs.
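
For reference, the updates rely on the standard Q-learning rule, where $\alpha$ is the learning rate and $\gamma$ the discount factor (the specific values used by the bot are not stated here):

$$Q(s, a) \leftarrow (1 - \alpha)\, Q(s, a) + \alpha \left( r + \gamma \max_{a'} Q(s', a') \right)$$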

I defined the states a little differently from sarvagyavaish. In his version, the horizontal and vertical distances from the next pipe define the state of the bird. When I wrote the program to work like this, I found that convergence took a very long time. So instead I discretized the distances into a 10x10 grid, which greatly reduces the state space. Moreover, I added the bird's vertical velocity to the state space.
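
As an illustration of this discretization (the function name and coordinate conventions here are assumptions, not the project's actual code), nearby raw observations collapse into a single discrete state:

```python
# Minimal sketch: snap the distances to the next pipe onto a coarse grid and
# combine them with the bird's (integer) vertical velocity.

GRID = 10  # grid cell size; the update section below uses a 5x5 grid instead


def discretize_state(x_dist, y_dist, y_velocity, grid=GRID):
    """Map raw pixel distances and vertical velocity to a discrete state key."""
    x = int(x_dist) - (int(x_dist) % grid)   # snap horizontal distance to the grid
    y = int(y_dist) - (int(y_dist) % grid)   # snap vertical distance to the grid
    return f"{x}_{y}_{int(y_velocity)}"


# Example: two nearby raw observations map to the same discrete state.
print(discretize_state(143.7, -52.2, -8))   # -> "140_-60_-8"
print(discretize_state(146.0, -55.9, -8))   # -> "140_-60_-8"
```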

I also changed the algorithm a bit. Instead of updating Q-values with each experience observed, I went backward after each game played, so Q-values are calculated going backwards from the last experience to the first. I figured this would help propagate the "bad state" information faster. In addition, if the bird dies by colliding with the top section of a pipe, the state where the bird jumped gets flagged and is punished additionally. This works nicely, since dying to the top section of a pipe is almost always the result of a bad jump. The flagging helps propagate the information to this 'bad' [s, a] pair quickly.
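
The sketch below illustrates the backward replay and the top-pipe flagging described above. The reward values, hyperparameters, and the exact rule for which experiences count as "leading to death" are assumptions for illustration, not the project's exact code.

```python
ALPHA = 0.7            # learning rate (assumed value)
GAMMA = 1.0            # discount factor (assumed value)
REWARD_ALIVE = 1       # small reward for surviving a step (assumed)
REWARD_DEAD = -1000    # large punishment for the moves leading to death (assumed)


def update_qvalues(qtable, history, died_on_top_pipe):
    """Replay one game's (state, action, next_state) history backwards.

    history is ordered from the first move to the last; qtable[state] holds
    [Q(state, no-flap), Q(state, flap)], so action is 0 or 1.
    """
    flag = died_on_top_pipe
    for i, (state, action, next_state) in enumerate(reversed(history)):
        if i <= 1:
            # The final experiences before death are punished.
            reward = REWARD_DEAD
        elif flag and action == 1:
            # Additionally punish the flagged jump that led into the pipe's top section.
            reward = REWARD_DEAD
            flag = False
        else:
            reward = REWARD_ALIVE

        qtable[state][action] = (1 - ALPHA) * qtable[state][action] + \
            ALPHA * (reward + GAMMA * max(qtable[next_state]))


# Example with a tiny table; a defaultdict avoids missing-key errors:
#   from collections import defaultdict
#   qtable = defaultdict(lambda: [0.0, 0.0])
#   update_qvalues(qtable, [("s0", 0, "s1"), ("s1", 1, "s2"), ("s2", 0, "s3")],
#                  died_on_top_pipe=True)
```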

Learning Graph

As can be seen, after around 1500 game iterations the bot learns to play quite well, averaging a score of about 150 and occasionally hitting very good maximum scores.


Update

With a 5x5 grid instead of 10x10 (and the vertical velocity still in the state space), convergence takes longer, but the score converges to around 675, significantly beating the 150 of the previous run. The bird is also able to reach very high scores (3000+) quite often.

Learning Graph II

Credits

https://github.com/sourabhv/FlapPyBird

http://sarvagyavaish.github.io/FlappyBirdRL/

https://github.com/mihaibivol/Q-learning-tic-tac-toe

Key Metrics

Overview
Name and owner: chncyhn/flappybird-qlearning-bot
Primary language: Python
Languages: Python (count: 1)
Platform:
License: MIT License

Owner activity
Created: 2016-05-24 12:58:33
Last pushed: 2019-10-19 17:57:26
Last commit: 2019-10-19 19:54:41
Releases: 0

User engagement
Stars: 421
Watchers: 18
Forks: 94
Commits: 35
Issues enabled?
Issues: 10
Open issues: 2
Pull requests: 2
Open pull requests: 0
Closed pull requests: 1

Project settings
Wiki enabled?
Archived?
Is a fork?
Locked?
Is a mirror?
Is private?