A ‘Clash Royale’* Artificial Intelligence Experiment

Abstract

Using machine learning and online training, RoyalGhost trains an agent to behave rationally in competitive games of Clash Royale. By recreating the game environment at a high level and training a model against previous iterations of itself, the project teaches the model to appropriately counter an opponent's actions and win games.

https://github.com/samellgass2/royal_ghost

Link to the project's GitHub repo.

Environment

The game environment can be understood as a 30 x 18 grid of cells, where cells occupied by hazards (Princess Towers, King Towers, other troops, the river) are inaccessible. Troops are deployed in a legal space (depending on troop type), and their actions after deployment are completely defined by targeting and attacking policies.
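As an illustration of how such a board might be encoded, here is a minimal sketch using a boolean occupancy grid; the names (`build_board`, `is_legal_deploy`, `hazard_cells`) and the use of NumPy are assumptions for the example, not the repo's actual API.

```python
# Hypothetical sketch of the 30 x 18 board as a boolean occupancy grid,
# where True marks a cell made inaccessible by a hazard.
import numpy as np

ROWS, COLS = 30, 18

def build_board(hazard_cells):
    """Return a ROWS x COLS grid; cells listed in hazard_cells
    (towers, troops, the river) are marked inaccessible."""
    board = np.zeros((ROWS, COLS), dtype=bool)
    for row, col in hazard_cells:
        board[row, col] = True
    return board

def is_legal_deploy(board, row, col):
    """A deployment is legal only on an unoccupied, in-bounds cell
    (troop-type restrictions on deploy zones are omitted here)."""
    return 0 <= row < ROWS and 0 <= col < COLS and not board[row, col]
```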

The game board training environment is a recreation of the in-game board, simplified with no animations or updates throughout the game except troop movement and health updates.

A screenshot from a gameboard in play.

The gameboard displayed as accessible and inaccessible cells.

Simplifications

Since the live game performs dozens of updates per second, several concessions were made in order to train a model quickly. These are currently:

Models

Q-Learning Nearest Troop Agent

The first agent I’ve implemented is a reinforcement learning agent that uses Q-values of states to make decisions. A state for this agent is defined as (closest_troop_name, dist_to_agent’s_nearest_unit, elixir_count). Thus, the agent has some ability to reason about counterplay as well as to think ahead to how placing troops will impact its elixir count.
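For concreteness, here is a hedged sketch of what tabular Q-learning over that state could look like; the helper names, action encoding, and hyperparameters below are illustrative assumptions rather than the project's actual implementation.

```python
# Hypothetical sketch of tabular Q-learning over the state described above.
from collections import defaultdict

ALPHA, GAMMA = 0.1, 0.9      # assumed learning rate and discount factor

q_table = defaultdict(float)  # maps (state, action) -> Q-value, default 0.0

def make_state(closest_troop_name, dist_to_nearest_unit, elixir_count):
    """State as defined for this agent: the closest enemy troop, the distance
    to the agent's nearest unit, and the agent's current elixir."""
    return (closest_troop_name, dist_to_nearest_unit, elixir_count)

def q_update(state, action, reward, next_state, legal_actions):
    """One-step Q-learning update:
    Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max((q_table[(next_state, a)] for a in legal_actions), default=0.0)
    td_target = reward + GAMMA * best_next
    q_table[(state, action)] += ALPHA * (td_target - q_table[(state, action)])
```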

This agent receives rewards equal to