Neuroevolutionary Artificial Intelligence for Strategy Games

Bachelor’s Thesis focused on developing an artificial intelligence framework that combines neural networks with genetic algorithms to learn competitive strategies in Magic: The Gathering, a card game with high strategic complexity.

The Challenge

Magic: The Gathering presents an extremely large state space with imperfect information, thousands of possible cards, and multiple game phases. Traditional AI approaches based on exhaustive search are infeasible due to the game’s combinatorial complexity. The challenge was to design a system capable of learning competitive strategies without direct human supervision.

My Role

As lead researcher and developer, I designed and implemented the complete framework:

  • Designed the neuroevolutionary system architecture from scratch
  • Implemented genetic algorithms for neural network evolution
  • Developed the game state evaluation system
  • Integrated the framework with the Magarena game engine for automated training
  • Designed and executed validation experiments
  • Analyzed results through quantitative performance metrics

Technical Approach

The framework implements a complete neuroevolutionary pipeline:

System architecture:

  • Neural networks as decision agents evaluating game positions
  • Genetic algorithms evolving network weights and topology across generations
  • Fitness system based on automated matches against reference agents
  • Parallel evaluation of multiple agents per generation
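The decision-agent idea above can be sketched as a small fixed-topology feedforward network whose flattened weight array is the genome the genetic algorithm operates on. This is a minimal illustration, not the thesis implementation: the feature vector (life totals, cards in hand, board presence, etc.) and network shape are hypothetical.

```java
// Minimal sketch of a decision agent: a feedforward network with one
// hidden tanh layer that maps a game-state feature vector to a single
// position score. The genome is simply the flattened weight array,
// which is what the genetic algorithm mutates and recombines.
class PositionEvaluator {
    private final int inputs, hidden;
    private final double[] genome; // inputs*hidden hidden-layer weights + hidden output weights

    PositionEvaluator(int inputs, int hidden, double[] genome) {
        if (genome.length != inputs * hidden + hidden)
            throw new IllegalArgumentException("genome size mismatch");
        this.inputs = inputs;
        this.hidden = hidden;
        this.genome = genome;
    }

    // Score a game position; higher means better for this agent.
    double evaluate(double[] features) {
        double out = 0.0;
        for (int h = 0; h < hidden; h++) {
            double sum = 0.0;
            for (int i = 0; i < inputs; i++)
                sum += genome[h * inputs + i] * features[i];
            out += genome[inputs * hidden + h] * Math.tanh(sum);
        }
        return out;
    }
}
```

During a match, the agent scores every legal successor position and plays toward the highest-scoring one.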

Game engine:

  • Integration with Magarena, an open-source MTG engine in Java
  • Groovy automation scripts for tournament and generation management
  • Python for result analysis and visualization
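The evolutionary loop only needs a thin boundary to the game engine. The interface below is hypothetical (it does not reflect Magarena's actual API) but illustrates the shape of the integration: the framework hands two genomes to the engine and reads back a match result.

```java
// Hypothetical engine boundary: the training loop never touches game
// rules directly; it asks the engine to play a match between two
// evolved agents and uses the outcome for fitness.
interface MatchRunner {
    /** Plays a match of the given number of games; returns games won by agent A. */
    int playMatch(double[] genomeA, double[] genomeB, int games);
}
```

Keeping this boundary narrow is what makes it practical to script tournaments (here via Groovy) and to parallelize matches across a generation.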

Evolutionary process:

  • Population of neural networks competing against each other and baseline agents
  • Selection, crossover, and mutation of the fittest individuals
  • Evaluation of complexity metrics for the game environment
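One generation of the process above can be sketched as follows. This is an illustrative Java implementation over flat weight vectors, using binary tournament selection, uniform crossover, and Gaussian mutation; the operator parameters are placeholders, not the values used in the thesis.

```java
import java.util.Random;

// One evolutionary step over a population of genomes (weight vectors).
// Fitness values are assumed to come from the automated-match stage.
class Evolution {
    static final Random RNG = new Random(42);

    // Binary tournament: pick the fitter of two random individuals.
    static double[] select(double[][] pop, double[] fitness) {
        int a = RNG.nextInt(pop.length), b = RNG.nextInt(pop.length);
        return fitness[a] >= fitness[b] ? pop[a] : pop[b];
    }

    // Uniform crossover: each gene comes from either parent with p = 0.5.
    static double[] crossover(double[] p1, double[] p2) {
        double[] child = new double[p1.length];
        for (int i = 0; i < child.length; i++)
            child[i] = RNG.nextBoolean() ? p1[i] : p2[i];
        return child;
    }

    // Gaussian mutation: perturb each weight with small probability.
    static void mutate(double[] genome, double rate, double sigma) {
        for (int i = 0; i < genome.length; i++)
            if (RNG.nextDouble() < rate)
                genome[i] += RNG.nextGaussian() * sigma;
    }

    // Build the next generation from the current one.
    static double[][] nextGeneration(double[][] pop, double[] fitness) {
        double[][] next = new double[pop.length][];
        for (int i = 0; i < pop.length; i++) {
            double[] child = crossover(select(pop, fitness), select(pop, fitness));
            mutate(child, 0.1, 0.3); // illustrative mutation rate and step size
            next[i] = child;
        }
        return next;
    }
}
```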

Key Technical Decisions

  • Neuroevolution over reinforcement learning: I chose neuroevolutionary methods because they explore the solution space without requiring an explicit reward function, which is difficult to define for a game with such complex states
  • Java as primary language: Native integration with Magarena avoided inter-process communication overhead
  • Multi-agent evaluation: Each generation tested agents against each other and fixed baselines to measure both absolute and relative improvement
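The multi-agent evaluation point suggests a fitness score that blends relative strength (win rate against peers in the same generation) with absolute strength (win rate against fixed baselines). The function below is a hypothetical sketch of such a blend; the equal 0.5 weighting is illustrative, not a thesis parameter.

```java
// Hypothetical combined fitness: relative improvement is measured against
// peers, absolute improvement against fixed baseline agents.
class Fitness {
    static double fitness(int winsVsPeers, int peerGames,
                          int winsVsBaselines, int baselineGames) {
        double relative = (double) winsVsPeers / peerGames;
        double absolute = (double) winsVsBaselines / baselineGames;
        return 0.5 * relative + 0.5 * absolute; // illustrative weighting
    }
}
```

Mixing the two terms guards against a population that only learns to exploit its siblings while stagnating against external opponents.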

Results

The AI demonstrated the capability to learn competitive strategies and to improve its performance consistently from one generation to the next. The system validated the applicability of neuroevolutionary methods to highly complex strategic games with imperfect information, opening a line of research into the evolution of emergent behavior in complex game environments.