Emergent behavior in neuroevolved agents
Date
2018-11
Author
Maresso, Brian
Publisher
University of Wisconsin--Whitewater
Advisor(s)
Nguyen, Hien
Mukherjee, Lopamudra
Zhou, Jiazhen
Abstract
Neural networks have been widely used for their ability to create generalized rulesets from a given set of training data. In applications where no such training data exists, such as new video games, they are often overlooked in favor of hard-coded artificial intelligence behaviors. By applying a genetic algorithm instead of the traditional back-propagation technique, neural networks can develop video game AI without requiring training data or preexisting knowledge of their environment. This evolutionary approach leads to more ‘human-like’ behavior, both in the learning process and in qualitative analysis. In this thesis, we evaluate the ability of neuroevolved video game AI to show human-like learning patterns and adaptability to new environments. For applications in video games or computer simulations, we explore how a set of hyperparameters can be modified to achieve a desired level of intelligence or difficulty, a key factor for video game AI. The prospect of changing hyperparameters and then passively waiting for training to complete offers an attractive alternative to hard-coded AI, where new behaviors would have to be actively written and tested. Our evaluations focus on evolving AI for a simple car racing video game. We found that our approach was indeed capable of creating a suitable AI for our test environments, and we were able to evolve new behaviors both from previously evolved ones (adaptation) and from scratch (learning). Our process could be repeated for other game genres or applications, presumably with similar success.
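A minimal sketch of the kind of weight-evolution loop described in the abstract, assuming a small feed-forward driving policy; the network dimensions, fitness function, and hyperparameter values below are illustrative placeholders, not the thesis implementation:

import numpy as np

rng = np.random.default_rng(0)

# Assumed network shape: sensor readings in, steering/throttle out (no back-propagation anywhere).
N_INPUTS, N_HIDDEN, N_OUTPUTS = 5, 8, 2
N_WEIGHTS = N_INPUTS * N_HIDDEN + N_HIDDEN * N_OUTPUTS

def policy(weights, sensors):
    # One forward pass of the evolved network; the genome is just a flat weight vector.
    w1 = weights[:N_INPUTS * N_HIDDEN].reshape(N_INPUTS, N_HIDDEN)
    w2 = weights[N_INPUTS * N_HIDDEN:].reshape(N_HIDDEN, N_OUTPUTS)
    hidden = np.tanh(sensors @ w1)
    return np.tanh(hidden @ w2)  # e.g. [steering, throttle] in [-1, 1]

def fitness(weights):
    # Placeholder objective: reward "driving forward" on random sensor input.
    # A racing game would instead score distance travelled on the track.
    sensors = rng.standard_normal(N_INPUTS)
    steering, throttle = policy(weights, sensors)
    return float(throttle - abs(steering))

# Hyperparameters of the kind the abstract says can be tuned for intelligence or difficulty.
POP_SIZE, N_ELITE, MUTATION_STD, N_GENERATIONS = 50, 10, 0.1, 100

population = rng.standard_normal((POP_SIZE, N_WEIGHTS))
for generation in range(N_GENERATIONS):
    scores = np.array([fitness(ind) for ind in population])
    elite = population[np.argsort(scores)[-N_ELITE:]]               # keep the best genomes
    parents = elite[rng.integers(0, N_ELITE, POP_SIZE - N_ELITE)]   # pick parents from the elite
    children = parents + MUTATION_STD * rng.standard_normal(parents.shape)  # mutate
    population = np.vstack([elite, children])

print("best fitness:", float(np.max(scores)))

In this sketch, raising MUTATION_STD or lowering N_GENERATIONS would yield a weaker, more erratic driver, which is one way the hyperparameter-tuning-for-difficulty idea in the abstract could be realized.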
Subject
Neural networks (Computer science)
Genetic algorithms
Artificial intelligence
Machine learning
Video games
Reinforcement learning
Permanent Link
http://digital.library.wisc.edu/1793/78967
Type
Thesis
Description
This file was last viewed in Microsoft Edge.