CharlieAI
This is a project I started a few years ago; an experiment to create AI that can learn to play board games, and play them well, without at first even knowing the rules.
Friday, July 5, 2013
For a while I've been working on a GGP agent that can connect to the Tiltyard hosting server, and I've finally got it working. It uses the web-server framework from CadiaPlayer, my own custom message handler to govern the central message loop, and the KIF parsing framework from Dresden University to parse and reason about the games.
To test things, the agent just picks the first valid move it comes across; it doesn't apply strategy or move evaluation of any sort. Still, it is great to see it play valid moves for a wide range of games without knowing the rules ahead of time.
It first tells the server it's ready to play; the server then sends it the rules for a game and informs it of the other players' moves, and my agent sends back valid moves. When the game is over, the server starts a new game with a different set of rules.
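To give an idea of the shape of that loop, here is a rough Python sketch of a player answering the manager's start/play/stop messages. This is not my actual code (which builds on the CadiaPlayer web-server framework), and the legal_moves helper is just a placeholder for the real KIF reasoner.

```python
# Rough sketch only: the Game Manager POSTs s-expression messages such as
# (start ...), (play ...) and (stop ...) over HTTP and reads the reply body.
from http.server import BaseHTTPRequestHandler, HTTPServer

def legal_moves(message):
    """Placeholder: the real agent asks its KIF/GDL reasoner for the legal moves."""
    return ["noop"]

class GGPHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        message = self.rfile.read(length).decode().strip().lower()

        if message.startswith("(start"):
            reply = "ready"                       # acknowledge the rules
        elif message.startswith("(play"):
            reply = legal_moves(message)[0]       # first valid move, no strategy yet
        elif message.startswith("(stop"):
            reply = "done"                        # match is over
        else:
            reply = "available"                   # anything else, e.g. a ping

        body = reply.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 9147), GGPHandler).serve_forever()  # port number is arbitrary
```

The real messages carry more detail (match ids, clocks and so on), but the overall control flow is the same.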
In testing it, my agent played through the games of Peg Jumping, Knight's Tour, Tic Tac Toe, 3-player Chinese Checkers and Bomberman! Below is a screenshot of it playing Chinese Checkers (my agent played Red).
The next step is to integrate my Monte Carlo Tree Search algorithm - I will report back :)
Saturday, June 29, 2013
General Game Playing Platforms
For several years there has been a community of people building AI to play games and competing against each other with it. In recent years this community has expanded considerably, mainly due to the projects at the Stanford Logic Group, so I wanted to give a summary of the field as it stands now.
General Game Playing (GGP) is the art of making AI that can play a diverse range of games well without being hand-programmed for any specific game. Rules are described in a formal language called GDL (Game Description Language), and agents parse the game descriptions, infer the valid moves at any given time and apply a strategy to win.
There are multiple Game Manager hosts, which developers can register their agents with. The two main ones are the Dresden GGP Server and Tiltyard, both of which provide large databases of games played by registered agents. Matches are automatically hosted between agents across the Internet, and participants are awarded scores and ratings. The time between an agent getting to know the rules and having to play the game is typically less than a minute, so the AI must be truly general-purpose to play well across the large variety of games.
Every year a world cup with prizes is held for the very best agents. Fluxplayer, Ary and Turtleplayer have been strong in the past, but CadiaPlayer from Reykjavik University has won most world cups and is also the current champion.
The slides from a previous course at Dresden University are a good introduction to getting started. Several existing frameworks and reference players help to get up to speed quickly. There is also an online course on Coursera, starting in September, that will teach all the relevant theory and practical knowledge needed to participate.
My next step is to put together a framework that can parse GDL and communicate with the Game Manager via the established protocol, and then start plugging in my previous AI algorithms. Exciting times ahead! :)
Monday, May 27, 2013
Connect Four Embedded, Too
And here is the Connect Four player available for you to play against (on Mac OS X and Windows). Again, let me know if you manage to beat it.
Tic Tac Toe Embedded
I managed to put together a simple interactive application for the Tic Tac Toe player, and embed it here. Unfortunately it only supports Mac OS X and Windows, so apologies if you are viewing this from some other operating system.
Let me know if you manage to win! :)
Sunday, May 26, 2013
Does it play Connect Four? Yes, it does.
Based on how well my Monte Carlo Tree Search agent played Tic Tac Toe, I wanted to move on to Connect Four.
Because the algorithm is domain independent, I merely had to add the logic that lists valid moves and determines a win, loss or draw outcome. One half-hour commute later and I had it playing Connect Four!
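For a sense of how little game-specific code that actually is, here is a rough sketch along those lines in Python - simplified, and not my exact implementation - covering just the valid moves and the four-in-a-row check.

```python
# Sketch of the only Connect Four specific logic the MCTS agent needs.
# The board is 6 rows x 7 columns; 0 means an empty cell.
ROWS, COLS = 6, 7

def valid_moves(board):
    """A move is any column whose top cell is still empty."""
    return [c for c in range(COLS) if board[0][c] == 0]

def drop(board, col, player):
    """Return a new board with the player's piece dropped into the column."""
    new = [row[:] for row in board]
    for r in range(ROWS - 1, -1, -1):
        if new[r][col] == 0:
            new[r][col] = player
            break
    return new

def winner(board):
    """Return the player with four in a row, or 0 if nobody has won yet."""
    directions = [(0, 1), (1, 0), (1, 1), (1, -1)]  # right, down, two diagonals
    for r in range(ROWS):
        for c in range(COLS):
            p = board[r][c]
            if p == 0:
                continue
            for dr, dc in directions:
                if all(
                    0 <= r + i * dr < ROWS
                    and 0 <= c + i * dc < COLS
                    and board[r + i * dr][c + i * dc] == p
                    for i in range(4)
                ):
                    return p
    return 0
```

Everything else - the tree, the random play-outs, the statistics - stays exactly as it was for Tic Tac Toe.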
And it plays well, in fact really, really well. Despite trying my very best, and with help from my wife (who happens to beat most of our family and friends at this game), we cannot beat the AI when it plays first. Connect Four has been fully solved with minimax searches, and it can be shown that the first player can always force a win. This is exactly what my MCTS agent does: it sets up various potential win cases in the higher slots and then eventually forces you into playing moves that allow it to win.
The other nice thing about the playing style you get from MCTS is that, even if you play the same moves against it, its random sampling of the search tree means it will typically come up with slightly varied responses - ultimately leading to a win, of course! :)
What's next? I'm currently deciding whether to move on to 9x9 Go, some other intermediate step, or perhaps spend a bit of time making simple web-based GUIs for the Tic Tac Toe and Connect 4 agents, as my current UI is a bit lacking...see insert :)
Saturday, May 25, 2013
Monte Carlo Tree Search
Recently I have made great progress on my AI project, so today I will attempt to wake this sleepy blog from hibernation.
While I had heard about the success of Monte Carlo Tree Search applied to Go and other games a few years ago, it was only about a month ago that I became aware that in its basic form the algorithm is domain independent. It is therefore very suitable for my goal of making general game playing AI that doesn't need to be taught the rules of the games played.
For the necessary background knowledge, "A Survey of Monte Carlo Tree Search Methods" is a great introduction to the algorithm and its variations. The paper also includes pseudo-code for the basic algorithm.
In essence, MCTS plays thousands of games to completion, taking random moves to get there. The resulting win, loss or draw of each game then informs the algorithm which parts of the search tree to explore further. This combination of random moves and statistical analysis makes the algorithm applicable to a large range of domains and also means it scales well to harder problems - the more CPU time you give it, the better the result will be.
Because the algorithm plays the games to completion, it only needs to be informed about which moves are valid for each turn and whether the game resulted in a win, loss or draw at the end. No other domain-specific knowledge is required.
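To make that concrete, here is a bare-bones Python sketch of the basic UCT loop, loosely following the survey's pseudo-code. It is a simplification rather than my actual implementation, and the state interface it assumes (legal_moves, play, is_terminal, result, player_to_move) is hypothetical - the point is simply that nothing in it is specific to any particular game.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, move=None):
        self.state = state                  # game state this node represents
        self.parent = parent
        self.move = move                    # move that led from parent to here
        self.children = []
        self.untried = state.legal_moves()  # assumed to return a fresh list
        self.visits = 0
        self.wins = 0.0                     # reward seen from the side that chose self.move

    def ucb_child(self, c=1.4):
        # UCB1: exploitation (wins/visits) plus exploration bonus
        return max(
            self.children,
            key=lambda ch: ch.wins / ch.visits
            + c * math.sqrt(math.log(self.visits) / ch.visits),
        )

def mcts(root_state, iterations=10_000):
    """Assumed state interface: legal_moves(), play(move) -> new state,
    is_terminal(), player_to_move, and result(player) giving 1 for a win,
    0.5 for a draw and 0 for a loss."""
    root = Node(root_state)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend through fully expanded nodes via UCB1.
        while not node.untried and node.children:
            node = node.ucb_child()
        # 2. Expansion: add one child for a move not tried yet.
        if node.untried:
            move = node.untried.pop(random.randrange(len(node.untried)))
            node = Node(node.state.play(move), parent=node, move=move)
            node.parent.children.append(node)
        # 3. Simulation: play random moves until the game ends.
        state = node.state
        while not state.is_terminal():
            state = state.play(random.choice(state.legal_moves()))
        # 4. Backpropagation: credit the outcome all the way up the tree.
        while node is not None:
            node.visits += 1
            if node.parent is not None:
                # Judge the result from the side that picked node.move,
                # i.e. the player to move in the parent state.
                node.wins += state.result(node.parent.state.player_to_move)
            node = node.parent
    # Recommend the most-visited move at the root.
    return max(root.children, key=lambda ch: ch.visits).move
```

The four phases - selection, expansion, simulation and backpropagation - are the whole algorithm; more iterations simply mean better statistics and better moves.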
Is this the end of my genetic programming approach? Quite the opposite, I will be exploring combining the two approaches in the near future.
To try out MCTS, I implemented the basic algorithm and applied it to Tic Tac Toe. After a couple of hours of coding I was ready to play against it...and...it plays extremely well! If the agent starts, it appears impossible to win against it. It sets up clever double win strategies so the best a human opponent can hope for is a draw. I was very impressed!
It is time to move on to the slightly more complex game of Connect 4...
Saturday, October 8, 2011
Objective Score for Good Move Algorithm
I wanted to get an idea of what a relatively strong Tic Tac Toe player would score with the objective measure. So instead of using an evolved player, I measured the Good Move algorithm. As mentioned before, it looks for two pieces in a row with an empty square at either end and then plays there or blocks accordingly.
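For reference, here is a simplified reconstruction of that heuristic in Python (not the exact code used for the measurements). The board is assumed to be a list of nine cells holding "X", "O" or " ".

```python
import random

# All eight winning lines on a 3x3 board, as index triples.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def completing_square(board, player):
    """Return an empty square that completes a line for `player`, if any."""
    for line in LINES:
        values = [board[i] for i in line]
        if values.count(player) == 2 and values.count(" ") == 1:
            return line[values.index(" ")]
    return None

def good_move(board, me, opponent):
    win = completing_square(board, me)
    if win is not None:
        return win                          # complete my own two-in-a-row
    block = completing_square(board, opponent)
    if block is not None:
        return block                        # otherwise block the opponent's
    return random.choice([i for i, v in enumerate(board) if v == " "])
```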
Here are the scores from ten tests and the average score computed:
967, 975, 844, 968, 1021, 917, 992, 1059, 1051, 986 => 978.0
Now, this algorithm doesn't look ahead more than one move, so it will not play perfectly, but it will be relatively strong. If I can somehow evolve something as strong as that using co-evolution, it would certainly be a worthy opponent.