How to get ants to solve a chess problem

By Graham Kendall, University of Nottingham

Take a set of chess pieces and throw them all away except for one knight. Place the knight on any one of the 64 squares of a chess board.

Can you make 63 legal moves so that you visit every square on the chess board exactly once? As a reminder, a knight can move two squares in a straight line, followed by a ninety degree turn and a move of one further square. It might seem like a hard task, but this set of moves, called the knight’s tour, can be achieved in too many ways to count.

The knight's tour

If you are able to make the 63 moves and end up on a square from which you can move back to the original square with the 64th legal move, then this is known as a closed tour. Other tours are called open tours.
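To make the rules concrete, here is a minimal Python sketch (not from the original article) that lists the legal knight moves from a square and classifies a sequence of 64 squares as a closed tour, an open tour, or not a tour at all.

```python
# Minimal sketch (not from the original article): the legal knight moves from
# a square, and a check that classifies a 64-square sequence as a tour.

KNIGHT_OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2),
                  (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def legal_moves(square):
    """Squares a knight can reach from (row, col) without leaving the board."""
    r, c = square
    return [(r + dr, c + dc) for dr, dc in KNIGHT_OFFSETS
            if 0 <= r + dr < 8 and 0 <= c + dc < 8]

def classify(squares):
    """Return 'closed tour', 'open tour' or 'not a tour' for a move sequence."""
    if len(squares) != 64 or len(set(squares)) != 64:
        return "not a tour"          # must visit every square exactly once
    if any(b not in legal_moves(a) for a, b in zip(squares, squares[1:])):
        return "not a tour"          # every step must be a legal knight move
    return "closed tour" if squares[0] in legal_moves(squares[-1]) else "open tour"
```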

Mathematicians have pondered how many closed tours exist, and they have come up with an astonishing number: more than 26 trillion. There are so many more open tours that we do not know the exact number.

Both Philip Hingston and I were so captivated by the knight’s tour problem that we wanted to find a different way to solve it. We found that motivation in nature – specifically in ants.

Ants use a certain pattern, or algorithm, to forage for food. This algorithm can be used to tackle many types of problem, including the Travelling Salesman Problem and vehicle routing problems. Philip and I wondered if we could use the ant colony optimisation algorithm to solve the knight’s tour problem.

Here’s how that algorithm works: a computer program is used to simulate a population of ants. These ants are assigned the task of finding a solution to a problem. As each ant goes about its task, it lays a pheromone trail – a smelly substance that ants use to communicate with each other. In the simulated algorithm, the most successful ants (the ones that solve the problem better) lay more pheromone than those that perform poorly.

We repeat this procedure many times (perhaps millions of times). Through repetition, the pheromone trails on good solutions increase, while those on poorer solutions decrease through evaporation, which is also programmed into the simulation.
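In standard ant colony optimisation, this interplay of reinforcement and evaporation is usually written as a single update applied to every possible move after each iteration, roughly: pheromone ← (1 − evaporation rate) × pheromone + deposit, where the deposit is larger for moves that appeared in better solutions and zero for moves no ant used.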

In the simulation to solve the knight’s tour problem, the ants could only make legal knight moves and were restricted to stay within the confines of the chess board. If an ant successfully completed a full tour, we reinforced it by depositing more pheromone along that tour than along incomplete ones.

Ants searching for tours in later iterations are more likely to follow the trails with higher levels of pheromone. This means that they are more likely to make the same moves as previously successful ants.

There is a balance to be struck. If the ants follow the successful ants too rigidly, the algorithm will quickly converge on a single tour. If we push the ants too hard not to follow the pheromone of previous ants, they will simply act randomly. So it is a case of tuning the algorithm’s parameters to find a good balance.
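As a rough illustration, here is a compressed Python sketch of that loop. It is only a sketch of the general ant colony optimisation scheme, not our actual implementation: the colony size, evaporation rate and deposit rule are illustrative assumptions.

```python
import random

# A compressed sketch of ant colony optimisation for the knight's tour.
# The colony size, evaporation rate and deposit rule are illustrative
# assumptions, not the settings from our actual implementation.

EVAPORATION = 0.1
OFFSETS = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]

def moves(square):
    """Legal knight moves from (row, col) that stay on the board."""
    r, c = square
    return [(r + dr, c + dc) for dr, dc in OFFSETS
            if 0 <= r + dr < 8 and 0 <= c + dc < 8]

squares = [(r, c) for r in range(8) for c in range(8)]
# One pheromone value per directed knight move (an edge of the board graph).
pheromone = {(a, b): 1.0 for a in squares for b in moves(a)}

def run_ant(start):
    """One ant builds a tour, choosing each move in proportion to pheromone."""
    tour, visited = [start], {start}
    while True:
        options = [m for m in moves(tour[-1]) if m not in visited]
        if not options:                # stuck, or all 64 squares visited
            return tour
        weights = [pheromone[(tour[-1], m)] for m in options]
        nxt = random.choices(options, weights=weights)[0]
        tour.append(nxt)
        visited.add(nxt)

for iteration in range(10_000):        # perhaps millions of times in practice
    tours = [run_ant(random.choice(squares)) for _ in range(20)]  # small colony
    for edge in pheromone:             # evaporation on every move
        pheromone[edge] *= 1 - EVAPORATION
    for tour in tours:                 # longer (better) tours deposit more
        deposit = len(tour) / 64
        for a, b in zip(tour, tour[1:]):
            pheromone[(a, b)] += deposit
```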

Using this algorithm, we were able to find almost half a million tours. This was a significant improvement over previous work, which was based on a genetic algorithm. Genetic algorithms emulate Charles Darwin’s principle of natural evolution – survival of the fittest. Fitter members of a simulated population (those that perform well on the problem at hand) survive, while weaker members die off.
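For comparison, the skeleton of a genetic algorithm looks something like the Python sketch below. It is a generic outline under the assumption of fixed-length, integer-coded candidate solutions, not the specific encoding used in the earlier knight’s-tour work.

```python
import random

# Generic genetic-algorithm skeleton (an illustrative assumption, not the
# encoding used in the earlier knight's-tour work). Candidates are assumed
# to be fixed-length lists of integer "genes", e.g. knight-move directions 0-7.

def evolve(population, fitness, generations=1000, mutation_rate=0.05):
    for _ in range(generations):
        # Survival of the fittest: rank candidates and keep the better half.
        population.sort(key=fitness, reverse=True)
        survivors = population[:len(population) // 2]
        # Breed children from the survivors until the population is refilled.
        children = []
        while len(survivors) + len(children) < len(population):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))
            child = a[:cut] + b[cut:]                # one-point crossover
            if random.random() < mutation_rate:      # occasional random change
                child[random.randrange(len(child))] = random.randrange(8)
            children.append(child)
        population = survivors + children
    # Return the fittest candidate found.
    return max(population, key=fitness)
```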

It is not easy to say why the ant algorithm performed so much better than the genetic algorithm. Perhaps it was down to tuning the algorithmic parameters, or perhaps ants really do like to play chess!

The knight’s tour problem was being worked on as far back as 840 AD. Little did those problem-solvers know that ants, albeit simulated ones, would be tackling the same puzzle more than 1,000 years in the future.

The Conversation

Graham Kendall does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This article was originally published on The Conversation.
Read the original article.

2010 Pac-Man Competition at CIG 2010


Last week I attended the IEEE Conference on Computational Intelligence and Games (CIG 2010). This conference (it was actually a symposium in the early days) was started by Simon Lucas and me in 2005. I also co-chaired the conference with Sushil Louis in 2006. The conference has been run every year since 2005, with the latest one being held in Copenhagen, chaired by Georgios N. Yannakakis and Julian Togelius.

This year I had a paper that was developed as part of a second-year group project (the team members are shown in the picture).

As part of the Computer Science undergraduate degree at the University of Nottingham, second year students are assigned to groups and given a software engineering task to work on. Last year, I set the following:

Your task is to write a computer program that can play the game of Pacman, without human intervention. This is a challenging task and, without some support, would perhaps be impossible for a second year group project. However, this task has been an ongoing competition for a number of years and there is lots of information/support available, as well as examples of what can be achieved.

Your starting point should be the web pages of Professor Simon Lucas at the University of Essex, who organises the competition. Therefore, take a look at this web page. This is ESSENTIAL reading and you should study this page (and associated links) before our first meeting. You should also make regular visits to the page as I know that Professor Lucas is planning to update it on a regular basis. Using the information on this web site you should be able to get a system up and running quite quickly and then you have to develop your own algorithms to produce the best automated player that you can.

I would expect you to carry out (at least) the following tasks, with the first two feeding directly into your literature review:

By referencing the competition entries, find out what algorithms appear to have worked well in the past.

Investigate and draw up a set of algorithms that you think could be used as a Pacman controller. These algorithms might be based on various criteria such as those identified in 1 (above), “Manhattan Distance”, “Straight Line Distance”, “Trying to eat the ghosts”, “Trying to avoid the ghosts”, “Trying to eat the fruit”, “Maximising the score, whilst minimising your chances of being eaten”, etc. (a small sketch of two of these heuristics appears after this list). There is probably not one good overall strategy and you might consider changing strategies as the game state changes. You might also want to consider how much you plan ahead and how much you just make quick decisions, given that this is a real-time game.

As part of the supplied toolkit you are provided with screen capture software (and a window showing you a representation of the captured game state), the game screen itself and a small screen showing the current direction of the Pacman character. As part of this project, I would also like you to develop another GUI element which enables you to select from amongst the algorithms that you have implemented, keeps track of their high score, enables you to choose how “adventurous” the player will be, enables you to mix/match the algorithms etc. Part of the project will be to design this GUI element, deciding what role it should play in the overall software architecture. You can then use this captured information as a basis for the analysis in your final dissertation.

Implement the int move(GameState gs) method that is provided in the sample toolkit in order to test the various game-playing algorithms that you investigated in points 1 and 2 above.
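To make the kind of heuristics listed in point 2 concrete, here is a small Python sketch of two of the distance measures and a simple switch between fleeing a ghost and chasing a pill. The coordinates, lists and decision rule are assumptions made for this example; they are not part of the competition toolkit, which exposes the game state through Java classes such as GameState.

```python
import math

# Illustrative sketch only: the coordinates, lists and decision rule below are
# assumptions made for this example, not part of the competition toolkit
# (which exposes the game state through Java classes such as GameState).

def manhattan(a, b):
    """Grid ("Manhattan") distance between two (x, y) cells."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def straight_line(a, b):
    """Euclidean (straight-line) distance between two (x, y) cells."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def choose_move(pacman, ghosts, pills, neighbours, danger_radius=4):
    """Flee the nearest ghost if it is close, otherwise head for the nearest pill."""
    nearest_ghost = min(ghosts, key=lambda g: manhattan(pacman, g))
    if manhattan(pacman, nearest_ghost) <= danger_radius:
        # Danger: step to the neighbouring cell that maximises ghost distance.
        return max(neighbours, key=lambda n: straight_line(n, nearest_ghost))
    nearest_pill = min(pills, key=lambda p: manhattan(pacman, p))
    return min(neighbours, key=lambda n: manhattan(n, nearest_pill))
```

As the brief notes, a single fixed rule like this rarely scores well on its own; switching between strategies as the game state changes is usually more effective.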

You might want to look at the YouTube video of the competition entry from WCCI 2008 (google “YouTube WCCI pacman”). To give you some indication of the current state of the art, in the most recent competition (run in Milan in August 2009) the winning entry achieved a score of over 30,000 and reached (from memory) level 5 (it might have been level 4). If you can get anywhere near that you will be doing very well.

The project group did so well that the team not only won the prize for the best project, but the School also agreed to fund one of the team members to attend the conference to enter the competition. As a result we also wrote a paper which was accepted at the conference, and the student presented it (a big undertaking for an undergraduate student, but he did very well).

In the competition, we came a VERY respectable third. In second place were last year’s winners. In first place was a new team who used an Ant Colony based method.

The difference between the top three entries was not that large, with the winning entry getting about 21,000 points.

In fact, during pre-competition testing our team achieved scores of over 22,000 on many occasions, with our best ever score being just over 30,000 (I am sure that the other entrants also have similar sob stories to tell :-)).

If you are interested, the reference for our paper is:

Bell N., Fang X., Hughes R., Kendall G., O’Reilly E. and Qiu S. (2010) Ghost Direction Detection and other Innovations for Ms. Pac-Man. In Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games (CIG’10), 18-21 Aug 2010, Copenhagen, Denmark, pp 465-472

It can be downloaded from here.

The references for the first and second placed entries are:

  1. Emilio M., Moises M., Gustavo R. and Yago S. (2010) Pac-mAnt: Optimization Based on Ant Colonies Applied to Developing an Agent for Ms. Pac-Man. In Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games (CIG’10), 18-21 Aug 2010, Copenhagen, Denmark, pp 458-464
  2. Thawonmas R. and Ashida T. (2010) Evolution Strategy for Optimizing Parameters in Ms Pac-Man Controller ICE Pambush 3. In Proceedings of the 2010 IEEE Conference on Computational Intelligence and Games (CIG’10), 18-21 Aug 2010, Copenhagen, Denmark, pp 235-240

More details about the Pac-Man competition can be found at Simon Lucas’ Pac-Man page.

The 2009 IEEE Symposium on Computational Intelligence and Games

I have just arrived in Milan for the 2009 CIG (Computational Intelligence and Games) conference. This was a conference that Simon Lucas and I started in 2005. Simon is now editor-in-chief of the IEEE Transactions on Computational Intelligence and AI in Games. The 2005 conference, I believe, was partly responsible for paving the way for this journal to be established. I am fortunate enough to serve as an Associate Editor for the journal.

As I said above, the first conference (actually it’s a symposium, as the correct title is The IEEE Symposium on Computational Intelligence and Games) was held in 2005 in Essex (UK). The original plan was to hold the symposium every two years, but it was so successful that we decided to hold it again in 2006. This time, Sushil Louis and I chaired it. It was held at the University of Nevada, Reno.

In 2007 it was held in Hawaii, as part of the then newly formed IEEE Symposium Series on Computational Intelligence (SSCI). Simon again chaired the symposium, along with Sung-Bae Cho and Alan Blair. At the 2007 SSCI, I chaired the associated scheduling symposium, Computational Intelligence in Scheduling (CISched).

In 2008, CIG was held in Perth, Australia, chaired by my good friends Luigi Barone and Phil Hingston.

The symposium has now moved to Milan (chaired by Pier Luca Lanzi). It has certainly done the rounds (Essex, Reno, Hawaii, Perth and Milan) and, having been to all of them, I know from first-hand experience that it is going from strength to strength.

Looking at this year’s program, it promises to be a very interesting week. If you have an interest in computational intelligence or games, take a look at http://www.ieee-cig.org/.