PATAT 2012: Practitioners Welcome

The PATAT (Practice and Theory of Automated Timetabling) conference officially started today, although there was a social event yesterday.
One of the innovative aspects of PATAT, which is reflected in the conference title, is that it attracts both academics and practitioners.

The 2012 conference (the ninth in the series) has probably attracted more practitioners than any previous conference. The practitioners include university staff who have the unenviable task of actually generating timetables of various types (for example, examination and course timetables), as well as companies that develop software for the timetabling community. Whilst not a complete list, the following companies are present at PATAT 2012.

These three companies are arguably some of the leading software companies in the timetabling sector and it is good to see them interacting with the scientific community.

Given the number of practitioners present at PATAT, the conference programme reflects this important element of the conference: a stream called Practical University Timetabling runs throughout, with presentations given by colleagues who work in timetabling offices or by software vendors.

It is good to see that conferences such as PATAT recognise the importance of including end users and software vendors in helping to direct the research agenda, which, in my view, can only be a good thing.

If you are interested, more PATAT posts can be seen here.

PATAT 2012 Conference

I am just about to head off to the PATAT 2012 conference. This year it is in Oslo which would normally be a short hop from the UK but when you live in Malaysia it is a 15 hour flight! This is the 9th PATAT (Practice and Theory of Automated Timetabling) conference in the series, which can be traced back to 1997 (see here, and a full list is given at the end of this post).

The venue for this year's conference looks superb and, although the weather might be a bit cooler than Kuala Lumpur, I am looking forward to visiting Oslo and Norway.

The conference organisers (this year the conference has been organised by SINTEF) have put together an excellent programme. Underpinning it are three plenary speakers (Gilbert Laporte, Greet Vanden Berghe and Paul McMullan), backed up by keynote talks from Ruth Drysdale and Lee Hollins. The main sessions comprise about 55 talks and a number of software demonstrations.

One of my co-authors is presenting one of our papers.

Jerry Swan, Ender Ozcan and Graham Kendall, Co-evolving add and delete heuristics.

PATAT would not be PATAT without a strong social programme, and that is certainly in evidence this time round.

So, it looks like being a good week, once the 15 hour flight is out of the way.

Finally, although this information is given on this page, for ease of reference, here is the complete list of PATAT conferences to date.

  1. The 9th conference will be held at Son, Norway, from 28th to 31st August, 2012.
  2. The 8th conference was held at Queen’s University Belfast, Northern Ireland, from 10th August to 13th August, 2010.
  3. The 7th conference was held at CIRRELT, Université de Montréal, Montreal, Canada from 19th August to 22nd August, 2008.
  4. The 6th conference was held at the Masaryk University, Brno, Czech Republic from 30th August to 1st September 2006.
  5. The 5th conference was held at Sheraton Station Square, Pittsburgh PA, U.S.A. in August 2004.
  6. The 4th conference was held at KaHo St.-Lieven, Gent, Belgium in August 2002.
  7. The 3rd conference was held at Fachhochschule Konstanz – University of Applied Sciences, Constance, Germany in August 2000.
  8. The 2nd conference was held at the University of Toronto, Canada in August 1997.
  9. The 1st conference was held at Napier University, Edinburgh, Scotland in September 1995.

Tracking Paper Downloads: Database

In my last post I outlined a few thoughts about tracking downloads of papers from the MISTA web site. Of course, the ideas can be used on any web site but I am particularly interested in MISTA at the moment.

I have now started to develop the database, which will be a MySQL database, updated via PHP.

The database design is still very much a work in progress, but my initial thoughts are to hold the following fields.

The first table is the paper downloads table. This will hold the following:

id: Auto incrementing index just to track the number of downloads.

bibtex: This is the bibtex key of the paper that was requested. In the future I might use the doi (Digital Object Identifier) but the bibtex key is the best way for me to uniquely identify a paper at the moment.

whenRequested: This is a time stamp indicating when the request was received.

whenRetrieved: This is a time stamp indicating when the paper was actually downloaded.

accessCode: This will be a link between when the paper is requested and when it is retrieved. I will talk more about this in a later blog.

givenName: This is the given name of the person requesting the paper. As I said on my previous blog, I may not actually use this.

familyName: This is the family name of the person requesting the paper. As I said on my previous blog, I may not actually use this.

affiliation: This is the affiliation (university or company) of the person requesting the paper. As I said on my previous blog, I may not actually use this.

email: This is the email address of the person requesting the paper. This field will definitely be used.

retrieved: This is a boolean flag, indicating if the paper has been retrieved. I could derive this from the retrieval date, so I suppose I am breaking at least one of the rules for defining a database, but I think a boolean flag is useful. I will outline the use of this flag in a later blog.


There will be another table (papers). This will hold three fields:

bibtex: This is a unique identifier (for this table) which links it to the downloads table (above). Again, I could use the doi but, for now, I will use the bibtex key.

title: This is the title of the paper.

timesDownloaded: This will maintain a count of the number of times that the paper has been downloaded. I could get it from the downloads table but having it stored in this table means that it is much quicker to access.


These are my thoughts so far. As I say, it is very much a work in progress and I have no doubt that it will change but, at least, it's a start.
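The two tables above can be sketched in SQL. This is only an illustration of the design as described so far: I use an in-memory SQLite database as a stand-in for MySQL, and the column types are my assumptions (only the field names come from the lists above).

```python
import sqlite3

# In-memory SQLite stand-in for the planned MySQL schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE papers (
    bibtex          TEXT PRIMARY KEY,   -- bibtex key; might become the doi later
    title           TEXT NOT NULL,
    timesDownloaded INTEGER DEFAULT 0   -- denormalised counter for quick access
);
CREATE TABLE downloads (
    id            INTEGER PRIMARY KEY AUTOINCREMENT,
    bibtex        TEXT REFERENCES papers(bibtex),
    whenRequested TIMESTAMP,
    whenRetrieved TIMESTAMP,
    accessCode    TEXT,                 -- links the request to the retrieval
    givenName     TEXT,
    familyName    TEXT,
    affiliation   TEXT,
    email         TEXT NOT NULL,
    retrieved     INTEGER DEFAULT 0     -- boolean flag
);
""")

# Record a request, then (later) the actual retrieval.
conn.execute("INSERT INTO papers (bibtex, title) VALUES (?, ?)",
             ("kendall2010", "Scheduling English Football Fixtures"))
conn.execute(
    "INSERT INTO downloads (bibtex, whenRequested, accessCode, email) "
    "VALUES (?, CURRENT_TIMESTAMP, ?, ?)",
    ("kendall2010", "abc123", "someone@example.com"))
conn.execute(
    "UPDATE downloads SET retrieved = 1, whenRetrieved = CURRENT_TIMESTAMP "
    "WHERE accessCode = ?", ("abc123",))
conn.execute(
    "UPDATE papers SET timesDownloaded = timesDownloaded + 1 "
    "WHERE bibtex = ?", ("kendall2010",))

count = conn.execute(
    "SELECT timesDownloaded FROM papers WHERE bibtex = 'kendall2010'"
).fetchone()[0]
print(count)  # 1
```

The same `CREATE TABLE` statements would carry over to MySQL with only minor changes (e.g. `AUTO_INCREMENT` instead of SQLite's `AUTOINCREMENT`).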


Tracking Paper Downloads for MISTA

The fifth MISTA conference (www.mistaconference.org) has just taken place in Arizona. The web site, I think, looks pretty good, but there is a lot more that I would like to do with it. For example, I should have all the papers available for download and, over the next few months, I am going to put in the effort required to make them all available. I guess that there will be at least 500 of them (including abstracts), which, I believe, would be a useful resource for the scheduling community.

But, when I do make them available I would like to do a few things:

  1. I would like to know how many times a paper is being downloaded. This is useful information for the authors as well as for the MISTA organisers.
  2. I would like to collect email addresses of potential conference delegates. I know that people may not like this sort of thing, but we will be up front about it and, in any case, they are getting the paper for free.

So, for a few days now, I have been sketching out a few ideas as to how I could get this to work. I think I would have the same look/feel as my own publications (see http://graham-kendall.com/publications/index.php?type=all) where I list my publications but, for each one, you can go to another page and see all the details about that paper; including being able to download it.

The difference with the MISTA web site would be that when you wanted to download a paper a couple of things would happen. Firstly, you would be asked for your email address (and perhaps name and company/institution, but that might be a bit much). The site would also say that you will be emailed about future MISTA conferences, and that downloading the paper indicates that you agree to this. Once you had entered all the required information, you would be sent an email with a link in it which would enable you (for one time only) to download the paper.

That’s the idea. Now all I need to do is design the database and write the associated PHP scripts. Oh, and get all the papers in a form that they can be downloaded, which is actually the most time consuming part.
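The one-time link could work along the following lines. This is a Python sketch of the idea only (the real scripts would be PHP backed by the MySQL database); the function names and the download URL are hypothetical.

```python
import secrets

# In-memory stand-in for the downloads table.
_codes = {}

def issue_access_code(bibtex, email):
    """Generate a single-use code and return the link that would be emailed out."""
    code = secrets.token_urlsafe(16)
    _codes[code] = {"bibtex": bibtex, "email": email, "used": False}
    # Hypothetical URL, for illustration only.
    return f"http://www.mistaconference.org/download.php?code={code}"

def redeem(code):
    """Permit the download exactly once; reject unknown or already-used codes."""
    entry = _codes.get(code)
    if entry is None or entry["used"]:
        return None
    entry["used"] = True
    return entry["bibtex"]
```

The key property is that `redeem` succeeds only on the first call with a given code, so the link in the email cannot be shared around indefinitely.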

Views welcome.


PATAT 2010: Googlemaps and Multimaps

In an earlier PATAT conference blog I described the multi-objective methodology that I used. I finished by saying that basing any methodology on travel distances between football clubs means you have to somehow calculate the distances between all the clubs (or at least those in the same division).

When I first started collecting all this data I decided to use one of the motoring organisation web sites. The three main contenders were the AA, the RAC and Green Flag. All of these web sites enable you to enter a start and end postcode (zip code if you are stateside) and they return various information such as a map of the route, the distance and the estimated time that it will take you to drive between the two locations. Of course, I was particularly interested in the distance.

To collect all the data I finally decided to use Green Flag. This was simply because when you enter the two postcodes and hit the “Find Route” button this information is stored in the URL that is shown in the address bar. The other two (the AA and RAC) did not do this. As none of these services had an API, being able to use a URL that contains this information was very useful; otherwise collecting the data would have been even more laborious than it was (even so, it was still labour intensive). Being able to use the URL meant that you could build up the URLs using something like Excel (which is what I did) and then upload them to a web site. It was then a case of simply clicking each link and recording the information. It was still a very long process, but not as long as having to type in each postcode separately.
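The URL-building step can be scripted rather than done in Excel. To be clear, I am not reproducing the actual Green Flag URL here; the domain and query parameter names below are placeholders, and the postcodes are just examples, so this only illustrates the idea of generating one clickable link per postcode pair.

```python
from urllib.parse import urlencode

def route_url(origin, destination):
    """Build a (hypothetical) route-finder URL for a postcode pair."""
    query = urlencode({"from": origin, "to": destination})
    return f"http://example.com/route?{query}"

# Example start/end postcode pairs (one row per club pairing).
pairs = [("NG7 2RD", "B6 6HE"), ("NG7 2RD", "L4 0TH")]

# One anchor tag per pair, ready to paste into a web page and click through.
links = "\n".join(
    f'<a href="{route_url(a, b)}">{a} to {b}</a>' for a, b in pairs
)
print(links)
```

For a 20-club division this generates all 190 pairings in a fraction of a second; the manual part, sadly, remains clicking each link and recording the distance.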

I needed a better method, especially as I wanted to collect data each season and, as the current (very manual) process takes me many hours, I typically delayed doing it.

The candidates were essentially Google Maps or Multimap (which looks like it has now been taken over by Bing); at least they both offer an API and are free. Of course, there are many paid services around but they tend to be very expensive.

In future blogs, I'll let you know how these work and what use I made of them.

Other posts about PATAT can be seen here.

PATAT 2010: Multi-objective Sports (Football) Scheduling


At the recent PATAT (8th International Conference on the Practice and Theory of Automated Timetabling) conference I was fortunate enough to be invited to give a plenary presentation.

My talk focussed on sports scheduling. Indeed, the title was “Scheduling Football (Soccer) Fixtures: Progress Made to Date and Future Challenges”. I focussed on the conflicting objectives of trying to minimise travel distances whilst also trying to reduce pair clashes (which can be considered as local derbies for the sake of this discussion).

This is a classic case of a multi-objective problem, where minimising one objective causes the other to increase, and vice versa. It is not (usually) possible to minimise both objectives; instead you are looking for a trade-off. The trade-off solutions are plotted on a Pareto front, from which a user can decide which solution is best.
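Extracting a Pareto front from a set of candidate schedules is straightforward to sketch. The numbers below are made up purely for illustration; each pair is a hypothetical (total distance, pair clashes) score for one candidate fixture schedule.

```python
def pareto_front(solutions):
    """Return the non-dominated (distance, clashes) pairs: a solution is kept
    unless some other solution is at least as good on both objectives and
    strictly better on at least one (i.e. a different point)."""
    front = []
    for s in solutions:
        dominated = any(
            o[0] <= s[0] and o[1] <= s[1] and o != s for o in solutions
        )
        if not dominated:
            front.append(s)
    return sorted(front)

# Hypothetical (distance, pair-clash) values for candidate schedules.
candidates = [(100, 9), (120, 4), (110, 6), (130, 4), (105, 9)]
print(pareto_front(candidates))  # [(100, 9), (110, 6), (120, 4)]
```

Here (130, 4) is dominated by (120, 4) and (105, 9) by (100, 9); the three remaining points are the trade-offs a user (e.g. the football authorities) would choose between.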

In the plenary talk, I showed that it was possible to reduce the pair clashes such that (sometimes) the distance did not significantly increase. This is a potentially useful result as it means that supporters do not have to travel any further (statistically) and policing costs are reduced, as fewer local derbies have to be policed.

I should say that this result is only work in progress at the moment in that it has not been verified by the football authorities or the police, but I would hope that it would be of interest to them.

In the same talk, I also discussed how I collected the data for this work (which is essentially the travel distances between football clubs). This involved using the Google Maps and Multimap APIs. I'll talk about this in the next blog. I'll also provide a link to the paper (as I don't have it to hand at the moment). But, if you are interested, the reference is:

Kendall G., McCollum B., Cruz F. and McMullan P. (2010) Scheduling English Football Fixtures: Consideration of Two Conflicting Objectives. In proceedings of the 8th International Conference on the Practice and Theory of Automated Timetabling (PATAT 2010), 11-13 August 2010, Queen’s University, Belfast, UK, pp 1-15

and you can access the paper here.

Other PATAT posts can be seen here.

2010 Pac-Man Competition at CIG 2010


Last week I attended the IEEE Conference on Computational Intelligence and Games (CIG 2010). This conference (it was actually a symposium in the early days) was started by Simon Lucas and myself in 2005. I also co-chaired the conference with Sushil Louis in 2006. The conference has been run every year since 2005, with the latest one being held in Copenhagen, chaired by Georgios N. Yannakakis and Julian Togelius.

This year I had a paper that was developed as part of a 2nd Year Group Project (the team members are shown in the picture).

As part of the Computer Science undergraduate degree at the University of Nottingham, second year students are assigned to groups and given a software engineering task to work on. Last year, I set the following:

Your task is to write a computer program that can play the game of Pacman, without human intervention. This is a challenging task and, without some support, would perhaps be impossible for a second year group project. However, this task has been an ongoing competition for a number of years and there is lots of information/support available, as well as examples as to what can be achieved.

Your starting point should be the web pages of Professor Simon Lucas at the University of Essex, who organises the competition. Therefore, take a look at this web page. This is ESSENTIAL reading and you should study this page (and associated links) before our first meeting. You should also make regular visits to the page as I know that Professor Lucas is planning to update it on a regular basis. Using the information on this web site you should be able to get a system up and running quite quickly and then you have to develop your own algorithms to produce the best automated player that you can.

I would expect you to carry out (at least) the following tasks, with the first two feeding directly into your literature review:

  1. By referencing the competition entries, find out what algorithms appear to have worked well in the past.

  2. Investigate and draw up a set of algorithms that you think could be used as a Pacman controller. These algorithms might be based on various criteria such as those identified in 1 (above): “Manhattan Distance”, “Straight Line Distance”, “Trying to eat the ghosts”, “Trying to avoid the ghosts”, “Trying to eat the fruit”, “Maximising the score whilst minimising your chances of being eaten”, etc. There is probably not one good overall strategy and you might consider changing strategies as the game state changes. You might also want to consider how much you plan ahead and how much you just make quick decisions, given that this is a real time game.

  3. As part of the supplied toolkit you are provided with screen capture software (and a window showing a representation of the captured game state), the game screen itself and a small screen showing the current direction of the Pacman character. As part of this project, I would also like you to develop another GUI element which enables you to select from amongst the algorithms that you have implemented, keeps track of their high scores, enables you to choose how “adventurous” the player will be, enables you to mix/match the algorithms, etc. Part of the project will be to design this GUI element, deciding what role it should play in the overall software architecture. You can then use the captured information as a basis for the analysis in your final dissertation.

  4. Implement the int move(GameState gs) method that is provided in the sample toolkit in order to test out the various game playing algorithms that you have investigated in points 1 and 2 above.

You might want to look at the YouTube video of the competition entry from WCCI 2008 (Google “YouTube WCCI pacman”). To give you some indication of the current state of the art, in the most recent competition (run in Milan in August 2009) the winning entry achieved a score of over 30,000 and reached (from memory) level 5 (it might have been level 4). If you can get anywhere near that you will be doing very well.
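To make the Manhattan versus straight-line distance criterion concrete, here is a small sketch of greedy target selection (head for the nearest pill). The toolkit itself is Java, so this Python version is illustrative only, and the grid coordinates are invented; note the two metrics can pick different targets.

```python
import math

def manhattan(a, b):
    """Grid distance: moves along corridors, no diagonals."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def straight_line(a, b):
    """Euclidean distance, as the crow flies."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def nearest_pill(pacman, pills, distance=manhattan):
    """Greedy target selection: head for the closest remaining pill."""
    return min(pills, key=lambda p: distance(pacman, p))

# The two metrics can disagree about which pill is "closest".
pacman = (0, 0)
pills = [(3, 3), (0, 5)]
print(nearest_pill(pacman, pills))                 # (0, 5): Manhattan 5 vs 6
print(nearest_pill(pacman, pills, straight_line))  # (3, 3): ~4.24 vs 5
```

Neither metric accounts for walls, which is one reason a single fixed strategy is unlikely to be best and switching strategies with the game state is worth considering.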

The project group did so well that the team not only won the prize for the best project but the School also agreed to fund one of the team members to attend the conference to enter the competition. As a result, we also wrote a paper which was accepted at the conference and the student presented it (a big undertaking for an undergraduate student, but he did very well).

In the competition, we came a VERY respectable third. In second place were last year's winners. In first place was a new team who used an Ant Colony based method.

The difference between the top three entries was not that large, with the winning entry getting about 21,000 points.

In fact, during pre-competition testing our team achieved scores of over 22,000 on many occasions, with our best ever score being just over 30,000 (I am sure that the other entrants also have similar sob stories to tell :-)).

If you are interested, the reference for our paper is:

Bell N., Fang X., Hughes R., Kendall G., O'Reilly E. and Qiu S. (2010) Ghost Direction Detection and other Innovations for Ms. Pac-Man. In proceedings of the 2010 IEEE Conference on Computational Intelligence and Games (CIG’10), 18-21 Aug 2010, Copenhagen, Denmark, pp 465-472

It can be downloaded from here.

The references for the first and second placed entries are:

  1. Emilio M., Moises M., Gustavo R. and Yago S. (2010) Pac-mAnt: Optimization Based on Ant Colonies Applied to Developing an Agent for Ms. Pac-Man. In proceedings of the 2010 IEEE Conference on Computational Intelligence and Games (CIG’10), 18-21 Aug 2010, Copenhagen, Denmark, pp 458-464
  2. Thawonmas R. and Ashida T. (2010) Evolution Strategy for Optimizing Parameters in Ms Pac-Man Controller ICE Pambush 3. In proceedings of the 2010 IEEE Conference on Computational Intelligence and Games (CIG’10), 18-21 Aug 2010, Copenhagen, Denmark, pp 235-240

More details about the Pac-Man competition can be found at Simon Lucas’ Pac-Man page.

Summer Conferences

It’s been a busy couple of weeks as I have been attending two scientific conferences. Last week I was at the PATAT (Practice and Theory of Automated Timetabling) conference in Belfast and I have just returned from the IEEE Conference on Computational Intelligence in Games (CIG 2010) in Copenhagen.

The two conferences cover very different areas (not surprisingly, one is about timetabling and the other is about computer games).

I’ll report more on each conference in blogs in the next few days.

INFORMS 2010: Buenos Aires

I have just returned (well, returning actually – I am at the airport) from the INFORMS conference in Buenos Aires. In fact, we stayed on a few days; a very good colleague of mine is Argentinian and we toured around the north of the country. The highlight, undoubtedly, was Iguazu Falls. They are spectacular (see picture for just a small part of the falls).

This is only the second INFORMS conference I have been to (the previous one was in Puerto Rico in 2007). I find them very good.
The tutorials are especially useful and the one by Mike Trick on Benders Approach for Hard Problems was particularly beneficial. At the time of writing, the PowerPoint slides for this presentation were available from Mike's web site (PPTX and PPT). If I can ever get my head around the ideas underlying Benders' approach, I can see it being a very useful technique for a variety of problems.

The 2009 IEEE Symposium on Computational Intelligence and Games: Report

I have just spent the last week at the 2009 IEEE Symposium on Computational Intelligence and Games in Milan (see last post).

It has been an excellent week both from a scientific point of view and from a networking point of view (I have even added a few new Facebook friends as a result of the symposium).

There really is some fantastic work going on around the world. The plenaries and tutorials showed just some of this. The work being done by people such as Ken Stanley, Michael Mateas, Yngvi Björnsson, David Stern (Microsoft Research), Stefano Lecchi (Milestone) and James Vaccaro is very impressive and the products they produce have the commercial sector (some of which work in this area anyway) really taking notice.

Of course, there is other excellent work being done, by a great many other people, and to list them all would mean listing most of the presentations given at the conference. So, although I have highlighted just a few presentations, it does not detract from all the other work being done in this area. Indeed, the newly established IEEE Transactions journal in this area is a testament to how buoyant the area is.

Another highlight at CIG’09 was the competitions. This year they really seem to have come of age. In previous years (at both CIG and other conferences) the competitions were very popular, and well received, but it just had a slightly different feel this time around.

At this year’s conference there were four competitions:

I will try and blog about each one in the future.

We hope that the presentations from the symposium will be available to view soon. They were all recorded but there are some copyright issues to resolve.

The symposium will be run again in 2010. The venue has just about been decided but it needs to be rubber stamped so I cannot say on a public forum where it will be.