By coincidence, I had just read another article about an analysis of sporting rules from the perspective of Operations Research, written by a good friend of mine – Mike Wright from Lancaster.
The two articles had some similarities and, after reading Mike Wright’s article, I was already planning to write a follow-up. After reading Liam’s Conversation article, I posted some comments and he was kind enough to respond.
One thing led to another and we agreed to write a paper together.
I am pleased to report that this article (“When Sports Rules Go Awry“) has just been accepted in the European Journal of Operational Research.
The abstract of the paper reads:
“Mike Wright (Wright, M. OR analysis of sporting rules – A survey. European Journal of Operational Research, 232(1):1–8, 2014) recently presented a survey of sporting rules from an Operational Research (OR) perspective. He surveyed 21 sports, considering the rules of sports and tournaments and whether changes have led to unintended consequences. The paper concludes: “Overall, it would seem that this is just a taster and there may be plenty more such studies to come”. In this paper we present one such study.
This is an interdisciplinary paper, which cuts across economics, sport and operational research (OR). We recognize that the paper could have been published in any of these disciplines but for the sake of continuity with the paper that motivated this study, we wanted to publish this paper in an OR journal. We look at specific examples where the rules of sports have led to unforeseen and/or unwanted consequences. We hope that the paper will be especially useful to sports administrators, helping them to review what has not previously worked and also encouraging them to engage with the scientific community when considering making changes.
We believe that this is the first time that such a comprehensive review of sporting rules, which have led to unexpected consequences, has been published in the scientific literature.”
When Sports Rules Go Awry is really a review of sporting rules that were introduced by sports administrators but led to unintended consequences – for example, situations where it is sensible to score an own goal. The paper has several examples of tanking (the act of deliberately dropping points or losing a game in order to gain some other advantage).
We hope that the paper will be of interest to anybody who likes sports, as well as sports administrators.
It is pleasing to note that The Conversation was instrumental in making this paper happen. If it were not for them, I would have been unaware of Liam’s work. The paper may still have been written (either by Liam or me) but it would not have been as good as the paper that has now been accepted.
If you are interested, you can see my Conversation articles here and Liam’s articles are here.
This post also appeared on the University of Nottingham blog pages.
Today (14:00, 07 Mar 2016 in F1A09 at the University of Nottingham Malaysia Campus) I am giving a two hour lecture on Computers and Game Playing.
This lecture could not have come at a better time.
At the end of January, Google’s DeepMind reported that AlphaGo had beaten the top European Go player (Fan Hui) 5-0. This was probably about ten years earlier than most people expected a computer to defeat a human expert at Go. This YouTube video gives a very good overview of the achievement.
This development is, in my view and the view of many others, one of the most significant landmarks since Garry Kasparov was defeated by Deep Blue in May 1997.
The defeat of Fan Hui has generated so much interest that the World’s best player (Lee Sedol) has agreed to play AlphaGo in March (9th – 15th) 2016.
My lecture is nicely sandwiched between the recent defeat and the upcoming match against the current best player in the world.
In the lecture I will still cover the material that I need to get across (such as minimax and alpha-beta search), but it would be remiss of me not to talk about the recent successes of AlphaGo and the technologies that have led to this remarkable achievement.
Indeed, in my next lecture, I will be talking about Deep Blue (Chess), Chinook (Checkers) and Blondie24 (Checkers), where some of the methodologies that led to their successes will be discussed. No doubt AlphaGo will also get a mention and, by then, we should have some more knowledge about how it has performed against Lee Sedol.
It’s a great time to be a student who is interested in game playing and artificial intelligence!
Finally, a plug. I am the Editor-in-Chief of the IEEE Transactions on Computational Intelligence and AI in Games (TCIAIG) and we would welcome any articles that are motivated by the recent successes in Go.
In 1906 Francis Galton was at a country fair where there was a competition to guess the weight of an ox. He took all 787 guesses and calculated the average: 1,197 pounds. The actual weight? 1,198 pounds! In effect, the wisdom of the crowds gave a near-perfect answer. This was the start of The Wisdom of the Crowds.
Our Graduate School held its Christmas Party yesterday (18th Dec 2015) and they were kind enough to invite me. When I got there, I noticed that there was a jar of sweets, inviting people to guess how many sweets were in the jar. This reminded me of the story above.
When the competition ended, the person who had the closest guess would win the jar of sweets.
The jar held 149 sweets. The closest guess (by Oppong Kyekyeku) was 130; nineteen away – but good enough to win the prize.
When we looked at all the entries (see below), there were 22 in total, with an average of 152 (actually 152.409). That is just three away.
To be honest, with such a small sample size, I was surprised that the wisdom of the crowds (well, of a small gathering) had beaten every other guess and got within four of the right answer.
Francis Galton would have been proud!
(This post also appeared on the University of Nottingham Research and Knowledge Exchange blog)
You’re on a game show and the host asks you to pick one of three doors. Behind one of them is the star prize: a sports car. Behind the other two are goats. Once you have made your pick, the show host opens one of the other doors – always revealing a goat. The host then asks if you are happy with your original choice, or whether would you like to switch.
Would you switch? Think about it before reading on.
This problem was posed to the mathematical community in a letter to The American Statistician in 1975, but was popularised by Marilyn vos Savant in 1991.
Marilyn’s article suggested that the right thing to do is to switch. If you read the article – and please do – you will see that Marilyn received many very negative, some would say rude, comments about her conclusion. Here are some examples:
“You blew it, and you blew it big! Since you seem to have difficulty grasping the basic principle at work here, I’ll explain. After the host reveals a goat, you now have a one-in-two chance of being correct. Whether you change your selection or not, the odds are the same. There is enough mathematical illiteracy in this country, and we don’t need the world’s highest IQ propagating more. Shame!“
“Since you seem to enjoy coming straight to the point, I’ll do the same. You blew it! Let me explain. If one door is shown to be a loser, that information changes the probability of either remaining choice, neither of which has any reason to be more likely, to 1/2. As a professional mathematician, I’m very concerned with the general public’s lack of mathematical skills. Please help by confessing your error and in the future being more careful.“
“Maybe women look at math problems differently than men.“
“You are the goat!“
It is incredible that people could write such comments – and it is now accepted that Marilyn was correct and the right thing to do is to switch doors. There would have been some red faces and I hope they apologised.
If you are faced with three doors and choose one of those at random, I think we’d all agree that you have a one in three (1/3) chance of choosing the car.
If one of the other doors is opened, showing a goat, one argument says that switching doors now gives you a 1/2 chance of winning the car.
Another argument says that once you have chosen a door, you have a 1/3 chance of winning the car. If you could choose the other two doors instead, you would have a 2/3 chance of winning the car. If you are now shown one of those doors – revealing a goat – then by switching you still have a 2/3 chance of winning the car. Of course, you will not always win a car, but over many games, by switching you will win 2/3 of the time, compared with only winning 1/3 of the time if you do not switch.
There are two ways that might persuade you that switching is the right thing to do.
Increase the number of doors to 50. When you choose a door you have a 1/50 chance of winning the car, which means there is a 49/50 chance that the car is behind one of the other doors. The host now opens 48 of the remaining doors, always revealing goats, leaving just your original door and one other. If you stick with your original door you still have a 1/50 chance of winning the car. If you switch, you have a 49/50 chance of winning the car.
We can run a simulation. In preparing this article, I wrote a Java program to do that. We ran the Monty Hall Problem 5,000 times, both switching doors and not switching. With three doors, by not switching, the car was won 1,702 times, compared to 3,310 times when switching. These are close enough to the 1/3 and 2/3 win ratios that we would expect. If we now change the number of doors to 50, by not switching we win the car 99 times; by switching, we win 4,914 times. Again, this is close enough to the theoretical figures to persuade us that switching is the correct thing to do.
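The original program was written in Java; the sketch below is a minimal Python equivalent, so its exact counts will differ from those reported above. It exploits the fact that when the host opens every unchosen door except one, always revealing goats, switching wins exactly when the first pick was wrong.

```python
import random

def monty_hall(num_doors=3, switch=True, trials=5000):
    """Simulate the Monty Hall game, returning the number of cars won.

    The host opens every unchosen door except one, always revealing goats,
    so switching wins exactly when the initial pick was wrong.
    """
    wins = 0
    for _ in range(trials):
        car = random.randrange(num_doors)     # door hiding the car
        choice = random.randrange(num_doors)  # contestant's first pick
        if switch:
            wins += (choice != car)
        else:
            wins += (choice == car)
    return wins

print(monty_hall(3, switch=False))   # about 1/3 of 5,000 trials
print(monty_hall(3, switch=True))    # about 2/3 of 5,000 trials
print(monty_hall(50, switch=True))   # about 49/50 of 5,000 trials
```

Run it a few times and the counts will wander a little, but the switch/no-switch gap is unmistakable.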
Hopefully, you are now persuaded that you should switch doors when faced with a similar situation. Good luck!
Scientific publishing has undergone a revolution in recent years – largely due to the internet. And it shows no sign of letting up as a growing number of countries attempt to ensure that research papers are made freely available. Publishers are struggling to adapt their business models to the new challenges. But it is not just the publishers who struggle.
Peer-reviewed publications are extremely important for academics, who use them to communicate their latest research findings.
When it comes to making decisions about hiring and promotion, universities often use an academic’s publication record. However, the use of publication consultants and increasingly long lists of authors in certain disciplines are changing the game.
So where will it all end?
When a scientific paper is published, the authors have an obligation to report who has contributed. This recognition can take the form of authorship, acknowledgements or by citing the work of others. Most publishers will provide details about how to recognise various types of contribution. For example, the Institute of Electrical and Electronics Engineers (see page 14, section 6) says that a statistician helping with analysis, a graphic artist creating images or a colleague reviewing an article before submission should all be recognised in the acknowledgements section of an article.
However, recent years have seen a growing industry in which publication consultants offer to help authors, or even institutions, to get their work published. The consultants charge a fee for this service. The help available ranges from proof reading, data collection and statistical analysis to helping with the literature review and identifying suitable journals to approach for publication.
We should ask why academics need these kinds of services. Surely, institutions already provide this type of support to their less experienced researchers – and more experienced researchers, especially those with a PhD, should be qualified to carry out these activities themselves. After all, carrying out research and writing scientific papers is an essential part of PhD training.
If researchers do feel the need to use the services of a consultant, it should be made transparent, either by including the consultant as an author on the paper or at least by acknowledging their services – otherwise a prospective employer, a promotion panel or future collaborators can never be sure whether somebody else helped with the paper. It might also be appropriate for publication consultants to provide an annual return detailing the papers on which they have consulted.
Growing author lists
To increase the transparency of academic publishing it may therefore seem that adding more people to a paper is the way forward. But there is also another way of looking at it. Earlier this year, Physical Review Letters set a record when it published a paper with 5,154 authors. Such huge author lists are becoming increasingly common. In most disciplines this would seem excessive, and we might ask whether all these authors really contributed to the paper.
Some have argued that this development is threatening the entire system by which academic work is rewarded. So what should we do about it? A radical suggestion would be to remove authors from papers completely and replace them with project names. Another suggestion, already practised by journals such as PLOS ONE, is to list the contribution of each author. Whatever your view, there can be little doubt that different disciplines use different metrics to measure contribution.
The traditional way to publish a scientific article is to submit it to a journal and, if accepted, you sign over the copyright to the publisher. Your article is then sold via institutional subscriptions or individual payment when it is downloaded.
There are problems with this model: a common objection is that the people who do all the work – the authors and reviewers – get no payment, and yet the copyright is assigned to a publisher. Worse, the authors, reviewers and taxpayers (who funded the research to start with) then have to pay to read the article. Of course, the publishers do have costs, such as staff, printing, web site maintenance, registering DOIs etc. – and they are typically companies that need to make a profit.
Open Access publishing is a different model, where the copyright remains with authors, who pay the journal to publish their articles which are then freely available. Launching this model in the UK, former science minister David Willetts argued it would boost the transparency of research institutions. Giving individuals, as well as industry, the “right-to-roam” academic journals would help people make better-informed choices (for example about their education) and could unleash the UK’s entrepreneurial spirit, he argued.
When open access was first introduced it initially had a reputation for vanity publishing – but as funding councils have embraced the idea it is becoming more mainstream. The UK funding agencies (Research Councils UK) have a policy that states that any outputs from research that it funds should be available via open access. Many other countries now also follow this model.
Open access has a few variants. Gold open access is the model described above, where the paper is freely available on the journal’s website. There is also a Green option where you do not pay for open access but you are allowed to archive a version of your paper – typically the last version you submitted before it was typeset – on your web site, or in an institutional repository, usually after some delay. Institutions have to decide whether to adopt a Gold or a Green open access policy. The SHERPA/RoMEO service is very useful, enabling you to find out a journal’s position on open access.
Open access still struggles with its reputation. Only recently there was a report in the journal Science that: “Predatory publishers earned $75 million last year”.
The internet and open access, combined with the publish-or-perish culture, are changing the industry, arguably, faster than at any other time in history. What will it look like in ten years’ time?
I suspect that open access will become the norm, forcing universities to think about how to manage it and how to divert library funds from journal subscriptions to researchers, enabling them to pay the open access charges. There is also the challenge of what to fund: all journals, only journals with an impact factor, or should each discipline be considered individually?
The contribution of the authors may also need to become more transparent, not only in reporting the use of publication consultants but also in noting how each author has contributed. Perhaps it is a radical idea, but the percentage contribution of each author could be given, which would also remove the problem of author ordering.
The underpinning idea behind scientific publishing is peer review, in which research is forensically scrutinised by experts in the field before it’s published. But the process should also be transparent and fair. At the moment, there could be room for improvement.
We have just had an article accepted in Science and Engineering Ethics which questions the need for publication consultants. Or, at least, if their services are used their contribution should be recognised either by being an author on the paper or by stating what their contribution was in the acknowledgments section.
The types of services these companies/individuals offer range from proof reading, conducting literature reviews and responding to reviewers’ comments, to finding suitable journals and carrying out statistical analysis.
Our argument as to why these services are not needed, or should be acknowledged, is essentially three-fold.
If you are an experienced researcher (e.g. have been awarded a PhD) then you should be trained in carrying out all aspects of research and should not need to call on an outside agency.
Early career researchers should have the support of their institution. They are paying fees and/or are registered students. In either case, they should be able to expect support from their supervisor, colleagues, Graduate School, mentors etc., rather than having to call on the services of a consultant, for which they have to pay.
If researchers have used a publication consultant and this has not been acknowledged, then others reading the paper will not know that other people contributed to it. This may matter to promotion committees, to those considering somebody for a job, when you are looking for collaborators, etc.
We believe that if a publication consultant has been used this should be acknowledged on the paper, detailing what assistance they gave. Moreover, if the support provided would normally warrant authorship on the paper then the consultant should appear as an author.
Finally, we also suggest that publication consultants should be required to submit an annual return (perhaps just to the journal editor) stating which papers they have provided assistance for.
Once our paper is in press, we’ll provide the details.
I am the editor-in-chief of a journal (not the journal you see here – that is a (very nice) public domain image from 1869) and, as such, I see a lot of submissions. One of the biggest frustrations I face is when researchers submit papers that are simply not appropriate for the journal.
Sometimes the article title shares just one word with the title of the journal and, sometimes, the article title, keywords, abstract, content, references etc. are so obviously inappropriate for the journal that you wonder why they chose to submit to it at all.
I would love to know why researchers do this.
The paper will inevitably be rejected, and it takes my time, the administrative staff’s time, the Associate Editor’s time (the policy of the journal is that any reject must have the view of at least two people) and the researchers’ time. I do not see any benefit to any of those people, and it simply means that publication is delayed even longer as the authors wait for a reject decision.
The only good thing that comes out of it is that the journal I represent has a lower acceptance rate than it should have, which some people see as a sign of quality.
The 2015 UK General Election looks like being one of the closest, and hardest to predict, for many years. With 650 seats being contested, one party needs to win more than half the seats (326) to be able to form a government. Most, if not all, polls are predicting a hung parliament, with the likelihood that the UK will have another coalition government, though what form that will take is open to much debate.
It is not difficult to find predictions for the election result. They tend to fall into two categories; the percentage share of the vote or the number of seats that will be won by each party. Of most interest is the number of seats that will be won by each party, as this is what determines the formation of the next government.
Wisdom of the Crowds
In 1907, Francis Galton reported in Nature an event that had taken place at a country fair, where around 800 people were asked to guess the weight of an ox. The average guess was 1,197 pounds. The actual weight was 1,198 pounds – close enough to the average guess to be considered just about spot on. Importantly, many of the people who participated could be considered experts, such as farmers and butchers, but many others were far from experts, simply people attending the fair. Also importantly, not a single person guessed the correct weight; only one person guessed 1,197 and two people guessed 1,199.
This concept of the Wisdom of the Crowds was popularised in a 2004 book by James Surowiecki, arguing that the opinion of a large number of people will do better than the judgement of a few experts.
2010 General Election
Wisdom of the Crowds was used to predict the 2010 general election. Martin Boon, of ICM Research, showed that “the Wisdom of Crowds approach at the 2010 general election would have produced the most accurate final pre-election prediction.”
Henretty and Jennings
Chris Henretty and Will Jennings have used the Wisdom of Crowds to predict the number of seats for each major party in the 2015 General Election. They surveyed 2,338 people, with 537 responding. They asked two questions: one about the percentage share of the vote and one about the number of seats for the major parties. Their report (published on 03 Mar 2015) gives their predicted seats for each party.
Drawing inspiration from this study, we utilise other predictions to see how they compare with the study of Henretty and Jennings. Our study looks at 24 different predictions, aggregating them to produce our own.
Our data is drawn from a variety of sources.
The Henretty and Jennings study is used, recognising that it incorporates over 500 individual predictions.
A recent BBC Panorama program asked Nate Silver for his predictions. Silver is an American statistician who has successfully predicted the outcome of the last two US presidential elections.
Data was taken from several spread betting firms, taking the middle of the spread as their prediction.
The London School of Economics asked a number of election forecasters at a conference they held on the 27 Mar 2015 for their predictions. These have been incorporated into our predictions below.
Some newspapers publish predictions, and these are used in our model.
Some on-line prediction web sites were used.
We decided against using the 2010 results, or the current parliamentary standings, although we show the predictions using these two additional pieces of data, just for completeness.
One issue that has to be considered is missing data. Predictors do not always provide predictions for all the parties, but provide an aggregated figure in Others for some of the parties. Some predictors also exclude Northern Ireland so they only supply 632 predictions, rather than the full 650. We work around this as best we can.
In order to calculate our predictions, we averaged all the polls under consideration and then normalised the figures for each party so that the total number of seats adds up to 650.
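As an illustration of this averaging-and-normalising step, here is a minimal Python sketch; the party names and forecast numbers are made up for the example and are not the 24 real predictions used in this post.

```python
def aggregate_predictions(predictions, total_seats=650):
    """Average several seat forecasts per party, then rescale the averages
    so that the aggregated prediction sums to total_seats."""
    # predictions maps party -> list of individual seat forecasts
    means = {party: sum(v) / len(v) for party, v in predictions.items()}
    scale = total_seats / sum(means.values())
    return {party: round(m * scale) for party, m in means.items()}

# Illustrative (made-up) forecasts from three hypothetical predictors
forecasts = {
    "Con":   [280, 285, 275],
    "Lab":   [270, 268, 272],
    "SNP":   [50, 48, 52],
    "LD":    [26, 24, 28],
    "Other": [30, 28, 26],
}
print(aggregate_predictions(forecasts))
```

Note that rounding can leave the total a seat or two away from 650 for some inputs; a fuller implementation would hand any remainder to the parties with the largest fractional parts.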
Our predictions are shown in the table below. The Excl. 2010 column shows the predictions when the 2010 results or the current parliamentary standings are not taken into account. The Incl. 2010 results are shown just for comparison.
The two sets of figures are reasonably close with the obvious differences being the higher prediction of the SNP and the lower prediction of the Lib Dems, which reflects the (potential) changing fortunes of the two parties since the last general election.
I guess, not surprisingly, we are also predicting a hung parliament, with the Conservatives having a slight lead over Labour. If our predictions are accurate, a coalition with the SNP would give a combined total of 325 seats – not quite the 326 needed for an overall majority. Now that would be interesting!
I have been using Twitter for a few years now, but it has been a while since I blogged on this topic.
When I started tweeting I knew that I would not be able to tweet every day, and weeks (or even months) could go by without me tweeting, which, to me, would not give a very good impression.
Before I signed up I looked at how I could tweet automatically. You quickly come across services such as Hootsuite, which enable you to schedule tweets. This is a very nice service, but you still have to do something on a regular basis. As before, it is easy to forget, or not have time, and you suddenly find yourself tweetless for weeks (or months).
That was the main reason I developed my own Twitter application, which enables me to do one tweet a day. This means that I am tweeting regularly, and I hope that it is supplemented by a liberal smattering of ‘live’ tweets, as well as retweets.
I have automated two types of tweets.
I have a list of my publications, held as bibtex, and I tweet those.
I have another database, which holds (I hope) items of interest – let’s call them News.
Each day (it could be set up to do more), I choose randomly whether to tweet a News item or a paper. Once that decision has been made, I just choose randomly from the relevant database/bibtex file.
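The daily selection step can be sketched in a few lines of Python; the two item lists below are made-up placeholders standing in for the bibtex file and the News database.

```python
import random

# Hypothetical stand-ins for the two data sources described above
papers = [
    "New paper: When Sports Rules Go Awry (EJOR)",
    "New paper: OR analysis of sporting rules (EJOR)",
]
news_items = [
    "New TCIAIG issue is out",
    "Guest lecture on game playing next week",
]

def daily_tweet(rng=random):
    """Pick today's tweet: first choose a category at random,
    then choose an item uniformly from that category."""
    source = rng.choice([papers, news_items])
    return rng.choice(source)

print(daily_tweet())
```

Hooking this up to a scheduler (e.g. a daily cron job) and the Twitter API gives the one-tweet-a-day behaviour described above.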
But this set me thinking: what would academics actually want from a Twitter service? I am specifically thinking about sending tweets, but I would also be happy to hear about reading them.
If you have any views, I would be really interested to know (as I might do it!).
Please just add a comment which specifies your ‘Tweeting wish list’, from your academic perspective.