I knew I was dreaming when I saw the scoreline in the paper: Zimbabwe 1 Germany 0. In hockey? In any sport, for that matter. Sure enough, my reverie was broken by a series of desperate knocks on my door. I dragged myself off the floor (my bed's too full of books to be used) and over to the door. I opened it to find the ominous 5-foot shape of Themba Marara in its place. He was in my room before my brain made the connection that he was trouble.
"You must help me! I've made a bet with Rutendo and I'm in deep shit!''
"Oh?'' I tried to sound vague, disinterested and utterly bored. I failed.
"Excellent! You see, last week I told her I could hit all the nightclubs in Harare in one night --- yes of course you remember, you helped me find the shortest route between them. Anyway, now she wants me to give her a method that will work well for an arbitrary large number of clubs.''
"Let me get this straight. You told her that given any number of nightclubs, you could find the shortest path between them that visits each club at least once?"
"Oh, yes. That's it."
"You actually bet on this."
"Yes, you know what I'm like! You can help, right?"
"Perhaps. Tell me, what did you bet her?"
"The loser buys the winner pizzas every night next week."
"Then it's quite clear what's to be done."
"Fantastic! Let me have it!"
"If you insist. You will have to ask her what type of pizzas she likes."
While Themba practised his limited vocabulary of swear words, with occasional references to my Roget's, I may as well tell you about the problem. It's commonly called the Travelling Salesman Problem and can be stated as follows:
"Given n points and the distance between every pair of them, find the shortest route which visits each every point at least once and then returns to the starting point."
An equivalent problem can be created by replacing "at least" with "exactly", and it is this version that is usually discussed, since mathies have another nice way of formulating it: find the shortest Hamiltonian cycle in the graph of the points. The actual history of the problem is hideously hard to trace. A German book was published on it in 1832, but it only entered the mathematical world proper a century later, thanks to the efforts of one Merrill Flood. He urged the RAND Corporation to offer a prize for its solution, and the fact that it was so easy to state no doubt helped. It sounds quite do-able, till you actually try to do it.
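To make the problem concrete, here is a minimal sketch of the obvious brute-force attack: try every ordering of the points and keep the shortest cycle. (The distance matrix below is an invented toy example, not from the article.)

```python
from itertools import permutations

def tour_length(order, dist):
    """Total length of the cycle visiting points in `order`, then returning home."""
    return sum(dist[order[i]][order[(i + 1) % len(order)]]
               for i in range(len(order)))

def brute_force_tsp(dist):
    """Check every ordering of the points; feasible only for very small n."""
    n = len(dist)
    # Fix point 0 as the start, so rotations of the same cycle are not recounted.
    best = min(permutations(range(1, n)),
               key=lambda rest: tour_length((0,) + rest, dist))
    tour = (0,) + best
    return tour, tour_length(tour, dist)

# A toy symmetric distance matrix for four points.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour, length = brute_force_tsp(dist)  # shortest cycle has length 18
```

The catch, of course, is that the number of orderings grows like (n-1)!, which is exactly the kind of explosion the rest of this article is about.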
In 1954, Dantzig, Fulkerson and Johnson published a paper showing that you could verify that a given solution was optimal by checking a few (?!) additional inequalities. To illustrate their method, they examined a 49-city map of the (then) 48-state United States. They found a solution using strings on a model, and then showed that this was the best possible with just 25 more inequalities. Just 25? Yes, because there might otherwise have been billions of possibilities!
Unfortunately, finding a solution to start with was still a problem. String models could not be used in general. This was a pity, because such problems were of immense practical importance, especially in scheduling: "how can you order the thousand-plus processes in a factory so that the least time is wasted?" So the search went on, with everyone believing that a solution would be found. As time went on, belief turned to hope, and then hope faded altogether.
For it seemed that nearly all problems were falling into two categories --- good and bad. The good ones could be solved using an efficient method that could be applied to larger and larger problems. But as for the bad ones, even their best methods made excessive demands on time. For instance, if the size of the problem was, say, n, then the running time of the best method might be proportional to 2^n. So if the size of the problem increased by just ten, the time needed would be about a thousand times worse (2^10 = 1024)! Even the best computers would be hopeless in dealing with large cases. The Travelling Salesman Problem was one of these, as shown by R. M. Karp in 1972.
But what was really interesting was that many of the bad problems seemed equivalent! In other words, if you found a good solution to one of them, you found a good solution to all the others. But as I said before, we will probably never find a good solution to any of them. I say probably because there is no proof that we won't. This was the birth of complexity theory, a topic which deserves far more than this article provides!
So how do we deal with bad problems? Often, their special cases turn out to have good solutions. Or, more usually, we have to settle for approximate solutions. This is one of the most active research areas in mathematics, called (predictably enough) Operations Research.
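As a taste of what an approximate solution looks like, here is the classic nearest-neighbour heuristic (a standard textbook method, not one discussed above): from wherever you are, always travel to the closest club you haven't visited yet. It runs fast even for thousands of points, but the tour it returns is generally not the shortest one.

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy heuristic: repeatedly move to the closest unvisited point.
    Fast (roughly n**2 steps), but usually not optimal."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour = [start]
    while unvisited:
        here = tour[-1]
        nearest = min(unvisited, key=lambda j: dist[here][j])
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

# The same toy distance matrix as before.
dist = [[0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0]]
tour = nearest_neighbour_tour(dist)
```

On this tiny example the greedy tour happens to be a good one, but it is easy to construct maps where a short-sighted first move forces a long trek home at the end.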