
The avalanche of data continues. An Exabyte (EB) of data is 2^60 bytes, or a million Terabytes (TB), and International Data Corp (IDC) estimates that 1,200 EB of digital data will be generated in 2010. The Large Hadron Collider generates 40 TB of data every second, though scientists can only capture and analyze a fraction of this amount.

Whilst Google has been criticized in the past for its privacy policies, it seems that some of the vast amounts of data it holds are being put to good use. Whenever we interact with the internet, we leave behind a trail of clicks. This “data exhaust” can be analyzed to improve service levels and to understand how to make more money from advertisers or customers. Google’s original search algorithm was modified to include feedback from users, in the form of the link they chose to follow from the search results. The more people searching for something who select a specific link, the higher that link ranks in subsequent searches.

Artificial intelligence, or rather the attempt by programmers to engineer computers to mimic and surpass the achievements and capabilities of human beings, is a fascinating subject. The history of chess-playing computers dates back to the late 1940s, when the first strategies were postulated by, amongst others, Alan Turing. Early chess programs employed two main strategies to determine their next move. The minimax algorithm (type A) is a brute-force approach which examines all positions for a fixed number of moves ahead. If there is an average of 30 moves possible in a chess position, you need to evaluate roughly 30^6, or about 10^9, positions to look three moves ahead for both sides. The second strategy (type B) is forward pruning, where the computer is programmed to identify good moves, reducing the number of possibilities to be evaluated and allowing promising moves to be examined in greater depth.
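
To make the type A idea concrete, the sketch below shows a minimal fixed-depth minimax in Python. A real engine would generate legal chess moves and evaluate positions; here the game tree is a toy nested structure invented purely for illustration, with leaves standing in for static evaluation scores.

```python
# A minimal fixed-depth minimax sketch of the "type A" brute-force idea.
# Instead of a real chess engine, the game tree is a toy nested structure:
# internal nodes are lists of child positions, leaves are static scores.

def minimax(node, maximizing):
    """Exhaustively evaluate every branch and return the score the
    side to move can force with best play."""
    if isinstance(node, (int, float)):          # leaf: static evaluation
        return node
    child_scores = [minimax(child, not maximizing) for child in node]
    return max(child_scores) if maximizing else min(child_scores)

# Three plies of a toy game. Every node is visited, which is why the work
# grows as (branching factor) ** depth - about 30**6 positions for three
# moves ahead by both sides in chess.
tree = [[[3, 5], [2, 9]], [[0, 1], [7, 4]]]
print(minimax(tree, maximizing=True))           # -> 5
```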

A breakthrough came in 1973, when programmers from Northwestern University in Illinois, US, adopted a new approach in their software Chess 4.0 and subsequently won the ACM Computer Chess Championship, a title they won again in 1975, 1976 and 1977. International Master David Levy won a bet he had made in 1968, that no computer would beat him in the next 10 years, by defeating Chess 4.7 in 1978. Levy recognized, however, that the writing was on the wall, and he was beaten by IBM’s Deep Thought in an exhibition match in 1989. Garry Kasparov was eventually defeated in a match by IBM’s Deep Blue in May 1997.

So what was the breakthrough from Chess 4.0? It turns out that forward pruning, or getting the computer to determine what looks like a good move, took longer to process than simply evaluating all possible moves. So rather than making the computer behave like a human (recognizing patterns, identifying good positions worthy of further evaluation), the best chess programs relied on the computer doing what it does best – crunching the numbers. Today, the best programs combine a limited selective approach (to eliminate null moves and obviously bad moves) with brute force.
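
As a rough illustration of how pruning can be layered on top of exhaustive search – this is generic alpha-beta cut-off logic, not the specific selectivity of any particular engine – the sketch below reuses the toy tree format from the earlier example. Branches that provably cannot change the outcome are skipped, but everything else is still evaluated by brute force.

```python
# A sketch of pruning layered on top of brute force: alpha-beta cut-offs
# skip branches that provably cannot affect the final choice, so the
# search still "crunches the numbers" but wastes less time on bad lines.
# The toy tree format is the same as in the minimax sketch above.

def alphabeta(node, maximizing, alpha=float("-inf"), beta=float("inf")):
    if isinstance(node, (int, float)):          # leaf: static evaluation
        return node
    if maximizing:
        best = float("-inf")
        for child in node:
            best = max(best, alphabeta(child, False, alpha, beta))
            alpha = max(alpha, best)
            if alpha >= beta:                   # remaining siblings cannot matter
                break
        return best
    best = float("inf")
    for child in node:
        best = min(best, alphabeta(child, True, alpha, beta))
        beta = min(beta, best)
        if alpha >= beta:
            break
    return best

tree = [[[3, 5], [2, 9]], [[0, 1], [7, 4]]]
print(alphabeta(tree, maximizing=True))         # -> 5, same answer, fewer nodes visited
```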

So where does this fit with Google’s latest efforts? The problem is that of translation, where programmers have historically struggled to enable computers to translate correctly from one language to another. Teaching a computer the rules of a language (tenses, sentence structure, common expressions and so on) is a complex exercise. The best-known example, probably anecdotal, concerns the translation of the expression “out of sight, out of mind” from English to Spanish (fuera de la vista, fuera de la mente) and back again into English, yielding the result “Invisible Idiot”. Google’s approach now emulates the brute force used so successfully in chess programs. It has scanned thousands of books which have been expertly translated into multiple languages, and it has billions of documents at its disposal. Translation becomes a matter of probability: a statistical likelihood, based on the cumulative processing of all that data, that a word or phrase in one language has a particular equivalent in another. If the translation can be bettered, you can provide Google with feedback for an improved version.
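
The sketch below is a toy illustration of that statistical idea, not Google’s actual system: a hypothetical phrase table holds invented co-occurrence counts of the kind that could be harvested from aligned documents, and the candidate with the highest relative frequency is chosen as the translation.

```python
# A toy sketch of the statistical idea: candidate translations are scored by
# how often they co-occur with the source phrase in a parallel corpus, and
# the most frequent candidate wins. The phrase table and its counts are
# invented for illustration; a real system would learn them from billions
# of sentence pairs.

from collections import Counter

phrase_table = {
    "out of sight": Counter({"fuera de la vista": 412, "lejos de la vista": 87}),
    "out of mind": Counter({"fuera de la mente": 305, "loco": 12}),
}

def translate(phrase):
    """Return the most probable candidate and its estimated probability."""
    candidates = phrase_table[phrase]
    total = sum(candidates.values())
    best, count = candidates.most_common(1)[0]
    return best, count / total

print(translate("out of sight"))    # ('fuera de la vista', ~0.83)
```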

In 2007, a Canadian team led by Jonathan Schaeffer at the University of Alberta announced that they had “solved” the game of checkers by computing the 500 billion billion possible positions, with between 50 and 200 computers working on the problem simultaneously for 18 years. This guarantees that the computer can never again lose: the worst possible result, if the human plays perfectly, is a draw.

Intelligent computers are a way off yet. In the meantime, brute force can work pretty well.


This article is based, in part, on information in “The Data Deluge and How to Handle It”, a special report by The Economist, February 27 2010.