No, You Can’t Machine Learn Everything
Frank and Martin discuss how the quest for Artificial Intelligence gave us Machine Learning.
Machine Learning is fast becoming a source of both confusion and anxious hope to many organizations. So much so that several customers last year told us, “please don’t talk about analytics to our senior stakeholders, because we’ve told them that we are going to machine learn everything!”
Now, Machine Learning already provides enormous value in just about every industry you can imagine – with use-cases that span from preventative maintenance through smart recommender systems to fraud detection. But you can’t “machine learn everything” – and even if you could, there would still be quicker ways to solve some problems. The most successful data-driven organizations tend to think first in terms of the business problem that they are trying to solve; second about the data that are – or that could be – available to solve it; and only then about the methods, techniques, algorithms and technology that they should employ.
Part of the problem, we think, is that terms like “Analytics”, “Data Science”, “Machine Learning” and “Artificial Intelligence” are used by commentators sometimes interchangeably – and sometimes to mean quite different things. By understanding the history of the field and the origin of these labels, our hope is that business and technology managers will be able to truly understand the possibilities – and the limitations – of Machine Learning.
The recent history of Machine Learning arguably begins with the brilliant British mathematician and early computer scientist, Alan Turing. Turing and his contemporary, Alonzo Church, had already produced what subsequently became known as the Church-Turing thesis – the proposition that anything that is effectively computable can be computed by a digital computer – when in 1950, Turing turned his attention to another, related question. Could a machine exhibit intelligent behaviour, equivalent to – or even indistinguishable from – that of a human? And if it could, how would we know?
Turing proposed what came to be known as “the Turing Test”: a human evaluator, eavesdropping on a conversation between a human and an “Intelligent Agent”, should try to tell which is which – Turing’s own benchmark being that an average evaluator would make the right identification no more than 70% of the time after five minutes of questioning.
The Turing Test – or “Imitation Game” – is now often held to be flawed, for all sorts of very good reasons that we don’t have time to explore here. But in the 1950s it was a revolutionary idea that helped to give birth to the idea of Artificial Intelligence (AI) – and led to the first academic study of the subject at Dartmouth College in 1956. As John McCarthy, the lead author of the proposal for that study, put it: “The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.”
In 1956, researchers believed that they were only a decade away from computers that could achieve true Artificial Intelligence. That turned out to be wildly optimistic, with the field going through at least two “winters” – periods when research money dried up in the face of AI’s apparently intractable problems and when other approaches, like rule-based systems, looked more promising. But Artificial Intelligence had now entered the academic mainstream as a sub-field of Computer Science.
Research into Artificial Intelligence can be divided into disciplines that focus on specific problems. Among the more important of these is enabling an Intelligent Agent to harvest data from its environment – and then to use those data to improve its performance of a task. And so the quest for Artificial Intelligence led naturally to the study of “Machine Learning”.
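To make that idea concrete, here is a minimal, purely illustrative sketch in Python (our own invention, not drawn from the article): a toy model is fitted to simulated observations, and its error on a held-out prediction task typically shrinks as it is given more data – which is the essence of “learning” from data.

```python
# A minimal sketch, not from the article: a toy "agent" improves its
# performance of a task (predicting y from x) as it harvests more data
# from a simulated environment. All names and numbers are invented
# purely for illustration.
import numpy as np

rng = np.random.default_rng(42)

def make_data(n):
    """Simulate observations from the environment: y = 3x + 2 plus noise."""
    x = rng.uniform(0, 10, n)
    y = 3 * x + 2 + rng.normal(0, 1, n)
    return x, y

# Held-out data, used only to measure how well the task is performed.
x_test, y_test = make_data(1000)

for n_train in (5, 50, 500):
    x_train, y_train = make_data(n_train)
    slope, intercept = np.polyfit(x_train, y_train, deg=1)  # fit a straight line
    test_error = np.mean((slope * x_test + intercept - y_test) ** 2)
    print(f"trained on {n_train:3d} examples -> mean squared test error {test_error:.3f}")
```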
Since Artificial Intelligence is also concerned with many other issues – reasoning and problem-solving, knowledge representation, agency and cognition, Hollywood movies about a dystopian future ruled by killer robots, etc. – Machine Learning is only a sub-field of Artificial Intelligence, which is itself a sub-field of Computer Science.
It was the quest for Artificial Intelligence that gave us Machine Learning. And in the next installment of this blog, we’ll explore how machine learning gave us Data Mining – and how vendor marketing departments have now taken Machine Learning back to the future.