Developments in machine learning and AI

18th Century- Development of statistical methods

Several key concepts in machine learning are derived from probability theory and statistics, with roots dating back to the 18th Century. For example, in 1763 Thomas Bayes set out a mathematical theorem for probability – which came to be known as Bayes' Theorem – that remains a central concept in some modern approaches to machine learning.
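To make the theorem concrete, here is a minimal sketch of Bayes' Theorem applied to a hypothetical diagnostic test; the probabilities are invented purely for illustration.

```python
# Bayes' Theorem: P(A|B) = P(B|A) * P(A) / P(B)
def bayes(prior, likelihood, evidence):
    """Posterior P(A|B) from prior P(A), likelihood P(B|A), and evidence P(B)."""
    return likelihood * prior / evidence

# Hypothetical numbers: P(disease) = 0.01, P(positive | disease) = 0.99,
# P(positive | healthy) = 0.05.
prior = 0.01
likelihood = 0.99
# Law of total probability: overall chance of a positive test result.
evidence = likelihood * prior + 0.05 * (1 - prior)

posterior = bayes(prior, likelihood, evidence)
print(round(posterior, 3))  # P(disease | positive) -> 0.167
```

Even with a 99% accurate test, the posterior probability is only about 17%, because the disease is rare – the kind of counter-intuitive result that makes Bayesian reasoning central to probabilistic machine learning.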


1950- The Turing Test

Papers by Alan Turing through the 1940s grappled with the idea of machine intelligence. In 1950, he posed the question "can machines think?", and suggested a test for machine intelligence – subsequently known as the Turing Test – in which a machine might be called intelligent if its responses to questions could convince a person that it was human.

1952- Machines that can play checkers

In 1952, the researcher Arthur Samuel created an early learning machine that learned to play checkers, using annotated guides by human experts and games it played against itself to distinguish good moves from bad.

1956/1957- The Dartmouth Workshop/The Perceptron

1956 The Dartmouth Workshop: The birth of the term 'artificial intelligence' is generally credited to computer scientist John McCarthy, who, alongside key figures in the field including Marvin Minsky, Nathaniel Rochester, and Claude Shannon, brought leading researchers together at a workshop in 1956 to consider the development of the field.

1957-The Perceptron: Frank Rosenblatt’s perceptron was an early attempt at creating a neural network, using a rotary resistor (potentiometer) driven by an electric motor. This machine could take an input – the pixels of an image, say – and create an output, such as a label.
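Rosenblatt's machine was electromechanical, but its learning rule survives in software. Below is a minimal sketch of perceptron learning on an invented toy problem (learning the logical AND of two "pixels"); the data, learning rate, and epoch count are assumptions for illustration, not details of Rosenblatt's hardware.

```python
# Minimal software analogue of Rosenblatt-style perceptron learning.
def train_perceptron(samples, labels, epochs=10, lr=1.0):
    """Learn weights and a bias for binary labels in {0, 1}."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Threshold the weighted sum to produce a 0/1 prediction.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # -1, 0, or +1
            # Nudge weights toward correct classification.
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Toy "image" inputs: two pixels; the label is their logical AND.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
print(preds)  # -> [0, 0, 0, 1]
```

Because AND is linearly separable, the perceptron convergence theorem guarantees this training loop finds a separating boundary; famously, it cannot do so for XOR, a limitation that dampened enthusiasm for neural networks in the years that followed.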

1970- The Lighthill report and the AI winter

By the 1970s, it was clear that progress in the field was not as fast as had been expected. A report commissioned by the UK Science Research Council – the Lighthill report – noted that “in no part of the field have the discoveries made so far produced the major impact that was then promised”. This assessment, coupled with slow progress in the field, contributed to a loss of confidence and a drop in resources for AI research.

1980- Parallel Distributed Processing volumes and neural network models

In 1986, David Rumelhart, James McClelland, and the PDP Research Group published Parallel Distributed Processing, a two-volume set of work which advanced the use of neural network models for machine learning.

1990- Deep Blue beats the reigning world champion at chess

In 1997 Deep Blue became the first computer chess-playing system to beat a reigning world chess champion. Deep Blue exploited the increased computing power available in the 1990s to perform large-scale searches of potential moves – it could reportedly evaluate over 200 million positions per second – then select the best one.
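The core idea behind that search – enumerate future moves to a fixed depth, score the resulting positions, and pick the best line assuming the opponent also plays optimally – is minimax search. The sketch below is not Deep Blue's engine (which added specialised hardware and many refinements such as alpha-beta pruning); it is a toy minimax on an invented number game, just to show the shape of the algorithm.

```python
# Toy minimax: search `depth` plies ahead and return (best score, best move).
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    options = moves(state)
    if depth == 0 or not options:
        return evaluate(state), None
    best_move = None
    best = float("-inf") if maximizing else float("inf")
    for m in options:
        score, _ = minimax(apply_move(state, m), depth - 1,
                           not maximizing, moves, apply_move, evaluate)
        if (maximizing and score > best) or (not maximizing and score < best):
            best, best_move = score, m
    return best, best_move

# Invented game: players alternately add 1 or 2 to a running total; the
# maximizing player wants the total high, the minimizing player wants it low.
score, move = minimax(0, 3, True,
                      moves=lambda s: [1, 2],
                      apply_move=lambda s, m: s + m,
                      evaluate=lambda s: s)
print(score, move)  # -> 5 2
```

Deep Blue's advantage was brute scale: with hundreds of millions of positions evaluated per second, even a conceptually simple search like this becomes formidable at chess depths.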

2010- Watson beats the two human Jeopardy! champions/ImageNet Classification and advances in computer vision

In 2011, IBM's Watson won the US quiz show Jeopardy! against two of its former champions.

2012 saw the publication of a highly influential paper by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. This paper described a model that had been used to win an annual image recognition competition, dramatically reducing the error rate in image recognition systems.

2016/2017- AlphaGo beats the world champion at Go/Learning to play poker

In 2016, AlphaGo – a system created by researchers at Google DeepMind to play the ancient Chinese game of Go – won four out of five matches against Lee Sedol, who had been the world's top Go player for over a decade.

Libratus, a system built by researchers at Carnegie Mellon University, defeated four top players at no-limit Texas Hold 'Em after 20 days of play in 2017. Researchers at the University of Alberta reported similar success with their system, DeepStack.
