Human intelligence reflects our brain’s ability to learn. Computer systems that act like humans use artificial intelligence. That means these systems are under the control of computer programs that can learn. Just as people do, computers can learn from data and then make decisions or assessments based on what they’ve learned. This ability, called machine learning, is part of the larger field of artificial intelligence.
For computers to solve problems, people used to just write step-by-step instructions for the programs that operate a computer’s hardware. Those programmers had to consider every step a computer would or could encounter. Then they described how they wanted the computer to respond to every decision it might be asked to make along the way.
In the 1940s, while working as an engineer at the University of Illinois, Arthur Samuel decided to program computers differently. This computer scientist would teach computers how to learn on their own. His teaching tool: checkers.
Rather than program every possible move, he gave the computer advice from champion checkers players. Think of this as general rules.
He also taught the computer to play checkers against itself. During each game, the computer tracked which of its moves and strategies had worked best. Then, it used those moves and strategies to play better the next time. Along the way, the computer turned bits of data into information. That information would become knowledge — and lead the computer to make smarter moves. Samuel completed his first computer program to play that game within a few years. At the time, he was working at an IBM laboratory in Poughkeepsie, N.Y.
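The core idea, keeping a record of which moves led to wins and then favoring those moves, can be sketched in a few lines. This is a toy illustration of the approach, not Samuel's actual program; the move names and results here are made up.

```python
# Toy sketch of learning from self-play: track how often each opening
# move leads to a win, then prefer the move with the best record.
# (Hypothetical moves and results, for illustration only.)

def update_stats(stats, move, won):
    """Record one self-play result for a move."""
    wins, games = stats.get(move, (0, 0))
    stats[move] = (wins + (1 if won else 0), games + 1)

def best_move(stats):
    """Pick the move with the highest observed win rate."""
    return max(stats, key=lambda m: stats[m][0] / stats[m][1])

stats = {}
# Pretend results from a few games the computer played against itself:
for move, won in [("a", True), ("a", True), ("b", False), ("b", True)]:
    update_stats(stats, move, won)

print(best_move(stats))  # move "a" won both of its games, so it is preferred
```

With each game played, the statistics improve, which is how bits of data become information and, eventually, better play.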
Finding patterns: From checkers to pictures
Programmers soon moved beyond checkers. Using the same approach, they taught computers to solve more complex tasks. In 2007, Fei-Fei Li of Stanford University in California and her colleagues decided to train computers to recognize objects in photos. We might think of sight as using just our eyes. In fact, it’s our brains that recognize and understand what an image shows.
Li’s group plugged large sets of images into computer models. The computer needed a lot of pictures to learn a cat from a dog or anything else. And the researchers had to make sure each picture of a cat that the computer trained on truly showed a cat.
In a 2015 TED talk, Li described how her team went about this. They needed the help of other scientists. It took almost 49,000 volunteers from 167 countries about three years to sort through nearly 1 billion images.
Eventually, Li’s team ended up with a set of more than 62,000 images, all of cats. Some cats sat. Others stood. Or crouched. Or lay curled up. The pictures depicted a broad range of species, from lions to housecats. As computer programs sifted through the data in these images, those programs learned how to identify a cat in any new photo they might be shown.
In much the same way, Li’s team then taught its computer models to also recognize people, dogs, kites, cars (by make and model) and more. All of those data sets are now available for other scientists to use for free at image-net.org/.
From patterns to deep learning
Computers organize data by using algorithms. These are math formulas or instructions that follow a step-by-step process. For example, the steps in one algorithm might instruct a computer to group images with similar patterns. In some cases, such as the cat pictures, people help computers sort out wrong information. In other cases, the algorithms might help the computer identify mistakes and learn from them.
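One simple grouping algorithm can be sketched as follows: put each data point with whichever of two centers it sits closest to. Real image-sorting algorithms work on far richer features than single numbers, but the step-by-step idea is the same. The points and centers here are invented for illustration.

```python
# Minimal sketch of a grouping algorithm: assign each point to its
# nearest center. (Made-up one-dimensional data, for illustration.)

def group_by_center(points, centers):
    """Group points by the center each one is closest to."""
    groups = {c: [] for c in centers}
    for p in points:
        nearest = min(centers, key=lambda c: abs(p - c))
        groups[nearest].append(p)
    return groups

points = [1.0, 1.2, 0.9, 7.8, 8.1, 8.0]
groups = group_by_center(points, centers=(1.0, 8.0))
print(groups)  # the points split into two clusters, near 1.0 and near 8.0
```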
One of the more powerful machine-learning techniques is called “deep learning.” It organizes its computing efforts into systems known as neural networks (or neural nets). The networks are made from connected nodes through which data can move and be processed. In that sense, these networks are a bit like the human brain. Warren McCulloch and Walter Pitts proposed the idea for neural nets in the 1940s. They developed these systems a bit later, while working at the Massachusetts Institute of Technology in Cambridge.
For a while, neural nets fell out of fashion. But they came back big in the 1980s. Today, they continue to serve as the basis for ever more complex machine-learning systems.
In deep-learning systems today, data usually move through layers of connected nodes in one direction only. Each layer of the system might receive data from lower layers, then process those data and feed them on to higher layers. The layers get more complex (deeper) as the computer learns. Rather than making simple choices, as in the checkers game, deep-learning systems review lots of data, learn from them, and then make decisions based on them. All of these steps take place inside the computer, without any new input from a human.
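That one-direction flow can be sketched in miniature: each layer multiplies its inputs by weights, adds them up, and squashes the result before passing it to the next layer. The weights below are arbitrary illustration values, not a trained network.

```python
import math

# Minimal sketch of feed-forward data flow through two layers.
# (Arbitrary weights, for illustration only; a real network would
# learn its weights from data.)

def sigmoid(x):
    """Squash any number into the range 0 to 1."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One layer: weighted sum of inputs per node, then activation."""
    return [sigmoid(sum(w * i for w, i in zip(node_w, inputs)))
            for node_w in weights]

hidden_weights = [[0.5, -0.2], [0.3, 0.8]]   # 2 inputs -> 2 hidden nodes
output_weights = [[1.0, -1.0]]               # 2 hidden -> 1 output node

x = [1.0, 0.5]                 # data enter at the bottom...
hidden = layer(x, hidden_weights)
output = layer(hidden, output_weights)  # ...and flow upward, one way
print(output)  # a single number between 0 and 1
```

Data enter at the input layer and move upward only; nothing flows backward during this pass, which is what "feed-forward" means.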
Artificial intelligence is a tool to help people
Machine learning has recently been showing up in tools, software and products that aim to make life easier. Examples include the programs used in today’s smart speakers and streaming services. These look for trends in the music or videos you choose, then suggest similar ones you may like. Machine learning also is being used to help people solve bigger problems in everything from engineering to medicine. Along the way, some machine-learning systems are using video games as a training tool.
For example, one system developed by engineers at Argonne National Laboratory, outside Chicago, Ill., can test hundreds of different designs for an engine. Then, it can suggest how researchers might build the best of those designs (instead of all of them) to test in the real world. In medicine, machine learning is helping to identify illnesses based on the size and shape of a virus.
As far as it has evolved, artificial intelligence (AI) is still far from being as smart as the human brain. For instance, an AI system might be able to play checkers, or identify a cat, but it can’t yet understand why cats can’t play checkers.