Chatbots may make learning feel easy — but it’s shallow
Old-fashioned Googling leads to deeper learning than using AI does, a study finds
Both chatbots and web searches can help people find information online. But people who use traditional search engines gain deeper knowledge, and care more about what they learn, than those who rely on AI chatbots, a study finds.
Stanislav Kyrylash/Getty Images
By Payal Dhar
When learning something new, old-fashioned Googling might be a smarter move than asking ChatGPT, a new study finds.
Large language models, or LLMs, are artificial intelligence systems that power chatbots such as ChatGPT. Increasingly, people are turning to chatbots for quick answers instead of traditional web search tools such as Google, Bing or DuckDuckGo. But in the new study, people who used a web search to look up information developed deeper knowledge than those who relied on an AI chatbot.
“LLMs are fundamentally changing not just how we acquire information but how we develop knowledge,” says Shiri Melumad. She works at the University of Pennsylvania (or Penn) in Philadelphia. There, she studies how people make decisions about which products to use. Studies about LLMs’ benefits and risks could help people use them better, she says. They could also guide the design of better tools.
Melumad teamed up with Penn neuroscientist Jin Ho Yun. They compared what people learn through LLMs versus regular web searches. Some search engines now give an AI-generated answer along with a list of websites. But a regular web search just gives a list of websites that have information related to what we search for.
Across seven experiments, they randomly assigned more than 10,000 people to research topics such as how to grow a vegetable garden or how to lead a healthier life. The people used either Google or ChatGPT. Afterward, they summarized what they’d learned and wrote it out as advice for a friend. The researchers then asked the participants to gauge how much they had learned from the task.
Going deep
In some tests, the researchers accounted for what information was available. For instance, they provided identical sets of facts through both the LLM and the web search. The researchers judged whether participants’ knowledge was “shallow” or “deep” based on what the participants reported. They also used evaluations by human judges and tools that extract information from text. Across all the experiments, one pattern held: Knowledge gained from chatbot summaries was shallower than knowledge gained from web links.
The researchers shared their findings in the October PNAS Nexus.
People who learned via LLMs produced less informative content, the analysis showed. LLM users also were less invested in the advice they gave. And they were less likely to adopt that advice than those who used web searches.
“The same results arose even when participants used a version of ChatGPT that provided web links to original sources,” Melumad reports.
Roughly 800 participants took part in the “ChatGPT with links” experiment. Only about one in four clicked on at least one link. That suggested they weren’t motivated to learn more.
When people use web searches, they synthesize information. That means they figure out how info from different sources fits together. “LLMs can reduce the load of having to synthesize information for oneself,” Melumad concludes. “This ease comes at the cost of developing deeper knowledge on a topic.” However, she notes, developers could design AI-using search tools that encourage users to dig deeper.
Daniel Oppenheimer is a psychologist at Carnegie Mellon University in Pittsburgh, Pa. The new study shows that LLMs reduce the motivation for people to do their own thinking, he says.
People could follow the links given by ChatGPT. Or they could fact-check the information ChatGPT gives. Those actions would provide some of the benefits received by those who used web searches. So Oppenheimer doesn’t think people need to abandon LLMs. “The effectiveness of the tool depends on how you use it,” he says. The new research just shows people don’t naturally use LLMs as well as they might, he says.