To accompany ‘Can computers think? Why this is proving so hard to answer’
- Think of smart chatbots or voice assistants that you might have spoken to before, such as Siri or Alexa. Would you consider these computer programs intelligent in the same way that people are intelligent? Why or why not? If you said no, what would it take to convince you that a talking computer system is truly intelligent?
- What is the “Turing test” or “imitation game”? How is it played?
- According to Ayanna Howard, why is the question of whether computers can “think” so hard to answer? How might the Turing test get around that problem?
- What was the computer program ELIZA programmed to do? Did it pass the Turing test?
- How did the chatbot Eugene Goostman perform in a 2014 Turing test competition?
- How did Google demonstrate the power of its Duplex system?
- What is John Laird’s criticism of the Turing test?
- What is Hector Levesque’s critique?
- What are large language models? How are they trained? Once trained, what types of things can they do?
- What did Brian Christian learn from his experience participating in a Turing test?
- How can humans pass on their biases to artificial intelligence programs?
- The Turing test frames the ultimate goal of artificial intelligence as making machines find answers to questions and then express those answers in a way that’s as humanlike as possible. What are the potential benefits to making machines think more like humans?
- What are some potential drawbacks to making machines more humanlike? (Think of the examples provided in this story, and try to come up with a couple of your own.) Given the potential positives and negatives, do you think artificial intelligence designers should make their programs as humanlike as possible? Explain why or why not.