This researcher investigates the risks of digitally cloning the dead
Katarzyna Nowaczyk-Basińska is looking at the risks behind AI-driven grief bots

Katarzyna Nowaczyk-Basińska studies the risks of using AI to digitally clone the deceased.
K. Nowaczyk-Basińska
Consider this hypothetical scenario: 8-year-old Sam is texting with his mom, Anna. Anna apologizes for missing his concert last week. But Anna isn’t away on a trip. In fact, she isn’t even alive. The response was from a grief bot driven by artificial intelligence. Sam’s parents uploaded Anna’s texts, videos and audio clips to the app. The service then used these to create a digital clone.
Sam’s parents hoped the bot could help him grieve once Anna, who had been diagnosed with a rare disease, had passed. But its responses began to confuse him. It sometimes corrected Sam when he said Anna had passed, claiming that she was alive and well.
Katarzyna Nowaczyk-Basińska is an AI researcher at the University of Cambridge in England. She included this scenario in a study published last year in Philosophy & Technology, which was part of her work on digital immortality. This is when a person is preserved digitally after their death. “In a sense, you can literally live forever if you upload your data,” she says.
This technology is “a huge developing industry with many companies involved,” she notes. But it comes with risks, particularly for children who may not understand that the person being imitated has truly passed away. In this interview, she shares her experiences and advice with Science News Explores. (This interview has been edited for content and readability.)
What inspired you to pursue your career?
I came across the topic of digital immortality as a grad student pursuing media studies. I was doing research for an assignment when I discovered a website offering grief bots. I found it so strange and fascinating at the same time. That discovery ended up informing my whole career. I soon decided to pursue a master’s and then a Ph.D. on the topic.
It was such a niche topic that it was hard to convince my doctoral commission to accept it. They were really unsure what digital immortality even was. They also didn’t understand how I would go about studying it. So it was a bit of a struggle. But after a few years, I’m happy to say that I was right.
How did you get to where you are today?
It was a long and tough journey. I never had plans to be a researcher or end up at Cambridge. Instead, my path was about taking small steps to see what was the right fit. I was more interested in exploring opportunities and exposing myself to different things.
I was eventually awarded a grant in Poland that helped me make connections at the University of Cambridge. It allowed me to get in touch with Stephen Cave, who studies ethical issues surrounding AI and robots. I now work very closely with him. But at that time, it felt unreal that he even responded to my emails. I was just starting my career, so I had no recommendations or big achievements. I felt like I was Katarzyna from nowhere.
What was one challenge that you faced, and how did you get through it?
My job requires me to travel a lot between Poland and the United Kingdom. I’m here in Cambridge at least once a month. My family lives back in Poland. My family and I decided that we don’t want to move at this stage of our lives. My son started school in Poland, so relocating would be a huge change for him. We decided that I would commute between the two countries.
That often means I’m trying to be as productive as I can while waiting at airports.
What should we be considering when it comes to grief bots and children?
I wrote about this in my study with colleague Tomasz Hollanek, who also works at Cambridge. We recommended that children shouldn’t be exposed to these technologies at all. We suggest that this technology should only be for adults who understand all of the risks that come with it.
We don’t have enough research on how these technologies can impact children. But we can imagine that it might be devastating. These technologies are designed in a way to be immersive. That can be hard for a child to make sense of.
Adults need to work toward creating a system that is safe for children. I don’t necessarily think that includes kids using bots to cope with grief, at least not without strict oversight. If ever, this should only happen under the supervision of specialists, such as psychologists or psychiatrists and within carefully designed therapeutic settings. But we definitely need more research on this.
Instead, these bots might be used as a source of knowledge. For example, we could turn a famous figure from history into a bot. Kids could learn from them by listening to their stories and interacting with them. That might be a more appropriate use for this tech. Of course, there’s still the question of consent: whether we have permission to use data from those people.