Human history is full of horse stories. Cowboys told epic tales of trusty steeds guiding them home on dark, foggy nights. Legendary horses ferried wounded soldiers through battle zones. Such horses were sensible and highly trained. And they made Eakta Jain think of robots. In fact, horses have become her muse for autonomous robots.
Jain is a computer scientist and roboticist. She works at the University of Florida in Gainesville.
Autonomous robots are programmed to act with little to no human control. They can make decisions in response to things happening around them.
Machines can have different levels of autonomy. Think of self-driving cars. A car at level zero has no real automation, though it may still have safety features such as antilock brakes. At level one, the car can handle a single task, such as cruise control that maintains a set speed. At levels two and three, the vehicle takes on more: It might stay in a marked lane — even park by itself. The highest level, five, is full autonomy.
A horse that can take a rider home on its own would be level five, Jain says. “You trust them with your life.”
Jain was drawn to learning how that trust develops. She wondered if it could help improve human-robot interactions. She was especially interested in how such relationships form. To find out, Jain teamed up with a Florida colleague, Christina Gardner-McCune.
Other groups had explored how people and robots initially interact. You might think of it “as the creation of first impressions,” Jain and Gardner-McCune say. “Our work seeks to uncover principles for the stage after [that] first meeting.” The pair described their results in April at the CHI Conference on Human Factors in Computing Systems. It took place in Hamburg, Germany.
Horses as mentors
For a year, Jain immersed herself in the horse world. She watched classes in horse training at her university. She also talked with students, instructors, trainers and horse owners. Along the way, Jain even learned to ride.
“I had never interacted with a horse before,” she recalls. The horses, however, had worked with other people. As Jain interacted with the animals, “it was still the early stages of relationship-building.”
Horses learn cues from their trainers on how to comfortably interact with people. It starts with basic handling. This includes leading and brushing the animals. Such training helps ready horses for more advanced interactions. Those may include accepting a saddle and bridle — and later carrying someone on their first few rides.
People, too, must learn their part. Riders must learn what a horse wants or feels based on its behavior. And they must learn which cues direct a horse to perform as desired — or correct it when it doesn’t do what was expected.
Jain found some similarities here to working with robots.
Designing robots that communicate like horses
People must learn how to direct robots to do specific tasks. They also must learn what to do when robots don’t perform as planned.
The goal is to program robots that will respond predictably to inputs from people. But like many horses, autonomous robots also should be able to respond on their own as conditions change. For example, a self-driving car must stop to avoid hitting something — even if some human mistakenly tells it to keep going.
In human-horse partnerships, horses mostly communicate through actions, not sounds. For instance, their ears tend to point toward whatever they’re paying attention to. This could inspire the development of robotic “ears” that can swivel. They might turn toward a ringing doorbell or people talking nearby. This could alert people to those sounds, Jain says. Pointing its ears toward a door would be “a much less jarring — much more subtle way” of alerting people than for the robot to announce: “Knock on the door. Beep, beep.”
Such subtle responses could be used as a sign of respect.
Trainers and riders work with horses to build respect. Horses show that respect by matching their pace to a human or giving someone who is leading them a safe degree of personal space. Horses may even point their ears toward trainers to show they’re paying attention.
Trainers begin their work with a horse by getting it to show signs of respect in basic interactions, Jain says. Later, trainers will develop more complex interactions. Along the way, a horse’s respect can grow into trust.
But even with experienced horses and riders, trust is not a given. And there may be a similar limitation with robots.
Perhaps the same unspoken expressions used in human-horse communications “can be used to communicate that the robot respects [a] human,” Jain says. That might build a sense of trust, she says. It might also make someone “more open to working with a robot.”
What would it mean for robots and people to respect and trust each other? For now, she adds, this is largely uncharted territory. But she’s looking to build a path.