Christine Sunu is a natural-born inventor who focuses on designing human-centered products. We met with her during C2 Montreal, an annual international business conference that gathers members of all industries to design appropriate tools to find creative solutions to today’s challenges.

Christine came from California to present her fluffy robot – ‘Mostly’ – and to show that robots and machines can help teach empathy and trigger emotions. In fact, there are great similarities between the role of designers and that of doctors in improving well-being.

How does making a fluffy robot work towards improving the well-being of people?

Certain interfaces can have greater emotional impact when conferred with lifelike qualities. As living creatures, we’re set up to work with and accommodate other living creatures. When machines take on lifelike qualities, it can create a more natural system for us. When I build a fluffy robot, I’m not so much trying to make something adorable as I am trying to create something that promotes bonding and emotional well-being. I’m trying to create something that feels natural, and has a high impact on human motivation and self-improvement.

‘Mostly’ is a robot that makes little expressions. Most of these expressions are tied to his battery life: when his battery runs low, he gets tired and kind of angry. Usually, charging a robot is a chore. But with ‘Mostly’s’ facial expressions, you want to care for him, and charging is brought to a more empathetic level.

What makes devices human-centric? 

I think it’s very easy to talk about user-centered design, but when it comes to products this concept has taken on a meaning of, “how do we make this easiest for the user?” My question is, “how do we create the outcome that the person wants, in partnership with them, and while accommodating their needs?” That’s a much harder question to answer. It actually gets easier if you give more control to the user, to let them decide how they want to use the system and what they want out of it. In a lot of ways people might say that isn’t as good for business, but it’s better for people.

How are robots relevant tools to make us more empathetic?

Robots can be an interesting tool for providing us with evidence of our own humanity. When we see something that appears to be alive, we feel empathy as though it is a living thing. That’s one of the cool parts about being human, that we tend to assume life first and ask questions later. But when it comes to tech, this is also a very dangerous concept; a robot does not feel anything for us in return, and it can easily be used by a third party to manipulate us.

I don’t believe that we can teach empathy exclusively with robots. I think to teach empathy, we need people in the loop, we need human contact. And that is a trickier prospect, one that isn’t solved exclusively with a robotic or technological interface.

Is automation a solution to increase well-being?

Automation is when something is done for us. So, the robot vacuums the house, it cleans the clothes, it does my shopping, it walks the dog. Automation is frequently the kind of future we were promised in The Jetsons: put your feet up, the robot is here to solve all your problems.

Automation has its place, but there are some situations where an automatic solution isn’t the healthy choice. Do we want something done for us, or do we want to be better able to do it ourselves? If we are using robots to assist with relationship building, self-improvement, and self-reflection, I question whether an automatic solution is possible or effective. For something like this, I would prefer to move towards augmentation: devices that don’t do things for us, but rather give us tools to do our jobs better.

How is designing human-centered technology very similar to healthcare?

I spent a lot of time in medical school thinking about the doctor-patient relationship, and the inherent power dynamics there. As soon as you put on the white coat, people start to trust your opinion. Which is fine, except that generally they trust you more, and trust themselves less. A big part of the doctor’s work is to build a relationship where the patient feels comfortable telling you what they want, what they think is wrong, advocating for themselves, and working with you to find a solution.

I think this kind of power dynamic carries over into technology. There’s a lot of ways in which it is similar.

First: The issue of trust. People have an inherent trust in robots and apps. (Not as much as if they were wearing white coats, but it’s similar.) We generally overestimate their capacities, thinking they can do more, or take into account things, than they are actually programmed to. We trust the robot, and we forget that there is a technologist or company behind it who built it and programmed it. We trust the robot, and we trust ourselves less. Frequently, we ask less of ourselves, and we feel less enabled.

Second: The issue of relationships. Technology can be insidiously powerful. We are hearing a lot now about certain kinds of tech being addictive or playing a role in negative moods or behaviors. And so the creators or companies behind this technology may have a kind of responsibility that is similar to the responsibility a healthcare provider has to patients. Especially as we discover more dangerous avenues of tech, we may need to swear the Hippocratic oath as technologists and engineers.

More on Christine Sunu’s inventions here
