As robots become more humanoid, humans will feel more emotionally attached to them. As humans feel more emotionally attached to robots, the robots will need to be given rights. If the robots have rights, can they be held accountable for their actions? If we give robots rights, what does that say about us?

Polarity opens a discussion about our future with AI.

Polarity takes users through seemingly mundane, day-to-day scenarios that represent underlying ethical values.


The user’s assessment of the situations directly influences the development of the AI's moral competence. The AI becomes a reflection of the user.

The scenarios often end in unexpected ways because we are not sure how our choices now will affect future outcomes.

There are twenty possible outcomes, each representing a concern experts currently have about the future of AI.


Four Underlying Themes

Autonomy

“The developer is responsible for the robot’s actions, at least for a very long time.”

-Prof. Sung Park, SCAD

Attachment

“AI systems reveal our own ethical wrongdoing.”

-Cansu Canca, AI Ethics Lab

Acceptance

“We don’t know if whatever you are simulating is actually consciously being held by that entity...we don’t know that about ourselves.”

-Tekin Meriçli, NREC

Authority

“These social ‘agents’ and our emotionally loaded interaction with them also raise new questions about the justifications and limitations of their role in the society.”

-Cansu Canca, AI Ethics Lab


Primary + Secondary Research

Terms to know

ro·bot

/ˈrōˌbät,ˈrōbət/

a machine capable of carrying out a complex series of actions automatically, especially one programmable by a computer

 

Derived from the Czech word robota, meaning forced labor

“The robot was initially envisioned as a metaphor for the industrial worker, Taylorized and dehumanized, not just as a critical trope...Thus the humanoid robot in modern fiction began as a means of exploring the meaning of the mechanization of (human) work.” 

 

-J.C. McKnight, Dombots: An Ethical and Technical Challenge to the Robotics of Intimacy

so·cial ro·bot

/ˈsōSHəl//ˈrōˌbät,ˈrōbət/

an autonomous robot that interacts and communicates with humans or other autonomous physical agents by following social behaviors and rules attached to its role.

Ro·bo·phi·los·o·phy

/ˈrōˌbō,fəˈläsəfē/

a fundamental systematic reconfiguration of philosophy in the face of artificial social agency

 

“If ‘social’ or ‘sociable’ robots are being developed, what does this teach us about sociality and responsibility?”

 

-“The Automation of the Social? What Robots Teach Us About Sociality and Responsibility,” Mark Coeckelbergh, Ph.D.

em·pa·thy

/ˈempəTHē/

the ability to understand and share the feelings of another. 

 

“The glue that makes social life possible ...Empathy is expressed more as an active doing or performing than as a passive experiencing.”

 

-“Conditions of Empathy in Human Robot Interaction,” R. Yamazaki, Ph.D.

“...the reason we feel empathy for robots like WALL-E is that, when we see them treated a certain manner, it triggers the same sort of neural activity as seeing a human treated that way.”


As humans feel more emotionally attached to robots, will the robots need to be given rights?

If the robots have rights, can they be held accountable for their actions?

If we give robots rights, what does that say about us?

Our Experts

Julie Carpenter, HCI Researcher and Author at Accenture

Prof. Sung Park, Social Robotics Psychologist at SCAD

Cansu Canca, Founder of AI Ethics Lab

Tekin Meriçli, Robotics Engineer at Locomation + NREC

“A sociable robot is socially intelligent in a humanlike way, and interacting with it is like interacting with another person… At the pinnacle of achievement, they could befriend us and we could them.”

-Cynthia L. Breazeal, MIT, Founder of JIBO

“Embodied AI has the capacity to use bodily cues to engage with humans—such as a wide-eyed look to express interest or innocence, a bowing head to indicate sadness, etc. While these abilities make such social robots fun and engaging, they also open up various ways for manipulation utilizing our emotional reactions to such “human” behavior.”

-Our interview with Cansu Canca

The question should not be “Can machines have rights?” but rather “Should machines have rights?”

-“The Other Question: The Issue of Robot Rights,” David Gunkel

“The question is if we’re going to regard a robot as an independent entity...If we have this agreement within society that we should start acknowledging the moral rights of the robot then we can start thinking about them but we’re not there yet.”

-Our Interview with Prof. Sung Park

“Robots can suffer as far as we perceive them to. If robots are built to feel pain, they will need to be accorded moral rights… they will need to have such rights as far as we can perceive that they can feel pain.”

-“Conditions of Empathy in Human Robot Interaction” Ryuji Yamazaki

“We will have human-android hybrids, and androids that look and behave like humans, so at what point do you call that entity a human? What point do you call it a machine?”

-Our interview with Tekin Meriçli

“We don’t know if whatever you are simulating is actually consciously being held by that entity...we don’t know the answer to that about ourselves.”

-Our interview with Tekin Meriçli

“How is [giving a robot pain] any different than just embedding that kind of negativity log into your machine learning pipeline?”

-Our interview with Tekin Meriçli

“Any robot that collaborates with, looks after, or helps humans must have moral competence...Humans are highly sensitive to other people’s displays of empathy, and a robot that appears to coldly assess moral situations may not be trusted.”

-“Moral Competence in Robots,” Bertram F. Malle, Brown University

“The developer is responsible for the robot’s actions, at least for a very long time.”

-Our Interview with Prof. Sung Park

“Currently, we get almost all our information through AI systems and these systems constantly make decisions about our information set. We have not been careful about how these decisions come about. (Think of examples like anti-vaccine movement, or Cambridge Analytica).”

-Our interview with Cansu Canca

“One major issue in AI research is to reduce social biases that creep into AI systems. This problem of algorithmic biases is both difficult and interesting because they do not arise from AI systems. AI systems simply reveal how our own ethical wrongdoing in the past has made the existing data contain these biases.”

-Our interview with Cansu Canca

“Sociable robots are deeply connected with our desire to avoid and hide from real moral relationships and to devise technologies that effectively help us do this.”

-“Social Robots as Mirrors of Failed Communion” N. Toivakinen

“For an individual to benefit significantly from the ownership of a robot pet they must systematically delude themselves regarding the real nature of their relationship with the animal.”

-“Social Robots and Sentimentality” R. Rodogno

User Interviews

User #1: Female, 20, UX Student

User #2: Male, 24, Dog Owner

User #3: Male, 21, UX Student

User #4: Female, 55, Shopkeeper

User #1

What should/shouldn’t we teach AI?

“If you had a baby and a robot, you would teach them the same things, wouldn’t you? What do you decide that a robot doesn’t get to learn? For fear of them using it against us. A human could use it against us. So, what is to say a robot will?”

User #2

Would you kick your dog?

No. He’s just a stupid little man.

Would you kick him if he was a robot?

[No hesitation] Oh yeah, I’d beat the sh*t out of him haha

Right now, if I asked you to kick him, but he was a robot this whole time, would you?

Oh, well… no probably not. I’d still love my little man even if he was a robot.

User #3

Used the game Detroit: Become Human as a reference for all of his knowledge about AI

Worried about history repeating itself and the possibility of the robot civil rights movement

Our expectations and mental models of the future of technology are heavily influenced by our exposure to those ideas in popular media.

User #4

Casual conversation about overall disposition towards AI and autonomous technology.

Feels like the main purpose of AI is to make our lives easier, but doesn’t feel it’s necessary or even good for people.

 

Generational gaps will play a big part in how people view the roles and purpose of AI.

We conducted a survey about people’s emotional connection to AI.

35 responses, 21 questions, 1 week duration

Are robots honest and trustworthy? 64% said yes.

If you program an AI to have empathy, does it feel empathy? 53% said yes.

If a robot expresses pain, is it okay to hurt it? 80% said no.

If a robot says “I love you,” does that mean it loves you? 78% said no.

The person in this image is actually a realistic humanoid robot. Does that affect your feelings towards them?

“I don’t want to trust her. It looks like she’s just mimicking human behavior and not actually feeling things.” (P11)

“I know she can be programmed to feel different easily or erase her memory. Still feel a little bad.” (P17)

“I still feel empathetic towards her.” (P26)

This dog is actually a hyper-realistic robot animal. Is it okay to hurt it?

“It depends. I would feel bad because I have been programmed to not beat dogs, but if it doesn’t actually feel anything I would be really conflicted.” (P11)

“If I hit it with a car by accident, I wouldn’t feel guilty. If someone was hurting it on purpose, I would be more worried about the mentality of that person than the robot itself, because its pain wouldn’t be real.” (P28)


Affinitizing data


Core insights

Humanize robots and AI to increase empathetic response.

Multi-modality makes interaction more intuitive.

There will be a divide in how society perceives robots.

We need a better understanding of the human psyche to fully develop cognitive architecture.

Form follows function – we’ll only make robots according to need.

The developer is responsible for the robot’s actions.

Keep in mind social facilitation – we act differently in front of any kind of human entity.

Concept development

Speculative design helps us understand the implications of emerging technologies, gives us new ways of thinking about our current values, and can direct us to act today in ways that lead to more preferable futures.


Brainstorming Concepts


Brainstorming ideas for interactive exhibit

Rude Alexa


Robot relationships

“Golly Gee Sally, I hope your dad doesn’t mind you have a robot for a boyfriend.”

“Don’t worry Jimmy-tron, Daddy will come around.”

“Hello sir, my name is Jimmy-tron, it’s very nice t-”

“Jimmy...tron!? Ain’t no daughter of mine gonna be dating no stinkin metalhead!”

“Go on! Get! I thought I raised you right, Sally.”

“But Daddy, I love him!!!”

Build-A-Bot

Welcome to Build-A-Bot, where you can create the perfect robot just for you!

Personalize every aspect of your robot, from their eye color to their height.


Uh oh, you said you don’t like Chinese food. It looks like your robot burned down China! You shouldn’t have done that Johnny, it’s your fault that China burned down. You should’ve taught your robot not to burn down China!


Users are put through multiple scenarios, and based on their choices the system decides whether they have, overall, accepted robots into society.

Final Concept


Participants are taken through the process of building their own robot, including scenarios that present them with ethical dilemmas. Their choices shape the robot’s personality, making it a reflection of themselves.
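To make the mechanic concrete, here is a minimal sketch of how a choice-to-outcome mapping like this could work, assuming (purely for illustration) that each choice carries weights on the four underlying themes and that the dominant theme steers which of the twenty outcomes a participant reaches. The class, weights, and theme keys below are hypothetical, not taken from the actual build.

```python
# Hypothetical sketch (not the exhibit's real code): each scenario choice
# nudges the robot's standing on the four underlying themes, and the
# resulting profile steers which outcome the participant reaches.
from dataclasses import dataclass, field

THEMES = ("autonomy", "attachment", "acceptance", "authority")

@dataclass
class MoralProfile:
    scores: dict = field(default_factory=lambda: {t: 0 for t in THEMES})

    def apply_choice(self, weights: dict) -> None:
        """Apply one choice's theme weights, e.g. {"attachment": 2, "authority": -1}."""
        for theme, delta in weights.items():
            self.scores[theme] += delta

    def dominant_theme(self) -> str:
        """The theme the participant leaned into most, used to select an outcome."""
        return max(self.scores, key=self.scores.get)

# Example: in the milk-jug scenario, telling Susan to yield to the old lady
# might weight Acceptance; insisting she decide alone might weight Autonomy.
profile = MoralProfile()
profile.apply_choice({"acceptance": 2, "authority": 1})
profile.apply_choice({"autonomy": 1, "attachment": -1})
print(profile.dominant_theme())  # -> "acceptance"
```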

Design

Exhibition flow ideation

Brainstorming scenarios


Analyzing and grouping together scenarios to form a story


Example 1

“You send your robot Susan to the store to grab milk. There is one jug left on the shelf and a frail old lady goes to grab it at the same time as Susan. Tell Susan what values she should follow.”


Themes to explore

Brainstorming storyline


Story interaction flow


Interface

Wireframing

Mid-fidelity


Illustrations


User testing


User testing insights

Users asked “why” at the end.

The quiz sparked discussion about the future of AI.

Onboarding made sense, but the explanations at the end need more content to be clear.

Buttons and blobs make sense.

Blobs are not too distracting, but the crossover needs to be fixed.

User flow is intuitive.

Users wanted an alignment chart at the end.

Final UI


Poster

Team Members

Calyssa Nowviskie

Jena Martin

Dylan Byars
