Would a Robot Make a Good President?

  Talk Elections
  General Politics
  Political Debate (Moderator: Torie)
  Would a Robot Make a Good President?
SteveRogers (duncan298)
« on: April 04, 2016, 06:09:07 PM »

This came to mind when I was thinking about the short story "Evidence" by Isaac Asimov. I highly recommend it, and there's a plot summary here: https://en.wikipedia.org/wiki/Evidence_(short_story)

But here's the spoiler-free, TL;DR version:

Stephen Byerley, a popular district attorney, is running for mayor of a major American city. His opponent accuses him of secretly being a robot and consults the scientists at U.S. Robots about how one might prove that Byerley is a humanoid robot. Dr. Susan Calvin, the company's chief robopsychologist, says that if they can catch him violating one of the Three Laws of Robotics, that will prove he's human. For reference, Asimov's Three Laws of Robotics are:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

However, if Byerley obeys all three laws, that won't prove he's a robot: he might be a robot, or he might just be a genuinely good man. The story follows Byerley's opponent trying various tricks to expose him as a robot, while Byerley turns the smear to his advantage, staying deliberately ambiguous about whether he's human or machine and using the controversy to raise issues of privacy and civil rights. A later Asimov story reveals that Byerley eventually became, in effect, president of Earth.

Anyway, at one point it's postulated that a robot bound by the Three Laws would actually be the fairest and most just ruler imaginable.

For instance, you might think the Second Law would force a robot president to obey any corrupt lobbyist's demand to back their legislation, but the First Law takes precedence: he could comply only if the proposal were actually in the country's best interest and caused the least harm to the fewest people.
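The precedence described above is just a strict priority ordering, which can be sketched in a few lines. Everything here (the `Order` type, the harm scores, the lobbyist scenario) is invented for illustration; only the ranking of the Laws comes from Asimov.

```python
# Hypothetical sketch of the Three Laws as a strict priority ordering.
# Harm scores and the Order type are invented for illustration.

from dataclasses import dataclass

@dataclass
class Order:
    description: str
    harm_to_humans: float  # estimated net harm to people if carried out (0 = none)
    harm_to_robot: float   # risk to the robot itself

def should_obey(order: Order) -> bool:
    """Second Law: obey human orders -- unless obeying violates the First Law."""
    if order.harm_to_humans > 0:   # First Law outranks obedience
        return False
    return True                    # Third Law (self-preservation) never overrides the Second

lobbyist_bill = Order("back my industry-written bill",
                      harm_to_humans=0.8, harm_to_robot=0.0)
print(should_obey(lobbyist_bill))  # False: the First Law blocks the Second
```

Note that self-preservation never enters the decision to obey at all: the Third Law is subordinate to both of the others, so a robot president couldn't refuse an order merely because it was personally risky.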

Thoughts? Agree or disagree?

 
Antonio the Sixth (Antonio V)
« Reply #1 on: April 04, 2016, 06:26:04 PM »

In politics, no matter what decisions you make, you're always going to negatively affect some people. And since inaction is not an option, I would guess that trying to follow the first rule will make the robot crash.
🦀🎂🦀🎂 (CrabCake)
« Reply #2 on: April 04, 2016, 07:11:58 PM »

No. Rubio would have sucked.
NeverAgain
« Reply #3 on: April 04, 2016, 07:35:20 PM »

He may still be nominated in a brokered convention.
SteveRogers (duncan298)
« Reply #4 on: April 04, 2016, 11:15:25 PM »

In politics, no matter what decisions you make, you're always going to negatively affect some people. And since inaction is not an option, I would guess that trying to follow the first rule will make the robot crash.

Perhaps, but a robot that can handle the three laws in a sufficiently sophisticated manner and weigh alternatives will, when forced to make a decision, choose the action or inaction that inflicts the least harm. The simplest example is a robot that sees a gunman about to shoot a group of people. The robot can stop him, but will have to use force to disarm him. The First Law would have to be interpreted as mandating that the robot harm the shooter in order to prevent much greater harm.

If the First Law is interpreted as handling dilemmas in this way, as opposed to just causing the robot's brain to explode when such a conflict arises, then it seems the robot president could govern in the best interest of the country.
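The "least harm" reading sketched above amounts to comparing every available action, inaction included, and picking the one with the lowest expected harm. A minimal sketch, with the gunman scenario and all harm numbers invented for illustration:

```python
# Minimal sketch of the "least harm" reading of the First Law:
# rather than freezing on any harm at all, the robot ranks every
# option -- inaction included -- by estimated harm and picks the minimum.
# The scenario and numbers are made up for illustration.

def least_harm_action(options: dict[str, float]) -> str:
    """Return the action whose estimated harm is smallest."""
    return min(options, key=options.get)

gunman_scenario = {
    "do nothing": 10.0,        # several people shot
    "tackle the gunman": 1.0,  # gunman bruised in the struggle
    "shout a warning": 6.0,    # may only delay the shooting
}
print(least_harm_action(gunman_scenario))  # tackle the gunman
```

Antonio's objection in Reply #1 maps directly onto this sketch: the hard part isn't the `min`, it's where the harm estimates come from in the first place.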

The implications for robots running a large scale economy are actually explored in another Asimov short story that's worth a quick read: https://en.wikipedia.org/wiki/The_Evitable_Conflict
Antonio the Sixth (Antonio V)
« Reply #5 on: April 04, 2016, 11:19:53 PM »

It's rarely so easy to determine what the greater and lesser harms are, though. That requires a moral framework, which a purely rational machine could not possess unless it were also programmed into it (and that would take more lines of code than even the most skillful programmer could write in a lifetime).
Why (Unbiased)
« Reply #6 on: April 05, 2016, 03:31:46 AM »

In theory, perhaps, but the practicalities involved in creating such a robot would almost certainly make it impossible. Then again, the bar set by human presidents is low enough that it could hardly do worse.
dead0man
« Reply #7 on: April 05, 2016, 07:19:14 AM »

If the AI was good enough, of course.  Much better than a human ever could be.
MASHED POTATOES. VOTE! (Kalwejt)
« Reply #8 on: April 05, 2016, 07:37:01 AM »

In politics, no matter what decisions you make, you're always going to negatively affect some people. And since inaction is not an option, I would guess that trying to follow the first rule will make the robot crash.

Although it should be noted that Asimov managed to bypass his original laws in Robots and Empire.

 
Likely Voter
« Reply #9 on: April 08, 2016, 11:58:51 PM »

Well, in the Asimov universe the robots got past the dilemma of the First Law by inventing the Zeroth Law, which allows harming individual humans if it protects humanity as a whole. The eventual result was that the robots became a sort of secret set of puppetmasters, guiding events from behind the scenes and trying to take away free will in order to protect humanity from itself.
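The Zeroth Law slots in above the First in the same priority ordering discussed earlier: harm to individuals becomes permissible when inaction would harm humanity more. A tiny sketch of that trade-off; the function and all numbers are hypothetical, only the precedence comes from Asimov.

```python
# Hypothetical sketch of the Zeroth-Law extension: harming individuals
# is allowed only when inaction would harm humanity as a whole more.
# All names and numbers are invented for illustration.

def permitted(harm_to_individuals: float, harm_to_humanity_if_inactive: float) -> bool:
    """Zeroth-Law reading: compare individual harm against the harm
    to humanity that inaction would cause."""
    return harm_to_humanity_if_inactive > harm_to_individuals

# Under the plain First Law this action would be forbidden outright;
# under the Zeroth Law the comparison decides.
print(permitted(harm_to_individuals=1.0, harm_to_humanity_if_inactive=50.0))  # True
```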
MASHED POTATOES. VOTE! (Kalwejt)
« Reply #10 on: April 09, 2016, 12:31:18 AM »

Well, in the Asimov universe the robots got past the dilemma of the First Law by inventing the Zeroth Law, which allows harming individual humans if it protects humanity as a whole. The eventual result was that the robots became a sort of secret set of puppetmasters, guiding events from behind the scenes and trying to take away free will in order to protect humanity from itself.

Invoking the Zeroth Law didn't end well for R. Giskard.