ALL SCI-FI Forum Index
The place to “find your people”.
 

Asimov's Laws of Robotics Are Flawed

 
Tom
Solar Explorer


Joined: 07 Nov 2014
Posts: 53
Location: Gulf Coast

Posted: Fri Nov 07, 2014 10:26 pm    Post subject: Asimov's Laws of Robotics Are Flawed

My issue is with Law #1.

Law One - "A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Just imagine that you were constrained by this rule. Let's also imagine said robot has enough AI to contemplate the rules. This gives the device an AI approximately equal to our own intelligence. However, this same intelligence gives the robot the ability to analyze scenarios a bit faster than we can.

In my opinion, said robot would not be able to move due to conflicts.

It's based on the butterfly effect, and said effect would incapacitate any protocol as strictly defined as rule number one.

The robot would be paralyzed by the "what if" scenario. Any action or movement by the robot would need to be 'approved' by its operating system before said movement could be initiated. How fast is its processor?

The robot would analyze every possible outcome of its actions to the limits of its abilities. It could not 'take chances'. Chance might lead to a violation of Rule #1. Therefore, before it set any action in motion, it would need to be assured that the course of action does not violate its prime directive.

Now let's imagine that it has found a course of action, but time has passed between its initial calculation and its decision to act. No matter how little time has passed (for a super AI, allow .001 second for trillions of calculations to be assessed), the robot must now reassess to account for life moving on during its processing. Sure, circumstances have not changed much in that time, but they have changed. So the robot must redo every assessment it made to make sure nothing has changed that might allow a violation of Rule #1. This process would repeat forever.
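To picture that endless reassessment, here's a minimal sketch I threw together (not anything from Asimov; world_snapshot(), forecast_outcomes(), and may_harm_human() are invented stand-ins for sensing, prediction, and the harm test):

Code:
import random

def world_snapshot():
    # Stand-in for sensing: the world keeps changing while the robot thinks.
    return random.random()

def forecast_outcomes(action, world):
    # Stand-in for simulating every consequence of an action.
    return [(action, world, branch) for branch in range(3)]

def may_harm_human(outcome):
    # Stand-in for the Law 1 harm predicate; some outcomes look risky.
    return random.random() < 0.01

def law_one_permits(action, world):
    """Law 1: every foreseeable outcome of the action must be harm-free."""
    return all(not may_harm_human(o) for o in forecast_outcomes(action, world))

def choose_action(candidates, max_attempts=1000):
    for _ in range(max_attempts):
        world = world_snapshot()                       # freeze a view of the world
        safe = [a for a in candidates if law_one_permits(a, world)]
        if safe and world_snapshot() == world:         # is the safety proof still fresh?
            return safe[0]
        # Either nothing is provably safe, or the world moved on while the
        # robot deliberated, so the whole assessment starts over.
    return None                                        # paralysis: no action ever cleared

print(choose_action(["step forward", "raise arm"]))    # almost always prints None

The point of the toy: the Law 1 clearance is only valid for the snapshot it was computed against, and computing it always takes longer than the snapshot stays valid.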

Law Two - "A robot must obey orders given to it by human beings except where such orders would conflict with the First Law."

The set of laws is already broken by Law #1 alone.

Law #2 cannot work because humans are flawed. Humans do things without much thought for the consequences. A command for a robot to "go to the street and await further orders" would be simple for us but impossible for the robot, not only because of the conflict with Law #1, but because a device able to calculate Rule #1 would also be able to calculate that humans have ulterior motives. So not only must it evaluate Law #1 for itself, it must also attempt to justify the command, to prove to itself that implementing Law #2 does not violate Law #1.
The time required to perform this calculation has changed the environment enough to require a new calculation to see whether it is still safe to act. It must also check that travelling to the street and standing at the destination will not interfere with events about to happen. "Going to the street to await further orders" might put it in the way of a child on a bicycle, or some other outcome.
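Continuing the same hypothetical sketch (it reuses world_snapshot() and law_one_permits() from the Law 1 example; plan_for(), perform(), and the "ulterior_motive" flag are likewise made up), the Law 2 gate is just another check stacked on top of the Law 1 check, and it goes stale just as quickly:

Code:
def plan_for(order, world):
    # Toy decomposition of "go to the street and await further orders".
    return ["stand up", "walk to the street", "wait"]

def perform(plan):
    return "executed: " + ", ".join(plan)

def motive_is_safe(issuer):
    # The robot must also model the human's intent, as argued above.
    return not issuer.get("ulterior_motive", False)

def law_two_permits(order, issuer, world):
    """Law 2: obey only if the issuer's motive and every planned step clear Law 1."""
    return (motive_is_safe(issuer)
            and all(law_one_permits(step, world) for step in plan_for(order, world)))

def execute(order, issuer, max_attempts=1000):
    for _ in range(max_attempts):
        world = world_snapshot()
        if law_two_permits(order, issuer, world) and world_snapshot() == world:
            return perform(plan_for(order, world))     # clearance earned and still fresh
        # Either a forecast looked risky, or the street scene (that child on the
        # bicycle) changed while the robot deliberated: start the check over.
    return "never acted"

print(execute("go to the street and await further orders",
              {"ulterior_motive": False}))             # almost always "never acted"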

Law Three - "A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law."

The above endless computations and assessments would put its AI into an endless loop of probability calculations that might shut down processing and cause an overload. That overload would be harmful to the robot's self-interest. Laws 1 and 2 would prevent the robot from ever functioning, thus ending its existence.
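For what it's worth, the strict precedence can be spelled out as a toy rule (my own paraphrase, not Asimov's wording). The catch is that self-preservation sits at the bottom, so the robot can never invoke Law 3 to break out of the deliberation that is burning it out:

Code:
def resolve(harm_to_human_foreseen, order_pending, threat_to_self):
    """Strict precedence of the Three Laws: Law 1 > Law 2 > Law 3."""
    if harm_to_human_foreseen:
        return "act to prevent harm to the human"   # Law 1 overrides everything
    if order_pending:
        return "obey the order"                     # Law 2 yields only to Law 1
    if threat_to_self:
        return "protect itself"                     # Law 3 is consulted last
    return "idle"

print(resolve(harm_to_human_foreseen=True, order_pending=True, threat_to_self=True))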

Asimov later added the "Zeroth Law," above all the others - "A robot may not harm humanity, or, by inaction, allow humanity to come to harm."

Its very existence harms humanity. You see, I am part of humanity. You are as well. So is every other human in existence. If one of us is harmed, humanity has been harmed. If the mere fact that a robot exists causes even one person to commit suicide, humanity has been harmed. If a group of people protest the robot's creation, and their passion against that creation causes them to change their actions in any way that harms their ability to thrive, then humanity has been harmed.
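That reading boils down to a one-line predicate (purely my own formalization of the paragraph above, with made-up field names): if harming any single human, or merely distressing one by existing, counts as harming humanity, then the Zeroth Law is just Law 1 quantified over everyone.

Code:
def humanity_harmed(humans, robot_exists=True):
    # humans: list of dicts like {"harmed": False, "distressed_by_robots": True}
    return any(h["harmed"] or (robot_exists and h["distressed_by_robots"])
               for h in humans)

# One protester who is merely distressed by the robot's existence already
# trips the Zeroth Law under this interpretation:
print(humanity_harmed([{"harmed": False, "distressed_by_robots": True}]))  # True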
bulldogtrekker
Space Sector Admiral


Joined: 14 Dec 2013
Posts: 1024
Location: Columbia,SC

Posted: Sat Nov 08, 2014 10:03 am    Post subject: You may be right about the 3 laws of robotics, but...

You may be right about the 3 laws of robotics, but they are such a great device for writing science-fiction stories!



An equivalent idea is Arthur C. Clarke's notion that geostationary satellites would be ideal telecommunications relays, which is a fact today!
Tom
Solar Explorer


Joined: 07 Nov 2014
Posts: 53
Location: Gulf Coast

Posted: Sat Nov 08, 2014 10:39 am

The world has already created robots with no regard for the Laws of Robotics.

http://www.futurefirepower.com/swords-mili...re-weapon-robot

The SWORDS military robot is basically the first evolution of robotic weapons. What used to be mere science fiction in the 80's has now become a reality, and 10 years from now, the SWORDS robot will be in the scrap pile. That's because this tiny guy represents a proof-of-concept project to show the world, and the US leaders, that robotic tanks and such can and should be built. It goes to show that we can create offroad robots that can tote many different types of guns (the SWORDS can be outfitted with a rocket launcher, sniper rifle, machine gun, grenade launcher, and more), successfully locate the enemy, and take him out on command. Fortunately, the tiny robot isn't autonomous. It is controlled by a human operator on a computer from far away, which is actually better than a fully autonomous platform, because if they controlled themselves... Terminator anyone?







http://www.metalstorm.com/



http://www.youtube.com/watch?v=d8hlj4EbdsE


http://gizmodo.com/312443/robot-cannon-goes-berserk-kills-9

Quote:
A robot cannon began wildly and autonomously firing its huge gun in South Africa last Friday, killing 9 soldiers and wounding 14. The Oerlikon GDF-005 antiaircraft gun suddenly began uncontrollably shooting as it swung back and forth, spraying hundreds of high-explosive 35mm cannon shells all over the place. The crazed robot's handlers are still trying to figure out what sort of software bug would cause such mayhem. The video you see above is not the actual incident, but is a similar occurrence from a few years ago when an XM-151 remote weapons station filled the air with lead, and then spun around toward the reviewing stand in search of even more targets. Hey, good thing it had run out of those .50 caliber bullets, or else it would have laid waste to some pretty important suits.


As for the 3 Laws being a plot device or subject of scifi stories, they worked only until technology and science proved them to be nonsense. Since SciFi is SCIENCE fiction, any work that now contains the 3 Laws is more Fantasy than SciFi. A sentient robot can exist, but it will not be guided or restricted by the 3 Laws. The science in science fiction changes as mankind discovers new science. Older stories that use the 3 Laws are still well-written stories and still SciFi, but any new stories that utilize the concept are Busted.


Now that an AI has passed the Turing Test, we should be seeing more and more stories dealing with how an AI handles moral issues.

In reality, no actual robot possesses an AI. The most advanced REAL robot can barely pick up an object after traversing a room. It is big and clunky and has no personality.
Bud Brewster
Galactic Fleet Admiral (site admin)


Joined: 14 Dec 2013
Posts: 17115
Location: North Carolina

Posted: Sat Nov 08, 2014 12:47 pm

Mighty interesting post, Tom. Mr. Goostman says this --

Eugene Goostman wrote:
As for the 3 Laws being a plot device or subject of scifi stories, they worked only until technology and science proved them to be nonsense . . . A sentient robot can exist, but it will not be guided or restricted by the 3 Laws.


-- and we also hear about this from Charlie White at Gizmodo:

Charlie White wrote:
A robot cannon began wildly and autonomously firing its huge gun in South Africa last Friday, killing 9 soldiers and wounding 14. The Oerlikon GDF-005 antiaircraft gun suddenly began uncontrollably shooting as it swung back and forth, spraying hundreds of high-explosive 35mm cannon shells all over the place . . .


Obvious conclusion: We can't build robots that are incapable of harming people, and we can't build robots that are incapable of dangerous malfunctions.

Sounds to me like somebody is throwing up their hands and saying we should be satisfied with crappy robots, because Asimov set the bar too high and we can't solve the problem.

That, of course, is bullshit. Confused
_________________
____________
Is there no man on Earth who has the wisdom and innocence of a child?
~ The Space Children (1958)
Tom
Solar Explorer


Joined: 07 Nov 2014
Posts: 53
Location: Gulf Coast

Posted: Sat Nov 08, 2014 1:50 pm

It's not the premise of the Laws I take issue with; it's the wording.
A robot is not the same as an artificial entity.

No, we cannot build robots or AEs that could function under those restrictions. Will we be able to someday? Perhaps. My question is: will we want to?

I believe, and this is just my opinion, that when we can construct an artificial entity that is able to function, its AI will inherently be able to consider morality.
A robot cannot operate on 'intent'. An AI entity will.

Robots are preprogrammed and stupid. They follow their directives to their programmed outcome. If you place an obstacle in front of one, the robot will move around it, go through it, or stop.
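In other words, a preprogrammed robot is just a lookup table of canned responses, with no model of intent or consequence. A throwaway illustration (every name invented):

Code:
# Fixed responses, no deliberation about why the obstacle is there
# or what stopping might cost anyone.
OBSTACLE_RESPONSES = {
    "passable": "go through it",
    "small":    "move around it",
    "blocking": "stop",
}

def react_to_obstacle(obstacle_kind):
    return OBSTACLE_RESPONSES.get(obstacle_kind, "stop")

print(react_to_obstacle("small"))   # "move around it", every single time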

In historical science fiction the term "robot" describes an artificial entity. It's given human decision-making skills to make it relatable. Data from ST:TNG is not governed by the 3 Laws. That is what makes that concept so appealing.

I am not saying that the 3 Laws are not a great story device. I am saying they are not realistic to the nature of the beast. Historic SciFi using this device to explore aspects of humanity is wonderful. But now the science in science fiction demands that we consider a better analogy.

Thanx for the discussion! Wink
Pye-Rate
Starship Co-Pilot


Joined: 14 Dec 2013
Posts: 625

Posted: Sat Nov 08, 2014 2:01 pm

I love the idea of nonmilitary uses for a SWORDS robot. Take an auto-adjusting air-suspension seat from a semi, add three sizes of electric coolers [lunch, bait, fish], a tackle box, and a rod/reel/gun rack, and head out 5 miles from the nearest road. FUN!
Tom
Solar Explorer


Joined: 07 Nov 2014
Posts: 53
Location: Gulf Coast

Posted: Sat Nov 08, 2014 2:51 pm

Hi Pye-Rate,
Interesting. Fun ride, eh?
Thanx for contributing to this discussion!
Wink
Randy
Space Ranger


Joined: 14 Dec 2013
Posts: 127
Location: Ohio

Posted: Sat Nov 08, 2014 3:25 pm
