Given the extent to which I utterly loathe driving, I am
intrigued by the idea of self-driving cars such as those
being tested by Google, although, if it’s Google, I can only imagine that
any drive will thoroughly inundate the rider with ads, which may end up being
even more unpleasant than driving. (Actually, I’d rather walk or take public
transport but, this being the U.S., that’s not an option in most places.)
The
New Yorker, though, poses some
interesting questions vis-à-vis self-driving cars and the potentially Robot
Holocaust-like technology underlying it all.
Eventually (though not yet) automated
vehicles will be able to drive better, and more safely than you can; no
drinking, no distraction, better reflexes, and better awareness (via
networking) of other vehicles. Within two or three decades the difference
between automated driving and human driving will be so great you may not be
legally allowed to drive your own car...
Hope springs eternal! But, perhaps more importantly, we may
be ushering in
the era in which it will no longer
be optional for machines to have ethical systems. Your car is speeding along a
bridge at fifty miles per hour when an errant school bus carrying forty innocent
children crosses its path. Should your car swerve, possibly risking the life of
its owner (you), in order to save the children, or keep going, putting all
forty kids at risk? If the decision must be made in milliseconds, the computer
will have to make the call.
Now, suppose there were no such thing as a hypothetical
situation...
But if we have reached this point, why would the school bus
be errant in the first place? (Yeah, I know, think about MS Windows and then
extrapolate that to a vehicle’s OS; “You’re about to die in a fiery crash. But
there are unused icons on your desktop. Would you like to fix them?”)
And in the second place, I’ve seen some viral videos and bits of the movie Bully, so I’m not all that convinced of the innocence of
school kids; I say off the bridge with ’em. (Oh, I’m kidding. Sort of.... Um,
can we pick which of the forty?)
There are, of course, Isaac Asimov’s classic Three Laws of
Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
And, it being Asimov, 4. All robots must sport gigantic mutton-chop
sideburns.
Asimov’s short story collection I, Robot demonstrated pretty effectively just how problematic
those laws could be. And think about how you might define terms like “injure,”
“inaction,” “harm,” “protection.” Norman, coordinate! (Star Trek’s “I,
Mudd” episode also illustrated these conundra, albeit in very silly ways,
and Futurama’s “I, Roommate” in more
intentionally silly ways.)
Still, given that after all these millennia we still have
not figured out how to get humans to act
ethically and morally, I suppose it’s no surprise that it will be a challenge
to get our machines to do so.