FATHER FRANK’S RANTS - Immoral Machines
Rant Number 344 - 18 March 2009

Teaching robots right from wrong. The subtitle of a new book, Moral Machines. A bit weird, given the subtext: the robots in question are military ones. Programmed to kill. Already operational in tragic places like Iraq and Afghanistan. So, wouldn’t ‘immoral machines’ be more truthful? Groan… not really. Not until the Second Coming, when God will forever erase evil from the face of the earth, will killing be universally abolished. In the interim, the androids are likely to thrive.

Teaching machines morality. Tricky. Because to act morally you have first to be able to think. There is no question of a submachine gun or a missile or a drone being moral, as such splendid artefacts are incapable of thought. But could more sophisticated automata think? ‘Machines have manifestly no life, no sense, no feelings, no purposes except their makers’’, writes Professor Peter Geach. ‘There is no question of ascribing to them the activity of thinking.’ But he himself admits that a) he cannot define ‘life’, and b) machines can be so human as to show vanity. Proof? Designed to seek a light source, an automaton swings to and fro before a mirror, owing to the reflected virtual image. Piffling points? Perhaps.

Anyway, a tough behaviourist would brush all this aside. What matters is behaviour, not occult, metaphysical entities like ‘mind’ and ‘thought’. If a robot displays human-like behaviour and responses… that’s enough. Quite sensible. My computer is certainly not ‘alive’ now but, should it start to act purposely and independently, maybe even malevolently, like the notorious HAL of 2001: A Space Odyssey, the question of whether it really was conscious or not would be academic. Its blatant behaviour would override any semantic gaffe.

Behaviour… there’s the rub. Even the most gung-ho military might get nervous at the idea of robotic troopers running around battlefields. What if they malfunctioned and blew away the good guys?
Human beings do that not infrequently in ‘friendly fire’ cases. Automata could hardly be immune from error.

On another level, take sci-fi writer Isaac Asimov’s celebrated three laws of robotics: 1) a robot must not harm a human being; 2) it must obey human orders; 3) it must protect its own existence. Neat. The trouble is that 1) would make a warrior robot as useful as a toy soldier. If a combatant is not prepared to kill or die, what darned use is he? (At most, he could be employed in non-lethal police action, a sort of benign Robocop. But even cops at times have to shoot.) More crucially, about 2): what if two human beings give contradictory orders? Must a robot obey the orders of a ten-year-old child, for example? Should lunatics be obeyed too? And what precisely counts as harm? Is stopping a person from, say, smoking 60 fags a day harm? Preventing an underage girl from getting pregnant – is that injuring her? If it came to something like a utilitarian calculus – always act so as to maximise good results over bad – how would a machine go about it? How would it quantify ‘good’ and ‘bad’? What criteria would it employ? Would the grief of a father whose family had been wiped out by a utilitarian drone be offset by the capture of a nearby enemy outpost? And so on.

Still, machines, moral or not, are wonderful. The computer on which you are reading this is proof. Yet the human response to machines can be fear and hatred. In desperation at having their old ways of life destroyed and being pauperised, the Luddites of the East Midlands and Yorkshire in 1811-13 smashed the stocking frames and power looms. Workers rioted. It made the ruling classes shake in their boots. As unrest spread, a bloody upheaval after the French fashion seemed in the offing. In the end, King Ludd’s threat melted away. ‘No revolution, please, we are British’ triumphed. The machines won. Or, rather, their capitalist makers did. Machines made the industrial revolution possible.
Marx and Engels’ Communist Manifesto, insofar as it celebrates the rise of those bearers of a revolutionary future, the proletarians, is an ode sung to machines. Yet machine-power also makes people anxious. When it comes to robots, the corny Frankenstein myth crops up. Are men creating unhappy monsters? Think of the haunting, crazed gunslinger Yul Brynner plays in that masterly low-budget movie, Westworld. Or of Arnie’s enjoyable first Terminator. They make your flesh creep.

But that is over-dramatic. A sentient, self-conscious automaton does not have to be either wholly good or wholly bad. Why could he not be just like his designers? Ordinary, mediocre. An average sinner. Whatever ethical principles he had been programmed to follow, he could regularly ‘fall short’ of them, just as normal human beings often fall short of morality. Because, you could argue, men too have been ‘programmed’ – by their genes, nature, nurture, upbringing and so on. Christians, for instance, have been taught to follow the Gospel. Love your enemy, turn the other cheek, go the second mile, all that. What a better world it would be if they all did! Patently, the overwhelming majority of Christians do not. They are generally selfish, self-centred, greedy and uncaring. Why should a robot, even one who knows the difference between right and wrong, be different? ‘You human beings do not live up to the programmes you call your “ideals”. Why should we?’ a cheeky automaton could justifiably argue. How would you answer him? ‘You have no right to speak like that. You are only a machine! Shut up and obey your orders!’ Would that be fair?

Still, creating machines capable of killing is a kind of tragic mockery of human generation. Human beings have a killer instinct too, sure, but they also harbour other, better potentialities. ‘You were not made to live like brutes but to follow wisdom and virtue,’ says Dante’s Odysseus. ‘All men desire to know,’ writes Aristotle.
Here is a suitable, decent job for clever robots: classroom teacher. Well, at least they could stand up to rowdy school kids. One thing, however, robots had better not be created with: a sex drive. Because, if they had that drive, I suspect no human being would be safe from them.

Revd Frank Julian Gelli