There has been a great deal of esteemed literature, film, television, radio and satire warning humanity of the dangers posed by the machines. The buzzword of the last year is undoubtedly cognitive computing. Billions are spent by IT firms building such capabilities, and millions are spent on marketing to exaggerate the prowess of their machines. Last year Google bought a company called DeepMind: a collection of UCL neuroscientists and computer scientists whose raison d'être was to build a computer that could learn. The extent to which they have succeeded in their endeavour is an intriguing debate that ultimately collapses into semantics: what is it to learn!?
However, for the sake of argument, let us assume that we have a computer with the capacity to learn without limit, that is, to scale to accommodate the data and hardware demands of a mind that grows as a computer might have to: the most brilliant mind in the world. I will call this hypothetical phenomenon Faust. The question is: what is the danger, and where does it come from?
One's first response is that the computer itself poses the danger. It will enslave, harvest or exterminate mankind, as all these films and books lead us to believe: I, Robot, The Matrix, Avengers: Age of Ultron. The list goes on. One must concede that this is a fear: once computers develop algorithms that calculate humanity's value, humans may be judged a plague on the resources of the Earth and wiped out to save the world, whether by automated contamination of drinking water or the release of gases over cities. As of yet, computers have not shown this propensity. As with all the best philosophy, I invoke Ockham's razor: all else being equal, the simplest answer is the best. Consider our empirical evidence of the greatest danger posed to mankind over history: it is other humans, often other races. No shit, Sherlock, right?

Now marry the two: a kind of computer-brain Minority Report. What happens when a computer has an algorithm that identifies the exact profile of a murderer? This equips the police with the ability to preemptively intervene and protect society. From a utilitarian stance this is morally good. From a jurisprudential stance this is the right course of action, if prison is about protecting those outside from those inside. But what of our intuition?… Contentious.
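To make the worry concrete, here is a minimal, purely hypothetical sketch of what such a "profile of a murderer" amounts to in practice: a statistical model that maps a person's features to a risk score, plus a threshold that triggers intervention. Every feature, weight and threshold below is invented for illustration; no real system is being described.

```python
# Hypothetical sketch: a "murderer profile" is, in practice, a model
# mapping personal features to a risk score. All features, weights and
# thresholds here are invented for illustration only.
from dataclasses import dataclass
import math


@dataclass
class Profile:
    prior_convictions: int
    age: int
    flagged_associates: int  # known contacts already flagged by the system


def risk_score(p: Profile) -> float:
    """Logistic model: squashes a weighted sum of features into (0, 1)."""
    z = 0.8 * p.prior_convictions + 0.5 * p.flagged_associates - 0.05 * p.age
    return 1.0 / (1.0 + math.exp(-z))


# Who chooses this number, and on what authority? The moral weight of the
# whole system hides in this single constant.
INTERVENTION_THRESHOLD = 0.9


def should_intervene(p: Profile) -> bool:
    return risk_score(p) >= INTERVENTION_THRESHOLD
```

Notice that the mathematics is trivial; the contentious part lies entirely in which features are fed in and where the threshold is set, and those are human choices.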

One step further: the algorithm shows that exterminating males of the ethnically native skin type would make that society safer. Maybe we don't go as far as exterminating the existing men, but this could lead more authoritarian societies to instigate breeding programmes, or to prohibit the development of a male foetus when both parents are of the native ethnicity. Bring on the "H" word. It wasn't me, I didn't know any better. It wasn't me, I was ordered. It wasn't my choice: the computer calculated it.

Wherever this runs to, I believe that the analytical reports a computer can make, which the development of technology renders increasingly nuanced and sophisticated, are the real concern: they are returned to humans, and humans, we know, err. Who should be given this knowledge? Whose decision does it become to act upon this data? Like the decrypted Enigma traffic that reported on German strikes in the war, and the team who decided which targets to intervene on based on a calculus of risk: who decides what is the good of the nation, the good of mankind? Even if the machines do not wipe out the Jews, humans cannot be given a mathematical justification that this is the right decision: machines must be moral too!
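That Enigma-style dilemma can be put in code to show how thin the machine's contribution really is. A purely hypothetical sketch, with invented numbers: acting on an intercepted warning saves lives now, but risks revealing that the cipher is broken and losing the source of future warnings.

```python
# Hypothetical sketch of the Enigma-style triage. The machine can only do
# the arithmetic; deciding that "expected net lives" is the right measure
# of the good of the nation is a moral choice no algorithm makes.
from dataclasses import dataclass


@dataclass
class Intercept:
    lives_saved_if_acted_on: int
    prob_source_compromised: float   # chance that acting reveals the decryption
    future_lives_lost_if_compromised: int


def expected_net_lives(i: Intercept) -> float:
    """Expected lives saved now, minus the expected cost of losing the source."""
    return (i.lives_saved_if_acted_on
            - i.prob_source_compromised * i.future_lives_lost_if_compromised)


def act_on(i: Intercept) -> bool:
    return expected_net_lives(i) > 0
```

The computer returns a number; a human must still decide that this number, and not some other, is the measure of the right decision.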
