Source Text for the 31st Han Suyin Translation Contest (English-to-Chinese)
I.
Artificial Intelligence (A.I.) is having a moment, albeit one marked by crucial ambiguities. Cognoscenti including Stephen Hawking, Elon Musk and Bill Gates have recently weighed in on its potential and perils.
After reading Nick Bostrom’s book “Superintelligence,” Musk even wondered aloud if A.I. may be “our biggest existential threat.”
Positions on A.I. are split, and not just on its dangers. Some insist that “hard A.I.” (with human-level intelligence) can never exist, while others conclude that it is inevitable. But in many cases these debates may be missing the real point of what it means to live and think with forms of synthetic intelligence very different from our own.
That point, in short, is that a mature A.I. is not necessarily a humanlike intelligence, or one that is at our disposal. If we look for A.I. in the wrong ways, it may emerge in forms that are needlessly difficult to recognize, amplifying its risks and retarding its benefits.
This is not just a concern for the future. A.I. is already out of the lab and deep into the fabric of things. “Soft A.I.,” such as Apple’s Siri and Amazon recommendation engines, along with infrastructural A.I., such as high-speed algorithmic trading, smart vehicles and industrial robotics, are increasingly a part of everyday life — part of how our tools work, how our cities move and how our economy builds and trades things.
Unfortunately, the popular conception of A.I., at least as depicted in countless movies, games and books, still seems to assume that humanlike characteristics (anger, jealousy, confusion, avarice, pride, desire, not to mention cold alienation) are the most important ones to be on the lookout for. This anthropocentric fallacy may contradict the implications of contemporary A.I. research, but it is still a prism through which much of our culture views an encounter with advanced synthetic cognition.
The little boy robot in Steven Spielberg’s 2001 film “A.I. Artificial Intelligence” wants to be a real boy with all his little metal heart, while Skynet in the “Terminator” movies is obsessed with the genocide of humans. We automatically presume that the Monoliths in Stanley Kubrick and Arthur C. Clarke’s 1968 film, “2001: A Space Odyssey,” want to talk to the human protagonist Dave, and not to his spaceship’s A.I., HAL 9000.
I argue that we should abandon the conceit that a “true” Artificial Intelligence must care deeply about humanity — us specifically — as its focus and motivation. Perhaps what we really fear, even more than a Big Machine that wants to kill us, is one that sees us as irrelevant. Worse than being seen as an enemy is not being seen at all.
Unless we assume that humanlike intelligence represents all possible forms of intelligence – a whopper of an assumption – why define an advanced A.I. by its resemblance to ours?
After all, “intelligence” is notoriously difficult to define, and human intelligence simply can’t exhaust the possibilities. Granted, doing so may at times have practical value in the laboratory, but in cultural terms it is self-defeating, unethical and perhaps even dangerous.
We need a popular culture of A.I. that is less parochial and narcissistic, one that is based on more than simply looking for a machine version of our own reflection. As a basis for staging encounters between various A.I.s and humans, that would be a deeply flawed precondition for communication.
Needless to say, our historical track record with “first contacts,” even among ourselves, does not provide clear comfort that we are well-prepared.
II.
The idea of measuring A.I. by its ability to “pass” as a human – dramatized in countless sci-fi films, from Ridley Scott’s “Blade Runner” to Spike Jonze’s “Her” – is actually as old as modern A.I. research itself. It is traceable at least to 1950 when the British mathematician Alan Turing published “Computing Machinery and Intelligence,” a paper in which he described what we now call the “Turing Test,” and which he referred to as the “imitation game.”
There are different versions of the test, all of which are revealing as to why our approach to the culture and ethics of A.I. is what it is, for good and bad. For the most familiar version, a human interrogator asks questions of two hidden contestants, one a human and the other a computer.
Turing suggests that if the interrogator usually cannot tell which is which, and if the computer can successfully pass as human, then can we not conclude, for practical purposes, that the computer is “intelligent”?
More people “know” Turing’s foundational text than have actually read it. This is unfortunate, because the text is marvelous, strange and surprising. Turing introduces his test as a variation on a popular parlor game in which two hidden contestants, a woman (player A) and a man (player B), try to convince a third that he or she is a woman through their written responses to leading questions.
To win, one of the players must convincingly be who they really are, whereas the other must try to pass as another gender. Turing describes his own variation as one where “a computer takes the place of player A,” and so a literal reading would suggest that in his version the computer is not just pretending to be a human, but pretending to be a woman. It must pass as a she.
Other versions had it that player B could be either a man or a woman. It would seem a very different kind of game if only one player is faking, or if both are, or if neither of them is. Now that we give the computer a seat, we may have it pretending to be a woman alongside a man pretending to be a woman, both trying to deceive the interrogator, who must decide which contestant is which.
Or perhaps a computer pretending to be a man pretending to be a woman, along with a man pretending to be a woman, or even a computer pretending to be a woman pretending to be a man pretending to be a woman! In the real world, of course, we already have all of the above.
“The Imitation Game,” Morten Tyldum’s Oscar-winning 2014 film about Turing, reminds us that the mathematician himself also had to “pass” — in his case, as a straight man in a society that criminalized homosexuality. Upon the discovery that he was not what he appeared to be, he was forced to undergo the horrific medical treatment known as “chemical castration.”
Ultimately, the physical and emotional pain was too great, and he committed suicide. The episode was a grotesque tribute to a man whose contribution to defeating Hitler’s military was still, at that time, a state secret. Turing was only recently given a posthumous pardon, but the tens of thousands of other British men sentenced under similar laws have not been.
One notes the sour ironic correspondence between asking an A.I. to “pass” the test in order to qualify as intelligent — to “pass” as a human intelligence — with Turing’s own need to hide his homosexuality and to “pass” as a straight man. The demands of both bluffs are unnecessary and profoundly unfair.
Passing as a person, as a white or black person, or as a man or woman, for example, comes down to what others see and interpret. Because everyone else is already willing to read others according to conventional cues (of race, sex, gender, species, etc.) the complicity between whoever (or whatever) is passing and those among which he or she or it performs is what allows passing to succeed.
Whether an A.I. is trying to pass as a human or is merely in drag as a human is another matter. Is the ruse all just a game or, as for some people who are compelled to pass in their daily lives, an essential camouflage? Either way, “passing” may say more about the audience than about the performers.
We would do better to presume that in our universe, “thinking” is much more diverse, even alien, than our own particular case. The real philosophical lessons of A.I. will have less to do with humans teaching machines how to think than with machines teaching humans a fuller and truer range of what thinking can be (and for that matter, what being human can be).
III.
That we would wish to define the very existence of A.I. in relation to its ability to mimic how humans think that humans think will be looked back upon as a weird sort of speciesism. The legacy of that conceit helped to steer some older A.I. research down disappointingly fruitless paths, hoping to recreate human minds from available parts.
It just doesn’t work that way. Contemporary A.I. research suggests instead that the threshold by which any particular arrangement of matter can be said to be “intelligent” doesn’t have much to do with how it reflects humanness back at us. As Stuart Russell and Peter Norvig (now director of research at Google) suggest in their essential A.I. textbook, biomorphic imitation is not how we design complex technology.
Airplanes don’t fly the way birds fly, and we certainly don’t try to trick birds into thinking that airplanes are birds in order to test whether those planes “really” are flying machines. Why do it for A.I., then? Today’s serious A.I. research does not treat the Turing Test as an objective criterion of success, and yet in our popular culture of A.I. the test’s anthropocentrism retains a durable conceptual hold.
Like the animals who talk like teenagers in a Disney movie, other minds are conceivable mostly by way of puerile ventriloquism.
Where is the real injury in this? If we want everyday A.I. to be congenial in a humane sort of way, so what? The answer is that we have much to gain from a more sincere and disenchanted relationship to synthetic intelligences, and much to lose by keeping illusions on life support.
Some philosophers write about the possible ethical “rights” of A.I. as sentient entities, but that’s not my point here. Rather, the truer perspective is also the better one for us as thinking technical creatures.
Musk, Gates and Hawking made headlines by speaking to the dangers that A.I. may pose. Their points are important, but I fear they were largely misunderstood by many readers. Relying on efforts to program A.I. not to “harm humans” (inspired by Isaac Asimov’s “three laws” of robotics from 1942) makes sense only when an A.I. knows what humans are and what harming them might mean.
There are many ways that an A.I. might harm us that have nothing to do with its malevolence toward us, and chief among these is exactly following our well-meaning instructions to an idiotic and catastrophic extreme. Instead of mechanical failure or a transgression of moral code, the A.I. may pose an existential risk because it is both powerfully intelligent and disinterested in humans.
To the extent that we recognize A.I. by its anthropomorphic qualities, or presume its preoccupation with us, we are vulnerable to those eventualities.
Whether or not “hard A.I.” ever appears, the harm is also in the loss of all that we prevent ourselves from discovering and understanding when we insist on protecting beliefs we know to be false. In the 1950 essay, Turing offers several rebuttals to his speculative A.I., including a striking comparison with earlier objections to Copernican astronomy.
Copernican traumas that abolish the false centrality and absolute specialness of human thought and species-being are priceless accomplishments. They allow for human culture based on how the world actually is more than on how it appears to us from our limited vantage point.
Turing referred to these as “theological objections,” but one could argue that the anthropomorphic precondition for A.I. is a “pre-Copernican” attitude as well, however secular it may appear. The advent of robust inhuman A.I. may let us achieve another disenchantment, one that should enable a more reality-based understanding of ourselves, our situation, and a fuller and more complex understanding of what “intelligence” is and is not.
From there we can hopefully make our world with greater confidence that our models are good approximations of what’s out there (always a helpful thing).
Lastly, the harm is in perpetuating a relationship to technology that has brought us to the precipice of a Sixth Great Extinction. Arguably the Anthropocene itself is due less to technology run amok than to the humanist legacy that understands the world as having been given for our needs and created in our image.
We hear this in the words of thought leaders who evangelize the superiority of a world where machines are subservient to the needs and wishes of humanity. If you think so, Google “pig decapitating machine” (actually, just don’t) and then let’s talk about inventing worlds in which machines are wholly subservient to humans’ wishes.
One wonders whether it is only from a society that once gave theological and legislative comfort to chattel slavery that this particular affirmation could still be offered in 2015 with such satisfied naïveté. This is the sentiment — this philosophy of technology exactly — that is the basic algorithm of the Anthropocenic predicament, and consenting to it would also foreclose adequate encounters with A.I. It is time to move on. This pretentious folklore is too expensive.
Benjamin H. Bratton (@bratton) is an associate professor of visual arts at the University of California, San Diego. His next book, “The Stack: On Software and Sovereignty,” will be published this fall by the MIT Press.