Building A Better Human
The future enters into us long before it happens, the German poet Rainer Maria Rilke once said. This is no longer a metaphor. The future is entering us. We eat genetically modified food. We submit to implanted devices that go well beyond the familiar heart pacemaker. We tinker with human tissue, developing artificial bone and skin for transplantation. We are on the verge of “smart” prosthetics, such as retinal implants that restore vision in damaged eyes. Such devices will ultimately be networked, allowing, say, a subcutaneous chip to transmit a person’s entire medical history to a physician far away. Such chips could become as commonplace and as desirable as mobile phones: we will become our machines.
When the word “cyborg” first appeared in the middle of the 20th century, it was strictly the stuff of science fiction. Everybody knew you couldn’t put human physiology under mechanical or electronic control. You couldn’t stitch technology into tissue. The idea of, say, an implant of neural circuits inside the skull, proposed as a cure for cerebellum damage, would have been at best distasteful. The notion of a hybrid human would have seemed like sacrilege. That was then.
Today some researchers believe that cyborgs will be possible within 50 years, or at least that humans will have so many manufactured parts as to be virtually indistinguishable from cyborgs. Machines might be so assimilated to us, or we to them, as to raise the most fundamental questions. As technology fills you up with synthetic parts, at what point do you cease to be fully human? One quarter? One third? What part of us is irreplaceably human, such that if we augmented it with technology we would become some other kind of being? The brain? Or is the brain merely a conducive medium, our humanity defined more by the content of our thought and the intensity of our emotions than by the neural circuitry? At bottom lies a critical issue for a technological age: are some kinds of knowledge so terrible that they should not be pursued? If there can be such a thing as a philosophical crisis, this will be it. These questions are especially vexing because they lie at the convergence of three domains that are so far hardly on speaking terms: technology, politics and ethics.
There have always been dangerous technologies. The 20th century, which might as well be called the age of industrialized murder, is only the most obvious example. But technology is upping the ante by creating fields where benign intentions could lead to brutal outcomes. Technology will soon acquire the inexorability of the ocean tides. But is human civilization equipped to keep pace? Engineers tend to associate history with progress. But what in our history inspires confidence in our ability to channel technology away from destructive uses? Technology is evolving a thousand times faster than our ability to change our social institutions. Moreover, with each of these new technologies, a sequence of small, individually sensible advances leads to an accumulation of great power and, concomitantly, great danger. Unlike 20th-century technologies such as nuclear weapons, which were self-limiting because they depended on scarce and expensive raw materials, the new technologies could produce accidents and abuses that are well within the reach of individuals or small groups. Knowledge alone will enable the use of them. The extinction of the human species is all too conceivable.
Enter philosophy. The most common definitions of “human” proceed from an assertion of an intelligence unique to us, but this is precisely what technology is eroding. Is there a type of intelligence that computers could not acquire? Is intelligence, for example, the capacity to innovate? Is it the ability to criticize your own projects and values (in computing terms, the ability to override your instruction set)? How about the ability to create by accident? If cyborgs are less error-prone than humans, might they be less creative? And what about sheer fancifulness? Albert Einstein always said that thinking like a child was what enabled him to hit upon the theory of relativity.
In the end the measure of humanity is a philosophical matter. Philosophy, however, has nothing to say about such things. Academic philosophers spent much of the last century bankrupting their discipline. With a few honorable exceptions, they preoccupied themselves with questions of method and nomenclature, such as: under what linguistic conditions would it be meaningful to ask about the definition of “the human”? As Bernard Williams wrote in his 1972 book Morality: “Contemporary moral philosophy has found an original way of being boring, which is by not discussing moral issues at all.” Who, then, can speak on moral issues? Certainly not the engineers. The problem is not the technology, which in any event can’t be stopped. The problem is that engineers are making decisions for the rest of us. They are the last people to understand what an acceptable risk is. Deciding what counts as an acceptable risk will be the great decision of the next decade. It goes well beyond the mere commercial viability of new technologies, though many will think that is all we need to know. It goes to who we think we are. One way: every possibility is welcome, no matter how dangerous, because we are a species that loves knowledge. The other: we don’t want to be overcome by technology. But that’s what it means to be human. You have a choice. Take your pick.
Question 1:
a) Explain any five of the following words as they are used in the passage:
- Tinker
- Convergence
- Inexorability
- Concomitantly
- Conceivable
- Override
- Bankrupting
- Viability
Question 2:
a) Write five sentences with the words chosen in Question 1(a) to illustrate their meaning. YOUR SENTENCES SHOULD NOT DEAL WITH THE SUBJECT MATTER OF THE PASSAGE.
b) Justify the author’s rectification of Rilke’s saying (“the future is entering us”).
Question 3: In what respects might technology call our humanity, and indeed all mankind, into question?
Question 4: What point is the author making when he mentions Albert Einstein?