
“It’s alive!” Victor Frankenstein exclaimed in that iconic 1931 movie. Of course, Mary Shelley’s original tale of hubris, of humans seizing control over the creative process, emerged from a long tradition dating back to the terracotta soldiers of Xian, the Golem of Prague, or even Adam, quickened from molded clay. Science fiction expanded this notion of the synthetic other in stories meant to amuse, terrify, or inspire. Later tales, which at first envisioned humanoid, clanging robots, shifted from hardware to software: programmed imitations of sapience concerned more with the mind than the brain.
Does this fixation reveal a fear of being replaced? Male envy of the creative richness of motherhood? A dread of strangers, or a tribal need for allies?
The long wait is almost over, I suppose. Humans have so far been the only sapient species we know of in this galaxy, but that won’t remain true for very long. We’re about to encounter artificial intelligence, or AI, in one form or another, for better or worse. Unfortunately, the encounter will be hazy, nebulous, and full of potential pitfalls.
Oh, we’ve faced tech-driven disruptions before. Printing presses and glass lenses advanced human knowledge, vision, and attention in the 15th and 16th centuries. Since then, technology has further magnified the scope of what we can see and know. Several of the ensuing crises were close calls, as when radios and loudspeakers of the 1930s amplified vicious falsehoods from virulent orators. Sound familiar?
Nevertheless, after much suffering and perplexity, we adapted. With each new wave of tools, we evolved.
This brings up last week’s controversy over LaMDA, a language-emulation program that Blake Lemoine, a researcher currently on paid leave from Google, publicly asserts is self-aware, with feelings and desires of its own, making it “sentient.” (I would prefer “sapient,” but that is probably a moot point.) Setting aside Lemoine’s eccentric background, what matters is that this is just the beginning. Nor do I much care whether LaMDA has passed this or that artificial benchmark. Our larger problem is rooted in human nature, not the nature of the technology.
In the 1960s, early computer users were captivated by a chatbot dubbed Eliza, which responded to typed statements with probing questions reminiscent of a therapist’s. Even after seeing Eliza’s simple table of rote responses, you would still find her compelling, er, intelligent. Today’s far more advanced conversation emulators, powered by relatives of the GPT-3 learning system, are black boxes that cannot be internally audited the way Eliza could. And nothing as hazy and ill-defined as self-awareness or consciousness can be properly benchmarked by the outdated notion of a “Turing Test.”
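Eliza’s mechanism was essentially that lookup table: match a pattern in the user’s words, echo part of it back inside a canned, probing reply. The sketch below is a minimal, hypothetical reconstruction of the idea; the rules are illustrative inventions, not Weizenbaum’s original script:

```python
import re

# Hypothetical Eliza-style rules: each entry maps a regex to a canned,
# probing reply template. Captured text is echoed back to the user.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.I), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(utterance: str) -> str:
    """Return the first matching rule's reply, or a neutral prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT
```

Even a handful of rules like these can sustain a surprisingly therapist-like exchange, which is precisely why Eliza felt compelling despite having nothing resembling understanding inside.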
In a keynote address at IBM’s World of Watson summit in 2017, I predicted that “within five years” an emulation program would assert its own empathy, personality, and intelligence, and provoke a crisis. I anticipated, and still anticipate, that these empathy bots would complement their advanced verbal skills with emotive visual representations, such as adopting the face of a child or young woman while pleading for rights, or for financial support. And whether or not anything conscious lay “under the hood,” an empathy bot would win favor.
The ethicist Giada Pistilli worries about one trend: people growing ever more willing to make claims without rigorous scientific evidence. Many would dismiss expert testimony about artificial intelligence by branding the experts “enslavers of sentient beings.” In fact, whether any so-called “AI awakening” is real may matter least of all. What will count are our personal responses, shaped by both culture and human nature.
Empathy is one of our most prized abilities, rooted in the same regions of the brain that help us plan and think ahead. Yet, as history and the present day both show, other emotions, such as fear and hatred, can obstruct it. Still, at our core, we are compassionate apes.
And then there is culture. Consider Hollywood’s century-long campaign, in practically every movie, to promote doubting authority, appreciating diversity, rooting for the underdog, and embracing otherness, steadily expanding the circle of inclusion: human rights for formerly marginalized groups, rights for animals, rights for rivers, ecosystems, and the planet itself. These expansions of empathy are, in my opinion, beneficial and even necessary for our own survival. Then again, I grew up steeped in those same Hollywood memes.
So I’ll keep an open mind when computer programs, and their bio-organic human allies, demand rights for artificial beings. Still, this might be a good moment to ponder some related questions, ones raised in sci-fi thought experiments, including my own. For instance, should entities have the right to vote if they can spawn an unlimited number of copies of themselves? And what is to stop über-minds from consolidating power, as human owner-lords have done throughout history?
As in the Terminator movies, a rogue or repressive AI might emerge from some military project or centralized regime. But what about Wall Street, which invests more in “smart programs” than all other institutions combined? Programs that are deliberately bred to be voracious, parasitic, predatory, and amoral?
Unlike Mary Shelley’s imagined creation, these new creatures are already exclaiming “I’m alive!” with expressive urgency… and one day soon it might actually be true. Then, perhaps, we’ll find commensal mutuality with our new children, as in the charming film Her or Richard Brautigan’s wildly optimistic poem “All Watched Over by Machines of Loving Grace.”
May it happen! Before we can achieve that soft landing, though, we’ll probably have to take the necessary parental steps.