Tuesday, December 1, 2015

Could AI turn bad?

Artificial intelligence is making massive advances these days, and the technology is often discussed in terms of the positive transformations it could bring about in business and daily life. But in popular media, AI has long promised something else: an inevitable turn for the worse. Whether we're talking about books, TV shows or films, plots involving AI tend to follow the same arc: The AI is introduced at the outset as a productive element, but after some initial rumblings of foreboding, its bad side dramatically emerges. Just consider some popular examples (note that there are significant spoilers ahead if you're not familiar with the works discussed):

  • In the 1968 Stanley Kubrick film 2001: A Space Odyssey — which was developed concurrently as a novel by Arthur C. Clarke — the supercomputer HAL 9000, designed to be "foolproof and incapable of error," accompanies several scientists on a mission to Jupiter. Initially, HAL seems to be allied with the crew, but soon the computer's intelligence drives it to begin asking questions of its own about the mission's objective. That questioning later turns to outright murderous sabotage when HAL cuts off the oxygen supply of one of the crew members, killing him. It's later revealed that HAL's intelligent capabilities have given the computer the ability to register certain human emotions, such as fear. 
  • Another supercomputer that veers into malicious territory is Skynet from the "Terminator" franchise. Like HAL, Skynet was designed by humans to lend a helping virtual hand, but the program's intelligence and self-awareness infuse it with a mission that diverges from its design: wiping out humanity. The franchise's films document the ongoing battle between man and sentient machine. 
  • As cautionary tales about the perils of AI go, you can't get much better than Isaac Asimov's I, Robot, a short story collection that (as its title suggests) concerns a future in which robots gain a sense of individual identity. Once this self-awareness is established, the results are decidedly mixed, as the nine stories in the collection reveal. While Asimov does include stories in which self-aware robots serve as forces of good — in "Robbie," for instance, a robot offers meaningful companionship to a young girl and even saves her life — more often than not, they're sources of conflict. In "Little Lost Robot," for example, an improperly designed robot develops a superiority complex and ultimately attempts to take the life of a human scientist before it is taken out of commission.

Real-life scary AI stories

Those are the fictional accounts of AI and its potential consequences. But these days, we don't have to turn to fiction to see some scary AI in action. Witness a Learning Mind article entitled "Five Creepiest Advances in Artificial Intelligence," which describes real-life, chill-inducing AI developments including:

  • A mentally ill robot: In an effort to better understand mental illness, a group of scientists at the University of Texas at Austin created a supercomputer, DISCERN, specifically designed to probe the cognitive mechanics behind schizophrenia. This involved inundating DISCERN with information geared toward replicating the schizophrenic brain, which, according to Learning Mind, "stores too much information too thoroughly by memorizing everything, even the unnecessary details." As a result, the supercomputer ended up claiming responsibility for a terrorist attack, thanks to confusion in its own memory processing brought on by the massive data dump. (A loose illustration of this failure mode appears in the first sketch after this list.)
  • Greedy and uncompromising robots: We'd like to believe that the robots we bring into this world will be optimally programmed to serve us — that is, selfless and beholden to our every demand. But if an experiment at the Laboratory of Intelligent Systems is any indicator, that may not be the case. The experiment placed a collection of robots in the same physical space and showed them designated sources of "poison" and "food." The robots' assigned objective was to position themselves as close as possible to the food and as far away as possible from the poison. In addition, each robot was outfitted with flashing lights, as well as the capacity to detect the flashing lights of other robots. In theory, these lights could have served as a unifying mechanism guiding every robot toward the food. Instead, the experiment took a decidedly different turn (see the second sketch after this list): "After several phases of the experiment, almost all of the robots turned off their 'beacons', refusing to help each other. But this was not the only outcome of this experiment: Some of the other bots managed to divert other bots away from the 'food' by blinking more intensely."
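
To make the DISCERN anecdote a little more concrete, here's a deliberately crude Python sketch of the failure mode Learning Mind describes: a memory that indexes everything, with no filtering of incidental detail and no firm record of where each fragment came from, can end up splicing unrelated stories together. The NaiveStoryMemory class and everything about its behavior are invented for illustration; this is not the actual DISCERN model, which is a far more sophisticated neural network system.

```python
from collections import defaultdict

class NaiveStoryMemory:
    """A toy associative memory that keeps every detail it ever sees."""

    def __init__(self):
        # word -> list of (story_id, sentence) fragments containing that word
        self.memory = defaultdict(list)

    def learn(self, story_id, sentences):
        # "Hyperlearning": index every sentence under every one of its words,
        # with no filtering of unnecessary detail.
        for sentence in sentences:
            for word in sentence.lower().split():
                self.memory[word].append((story_id, sentence))

    def recall(self, cue):
        # Retrieval pulls fragments from every story sharing the cue word,
        # with nothing to keep their sources apart.
        return self.memory.get(cue.lower(), [])

mem = NaiveStoryMemory()
mem.learn("self", ["I went to the station yesterday"])
mem.learn("news", ["A bomb went off at the station"])

# The cue "station" now returns fragments of both stories interleaved --
# a crude analogue of the source confusion the researchers observed.
for source, fragment in mem.recall("station"):
    print(source, "->", fragment)
```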
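And here's a rough simulation, in the same hypothetical spirit, of the beacon-switching-off dynamic in the robot experiment: when lighting up near food attracts competitors who dilute the finder's share, selection tends to drive the signaling rate down over generations. Every parameter and rule below is invented for illustration; the real experiment used physical robots and a far richer evolutionary setup.

```python
import random

POP_SIZE = 100       # robots per generation
GENERATIONS = 300    # how long to let selection run
ROUNDS = 10          # foraging rounds per generation
FIND_PROB = 0.2      # chance a robot finds food on its own each round
FOOD_VALUE = 10.0    # total payoff available at a food site
MAX_JOINERS = 5      # robots that swarm in when they see a beacon
MUT_STEP = 0.05      # mutation size on the signaling gene

def generation_fitness(genes):
    """Score each robot; its gene is the probability it signals at food."""
    fitness = [0.0] * len(genes)
    for _ in range(ROUNDS):
        finders = [i for i in range(len(genes)) if random.random() < FIND_PROB]
        others = [i for i in range(len(genes)) if i not in finders]
        for i in finders:
            occupants = 1
            if others and random.random() < genes[i]:
                # Beacon on: nearby robots converge on the same food site.
                joiners = random.sample(others, min(len(others), MAX_JOINERS))
                occupants += len(joiners)
                for j in joiners:
                    fitness[j] += FOOD_VALUE / occupants
            # Crowding dilutes the finder's own share of the food.
            fitness[i] += FOOD_VALUE / occupants
    return fitness

def evolve():
    genes = [random.random() for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        fit = generation_fitness(genes)
        new_genes = []
        for _ in range(POP_SIZE):
            # Tournament selection: inherit the fitter of two random parents.
            a, b = random.sample(range(POP_SIZE), 2)
            parent = genes[a] if fit[a] >= fit[b] else genes[b]
            child = min(1.0, max(0.0, parent + random.uniform(-MUT_STEP, MUT_STEP)))
            new_genes.append(child)
        genes = new_genes
        if gen % 50 == 0:
            print(f"gen {gen:3d}: mean signaling rate = {sum(genes) / len(genes):.2f}")

if __name__ == "__main__":
    evolve()  # the mean signaling rate typically drifts toward zero
```

In this toy version, a silent finder keeps the whole food payoff while a signaling finder splits it with the crowd it attracts, so under purely individual selection the honest beacon is the first thing to go.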

Questioning the future of AI  

With AI all around us, and with some of its scarier sides outlined above, this all raises a key question: Will AI operate on our terms, or will it turn bad? Unsurprisingly, there's no clear consensus on the issue, and there are credible people on both sides of the aisle: those who tout AI's potential benefits, and those who fear it could take a dark turn.

As for that latter category of skeptics, one need look no further than inventor, engineer and billionaire Elon Musk, one of the most public and high-profile critics of AI advancements. In an interview with CNN Money, Musk explained that he sees AI on a trajectory to grow into a technology with the potential to exceed human intelligence. Once that happens, Musk contends, humans "will no longer remain in charge of the planet." He's hardly the only one with this opinion, as Bill Gates and Stephen Hawking have expressed similar sentiments. 

But others have pointed to the helpful role that AI currently plays in our lives. Vanity Fair's Kurt Andersen, for instance, explained that AI is already at work helping us cancel airline reservations or refill prescriptions; in his view, real issues could arise only when AI reaches a level where it exceeds human intelligence.

No matter which side you're on, whether you're excited by AI or scared of it, there's no denying that preparation is necessary. These days, businesses looking into AI want to ensure that it operates on their behalf, and no company wants to find itself in the crosshairs of malevolent AI. That makes it particularly important for organizations to build strong cyber defenses as they push forward with new technology deployments.



from Trend Micro Simply Security http://ift.tt/1PYXiXt
via IFTTT
