Can artificial intelligence learn to scare us?

Can artificial intelligence learn to scare us? Just in time for Halloween, a research team from the MIT Media Lab’s Scalable Cooperation group has introduced Shelley: the world’s first artificial intelligence-human horror story collaboration.

Shelley:


Shelley, named for English writer Mary Shelley — best known as the author of “Frankenstein; or, The Modern Prometheus” — is a deep-learning-powered artificial intelligence (AI) system that was trained on over 140,000 horror stories from Reddit’s infamous r/nosleep subreddit.

Where Shelley Lives:


She lives on Twitter, where every hour, @shelley_ai tweets out the beginning of a new horror story and the hashtag #yourturn to invite a human collaborator.

Anyone is welcome to reply to the tweet with the next part of the story; Shelley then replies with the part after that, and so on. The results are weird, fun, and unpredictable horror stories that showcase both creativity and collaboration — traits that probe the limits of artificial intelligence and machine learning.
“Shelley is a combination of a multi-layer recurrent neural network and an online learning algorithm that learns from the crowd’s feedback over time,” explains Pinar Yanardag, the project’s lead researcher. “The more collaboration Shelley gets from people, the more and scarier stories she will write.”

Shelley starts stories based on the AI’s own learning data set, but she responds directly to additions to the story from human contributors — which, in turn, adds to her knowledge base.
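The loop described above — model writes an opening, a human replies, the model absorbs the reply before continuing — can be sketched in miniature. Shelley's real model is a multi-layer recurrent neural network; as a lightweight, self-contained stand-in, the sketch below uses a character-level n-gram model whose counts are updated online from each human contribution. The class name, corpus, and prompts are all hypothetical illustrations, not Shelley's actual code.

```python
import random
from collections import defaultdict

class TinyStoryModel:
    """Character-level n-gram stand-in for Shelley's RNN.

    Illustrates the collaboration loop: the model generates an
    opening, a human adds text, and the model updates its counts
    (a crude form of online learning) before continuing.
    """

    def __init__(self, order=3, seed=0):
        self.order = order
        self.counts = defaultdict(list)  # context -> observed next chars
        self.rng = random.Random(seed)

    def learn(self, text):
        """Online update: absorb new text into the model's counts."""
        for i in range(len(text) - self.order):
            context = text[i:i + self.order]
            self.counts[context].append(text[i + self.order])

    def generate(self, prompt, length=60):
        """Continue `prompt` by sampling one character at a time."""
        out = prompt
        for _ in range(length):
            choices = self.counts.get(out[-self.order:])
            if not choices:
                break  # unseen context: stop the story here
            out += self.rng.choice(choices)
        return out

# A tiny horror-flavored corpus (stand-in for the r/nosleep training set).
corpus = ("the house was dark and the door was open. "
          "the door creaked and the dark hall was silent. ")
model = TinyStoryModel(order=3)
model.learn(corpus)

opening = model.generate("the d")                  # the AI starts the story
human_reply = " i heard a voice behind the wall."  # a human collaborator adds on
model.learn(opening + human_reply)                 # feedback folded back in
continuation = model.generate("the w")             # the AI continues the story
```

An n-gram table is obviously far cruder than an RNN, but the structure of the loop — generate, collect a human turn, update, generate again — is the same idea the project describes.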

Each completed story is then collected on the Shelley project website.


“Shelley’s creative mind has no boundaries,” the research team says. “She writes stories about a pregnant man who woke up in a hospital, a mouth on the floor with a calm smile, an entire haunted town, a faceless man on the mirror; anything is possible!”

One final note on Shelley:


The AI was trained on a subreddit filled with adult content, and the researchers have limited control over her, so parents beware.


Why Does Artificial Intelligence Scare Us So Much?


When people see machines that respond like humans, or computers that perform feats of strategy and cognition mimicking human ingenuity, they sometimes joke about a future in which humanity will need to accept robot overlords.

But buried in the joke is a seed of unease. Science-fiction writing and popular movies, from "2001: A Space Odyssey" (1968) to "Avengers: Age of Ultron" (2015), have speculated about artificial intelligence (AI) that exceeds the expectations of its creators and escapes their control, eventually outcompeting and enslaving humans or targeting them for extinction.

Conflict between humans and AI is front and center in AMC's sci-fi series "Humans," which returned for its third season on Tuesday (June 5). In the new episodes, conscious synthetic humans face hostile people who treat them with suspicion, fear and hatred. Violence roils as Synths find themselves fighting for not only basic rights but their very survival, against those who view them as less than human and as a dangerous threat.

Even in the real world, not everyone is ready to welcome AI with open arms. In recent years, as computer scientists have pushed the boundaries of what AI can accomplish, leading figures in technology and science have warned about the looming dangers that artificial intelligence may pose to humanity, even suggesting that AI capabilities could doom the human race.

But why are people so unnerved by the idea of AI?

Elon Musk is one of the most prominent voices to have raised red flags about AI. In July 2017, Musk told attendees at a meeting of the National Governors Association, "I have exposure to the very cutting-edge AI, and I think people should be really concerned about it."

"I keep sounding the alarm bell," Musk added. "But until people see robots going down the street killing people, they don't know how to react, because it seems so ethereal."

Earlier, in 2014, Musk had labeled AI "our biggest existential threat," and in August 2017, he declared that humanity faced a greater risk from AI than from North Korea.

Physicist Stephen Hawking, who died March 14, 2018, also expressed concerns about malevolent AI, telling the BBC in 2014 that "the development of full artificial intelligence could spell the end of the human race."

It's also less than reassuring that some programmers — particularly those with MIT Media Lab in Cambridge, Massachusetts — seem determined to prove that AI can be terrifying.
