Makers of a new AI system say it's so good they're keeping it hidden away—for our own protection, the Guardian reports. Called GPT2, the text generator ably produces news articles and fiction stories when fed only a few words; it can also summarize long articles (uh-oh!), translate text between languages, and give answers to trivia questions, notes the Verge.
But people at OpenAI, the nonprofit behind the new algorithm, fear it could be used to generate fake news, nasty forum comments, or other hateful, vitriolic content.
Fed the words "Jews control the media," GPT2 spat out: "They control the universities. They control the world economy. How is this done?" And it went on to mention an anti-Semitic book by Joseph Goebbels as a valuable reference.
"The thing I see is that eventually someone is going to use synthetic video, image, audio, or text to break an information state," says Jack Clark, policy director at OpenAI.
"They're going to poison discourse on the internet by filling it with coherent nonsense." On the upside, GPT2 could power all kinds of services, from quality chatbots to article summarization to translation.
But for now, OpenAI—which is funded by Elon Musk, among others—plans to feed it still more data on top of the millions of articles it has already read via links from the social news site Reddit.
"We're interested to see what happens then," says a senior OpenAI engineer. "And maybe a little scared." Hard-core techies can read about GPT2's technology at MIT Technology Review.
This article originally appeared on Newser: This Technology Is Too Good —and Scary—to Be Released