Whether you believe it was one of the most dangerous versions of artificial intelligence ever created or dismiss it as a massive, unnecessary PR exercise, there's no doubt that the GPT-2 algorithm created by research lab OpenAI caused a lot of buzz when it was announced earlier this year.

Revealed in February, OpenAI said it had developed an algorithm too dangerous to release to the general public. Although only a text generator, GPT-2 supposedly generated text so convincingly humanlike that it could fool people into believing they were reading something written by an actual person. To use it, all a user had to do was feed in the start of a document and then let the A.I. take over to complete it. Give it the opening of a newspaper story, and it would even manufacture fictitious "quotes." Predictably, news media went into overdrive describing this as the terrifying new face of fake news. And for potentially good reason.

Jump forward a few months, and users can now have a go at using the A.I. for themselves. The algorithm is available on a website called "Talk to Transformer," hosted by machine learning engineer Adam King.

"For now OpenAI has decided only to release small and medium-sized versions of it which aren't as coherent but still produce interesting results," he writes on his website. "This site runs the new (May 3) medium-sized model, called 345M for the 345 million parameters it uses. If and when [OpenAI] release the full model, I'll likely get it running here."

At a high level, GPT-2 doesn't work all that differently from the predictive mobile keyboards that suggest the word you're likely to type next. However, as King notes, "While GPT-2 was only trained to predict the next word in a text, it surprisingly learned basic competence in some tasks like translating between languages and answering questions. That's without ever being told that it would be evaluated on those tasks."
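The next-word-prediction principle King describes can be sketched with a toy example. The snippet below is not GPT-2 itself (which is a large transformer trained on millions of web pages); it is a minimal bigram model, an assumption-laden illustration of the same idea: learn which words tend to follow which, then complete a prompt one word at a time.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """For each word in the training text, record the words that follow it."""
    words = text.split()
    successors = defaultdict(list)
    for current, following in zip(words, words[1:]):
        successors[current].append(following)
    return successors

def complete(model, prompt, length=5, seed=0):
    """Extend the prompt by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)  # fixed seed so runs are repeatable
    words = prompt.split()
    for _ in range(length):
        candidates = model.get(words[-1])
        if not candidates:  # no known successor: stop generating
            break
        words.append(rng.choice(candidates))
    return " ".join(words)

# Tiny illustrative corpus; GPT-2's training set was vastly larger.
corpus = "the cat sat on the mat and the cat ran"
model = train_bigram_model(corpus)
print(complete(model, "the cat", length=3))
```

GPT-2 replaces the bigram lookup with a neural network that conditions on the entire preceding text rather than just the last word, which is what lets it sustain coherent passages and pick up side skills like question answering.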

The results are, frankly, a little unnerving. Although it's still prone to the odd bit of A.I.-generated nonsense, it's nowhere near the level of silliness of the various neural nets used to generate chapters from new A Song of Ice and Fire novels or monologues from Scrubs. Faced with the first paragraph of this story, for instance, it did a pretty serviceable job of turning out something convincing — complete with a bit of subject matter knowledge to help sell the effect.

Thinking that this is the Skynet of fake news is probably going a bit far. But it’s definitely enough to send a small shiver down the spine.
