
OpenAI Just Released an Even Scarier Fake News-Writing Algorithm


OpenAI, the AI company that Elon Musk co-founded and later left, has just released a more powerful version of its AI text-writing software.

The company still won't release its full software, which could be used to write fake news and messages en masse, over fears it might be misused.

RELATED: MICROSOFT TO INVEST A WHOPPING $1 BILLION IN OPENAI PARTNERSHIP

What does OpenAI do?

OpenAI says its text-writing system is so advanced it can write news stories and even fiction that passes as human.

A user can feed the system text - anything from a few sentences to pages of it - and the system will then continue that same text in an uncannily well-written, contextually relevant, human style.

However, after announcing its original system, GPT-2, in February, the company said the full software was too dangerous to release to the public, so only a weaker version was made available.

Now, the company has announced it has released a version of GPT-2 that is six times more powerful.

You can actually try the latest public OpenAI system at TalkToTransformer.com. The results can be eerily realistic - though there are obvious flaws in the writing.

OpenAI is still being careful

According to OpenAI’s statement, there’s still an even more powerful version of GPT-2 that the company hasn't yet revealed.

The company says it plans to release the more powerful model within a few months, but that it may hold it back if it determines that people are using the current, stronger GPT-2 maliciously.

At the time of the original announcement of GPT-2's release in February, Jack Clark, OpenAI’s head of policy, told The Guardian there are “many people who are better than us at thinking what [the AI] can do maliciously.”

It could be used, for example, to generate endless fake positive or negative reviews that read as if written by real people.

A cure for fake news?

While OpenAI brings us closer to AI world domination, a group of Harvard and MIT researchers has been developing a method to use AI to fight AI.

The researchers developed a system, dubbed GLTR, that uses an algorithm to estimate the likelihood that a given passage was written by an AI.
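The intuition behind GLTR is that machine-generated text tends to keep picking words a language model itself ranks as highly likely, while human writing dips into less predictable word choices more often. The sketch below illustrates that idea with a toy word-frequency "model" standing in for a real neural language model; the tiny corpus, the `top_k` cutoff, and the function names are illustrative assumptions, not GLTR's actual implementation.

```python
from collections import Counter

def token_ranks(passage, model_counts):
    """For each word in the passage, find its rank under the toy 'model'
    (rank 0 = the word the model considers most likely overall).
    Unknown words get the worst possible rank."""
    ranking = {w: r for r, (w, _) in enumerate(model_counts.most_common())}
    return [ranking.get(w.lower(), len(ranking)) for w in passage.split()]

def fraction_highly_ranked(passage, model_counts, top_k=3):
    """Share of tokens falling inside the model's top_k most likely words.
    A high share is (loosely) a signal of machine-generated text."""
    ranks = token_ranks(passage, model_counts)
    return sum(r < top_k for r in ranks) / len(ranks)

# Toy "language model": word frequencies from a small reference corpus.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
model = Counter(corpus)

print(fraction_highly_ranked("the cat sat on the mat", model))
```

A real GLTR-style tool does the same kind of scoring with a neural language model's per-token probabilities rather than raw word counts, and visualizes where each word falls in the model's ranking instead of reducing everything to a single fraction.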

It will be interesting to see if GLTR ever comes up against GPT-2's strongest version - if it's ever released to the public, that is. The AI wars may be upon us.

