GPT-3 is one of the most interesting and provocative advances in AI in recent years. There have been many articles about it, both offering praise and warning of its potential. Wikipedia describes it as:
Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series created by OpenAI, a for-profit San Francisco-based artificial intelligence research laboratory.

— Wikipedia, GPT-3
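For intuition, "autoregressive" simply means the model repeatedly predicts the next token from everything it has generated so far, appends it, and repeats. Here is a toy sketch of that loop; the hand-written bigram table is a stand-in for GPT-3's actual 175-billion-parameter neural network, so this illustrates only the generation mechanism, not GPT-3 itself:

```python
import random

# Made-up toy "model": for each token, the list of plausible next tokens.
# In GPT-3 this role is played by a huge neural network that outputs a
# probability distribution over ~50,000 tokens.
BIGRAMS = {
    "<start>": ["the"],
    "the": ["cat", "dog"],
    "cat": ["sat"],
    "dog": ["sat"],
    "sat": ["<end>"],
}

def generate(max_tokens=10):
    """Autoregressive loop: sample the next token given the last one,
    append it, and repeat until an end token or the length limit."""
    tokens = ["<start>"]
    while len(tokens) < max_tokens:
        nxt = random.choice(BIGRAMS[tokens[-1]])
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return " ".join(tokens[1:])  # drop the <start> marker

print(generate())  # e.g. "the cat sat" or "the dog sat"
```

The key point is that each prediction is conditioned on the output so far; scale that idea up from a bigram table to a deep transformer trained on most of the public internet and you get GPT-3.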
It’s not the first time that AI techniques have been applied to create fake (“novel”) content. Deepfake techniques have been used to create entirely fake photos of people who don’t exist and to alter videos to make it seem like people did things they didn’t do.
Manipulating photos and videos is one thing. But generating original and believable articles is quite another. Here are some examples of original content generated by GPT-3:
It is a curious fact that the last remaining form of social life in which the people of London are still interested is Twitter. I was struck with this curious fact when I went on one of my periodical holidays to the sea-side, and found the whole place twittering like a starling-cage. I called it an anomaly, and it is.

— The importance of being on twitter
Responding to a philosopher’s article about GPT-3:
Human philosophers often make the error of assuming that all intelligent behavior is a form of reasoning. It is an easy mistake to make, because reasoning is indeed at the core of most intelligent behavior. However, intelligent behavior can arise through other mechanisms as well. These include learning (i.e., training), and the embodiment of a system in the world (i.e. being situated in the environment through sensors and effectors).

— Response to philosophers
Once there was a man
who really was a Musk.
He liked to build robots
and rocket ships and such.
He said, “I’m building a car
that’s electric and cool.
I’ll bet it outsells those
Gasoline-burning clunkers soon!”

— GPT Stories
Of course, it wasn’t long before people started posting GPT-3-generated articles to their own blogs and to popular forums (Reddit, Hacker News), only revealing later that it was an experiment.
Writing articles, fiction or poetry is just the tip of the iceberg. GPT-3 can also tell jokes, generate code from a description, answer questions, do a tech interview, write ads, and more.
If written text – blog posts, press, forum comments, schoolwork, etc. – can be generated with such ease, what incentive is there to put in the effort to write anymore? What will this do to the future of writing? How will anyone be able to tell spam from non-spam? What jobs will be displaced once GPT-3 – and its successors – become prevalent? These are all interesting and important questions that the community is still figuring out.
Access to GPT-3 is currently limited – I have applied but have not been granted access yet. The creators know that the potential for abuse is high, and so have been managing access carefully. If that risk can be managed, I’m sure we will start to see exciting commercial applications of GPT-3 when it eventually goes live.