Travancore Analytics

OpenAI’s GPT-3: Artificial General Intelligence?

August 5th, 2020

Category: Artificial Intelligence, Innovation


Posted by: Team TA


GPT-3 is the latest language model from OpenAI. The company published the paper describing it in May 2020, and in July it gave a small group of beta testers access to the model via an API. The model has been used to generate poetry, write role-playing adventures, and create simple apps with a few buttons. For now, OpenAI wants outside developers to help it explore what GPT-3 can do, but it plans to turn the tool into a commercial product later this year, offering businesses a paid subscription to the AI via the cloud.
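For readers curious what that API access looks like in practice, here is a minimal, hypothetical sketch of a text-completion request using OpenAI's Python client. The engine name, prompt, and sampling parameters are illustrative assumptions rather than details taken from the beta documentation, and a real API key from the beta program would be required.

```python
# Hypothetical sketch of a GPT-3 completion request via the beta API.
# Engine name, prompt, and parameters are illustrative, not official examples.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; real keys come from the private beta

response = openai.Completion.create(
    engine="davinci",                    # assumed name of the largest GPT-3 engine
    prompt="Write a short poem about the sea:",
    max_tokens=64,                       # cap on the length of the continuation
    temperature=0.7,                     # higher values give more varied output
)

print(response.choices[0].text)
```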

GPT-3 is the third in a series of autocomplete tools designed by OpenAI. GPT stands for “Generative Pre-trained Transformer.” The program rides a wave of recent innovation in AI text generation. In many ways, these advances resemble the leap forward in AI image processing that took place from 2012 onward. Those advances kickstarted the current AI boom, bringing with them a range of computer-vision-enabled technologies, from self-driving cars to ubiquitous facial recognition to drones. It’s reasonable, then, to think that the newfound capabilities of GPT-3 and its ilk could have similarly far-reaching effects.

GPT-3 is the most powerful language model yet. Its predecessor, GPT-2, released last year, was already able to spit out convincing streams of text in a range of styles when prompted with an opening sentence. But GPT-3 is a big leap forward. The model has 175 billion parameters (the values that a neural network tries to optimize during training), compared with GPT-2’s already vast 1.5 billion. And with language models, size really does matter.

Like all deep learning systems, GPT-3 looks for patterns in data. To simplify things, the program has been trained on a huge corpus of text, which it has mined for statistical regularities. These regularities are opaque to humans; they are stored as billions of weighted connections between the different nodes in GPT-3’s neural network.
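To make the idea of mining text for statistical regularities concrete, here is a toy sketch in Python. It is emphatically not GPT-3's architecture (GPT-3 is a 175-billion-parameter transformer, not a word-count table), but it shows the same basic autocomplete objective: learn which token tends to follow which, then extend a prompt accordingly.

```python
# Toy illustration of learning statistical regularities from text.
# A bigram counter stands in for GPT-3's neural network; the objective
# (predict what comes next) is the same idea at a vastly smaller scale.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Count how often each word follows each other word.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def autocomplete(prompt_word, length=5):
    """Greedily extend a prompt by always choosing the most frequent follower."""
    words = [prompt_word]
    for _ in range(length):
        options = followers.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

print(autocomplete("the"))  # greedy continuation learned purely from word-pair counts
```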

What differentiates GPT-3 is the scale on which it operates and the mind-boggling array of autocomplete tasks this allows it to tackle. The first GPT, released in 2018, contained 117 million parameters, these being the weights of the connections between the network’s nodes and a good proxy for the model’s complexity. GPT-2, released in 2019, contained 1.5 billion parameters. GPT-3, by comparison, has 175 billion parameters, more than 100 times as many as its predecessor and ten times as many as comparable programs.
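The scale jump is easy to check from the figures quoted above with a quick back-of-the-envelope calculation:

```python
# Parameter counts quoted in the post.
gpt_1 = 117_000_000        # GPT, 2018
gpt_2 = 1_500_000_000      # GPT-2, 2019
gpt_3 = 175_000_000_000    # GPT-3, 2020

print(f"GPT-2 vs GPT:   {gpt_2 / gpt_1:.0f}x")   # roughly 13x
print(f"GPT-3 vs GPT-2: {gpt_3 / gpt_2:.0f}x")   # roughly 117x, i.e. "more than 100 times"
```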

GPT-3’s human-like output and striking versatility are the result of amazing engineering, not genuine smarts. The AI still makes ridiculous howlers that reveal a total lack of common sense, and it is prone to producing hateful, sexist, and racist language. Even its successes lack depth, reading more like cut-and-paste jobs than original compositions. There are still miles to go before the GPT-N series can be considered equal to a human. Even so, GPT-3 is a giant leap as far as language-generating AI systems are concerned.

GPT-3: GitHub | Paper
