The development of artificial intelligence (AI) holds unprecedented possibilities — some good and some bad. OpenAI is a research organization originally founded to steer the technology away from the latter category. Recently, the startup announced its first commercial product: an AI-powered text generator that was previously deemed too dangerous to release to the public.
Previously Too Dangerous, but Now Yours for a Price
OpenAI was founded in San Francisco in late 2015 by Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman. Its main mission? To ensure that the use of artificial general intelligence (AGI) is safe for humanity. Musk resigned from the company’s board in early 2018 but has remained a donor.
By the beginning of 2019, the startup announced it had created GPT-2, a natural language processing (NLP) neural network. GPT-2 could produce text so cogent and natural that it was difficult to distinguish from human writing. This raised valid concerns among its creators that bad actors could leverage GPT-2 to produce propaganda or fake news. For this reason, OpenAI initially chose not to release GPT-2 to the public.
General sentiment around the news of GPT-2 was split between two opinions: Either this was a carefully crafted publicity stunt or a warning sign of an imminent automation apocalypse. Well, it turns out that the public will have its chance to revisit this dilemma. GPT-2’s successor, GPT-3, is complete. And it’s going commercial.
In the short time span between GPT-2 and GPT-3, fake news has become an increasingly pervasive issue in technology and politics. And as the world contends with the current coronavirus pandemic and the upcoming US Presidential election, many would say that a human-like AI text generator is the last thing we need at this moment. And yet, here we are.
Petabytes of Possibilities
Researchers at OpenAI published a paper detailing GPT-3’s capabilities on the open-access repository arXiv. In it, they describe GPT-3 as an autoregressive language model with a whopping 175 billion parameters. That’s a ton. To put it in perspective, GPT-2’s final iteration contained 1.5 billion parameters. And Microsoft’s Turing Natural Language Generation model had 17 billion parameters.
You may be wondering, “What’s a parameter?” Basically, a parameter is a value that a machine learning model learns from its training data; in a neural network, the parameters are the weights that the training process adjusts. Going from 1.5 billion to 175 billion parameters is obviously no small feat. But, perhaps most surprisingly, the tech behind GPT-3 isn’t necessarily more advanced than comparable tools; it doesn’t even introduce any new training methods or architectures.
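To make those numbers concrete, here is a minimal, purely illustrative sketch (not OpenAI's code) of how parameters add up in a neural network: a single fully connected layer learns one weight per input-output pair plus one bias per output, so even one modestly sized layer holds over a million parameters.

```python
# Illustrative sketch: counting the learnable parameters of one
# fully connected (dense) neural-network layer.
def dense_layer_params(n_inputs: int, n_outputs: int) -> int:
    """A dense layer learns one weight per input-output pair,
    plus one bias per output."""
    weights = n_inputs * n_outputs
    biases = n_outputs
    return weights + biases

# A single 1,000-in, 1,000-out layer already has over a million parameters.
print(dense_layer_params(1000, 1000))  # 1001000
```

Stacking hundreds of much wider layers of this kind is, roughly speaking, how models reach parameter counts in the billions.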
To reach 175 billion parameters, GPT-3’s creators scaled up the input data quantity. The bulk of the data came from the non-profit Common Crawl. As its name implies, Common Crawl scans the open web each month. It then downloads the content of billions of HTML pages and makes it available in a format convenient for mass-scale data mining. Currently, Common Crawl has petabytes of information accessible in over 40 languages. To improve the data’s quality, OpenAI applied a few filtering techniques.
“GPT” is short for Generative Pre-trained Transformer. Instead of studying words sequentially and making decisions based on their position, GPTs model the relationships between a sentence’s constituents all at once. With this information in hand, the GPT can weigh the likelihood that a given word will be preceded or followed by another word. It even accounts for how this probability is changed by the inclusion of other words in the sentence.
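The core idea of estimating how likely one word is to follow another can be sketched with a toy model. To be clear, this is an assumption-laden simplification for illustration: a real transformer weighs every word against every other word at once via self-attention, while this toy bigram model looks only at the immediately preceding word.

```python
from collections import Counter, defaultdict

# Toy sketch (NOT GPT-3's architecture): estimate P(next word | previous
# word) by counting adjacent word pairs in a text, then normalizing the
# counts into conditional probabilities.
def bigram_probs(text: str) -> dict:
    words = text.lower().split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    # Normalize each row of counts into a probability distribution.
    return {
        prev: {w: c / sum(nxt.values()) for w, c in nxt.items()}
        for prev, nxt in counts.items()
    }

corpus = "the cat sat on the mat and the cat slept"
probs = bigram_probs(corpus)
# "the" is followed by "cat" twice and "mat" once in the corpus,
# so the model assigns "cat" a probability of 2/3 after "the".
print(probs["the"])
```

A GPT learns a far richer version of this distribution, conditioning on the entire preceding context rather than a single word, but the training signal is the same: the text itself supplies the next-word answers.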
The algorithm behind a GPT ends up learning from its own inferences after identifying the patterns between words in a gargantuan dataset. This is known as unsupervised machine learning, and it’s not simply restricted to words. For instance, GPT-3 can apply the same methodology to comprehend the relationship between concepts and recognize context.
Whether it was translating, answering questions, or filling in the blanks for incomplete sentences, GPT-3 performed quite well. In the research paper, its creators also noted that it could do “on-the-fly reasoning” and was capable of generating short news articles that were indiscernible from human-written ones.
What Comes Next?
It’s undeniable that GPT-3’s capabilities are amazing — and also frightening. The research paper’s authors acknowledge that it could be misused in myriad ways: spamming, phishing, misinformation generation, and manipulation of legal and governmental processes were all mentioned. On the bright side, GPT-3’s API could be used to create new entertainment experiences, improve chatbot fluency, and much more.
GPT-3’s immense potential is difficult to fathom. Even its creators have admitted that it’s not exactly clear how the system may be used. With that said, OpenAI does plan on taking things slow and keeping a careful eye out for possible nefarious use cases. Each customer will be thoroughly vetted, and the research organization is working on new safety features. GPT-3’s API isn’t available to all yet. Access is invitation-only right now, and the pricing is still undecided.
Currently, around a dozen customers are using GPT-3. SaaS web search provider Algolia is using the API to improve its product’s understanding of search queries that use natural language. Social news aggregation platform Reddit is exploring possibilities for automating content moderation. And mental health platform Koko is leveraging GPT-3 to analyze when its users are in a “crisis.”
Now that OpenAI has stepped into the commercial arena, there’s no turning back. Many will be watching the startup’s next moves closely and curiously. We hope that the release of GPT-3 does not cause the organization to stray from its original intent. After all, safe AGI isn’t just a business priority — it’s a necessity for humanity.