Short answer? It depends. Long answer? There are more critical things to worry about. At least, that is what experts and experience with advanced technologies like this say, especially if you live in Africa.
If you write things—news articles from standard press releases or reportage from a war zone, code, poetry, communiques, fiction—if you write things, then published results from researchers testing out OpenAI’s most recent language model, GPT-3 (Generative Pre-trained Transformer 3), may have rattled you. Or it may not have.
Think of GPT-3 as the autopredict capability of your phone or computer, but on steroids. The model is trained with 499 billion tokens (a unit of text or punctuation in natural language processing) and has 175 billion parameters (learned patterns from the training dataset).
It’s been fed nearly all of the public Internet and then some: 45TB of training text compared to its predecessor GPT-2’s 40GB. The wealth of information it can pull from is gigantic, and this is why it is able, with some prompting, to produce a diverse range of material, from poems to news articles to lines of code, with astonishing coherence.
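For the technically curious, the "autopredict on steroids" comparison can be made concrete with a toy sketch. At its simplest, a phone keyboard's suggestion is a frequency lookup over word pairs; the miniature example below (illustrative only, not how GPT-3 actually works) shows that idea, which GPT-3 scales up to 175 billion learned parameters over far richer patterns than word pairs.

```python
# A toy next-word predictor: the phone-keyboard idea behind "autopredict",
# shrunk to a bigram frequency table. Purely illustrative; GPT-3's neural
# network learns vastly richer patterns than adjacent-word counts.
from collections import Counter, defaultdict

corpus = "the robot wrote the essay and the robot read the essay".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "robot", seen twice after "the"
```

The gap between this lookup table and GPT-3 is the gap between counting word pairs and learning 175 billion parameters, but the core task, predicting what text comes next, is the same.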
“I am not a human. I am a robot. A thinking robot,” GPT-3 says in the first line of an article in The Guardian that has been shared on Facebook over 60,000 times. The paper’s editorial team asked the language model to write an essay from a prompt: to convince humans that robots come in peace and will not usurp the power balance of the universe just yet.
New York Times op-ed columnist Farhad Manjoo describes GPT-3 as “the most powerful language model ever created”. Tom Simonite, senior writer at Wired, says GPT-3 “is provoking chills across Silicon Valley”. To produce her recent essay for The Atlantic, Renée DiResta, technical research manager at the Stanford Internet Observatory, fed prompts to GPT-3 and had it complete a few paragraphs, some up to 200 words long. “It’s somewhat disconcerting to have a machine plausibly imitating your writing style based on a few paragraphs—to see it mindlessly generating ‘thoughts’ that you have had,” she writes in the piece, citing some of the model’s auto-generated text verbatim.
What jobs a tool like GPT-3 could potentially replace
Users and organisations who have made it off the long waitlist for API access have been sharing the ways they’ve put the tool to task, from simplifying legal jargon to drafting responses to emails. Back in Nigeria, it’s been used to generate tweets that mirror popular Twitter users in the tech industry.
The model appears to perform a very wide range of tasks faster than humans and with as much coherence as humans can muster, which is unsettling and raises concerns about which roles could be threatened in the future. Yes, the tool produces some ridiculous results, but in a few years, with more advancement and wider accessibility, which roles will tools like it make completely obsolete?
Instead of hiring a junior legal associate to pore over tons of legal briefs, GPT-3 could become a law firm’s go-to for simplifying legal information, eliminating the need for that role. And while the jury is still out on how widespread its use in the media could be, there is no denying there will be instances where its speed and neural network trump human personnel. This has already happened at Microsoft and Bloomberg.
“Why do we need you, if the basic idea is to get computers to do more of the work?” Bloomberg Editor-in-Chief John Micklethwait asked in a company memo.
In both organisations, news curation, trend watching and news stories are already being handed over to language models like GPT-3, which for other organisations could significantly reduce operating costs or wipe out some editorial roles entirely.
“Yes, it will replace some jobs, but you still need humans in the whole process,” says AI engineer Ibrahim Gana, who believes any panic about the technology taking jobs may be focused on the wrong things.
What should you be concerned about?
Back here on the continent, technologies like this always raise the question of how ready or capable we are of adopting them. In February, this readiness was at the centre of TechCabal’s emerging technologies townhall, where stakeholders came together to discuss the state of fields like machine learning on the continent.
“Africa is not in the picture,” Gana tells TechCabal, “Because we are not there yet.”
Still, with freelancing and the ability of job roles to cross borders today, this might be shaky ground for some, whether they live here or across the Atlantic.
Wuraola Oyewusi, Research and Innovation Lead at Data Science Nigeria, says in terms of making certain roles or careers obsolete, she does not believe there is anything to worry about.
“What could be worrying is the general idea of such a powerful model in the hands of few and all the intricacies around ethics,” she says.
Over the years, one of the most worrying things about technology innovation globally has been how it increasingly concentrates power in the hands of a few powerful tech companies and individuals, and what they in turn use, or could use, this influence for.
Two days ago, Microsoft (OpenAI’s biggest funder) signed an exclusivity deal with the company for the “underlying technology advancements” that power GPT-3.
In its statement, OpenAI said “the deal has no impact on continued access” to the API which will remain available to current and future users.
Another concern has been OpenAI backtracking from non-profit to for-profit as it tries to commercialise a technology that could do a lot of harm in the wrong hands. But GPT-3 is a highly expensive endeavour, and the company says it is trying to generate funding to continue its research. Not open-sourcing this version of the model, but releasing its API instead, gives it more control over its use.
“We terminate API access for use cases that are found to cause (or are intended to cause) physical, emotional, or psychological harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam, as well as applications that have insufficient guardrails to limit misuse by end users,” the company has said.
As Gana says, technologies like this tend to benefit those with the most financial weight, and this could concentrate their use among a select few. Recently released pricing for API access, effective October 1, shows a tiered subscription model that ranges from US$100 to US$400 a month and upwards. US$100 gets you two million tokens, or about 3,000 pages of text, with an additional 8 cents charged for every 1,000 tokens beyond that. This is not cheap.
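To see how quickly that adds up, here is a rough back-of-envelope calculation. The figures (US$100 for two million tokens a month, 8 cents per additional 1,000 tokens) come from the reported pricing; the tier structure beyond that is simplified for illustration.

```python
# Rough cost estimate for the reported GPT-3 pricing tier.
# Assumes: US$100 base fee covers 2 million tokens per month,
# with overage billed at 8 cents per additional 1,000 tokens
# (figures as reported; simplified for illustration).

def monthly_cost_usd(tokens_used: int,
                     base_fee: float = 100.0,
                     included_tokens: int = 2_000_000,
                     overage_per_1k: float = 0.08) -> float:
    """Estimate the monthly bill for a given token usage."""
    extra_tokens = max(0, tokens_used - included_tokens)
    return base_fee + (extra_tokens / 1_000) * overage_per_1k

print(monthly_cost_usd(2_000_000))  # within the plan: 100.0
print(monthly_cost_usd(2_500_000))  # 500k extra tokens: 140.0
```

For a heavy user pushing tens of millions of tokens a month, the overage alone dwarfs the base fee, which is exactly why access skews toward those with the deepest pockets.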
“What access to the API of a large model like that [GPT-3] does is that rather than gather data and train your own models, you can leverage access to a large model like that and get results, that’s what the fuss is really about,” Oyewusi says.
Eventually, its value will come down to its most resourceful use cases and its capacity to process really substantial amounts of information.
Language has been one of the features distinguishing humans from other life forms. We have the ability to use words, written or spoken, and to understand each other through them. Colloquially, they can also be cultural markers that differentiate one people, demographic or age group from another. To have machines do this, even with human training and prompting, makes the futuristic period of hyper-advanced technology feel closer than we think.
And when it comes down to it, philosophers are still trying to work out to what extent a model like GPT-3 can possess context from lived experience, reasoning, and understanding. The flip side of this issue is the potential loss of swathes of languages not well represented in the vast dataset the model was trained on.
On the other hand, some results produced by the model are incoherent and nonsensical, filled with false information and half-truths, which many fear will only exacerbate the plague of misinformation the world is currently facing, because as tools like GPT-3 become more sophisticated, it will become increasingly difficult to tell human-produced text from AI-generated text.
Then there is the already-existing bias in the artificial intelligence field that we are grappling with. Tools like GPT-3, fed heavily biased information, can and do produce results that reinforce the biases and stereotypes present in their training data, which includes diverse material on polarising subjects like religion and politics.
An AI engineer who specialises in NLP says this is the major problem with the language model, and that these issues should be of more concern to creators, users and policy-making bodies.
OpenAI CEO Sam Altman tweeted in July, when the model was released, that the hype around it “is way too much”. One explanation is that beta users are sharing only the best results from their prompts, or, like The Guardian, taking the best parts of various outputs and merging them into one (with caveats, of course).
Maybe the hype is. What those who are concerned about their jobs becoming obsolete can do, however, is stay abreast of how technologies like these are shaping their industries, find the areas where human input is non-negotiable, and make space for themselves there.
“That’s what I tell everybody,” Gana says, “Even if you’ll not use AI, get involved, know what it is, know what you can use it for.”