Tools like ChatGPT have raised a question in the artificial intelligence community: is artificial general intelligence (AGI) — in this case, AI that performs at human level — achievable? An online report suggests the latest advanced large language model, GPT-4, is at the early stages of AGI, as it’s exhibiting “sparks of intelligence”.
OpenAI, the company behind ChatGPT, has unabashedly declared its pursuit of AGI.
A large number of researchers and public intellectuals have called for an immediate halt to the development of these models, citing “profound risks to society and humanity”.
These calls to pause AI research are theatrical and unlikely to succeed: the allure of advanced intelligence is too provocative for humans to ignore, and too rewarding for companies to pause.
But are the worries and hopes about AGI warranted?
How close is GPT-4, and AI more broadly, to general human intelligence?
If human cognitive capacity is a landscape, AI has indeed increasingly taken over large swaths of this territory.
It can now perform many separate cognitive tasks better than humans in domains of vision, image recognition, reasoning, reading comprehension and game playing.
These AI skills could potentially result in a dramatic reordering of the global labour market in less than ten years.
But there are at least two ways of viewing the AGI issue.
First is that over time, AI will develop skills and capabilities for learning that match those of humans, and reach AGI level.
The expectation is that the uniquely human ability for ongoing development, learning and transferring learning from one domain to another will eventually be duplicated by AI.
This is in contrast to current AI, where being trained in one area, such as detecting cancer in medical images, does not transfer to other domains.
So the concern felt by many is that at some point AI will exceed human intelligence, then rapidly overshadow us, leaving us to appear to future AIs as ants appear to us now.
The plausibility of AGI is contested by many philosophers and researchers, who point out that current models are largely ignorant of their outputs, meaning they don’t understand what they’re producing.
They also have no prospect of achieving consciousness since they are primarily predictive, automating what should come next in text or other outputs.
Instead of being intelligent, these models simply recombine and duplicate the data on which they have been trained.
The essence of life is missing.
Even if AI foundation models continue to advance and complete more sophisticated tasks, there is no guarantee that consciousness or AGI will emerge.
And if it did emerge, how would we recognise it?
The usefulness of ChatGPT, and GPT-4’s ability to master some tasks as well as or better than a human, such as bar exams and academic olympiads, gives the impression AGI is near.
This perspective is reinforced by the rapid performance improvement with each new model.
There is no doubt now AI can outperform humans in many individual cognitive tasks.
There is also growing evidence that the best model for interacting with AI may well be one of human/machine pairing, where our own intelligence is augmented, not replaced, by AI.
Signs of such pairing are already emerging with announcements of work copilots and AI pair programmers for writing code.
It seems almost inevitable that our future of work, life, and learning will have AI pervasively and persistently present.
By that metric, the capacity of AI to become as intelligent as humans is plausible, but this remains contested space, and many have come out against it.
Renowned linguist Noam Chomsky has said that the day of AGI “may come, but its dawn is not yet breaking”.
The second angle is to consider the idea of intelligence as it is practised by humans in their daily lives.
According to one school of thought, we are intelligent primarily in networks and systems rather than as lone individuals.
We hold knowledge in networks.
Until now, those networks have been mainly human.
We might take insight from someone such as the author of a book, but we don’t treat them as an active “agent” in our cognition.
But ChatGPT, Copilot, Bard and other AI-assisted tools can become part of our cognitive network: we engage with them, ask them questions, and they restructure documents and resources for us.
In this sense, AI doesn’t need to be sentient or possess general intelligence.
It simply needs the capacity to be embedded in, and part of, our knowledge network to replace and augment many of our current jobs and tasks.
The existential focus on AGI overlooks the many opportunities current models and tools provide for us.
Sentient, conscious or not, these attributes are irrelevant to the many people who are already making use of AI to co-create art, structure writing and essays, develop videos, and navigate life.
The most pressing concern for humans is not whether AI is intelligent when on its own and disconnected from people.
It can be argued that, as of today, we are more intelligent, more capable, and more creative with AI, as it advances our cognitive capacities.
Right now, it appears the future of humanity could be one of human–AI teaming, a journey that is already well underway.
Rahul Ram Dwivedi (RRD) is a senior journalist in 2YoDoINDIA.
NOTE: Views expressed are personal.