The recent OpenAI drama triggered two debates on social media – one on corporate governance and the other on AGI. I was quite surprised – and amused – by the number of people who have jumped to the conclusion that AGI is already here or very close to being here.
I don’t think AGI is a near-term thing at all. To be clear – I am a big fan of AI, but I don’t think AI needs to work exactly like a human (or better than a human) to be of massive value to our everyday life. Similarly, I don’t think we should sit around waiting for AGI before putting safeguards in place – less sophisticated AI can still cause massive harm, given how easily software gets distributed globally.
There are a few reasons why I don’t think we will get to AGI by doing more of what we already do – bigger foundation models, even more compute, even more training data, and so on.
To begin with – the basic idea of building an AI solution is to feed it a lot of data. For language models, training on all of Wikipedia is a common first step. And that’s not nearly enough – on top of it, these models are fed many billions of additional tokens. Compare that to how a well-educated human learns – no one reads the entire Wikipedia to get a PhD. Humans learn from a small amount of data. A high school English teacher often teaches critical thinking and analytical writing based on just one book. We can then apply those skills to every other source of information we encounter later, without needing explicit lessons. When we read a new book, we don’t need to think through every book we have read to form concepts. We are far more efficient learners than machines are – and the way machines are taught doesn’t mimic how humans learn.
One counterargument is that a machine has a cold start while a human has the advantage of a long evolutionary history, so some information is already baked into our genes and brains. But even if that’s true, humans still never had access to as much information as the machine readily has. Basically – we assimilate and store information differently from machines, and access it differently when we need it.
Humans can get started quickly with very little information. When my daughter was three years old, she could recognize animals at the zoo based on the cartoons she had watched. She never confused a bear for something else just because the live bear was missing Winnie the Pooh’s red shirt 🙂 . She knew dogs and cats are animals – and naturally figured out that elephants and lions are animals too.
Also, humans can abstract information across modes of information without special training. Whether I see a sketch of a car, an actual car parked on the street, or a car in a high-speed chase in a movie – I know it’s a car and how it generally works. When I throw a ball up and it comes down, I can relate it to the concept of gravity from my middle school lesson, even though the example used there was an apple falling on Newton’s head. GenAI has started becoming multi-modal – but not in the way humans are. This is of course a simplistic way of looking at how a human thinks and acts – we have not yet quite figured out the details of how human brains work.
How do we find answers when we are faced with a question? Let’s say you ask me what 121 squared is. I don’t know it off the top of my head – but I know how to calculate it, and I also know how to approximate it without a precise calculation. But if you ask me what 12 squared is, I already know it off the top of my head. AI only knows the latter way, as far as I can tell. An orchestration of several computing techniques could potentially solve these kinds of problems – but learning from a sequence of tokens alone probably won’t get us there.
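To make that recall-versus-compute distinction concrete, here is a minimal, purely illustrative Python sketch – my own toy example, not how any real AI system works. The memorized table and the fallback procedure are assumptions for illustration: small facts like 12² are simply looked up, while 121² is actually worked out via (120 + 1)².

```python
# Toy illustration (not from the post or any real system): contrasting
# "recall" with "compute". The answerer first checks a memorized table
# (knowing 12^2 cold), then falls back to an actual procedure when the
# fact isn't stored (working out 121^2 as (120 + 1)^2).

MEMORIZED_SQUARES = {n: n * n for n in range(1, 13)}  # "times tables" up to 12

def square(n: int) -> int:
    if n in MEMORIZED_SQUARES:       # recall: the answer is already stored
        return MEMORIZED_SQUARES[n]
    # compute: apply (a + b)^2 = a^2 + 2ab + b^2 with a round base,
    # e.g. 121 = 120 + 1
    a, b = (n // 10) * 10, n % 10
    return a * a + 2 * a * b + b * b

print(square(12))   # 144    -> recalled from the table
print(square(121))  # 14641  -> computed: 14400 + 240 + 1
```

The interesting part is the fallback: a human switches to a procedure when recall fails, whereas a model trained purely to predict the next token has, in effect, only the table.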
One last point on what “general” means in the context of intelligence. There are some things a computer can do faster and more efficiently than a human. If we can draw a boundary around the problem – as in a game like chess or Go – a computer has a better chance of finding optimal answers than we do.
Where humans excel is in generalizing as context changes. As AI research makes breakthroughs in how machines plan, set goals, and reason about objectives, I am sure we will see massive progress – and at that point, perhaps AGI might become more of a reality. I am not an AI researcher – I am just a curious observer, and I will happily change my mind as I get more information. But for now, I am not worried about AGI becoming a thing in the near future.