Direction of AI policy will dictate the direction of humanity


Usual caveat – these are all strictly my personal opinions!

Let’s start with what it takes to get value out of AI, the way the technology works today. Look at the last paragraph of this tweet from Ilya S, one of the smartest AI experts on the planet. He lists the three criteria in the right order – compute, talent and use cases.

Whether it’s training or inference – AI doesn’t operate with the efficiency of the human brain. It needs lots of expensive and power-hungry compute. AI is not trying to mimic how any one of us learns or thinks – AI is trying to learn what humanity collectively knows about a topic and then answer questions based on that. No human – not even Ilya himself – would ever read and learn every last bit of available information about AI the way an LLM does. Until new research brings some fundamental change – compute will remain a massive constraint.

Talent is more of a short-term problem. New knowledge is usually built on existing knowledge, and there are plenty of ways to scale knowledge to more people. So while we are short on AI experts today – that is a relatively simpler problem to solve in the medium to long term.

Use cases are a dime a dozen – and generally constrained only by compute and talent. But it’s the use cases that determine the future of humanity. And that is why it’s important for the world to have policies and principles on which use cases are worthwhile to solve with AI.

The most obvious policy areas – safety, privacy, security, governance and so on – are definitely getting attention. That’s not to say they are solved – but at least public conversations are happening, and both regulators and private companies are largely saying all the right things. I think this will resolve itself over time given the massive attention.

What’s the primary worry when any technology scales massively? It’s the fear of massive unemployment. History tells us that humans generally found other, higher-value things to do, and hence past technology advances were a net positive. The unknown with AI is that it’s not just “arms and legs” work that AI could displace – it’s the “brains” work. It remains to be seen whether humanity can absorb such a disruption the way it did in the past.

If it’s a zero-sum game – which is what most discussions lead us to believe – there is no way to avoid massive displacement of jobs. UBI-type solutions are being discussed because a lot of people seriously believe the world operates in a zero-sum way. That’s a depressing thought. But I also think the reason we ended up there is that the larger community – economists, social scientists and so on – has not yet done enough hard thinking about AI. What I am not sure about is whether that’s because they are dismissive of AI’s impact in the near future, or because the technology looks intimidating and they have given up trying to understand it deeply. Either way – I do hope they take it seriously, and quickly.

In the zero-sum scenario – it’s very easy to see where this goes. A lot of roles exist just because technology is not capable of doing something efficiently and effectively. Classic examples are roles in HR, finance, operations and so on, where humans are needed to hold together an end-to-end process. Those are absolute roadkill – AI can already mimic a good part of the repeatable tasks those humans do. So while not all jobs will disappear – a lot of roles absolutely will go away. Much as we consumers got used to online banking and automatic checkouts at grocery stores after the initial resistance – people will get used to newer workflows for HR and the rest too.

If most of the compute and talent focuses on finding efficiency – then a zero-sum game becomes harder to avoid, because it’s hard to create enough jobs for the displaced people in a reasonably short period of time. Of course there are ways – better severance packages, government support and so on – that can ease the pain, but those are band-aid solutions until more good new jobs get created.

It doesn’t have to be a zero-sum game at all. If we switch our thinking a bit – AI (and AGI/ASI, if we ever get there) can expand opportunities for human progress significantly. Instead of spending the available compute and talent mostly on replacing mind-numbing tasks like employee transfers and invoice processing – what happens if we repurpose it to go after massively high-value use cases like creating new medicines, finding better sources of energy, or cheaper ways to get food and drinking water?

Of course it’s not one or the other – the use cases will always fall on a spectrum, from replacing the invoice-processing clerk to inventing a massively efficient battery or curing cancer cheaply. The question is how companies and governments will choose to prioritize.

If we choose to focus mostly on the easy problem – replacing repetitive, boring tasks at high volume – without creating enough new, higher-value jobs, it is low risk for investors with decent returns. However, it will reduce consumer spending in the medium term and force recessionary economies in the long term. At the end of the day – the economy thrives on spending!

If we choose to solve the high-value problems like better and cheaper energy, food and water – the short-term risk for investors is high, but the medium- to long-term returns are massive. And those things will create new jobs – which can then offset the job losses from eliminating the mind-numbing roles people do now. We get to a better place with less pain.

I am curious to hear your thoughts – please leave a comment on where you stand on this, and on what else we could be doing to make progress without massive disruption to society.

Published by Vijay Vijayasankar

Son/Husband/Dad/Dog Lover/Engineer. Follow me on twitter @vijayasankarv. These blogs are all my personal views – and not in any way related to my employer or past employers.
