Direction of AI policy will dictate the direction of humanity


Usual caveat – these are all strictly my personal opinions!

Let’s start with what it takes to get value out of AI the way the technology works today. Look at the last paragraph of this tweet from Ilya S, one of the smartest AI experts on the planet. He calls the three criteria in the right order – compute, talent and use cases.

Whether it’s training or inference – AI doesn’t operate with the efficiency of the human brain. It needs lots of expensive and power-hungry compute. AI is not trying to mimic how any one of us learns or thinks – AI is trying to learn what collective humanity knows about a topic and then answer questions based on that. No human – not even Ilya himself – would ever read and learn every last bit of information about AI the way an LLM does. Till some fundamental change happens with new research – compute will be a massive constraint.

Talent is more of a short-term problem. New knowledge is usually built on existing knowledge, and there are plenty of ways to scale knowledge to more people. So while we are short on AI experts today – that is a relatively simpler problem to solve in the medium to long term.

Use cases are a dime a dozen – and generally only constrained by compute and talent. But it’s use cases that determine the future of humanity. And that is why it’s important for the world to have policies and principles on which use cases are worthwhile to solve with AI.

The most obvious policy areas – safety, privacy, security, governance and so on – are definitely getting attention. That’s not to say they are solved – but at least there are public conversations happening and largely both regulators and private companies are saying all the right things. I think it will resolve itself over time given the massive attention.

What’s the primary worry when any technology scales massively? It’s the fear of massive unemployment. History tells us that humans generally found other higher value things to do and hence past technology advances were a net positive. The unknown with AI is that it’s not just “arms and legs” work that AI could displace – it’s the “brains” work. It remains to be seen if humanity can withstand such a disruption like it did in the past.

If it’s a zero sum game – which is what most discussions lead us to believe – there is no way to avoid massive displacement of jobs. UBI-type solutions are being discussed because a lot of people seriously believe the world operates in a zero sum way. That’s a depressing thought. But I also think the reason we ended up there is that the larger community that includes economists, social scientists and so on has not yet done enough hard thinking about AI. What I am not sure about is whether that’s because they are dismissive about AI’s impact in the near future or because the technology looks intimidating and they have given up trying to understand it deeply. Either way – I do hope they take it seriously quickly.

For the zero sum game scenario – it’s very easy to see where this goes. There are a lot of roles that exist just because technology is not capable of doing something efficiently and effectively. Classic examples are the roles played by HR, finance, operations etc where humans are needed to hold together an end-to-end process. Those are absolute road kill – AI can already mimic a good part of the repeatable tasks that humans do. So while not all jobs will disappear – a lot of roles will absolutely go away. Much like we as consumers got used to online banking and automatic checkouts at grocery stores after the initial resistance – people will get used to newer workflows for HR etc too.

If most of the compute and talent focuses on finding efficiency – then the zero sum game becomes harder to avoid, because it’s harder to create enough jobs for the displaced people in a reasonably short period of time. Of course there are ways like better severance packages and government support and so on that can ease the pain – those are band-aid solutions till more new good jobs get created.

It doesn’t have to be a zero sum game at all. If we switch our thinking a bit – AI ( and AGI / ASI if we ever get there) can expand opportunities for human progress significantly. Instead of spending available compute and talent mostly on replacing just the mind numbing tasks like employee transfers and invoice processing – what happens if we repurpose it to go after massively high value use cases like creating new medicines, finding better sources of energy, cheaper ways to get food and drinking water and so on?

Of course it’s not one or the other – the use cases will always fall in the spectrum from replacing the invoice processing clerk to inventing a massively efficient battery or curing cancer for cheap. The question is how the companies and governments will choose to prioritize.

If we choose to mostly focus on solving the easy problem of replacing repetitive boring tasks in high volume, but don’t create enough new jobs with higher value stuff – it is low risk for investors with decent returns. However it will reduce consumer spending in the medium term and force recessionary economies in the long term. At the end of the day – the economy thrives on spending!

If we choose to solve the high value problems like better and cheaper energy, food and water – the short term risk for investors is high but medium to long term returns are massive. And those things will create new jobs – which can then offset any job losses that can happen from eliminating the mind numbing roles people do now. We get to a better place with less pain.

I am curious to hear your thoughts – pls leave a comment on where you stand on this and what else we could be doing to make progress without massive disruption to society.

Some thoughts on measurements – from weight loss to business


Looking back at my career, I absolutely can confirm that “what doesn’t get measured, doesn’t get managed” is true. What I can also confirm is that obsessing over measurements – coming up with even more metrics, measuring at lower levels of granularity, or measuring more frequently – doesn’t help manage something, and often has really bad consequences. Another point – maybe something the world of business doesn’t want to acknowledge – is that not everything that is measured gets managed!

These lessons are not new for me – but I didn’t quite take them to heart when I started working actively on my health in May 2023. I was extremely overweight and the blood-work results were all looking bad. But this was not new – I was always heavy and I had known that the blood-work results were not trending well for a while. And I largely ignored it. What happened in May 2023 was that I got two deliveries of dog food from Chewy that were about 50 lbs each. I realized that it took me more effort to bring them inside the house than I thought was necessary.

I just grabbed my car keys – drove to the gym – and signed up with a personal trainer. I also called my doctor and took an appointment to do blood-work etc again to get a new baseline.

I knew I needed help. I did not have any understanding beyond “eat less and exercise more” as a strategy. Thomas, the trainer, was a great fit style-wise to how I work. Similarly my doctor was happy to sit me down, give me a realistic idea of how long it would take to get the blood-work results into a normal range, and give me a basic idea of what is helpful and what isn’t.

The two measuring instruments that helped me along the way were the Renpho scale that measures a few more things than just weight, and the wearable called Whoop that keeps track of heart rate, sleep, recovery etc.

The first challenge I had to overcome was not giving up when the weight didn’t budge despite eating less and exercising more. Thanks to a heavy dose of motivation – I started learning more about diet and exercise. This was a massive challenge – for a beginner like me, there is an overwhelming level of content on the internet. On top of that, there is an army of influencers out there. It took me several months to figure out what works and whose advice to take. One lesson I did learn the hard way is that what worked for my friends didn’t always translate as such in my case. I had to invest a lot of time into this learning and I am glad I did. Perhaps the two biggest lessons were

  1. There is no way I can out-exercise a bad diet
  2. If I don’t push myself hard, I won’t know what my body is capable of.

This is where I had to think hard about the value of measurements.

When I started, I was counting calories and measuring the quantity of everything I ate. This was a lot of effort and I didn’t think I could sustain it. I realized eventually that I don’t need to be super precise to get decent results. So I just switched to a few principles and rules of thumb and it has worked out great so far

  1. Don’t buy things that can derail the diet. Fruit juice is a good example. I still love orange juice but since I don’t buy it, I don’t drink it
  2. When eating out, just divide the plate into two and pack one half to take back home and eat it another time
  3. Give up on alcohol
  4. Rotate between eggs, fish and lean meat frequently for protein
  5. Set a max for carbs – which in my case is one cup of cooked rice
  6. And don’t beat myself up if I break a rule once in a while – like when visiting my mom in India. Everything can be undone with a bit of focus

A very similar approach evolved on the exercise front. Whoop captures most of what I need to keep track of, and the Renpho scale keeps track of trends in weight, muscle mass and so on.

What I realized is that looking at the data every day was quite counterproductive. Trends are way more useful than the data for any particular day, or data for any particular exercise routine. If I see my weight not budging for say ten days at a time, I stop and look at what changed in my diet and exercise and tweak as needed. But I stopped worrying about day-to-day changes after about 6 months or so – and it reduced my anxiety quite a bit, which helped a lot with staying on the journey and enjoying it.
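The “trends over daily readings” idea can be sketched in a few lines: smooth the daily numbers with a trailing moving average so one noisy day doesn’t trigger a course correction. The ten-day window mirrors the check-in cadence described above; the weights below are made up purely for illustration.

```python
def moving_average(readings, window=10):
    """Return the trailing moving average once enough days of data exist."""
    averages = []
    for i in range(window - 1, len(readings)):
        chunk = readings[i - window + 1 : i + 1]
        averages.append(sum(chunk) / window)
    return averages

# Daily weight bounces around noticeably...
daily = [200.4, 199.1, 200.8, 199.5, 200.2, 199.9, 200.6, 199.3, 200.1, 199.7,
         199.8, 200.0]
# ...but the 10-day trend barely moves, so there is nothing to react to.
trend = moving_average(daily, window=10)
```

The point isn’t the arithmetic – it’s that the smoothed series changes slowly enough that you only act when the trend itself shifts.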

For both diet and exercise, the one thing that has helped me the most is the idea of adding what works most of the time to my routine. Now I have a routine that I follow – one weekly grocery-shopping trip, 14 hours of fasting per day, 5 days of walking averaging 5 miles, and 3 days of strength training. It doesn’t happen every week – but when it doesn’t, I can see the results and that gives me the motivation to get back on plan.

I have lost about 80 lbs of weight and about 14 inches from my waist size so far. My doctor is happy with the test results. I am within reach of my goals – and now the challenge is that the last mile is proving to be way harder than the journey so far. Sleep continues to be the final frontier to conquer – I am slowly getting better at it.

Bringing this back to the business world

The contrast between weight loss and the world of business is sharp when it comes to measurements.

In the corporate world, we tend to overdo measurements in every way possible. Steve Jobs famously said once that content and product should trump process. I have heard the CEOs of ASML and NVIDIA say similar things as well. They are the exceptions – not the rule. The rule in the corporate world generally is that process discipline is what drives scale – which is true. What we also need to realize is that process helps only when the underlying problems are sufficiently addressed. Otherwise the frequent and granular measurements only result in more anxiety and grief, and waste everyone’s time – with no improvement in the business itself.

Future of Technology Consulting in the GenAI world


As always, these are just my own personal opinions

There isn’t a consulting firm out there today that doesn’t have AI, and specifically GenAI, included in its story. Over the last year or two – consultants and system integrators have done a huge amount of proof-of-concept work. I don’t know a single client who doesn’t have a clear mandate to derive value from GenAI. This mandate is usually from the board but at a minimum it’s from the C-suite. Analysts have been trumpeting GenAI as well.

Having been in this industry for more than a quarter century now, I have seen versions of this play out from ERP in the 90s to web to mobile to cloud and now AI.

So back to GenAI – why is it that everything looks conducive to explosive value add and yet no one seems to be putting massive transformative projects into production?

There are a few common themes

  1. People have only a vague idea of GenAI, and hence most are still searching for use cases that scale with it
  2. A lot of idea generation sessions have happened, but there is no framework to decide which ones to bet on
  3. For those ideas that seemed promising and hence piloted – the first million dollars of value was easy but that didn’t seem to translate to a scale of tens of millions.
  4. The quality of data makes it quite hard to translate the value seen on PPT to hard dollars that show up in the general ledger
  5. Unit economics don’t seem to work in its favor at massive scale and business cases don’t hold up. CFOs are especially worried given they already went through one nightmare with unpredictable spend on public clouds.
  6. Last but not least – regulatory frameworks have been a challenge for many use cases and deployment models
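The unit-economics worry in point 5 is easy to see with a back-of-the-envelope calculation. All numbers below are invented purely for illustration – the point is how linearly inference cost scales with volume:

```python
def monthly_inference_cost(requests_per_month, tokens_per_request, cost_per_1k_tokens):
    """Rough monthly spend: total tokens processed, priced per thousand tokens."""
    return requests_per_month * tokens_per_request / 1000 * cost_per_1k_tokens

# A pilot at 10k requests/month looks cheap and the business case sails through...
pilot = monthly_inference_cost(10_000, 2_000, 0.03)
# ...but the same use case at enterprise volume is a very different conversation.
scale = monthly_inference_cost(10_000_000, 2_000, 0.03)
```

A pilot that costs a few hundred dollars a month becomes a six-figure monthly line item at a thousand times the volume – which is exactly the cloud-spend déjà vu CFOs are worried about.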

What we keep forgetting is the simple fact that GenAI is still quite a nascent technology. The good thing about it is that we are past the point where we need to worry about whether it is useful or not. Now the challenge is largely operational in nature – where we need more engineering than science, and more product management than marketing.

The world of POCs and pilots does not really look all that transformative in my opinion. Sure it’s incrementally better and has some productivity gains for sure. As an engineer, I love how GenAI helps me with code completion, test generation and so on. I enjoy the geekiness of an LLM helping me with emails. Since I am a decent engineer and since I believe I can communicate quite effectively myself – I won’t miss that help if I am told that I can’t use it from tomorrow. These are all just good things to have and give me confidence to think about the bolder transformations that will follow.

Where the small productivity improvements help a lot is that they conserve time and cost to invest into the bigger things that come next. So I do think that what the tech world has achieved so far is quite useful.

I think the strategy consultants will need to learn GenAI in some serious depth to evolve the frameworks used to qualify which use cases to invest in. Jargon – as much as our industry loves it – ain’t gonna cut it. They will need a much better grounding in the financial modeling of a GenAI deployment as well, as the world moves from one monolithic model to many models working together in a compound system.

I have always been a big fan of open source. I think the tech consulting world will largely make use of open source over the long term.

A lot of the conversation on GenAI is centered around the idea of a model. A model is absolutely the foundational building block – but I don’t see massive deployments in the enterprise landscape based on one model, nor do I see GenAI as a standalone technology.

A minimally sophisticated use case today needs a model and usually some RAG construct to go with it. The future though belongs to compound systems – many different highly specialized smaller models working together in some well orchestrated manner and using both GenAI and other technologies. A crude analogy from the past would be ERP and CRM which were touted to be the one answer to all questions – and when we look back we can see that most of the work happened in integrating them with a lot of other things in the enterprise landscape.
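The compound-system idea above can be shown in miniature: a retriever (the RAG construct) grounding a set of specialized “models” behind a simple router. Everything here is a stub of my own invention – the function names, the word-overlap retrieval, and the routing table are illustrative stand-ins, not any particular product’s API:

```python
def retrieve(query, corpus):
    """Naive retrieval: return the document sharing the most words with the query."""
    query_words = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda doc: len(query_words & set(doc.lower().split())),
                    reverse=True)
    return ranked[:1]

# Two stand-ins for small specialized models; real ones would be LLM calls.
def summarizer_model(docs):
    return "summary of: " + docs[0]

def extractor_model(docs):
    return "entities in: " + docs[0]

SPECIALISTS = {"summarize": summarizer_model, "extract": extractor_model}

def orchestrate(task, query, corpus):
    """Route the task to a specialist, grounding it on retrieved context."""
    docs = retrieve(query, corpus)
    return SPECIALISTS[task](docs)

corpus = ["invoice from acme for 500 dollars", "employee transfer request"]
result = orchestrate("summarize", "acme invoice", corpus)
```

The interesting engineering, as with ERP and CRM back in the day, lives in the orchestration layer and the integrations – not in any single model.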

This needs a lot of upskilling for the current tech talent – and will need some serious interdisciplinary training. It will need more training in systems thinking than the consulting world is historically used to, and that won’t be an easy transition for many. And given the speed at which AI develops – the tech community will have to spend countless hours learning just to keep up. That’s again not something that we have seen at scale. I am not even sure if the HR teams around the world are capable of handling that scenario, not to mention the CFO teams sweating about modeling the investments and the return on capital. “How we work” will have as much disruption in our organizations as the tech disruption will be in the engineering world.

Let’s consider something like optimizing a back office process – say accounts payable – as an example of how the “way we work” will change. Traditionally we would use some lean six sigma analysis to optimize the process, use some tech to automate what we can with OCR and some elementary AI, and then depend on labor arbitrage via BPO to save most of the cost. A lot of companies have already taken out hundreds of millions of dollars this way – there is only so much left to squeeze out with the traditional approach.

If we look at an AI-first approach – the BPO staff would need to learn to properly label the data they are working on. Then an AI team would need to use that data to train a model via supervised learning. To some degree – the better consulting firms all do some version of this already. But we do know that learning by mimicking only gets us to a certain level of efficiency. So we will need to do even more AI work – where the model learns via trial and error (like reinforcement learning). From that point the team will need to build a compound system to orchestrate the piece parts. I am sure those of you who are from the consulting world can already extrapolate the changes in operating model a consulting business will need to pull this off.
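The first step of that pipeline – BPO-labeled examples feeding a supervised learner – can be sketched as a toy. Here the “model” is just per-label word counting, a deliberately simple stand-in for real training, and the invoice and HR snippets are made up for illustration:

```python
from collections import Counter, defaultdict

def train(labeled_examples):
    """Build per-label word frequencies from (text, label) pairs."""
    counts = defaultdict(Counter)
    for text, label in labeled_examples:
        counts[label].update(text.lower().split())
    return counts

def predict(model, text):
    """Score each label by how much of the text appears in its training vocabulary."""
    words = text.lower().split()
    return max(model, key=lambda label: sum(model[label][w] for w in words))

# The kind of labels BPO staff would produce while working the queue:
labeled = [
    ("invoice total due net 30", "payable"),
    ("purchase order approved", "payable"),
    ("candidate resume attached", "hr"),
    ("employee transfer request", "hr"),
]
model = train(labeled)
```

A real deployment would replace the word counting with fine-tuning and add the reinforcement-learning loop described above, but the operating-model change is the same: labeling becomes a first-class part of the BPO job.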

As AI agents become more mainstream and start working with humans – perhaps even interchangeably – there are a host of other aspects to consider. We will need a modern version of the current HCM suites to onboard, train, performance-manage and retire the digital agents. It will need a whole new set of integrations with finance and costing apps. And all this assumes that governments around the world will get smarter with appropriate regulations.

What about the skills we already have? Does anything at all that we know now survive this massive shift?

I do think that despite the need to upskill constantly – many of our existing skills will transfer over just fine. As an example – let’s consider data management. There is no way any of this AI goodness will happen without great data management (quality, governance, security, lineage and all that). If anything, I think GenAI will make data management the new black. GenAI will make it easier to execute data management for sure – everything from discovery to code fixes will be much easier – but the core principles will all still be transferable.

I am a massive optimist when it comes to technology. I am perhaps a little too excited about the fun challenges I will get to solve in the next ten years. If I have any regret – it’s just that I am not an engineer in my twenties anymore. I guess my generation had our share of fun with other technology shifts and still will manage to play a small role in this one