I think Satya Nadella might be wrong about the death of SaaS


I will start with the usual disclaimer – what I post here is strictly my personal opinion. It has nothing to do with my present or past employers.

I watched this fascinating interview with Satya Nadella, where he predicted that SaaS will die and Agentic AI will take over.

I am a huge fan of Satya – I listen to him closely every time I get a chance, and he is one of the most grounded technologists of our generation. But this time, I felt his prediction of the death of SaaS was exaggerated.

I do think that AI agents are the next generation of innovation in software, and they will be disruptive to how SaaS has historically worked. But if anything, I think the actual impact on the SaaS business is that it will grow massively in the near future.

Let me explain my thinking – and maybe someone who reads this can correct me.

The big disruption, essentially, is the coexistence of human labor and digital labor in the future. As digital labor – or AI agents – becomes more commonplace, it will become a necessity for the current HCM SaaS apps to bring agents into scope. Think about it: those agents are going to do the kinds of work that humans have done in the past, and they will probably be a mix of owned and rented entities, much like employees and contract labor today. Their work needs training, performance management, security and so on. I would expect Workday, Oracle, SAP and the rest to evolve to cater to that reality in short order.

Or think about trading partners for a business, like customers and vendors. A simple example would be me as a consumer offloading my grocery shopping to an AI agent (say, a next generation of Alexa). If the agent is the one in charge, there is no reason for the vendor to market to me via emails, snail mail and phone calls – they need an agent on their side to pitch to my agent through a digital protocol. Think about the changes needed in Adobe, Marketo and so on to cater to that new world.
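To make that concrete, here is a minimal sketch of what a vendor agent pitching to a consumer agent could look like. Everything in it – the message shape, the field names, the acceptance rule – is invented for illustration; no such standard protocol exists today.

```python
# A hypothetical agent-to-agent "pitch": the vendor's agent sends a
# structured offer, and my agent decides, within the limits I set,
# whether to accept. All names and rules here are made up.
from dataclasses import dataclass

@dataclass
class Offer:
    sku: str
    description: str
    unit_price: float
    currency: str
    valid_until: str  # ISO 8601 date

def consumer_agent_accepts(offer: Offer, shopping_list: set, max_price: float) -> bool:
    # My agent only buys items I actually asked for, under my price cap
    return offer.sku in shopping_list and offer.unit_price <= max_price

offer = Offer("milk-1gal", "Whole milk, 1 gallon", 3.49, "USD", "2025-12-31")
print(consumer_agent_accepts(offer, {"milk-1gal", "eggs-dozen"}, max_price=5.00))  # True
```

The interesting part is not the code – it is that the marketing function moves from persuading a human to satisfying another machine's explicit constraints.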

So just from the added complexity of the scope itself, SaaS would grow, not die.

Now let me poke at Satya's other assertion – that business logic will be taken over by agents working across multiple repositories, and the backend systems will then collapse.

If you look at how the major SaaS apps treat common “objects” like customer, purchase order and so on, you will notice that there is virtually no standardization across vendors today. That is why integrating SaaS systems is a big cost and a painful exercise in most client landscapes, even when vendors claim their APIs allow seamless integration. On top of that, every client has nuances built in via additional configurations and customizations. So even if every vendor agreed to use its metadata and data to train an AI model, it would still take a lot of effort for an AI agent to understand the system that is in place for a given customer.
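As a toy illustration of the standardization gap, here are two invented payloads for the same customer. Neither shape is taken from a real vendor API – the point is only that an agent (or an integrator) has to build and maintain a mapping like this for every pair of systems, multiplied by every client's customizations.

```python
# Two made-up "customer" payloads for the same company. The field names
# and nesting are invented, but the mismatch is realistic.
vendor_a_customer = {"CustomerID": "0001234", "Name1": "Acme Corp", "Country": "US"}
vendor_b_customer = {
    "account": {"id": "acc_98zx", "displayName": "Acme Corporation"},
    "billingAddress": {"countryCode": "USA"},
}

def normalize_a(c: dict) -> dict:
    return {"id": c["CustomerID"], "name": c["Name1"], "country": c["Country"]}

def normalize_b(c: dict) -> dict:
    # Even the country needs converting (ISO alpha-3 -> alpha-2 here)
    iso2 = {"USA": "US"}.get(c["billingAddress"]["countryCode"], "??")
    return {"id": c["account"]["id"], "name": c["account"]["displayName"], "country": iso2}

print(normalize_a(vendor_a_customer))
print(normalize_b(vendor_b_customer))
```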

The problem gets worse when the process logic is not all held in SaaS but in an external system like an RPA or BPM tool – a very common pattern in enterprise contexts. In most cases, this logic is not documented anywhere and lives only in the heads of a few people. There are discovery tools that use multiple techniques to analyze the landscape and make sense of such information – but so far I haven't seen even 50% of a complex landscape auto-discovered without massive human intervention.

Business logic for most applications is quite deterministic and stateful – for good reasons like the need for consistency, auditability, security, compliance, performance and so on. There is barely any room in that setup for even a tiny amount of hallucination. Even rock-solid deterministic systems need a lot of work and have errors that take human effort to reconcile and fix.

Satya abstracted SaaS to CRUD operations on top of a database. That's not wrong – but obviously it's not that simple either. When you pay an employee their salary, it is not just the HR system that needs to be updated; it is also the ledger. Not all of those integrations are clean interfaces, and many are not even externally exposed. So the ability of an agent to function in the multi-repo way Satya describes is not easy to pull off in practice.
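Here is a toy sketch of that point using an in-memory SQLite database. The schema and account names are invented; the point is that “pay an employee” is not one CRUD call but several writes that must commit together, including double-entry postings to the ledger.

```python
# "Pay an employee" as a toy sketch: payroll and ledger move together,
# or the books stop balancing. Schema and accounts are illustrative only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE payroll_run (employee TEXT, net_pay REAL);
    CREATE TABLE gl_entries (account TEXT, debit REAL, credit REAL);
""")

def pay_employee(employee: str, gross: float, tax: float) -> None:
    net = gross - tax
    with db:  # one transaction: all writes commit, or none do
        db.execute("INSERT INTO payroll_run VALUES (?, ?)", (employee, net))
        # Double-entry postings: total debits must equal total credits
        db.execute("INSERT INTO gl_entries VALUES ('salary_expense', ?, 0)", (gross,))
        db.execute("INSERT INTO gl_entries VALUES ('cash', 0, ?)", (net,))
        db.execute("INSERT INTO gl_entries VALUES ('tax_payable', 0, ?)", (tax,))

pay_employee("E1001", gross=5000.0, tax=1000.0)
```

In a real landscape the HR system and the ledger are separate products connected by interfaces that may not even be externally exposed – which is exactly why an agent can't just reach in and do this directly.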

While I don't agree with Satya that SaaS is on its deathbed – for all the reasons above – I absolutely think Agentic AI will be a great addition to it and will add tremendous value. Since there is already plenty of hype about what Agentic AI can do, I don't think it's useful for me to pontificate on that here 🙂

Pls let me know what you think. It's such an exciting time to be a part of the tech ecosystem – I am sure there are counterpoints to what I said above, and I look forward to learning from you if you share your thoughts in the comments.

GenAI may be a better validator than creator, for now


If the hype is to be believed, then GenAI is the answer to all questions these days, isn’t it? We liberally use GenAI every day even though we know that hallucinations are largely unavoidable.

Two of the most popular use cases in my line of work are

1. Code generation

2. Text summarization

None of us who work with GenAI tend to position it as an “autopilot”. We know that it needs humans to be in charge. That's why GenAI solutions are usually called “copilots” and “assistants” – GenAI has fundamental issues with accuracy and reliability that humans need to rectify, or sometimes just ignore. It could generate code that is syntactically correct but does not do what it is intended to do. It could write up a creative summary that makes up invalid data – like nonexistent citations and so on.
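Here is a contrived example of that failure mode in code – it parses and runs without error, but it computes the wrong thing:

```python
# "Syntactically correct but wrong": runs fine, yet forgets to sort
# before picking the middle element.
def median(numbers: list) -> float:
    n = len(numbers)
    mid = n // 2
    if n % 2:
        return numbers[mid]
    return (numbers[mid - 1] + numbers[mid]) / 2

print(median([3, 1, 2]))  # prints 1, but the median is 2
```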

To be fair – humans make up stuff all the time too. We often act without thinking things through in any detail – but if asked to explain our actions, we can usually come up with a convincing explanation. For example, I just returned from my walk. I could have taken a dozen different routes today, and I have no clue why I chose the route I walked. But if you asked me, I could come up with a decent answer like “this is the route that has the least traffic at this time of day”. That answer is believable for most people, and they have no reason to suspect that I just made it up after the fact.

It is relatively straightforward to verify whether my explanation was factual, if someone wanted to spend the time doing it – and verification doesn't need nearly as much creativity as inventing the answer did.

If we extend this concept back to GenAI, it's not a stretch to see how it's a lot more efficient (and quite valuable) to use the tools to validate code and text summaries than to create them in the first place. It takes hardly any creativity to check whether a citation is valid, compared to creating a fake citation that comes across as realistic. Similarly, it's a lot easier to create a comprehensive set of test cases for a given code base than to create the best code to solve a given problem.
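The citation case makes the asymmetry obvious: checking whether a DOI actually resolves is a deterministic lookup, a few lines of standard-library code, while fabricating a citation that looks real is the hard, “creative” direction. A minimal sketch (the function name is mine, it needs network access, and real reference checking would involve more than this):

```python
# Validating a citation deterministically: ask doi.org whether the DOI
# exists. No creativity required on the checking side.
import urllib.request

def doi_resolves(doi: str, timeout: float = 10.0) -> bool:
    req = urllib.request.Request(f"https://doi.org/{doi}", method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except Exception:
        return False

print(doi_resolves("10.1038/nature14539"))       # a real DOI -> True
print(doi_resolves("10.9999/not-a-real-doi"))    # -> False
```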

When I explained this thought to a friend last week in India, the pushback I got was that he didn't think a system that does a less-than-stellar job of creating code could be that good at testing it.

I think this lack of trust is a bit misplaced, for two reasons:

1. Let's say we are building a plane with the best engineers on the planet – half of them doing the build and the other half doing testing. Would we be satisfied if only build-qualified engineers were the testers, or would we ask for engineers who are test specialists? And in any case, would we trust the plane until a pilot actually flies it? The ability to build a great plane doesn't translate directly into the ability to test it thoroughly – or vice versa. There is no necessary constraint that the same person be an expert in both.

2. AI is a lot more useful when boundary conditions are known – which is the case when all you need to do is validate specific things. In fact, you can use a lot of deterministic techniques – and generally improve computing efficiency – when the problem has specific boundaries. See the sketch after this list.
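For example, for any claimed sort implementation – human- or model-written – two deterministic properties pin down correctness completely, with no model in the loop on the checking side:

```python
# With known boundary conditions, validation is fully deterministic:
# a correct sort must (1) be in order and (2) contain exactly the same
# items as the input.
from collections import Counter

def is_valid_sort(original: list, result: list) -> bool:
    in_order = all(a <= b for a, b in zip(result, result[1:]))
    same_items = Counter(original) == Counter(result)
    return in_order and same_items

print(is_valid_sort([3, 1, 2], [1, 2, 3]))  # True
print(is_valid_sort([3, 1, 2], [1, 2, 2]))  # False: items changed
```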

I absolutely think that over time GenAI will largely overcome its deficiencies in accuracy and reliability. But we don't have to wait for that to make it useful – and validation use cases might be one such high-value pattern.

I am curious to hear your thoughts about this. Pls leave a comment if you could.

Future Of Technology Jobs


As always – these are strictly my personal thoughts and not that of my employer.

Over the last few months, the most frequent question I get from friends and family – given I have been in this industry for a while – is about the future of tech jobs. This past weekend, I spoke with a friend who lost his engineering VP job at a big tech company, and then with his son, an entry-level engineer whose job offer has now been delayed twice.

The first question I get asked is whether people are let go from tech companies because AI can do their jobs better.

I think that's part of the answer when it comes to HR, operations and similar functions. AI and other technologies are mature enough today to take over a lot of work that has historically needed humans to execute. The fact that AI can do something doesn't mean you can switch it on and fire people that afternoon – it still needs a lot of change management, and all change is hard. But in the short to medium term, I do expect a lot of functional tasks to be done by machines. I also think that many companies will rush into this without proper planning, get it wrong, and pay a price.

For engineering jobs – where someone has to actually code – AI today is at best a job aid, not a replacement. But it is an excellent job aid, which means engineering managers might not need as many developers to get the job done, or they may be able to get more done with current capacity. The reality is that a lot of engineering teams have excess capacity hidden in plain sight – people doing manual testing and code review instead of having better CI, people hand-coding deployment pipelines, and so on. Finding that productivity doesn't even need AI – but AI will make a strong case for looking at developer productivity at all levels.

The next at-risk role is managers who largely serve an aggregation function with no hands-on skills – engineering managers who cannot code at all, sales managers who only look at the CRM and don't make client calls, ops managers who have no skills in optimization, and so on. AI is excellent at summarization – and we already have other tech that can aggregate numbers, make comparisons and so on. All organizations have inertia when it comes to attacking structure, so it might take time – but soon there will be no place to hide for people who don't have higher-order skills to either make more money for their employer or save costs.

Why is this happening now?

That's just the nature of competition and business cycles in a capitalist system. Resource allocation happens where companies expect the most return. This doesn't only happen in downturns – look at the massive hiring and wage inflation during the COVID years, when the belief was that exponential growth would continue. When that growth stopped, and fast, revenue simply didn't keep up with costs. With high interest rates, growing revenue was always going to be difficult, and hence the logical option was to cut costs to improve efficiency.

It also just happened that AI got new wind in its sails at the same time – so the perceived risk of letting employees go is probably lower now.

Just as companies hired way more people than they needed in the boom cycle, there is a good chance they will overcorrect in the bust cycle. I expect this to reverse as soon as the Fed cuts interest rates, but we probably won't see the levels or speed of hiring we saw in the past, given AI is also progressing rather fast along the way.

Sales jobs might be the one exception to this. For example: when Elon Musk fired a lot of people at Twitter/X, engineering seems to have found an equilibrium, but the company still has revenue issues.

AI and its compute problem

One reason some tech companies need to find more investment is the initial cost of AI compute. Training AI is extremely expensive and time-consuming. The current approach with deep learning (transformers included) is to teach a system with the collective knowledge of all humans – and when that runs out, to augment it with synthetic data. That's not how any one human being learns – no child reads all of Wikipedia, for example. The GPUs, the data centers, the scarcity of top talent – all of this adds to the cost very fast.

This causes two things:

1. Companies will need to find money quickly to fund these expensive R&D initiatives to remain competitive

2. Smaller companies will have to stay a couple of steps behind the big ones in some cases given the high cost.

Open source AI solves some of these problems, but what we really need is for AI to learn and think in less expensive ways. I am hoping the research community comes up with viable alternative approaches quickly.

So what about the people whose jobs get affected?

1. The skill we need most to stay ahead of the tech onslaught is the ability to learn and unlearn really fast. There is no saying what skills we will need in five years – which means we need to be willing to always be learning new things.

2. The “real” top talent will always be in hot demand in all markets. The rest of us need an alternate strategy on top of the constant learning. If you are an above-average skilled person at a top tech company and you lose your job, you could still be in high demand at tech companies in the next tier.

3. Disrupt instead of being disrupted. If you gain experience using AI to improve HR functions in your current company, you probably have high odds of being considered for a higher-order role there, or of being in demand at another company that wants to do AI in HR. I just used HR as an example – this is true for engineers too.

4. Objectively assess your value-add – especially if you are a manager. This is a lot harder than it sounds; most of us are not that objective about ourselves. Get more hands-on and add value. The status quo will get challenged sooner than any of us would like.