Ollie Vijayasankar – 4/29/2013 to 8/24/2025


Ollie joined the family as an eight-week-old pup. Rebecca Heiman found him for me from a litter that Brianna Bischoff had bred in Houston. We had fallen in love with him from the first photo we got of him. He was the “Orange Boy” of that litter.

His two big brothers – Boss and Hobo – welcomed him happily.

We used to have a big orange tree in our backyard and all three of them loved to pluck oranges from the tree and play with them. I never had to buy tennis balls for that gang.

Rebecca handled him in the show ring.

He was a good-looking dude from day 1 and his good looks let him get away with a lot 🙂

He is the only dog I ever had that didn’t get any obedience training at all – to his last breath he didn’t even know how to sit on command. He lived for hugs and I was happy to comply.

He needed two surgeries – one for eating half a bath towel as a seven-month-old and another for eating a bunch of stones as a ten-year-old!

Much like Hobo and Boss – he too loved swimming in the pool.

Boss and Hobo absolutely helped raise Ollie. And Ollie helped raise Archie when he joined our family.

They had been inseparable – and much like how Boss and Hobo were patient with Ollie when he was a young pup, Ollie was quite a considerate older brother for Archie.

About a year ago Ollie was diagnosed with cancer. Unfortunately cancer is quite common in golden retrievers. I had lost Boss to cancer as well when he was about 13. His doctors and I discussed the options extensively. I had tried surgery with Boss, which eventually proved to be a bad idea. Ollie was not a good risk for surgery and we didn’t do it. He took some medication and seemed to be ok for a while.

Unfortunately his condition worsened in the last few days and I had to make the painful decision to help him cross the rainbow bridge.

He had all his favorite things to eat – scrambled eggs, his favorite canned dog food, ice cream…

We hugged a lot – making Archie quite jealous. He played for a little while with Archie.

This morning we drove to the vet’s office – Ollie as usual calling shotgun.

He was a champion all the way – walked in and greeted everyone and got petted by all the staff. He comfortably settled down, took a sip of water, and went to sleep peacefully in my arms.

Run free and fetch oranges from the pool in heaven, Ollie balls. Till we meet again!

The future of business applications


As always – these are strictly my personal opinions

Let me start by reiterating that I still don’t agree with Satya Nadella that apps are nothing but database CRUD.

So where do I see apps going next as AI gets better and better?

Let’s take HR platforms as a starting point. They cover a wide array of workflows – from finding talent, to onboarding and training people, to managing performance, to paying people, to ensuring compliance, to separating people from the enterprise. Many more such processes are probably in scope of these systems.

Now fast forward a few years, when agentic AI moves from science project to active deployment. You will literally have the same job done by both AI agents and human workers. An example would be invoice handling, where AI agents do the simple ones and humans do the corner cases. And it won’t be one agent – it will be hundreds or thousands of agents doing all or part of the enterprise’s workflows.
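To make the split concrete, here is a minimal Python sketch of how such agent/human routing might look. Everything here – the field names, the thresholds, the labels – is hypothetical and purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Invoice:
    amount: float
    line_items: int
    has_po_match: bool  # whether the invoice matched a purchase order

def route(invoice: Invoice) -> str:
    """Send simple invoices to an AI agent, corner cases to a human.

    The thresholds below are made up for illustration: low-value,
    PO-matched invoices with few line items count as "simple".
    """
    if invoice.has_po_match and invoice.amount < 10_000 and invoice.line_items <= 5:
        return "ai_agent"
    return "human_reviewer"

print(route(Invoice(amount=500.0, line_items=2, has_po_match=True)))       # ai_agent
print(route(Invoice(amount=50_000.0, line_items=40, has_po_match=False)))  # human_reviewer
```

In a real deployment the routing logic itself would likely be learned rather than hand-coded, but the shape of the problem – the same queue feeding two kinds of workers – stays the same.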

That means you now need a system of record to recruit the best bots, onboard them, train and retrain them, and so on. A science project doesn’t need any of that – but a scaled deployment will absolutely descend into chaos unless it is governed.
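A minimal sketch of what one entry in such a system of record could look like, mirroring what an HR system tracks for a human worker. Every field name here is a made-up illustration, not any real product’s schema:

```python
from dataclasses import dataclass

@dataclass
class AgentRecord:
    agent_id: str
    vendor: str                 # who supplied the agent ("recruiting")
    workflow: str               # which enterprise process it runs
    model_version: str          # what it was last trained/retrained on
    evaluation_score: float     # the performance-review equivalent
    status: str = "onboarding"  # onboarding -> active -> retired

# The "system of record": a registry of every deployed agent.
registry: dict[str, AgentRecord] = {}

def onboard(record: AgentRecord) -> None:
    """Move an agent from onboarding to active and register it."""
    record.status = "active"
    registry[record.agent_id] = record

onboard(AgentRecord("inv-001", "AcmeAI", "invoice_processing", "v3.2", 0.92))
print(registry["inv-001"].status)  # active
```

The point is not the code – it is that the lifecycle fields (vendor, version, evaluation, status) are exactly the kind of metadata today’s HR platforms were never designed to hold for non-human workers.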

HR systems of the past already carry massive technical debt. If they try to tweak their metadata, data, and UX to accommodate digital labor, it will take quite some time and money. This opens up a massive opportunity for startups with no tech debt to create simpler platforms built for such coexistence. Knowing my exceptionally talented friends at the big enterprise software companies, I am sure they will find clever solutions to all of it – but the threat of disruption is quite real for established app companies in my opinion.

HR was just an example – literally every enterprise workflow, from marketing to supply chain and beyond, will get redesigned to make use of digital labor (and other conventional AI goodness too – but those are probably incremental and not as disruptive).

AI will help write and test a lot of software – but not all apps can make use of it equally. I am not sure if apps that need a lot of rigour and consistency, like accounting, will let go of human control and let AI take over coding and testing.

That’s just the software development side of the story.

Think about the FinOps side of the story next. Today – when most enterprise workflows are deterministic – it’s already quite hard for most companies to plan for and optimize their spend on apps running in the cloud. There is a whole genre of very funny jokes about hyperscaler bills on the internet. Now think about what happens when AI, with its probabilistic and compute-hungry nature, becomes mainstream. That whole discipline will need to be redesigned!
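As a toy illustration of why the planning gets harder: a deterministic workflow has a fixed per-transaction cost you can budget exactly, while an LLM-backed step’s cost varies with token usage per request, so you have to budget to a percentile instead of a line item. All prices and distributions below are invented:

```python
import random

PRICE_PER_1K_TOKENS = 0.01   # hypothetical model price
DETERMINISTIC_COST = 0.002   # hypothetical fixed cost per transaction

def llm_step_cost() -> float:
    # Token usage varies per request; model it as a (made-up) Gaussian.
    tokens = max(random.gauss(mu=1500, sigma=600), 0)
    return tokens / 1000 * PRICE_PER_1K_TOKENS

random.seed(42)
samples = [llm_step_cost() for _ in range(10_000)]
mean = sum(samples) / len(samples)
p95 = sorted(samples)[int(0.95 * len(samples))]

# The deterministic step has one number; the LLM step needs a distribution.
print(f"deterministic: {DETERMINISTIC_COST:.4f} per txn")
print(f"llm step: mean={mean:.4f}, p95={p95:.4f} per txn")
```

Budgeting to the p95 rather than the mean is the kind of shift that turns FinOps from accounting into forecasting.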

Let’s also think about the SI ecosystem that is vital to deploying software. When SAP and Oracle and so on got challenged by new app companies, something that was touted loudly was that these newer platforms wouldn’t need significant SI work. In quite a literal sense, that might be true. But if you look around, all those companies ended up building very large SI ecosystems around them.

So where does the SI ecosystem move to when apps get disrupted?

I think BPO will be the first casualty – most of its repetitive work can be automated away. There are SI companies out there with hundreds of thousands of BPO employees in a labor-based model. They all have smart engineers too – but since public markets don’t take kindly to drops in revenue, they are limited in how massively they can automate. That leads to two possibilities: newer and nimbler “tech first” SIs will go after the incumbents and win, OR a hyperscaler or software vendor will wrap the labor into an outcomes-based contract with clients and simply disintermediate BPO-type services. Either way, BPO as we know it today is going to be toast. Other things like production support (AMS) will also go this route.

But the more interesting question is whether software deployment can minimize its dependence on SI firms at all. So let’s delve into that a little.

AI is already good enough – or close enough to be there soon – that we won’t need a lot of human labor for creating reports, forms, and so on. GenAI is quite effective with data manipulation, and soon we won’t need as much human labor for data conversions either.

That means the big question is whether developing interfaces and enhancements can get easier with AI. Enterprise software is a lot better today than it was 25 years ago – but interoperability is still not a key strength. Metadata, APIs, etc. are all different across platforms – and given that most big platforms grew by acquisition, they are often not very well standardised even within a platform. That’s what has historically demanded a lot of skilled SI labor to implement software. GenAI does offer options to change this equation drastically.

Even if development work itself still needs a lot of skilled labor – think about things like discovering the logic of old code, testing, and so on, which today take a lot of time and effort. If you look at a transformation project end to end, those are the things that eat up both time and money. They can absolutely make use of AI for massive productivity gains.

Just like with software apps – it remains to be seen if SI companies will disrupt themselves or get disrupted by new entrants.

There is some net goodness in all of this if we are adaptable. Those AI models still need to be trained to take over repetitive tasks – the BPO folks doing repetitive work today might be quite valuable in training AI to do those tasks, and can then move on to higher-order work like orchestrating workflows as the market changes. It’s not a zero-sum game unless we make it one by sitting around waiting.

For all of us in the tech world, the choice is between getting excited, learning, and adapting fast, OR getting paralyzed with fear, not learning, and just letting our economic value shrink as the market rapidly changes around us.

Direction of AI policy will dictate the direction of humanity


Usual caveat – these are all strictly my personal opinions!

Let’s start with what it takes to get value out of AI the way the technology works today. Look at the last paragraph of this tweet from Ilya S, one of the smartest AI experts on the planet. He calls out the three criteria in the right order: compute, talent, and use cases.

Whether it’s training or inference, AI doesn’t operate with the efficiency of the human brain. It needs lots of expensive and power-hungry compute. AI is not trying to mimic how any one of us learns or thinks – AI is trying to learn what collective humanity knows about a topic and then answer questions based on that. No human – not even Ilya himself – would ever read and learn every last bit of available information about AI the way an LLM does. Till some fundamental change happens with new research, compute will be a massive constraint.

Talent is more of a short-term problem. New knowledge is usually built on existing knowledge, and there are plenty of ways to scale knowledge to more people. So while we are short on AI experts today, that is a relatively simpler problem to solve in the medium to long term.

Use cases are a dime a dozen – and generally only constrained by compute and talent. But it’s use cases that determine the future of humanity. And that is why it’s important for the world to have policies and principles on which use cases are worthwhile to solve with AI.

The most obvious policy topics – safety, privacy, security, governance, etc. – are definitely getting attention. That’s not to say they are solved, but at least public conversations are happening, and both regulators and private companies are largely saying all the right things. I think it will resolve itself over time given the massive attention.

What’s the primary worry when any technology scales massively? It’s the fear of massive unemployment. History tells us that humans generally found other higher-value things to do, and hence past technology advances were a net positive. The unknown with AI is that it’s not just “arms and legs” work that AI could displace – it’s the “brains” work. It remains to be seen if humanity can withstand such a disruption like it did in the past.

If it’s a zero-sum game – which is what most discussions lead us to believe – there is no way to avoid massive displacement of jobs. UBI-type solutions are being discussed because a lot of people seriously believe the world operates in a zero-sum way. That’s a depressing thought. But I also think the reason we ended up there is that the larger community – economists, social scientists, and so on – has not yet done enough hard thinking about AI. What I am not sure about is whether that’s because they are dismissive of AI’s impact in the near future, or because the technology looks intimidating and they have given up trying to understand it deeply. Either way, I do hope they take it seriously quickly.

In the zero-sum scenario, it’s very easy to see where this goes. There are a lot of roles that exist just because technology is not capable of doing something efficiently and effectively. Classic examples are the roles in HR, finance, operations, etc. where humans are needed to hold together an end-to-end process. Those are absolute roadkill – AI can already mimic a good part of the repeatable tasks those humans do. So while not all jobs will disappear, a lot of roles absolutely will go away. Much like we as consumers got used to online banking and automatic checkouts at grocery stores after the initial resistance, people will get used to newer workflows for HR etc. too.

If most of the compute and talent focuses on finding efficiency, then the zero-sum game becomes harder to avoid, because it’s harder to create enough jobs for the displaced people in a reasonably short period of time. Of course there are ways – better severance packages, government support, and so on – that can ease the pain, but those are band-aid solutions till more good new jobs get created.

It doesn’t have to be a zero-sum game at all. If we switch our thinking a bit, AI (and AGI/ASI if we ever get there) can expand opportunities for human progress significantly. Instead of spending available compute and talent mostly on replacing mind-numbing tasks like employee transfers and invoice processing, what happens if we repurpose it to go after massively high-value use cases like creating new medicines, finding better sources of energy, and cheaper ways to get food and drinking water?

Of course it’s not one or the other – use cases will always fall on a spectrum, from replacing the invoice-processing clerk to inventing a massively efficient battery or curing cancer for cheap. The question is how companies and governments will choose to prioritize.

If we choose to focus mostly on the easy problem of replacing repetitive, boring tasks at high volume, but don’t create enough new jobs with higher-value work, it is low risk for investors with decent returns. However, it will reduce consumer spending in the medium term and force recessionary economies in the long term. At the end of the day, the economy thrives on spending!

If we choose to solve the high-value problems like better and cheaper energy, food, and water, the short-term risk for investors is high but the medium- to long-term returns are massive. And those things will create new jobs, which can then offset the job losses that come from eliminating the mind-numbing roles people do now. We get to a better place with less pain.

I am curious to hear your thoughts – please leave a comment on where you stand on this and what else we could be doing to make progress without massive disruption to society.