Swinging between being burnt out and bored out – the curious case of persistent leadership fatigue


As we grow into more senior roles, constant fatigue becomes a real risk for most executives. I have felt this myself and I know several friends who are absolutely miserable. Very few talk about it openly. In my circle of friends, this has been a recurring topic for a while. We joke about it as the natural outcome of getting old – but we are all well aware that age is not the root cause, or at least not the biggest reason.

I am on a plane now on a work trip to India and thought I would post a rant on this topic.

As I look back at my own career – I can see a certain paradox. I have been exhausted from having too much to do. I have also been equally exhausted from having too little to do. Either way the outcome is terrible. Exhaustion should not be worn as a badge of honor if it’s perpetual.

In my case – being bored out was an even bigger challenge. My brain would just switch to a low power mode and I would keep doing stupid repetitive things mindlessly. I grew bitter and there were physical symptoms like throwing up and so on. Thankfully I could change roles relatively quickly when I ran into those situations. I no longer accept roles unless I am very sure that the mission excites me. It took some hard knocks before this wisdom kicked in and I wish I had been smarter about it earlier.

These days I ask myself a simple question: “What will break if I am not around for a couple of weeks?”. If the honest answer is that not a lot will break – it is usually a good indicator that being bored out is just around the corner and I am largely doing repetitive work that can be delegated or automated. Then it’s time to find a new mission. You and your boss might not agree about this though. Be fair and give your boss the time and context to process this. Also, the mission might not be a new job or role – it could be a side project that excites you and is white space for the company.

Having worked with and managed a lot of execs over the years, and having observed their careers closely, I am reasonably sure this has been the case for at least a good proportion of them.

Most senior leaders walk unsuspecting into an agency trap. The transition from an individual contributor to a manager is hard. It’s equally hard, if not harder, to become a manager of managers, and every step up from there. It takes some time to realize that you can no longer directly influence the outcome like you used to in your previous role. I still struggle with this every time I take on a new role, but usually get over it faster thanks to the life experience gathered along the way.

As I look back at the relatively low exhaustion roles I have had earlier in my career – I think one reason is that I had a lot of peers I could freely talk to. I didn’t have to think about “is it ok to be vulnerable?”. Every step up after that, my peer group shrank and it took me a while to realize that it’s ok to be vulnerable to your team as well. I can’t say I have quite mastered it but I am a lot better at it today than say 15 years ago.

Senior leaders are managing the collective anxiety of a lot of people in their organizations – and often outside their organizations. You need some venting mechanisms built quickly to deal with it. The comfortable thing to do is to lean on others in the same situation – your peers and your direct reports and your boss. There is value in that – but there is a challenge that we might not immediately realize. It’s super easy to fall into an echo chamber where everyone feeds into the insecurities of everyone else.

Echo chambers are a reality. I don’t think there is a way to avoid them for good. The closest to a good strategy that has worked for me is to spend time with people with very different interests. I find time to hang with people who train dogs (they have their own echo chambers, but it’s so different from my line of work that it’s almost therapeutic for me to deal with that for some time). I am an avid reader – and I consciously choose books outside the world of technology for about half my reading. It works – it not only reduces my cognitive load, it even sparks new ideas for my line of work.

Somewhere along the way I lost interest in chasing vertical growth in my career. I started enjoying helping my team grow more than my own progression in the hierarchy. It didn’t stop me from getting promoted multiple times after that – that’s the irony of this story. It is quite a liberating feeling when you don’t feel like you need to run at an unsustainable pace. It’s not like I don’t care for money and some luxury anymore – I am not a saint by any stretch. It’s just that I value other things in equal measure and can make better trade-offs.

One of the things I try to do is to take a pause and do an ROI evaluation on the important things I have been doing in the recent past. I make a list of things and people that give me energy and another list of things and people that drain my energy. Then I find ways to spend more time on the former and less on the latter. I don’t always succeed but even if I can shift the balance by ten or twenty percent, the results are amazing.

The Citrini report shows us that we need smarter economists and investors urgently


This is the report that wiped out $200B of market cap the following day. https://www.citriniresearch.com/p/2028gic

First – I don’t blame the authors one bit. They painted a scenario – they didn’t predict that’s what would happen. CEOs of the frontier labs and hyperscalers have all painted their scenarios and we were all cool with that. So why not this one, and more like it?

Second – I absolutely blame economists and professional investors for being stuck in the past. They have been lazy, doing just the first order impact analysis of AI (picks and shovels will be in great demand in a gold rush, so buy Nvidia stock). That doesn’t even justify their plane tickets to Davos to pontificate on AI annually at WEF. I attribute the current volatility mostly to their lack of rigor in their own field.

I am just an engineer who plays with AI. I have some experience running parts of the P&L for larger companies. I am not an economist by any stretch of the imagination. I am going to jot down my thoughts on the scenario painted by the report, mostly so that I can come back in 2028 and read about it. So please take it for what it is worth.

As always, these are just my personal opinions.

I will start with the conclusion and explain why I think so across a few bullets. I think the scenario of the economy crashing in 2028 because of AI is completely improbable.

Here are six reasons why:

  1. History has always rhymed – every single technology shift in the market has led to more employment, and the kinds of roles it creates were not known when the tech first came around. That has been the case from steam engines to ATMs to the internet.
  2. There will be a market-driven spending brake. If there are no people to buy products, there won’t be runaway AI investment either. This idea of firing people and investing in AI is not an endless loop like the report says – there is no economic logic that supports it.
  3. If and when AI makes everything from food and housing to sneakers cheaper – workers won’t need to make as much money to keep their current standard of living. A four day workweek is a more likely outcome in 2028 than economic collapse.
  4. Just look at public cloud adoption in large companies to get a hint of how regulations and bureaucracy slow down tech by decades. On top of that, there are labor unions, courts etc. that will cause further friction and slow down adoption. The talk on AI today is miles ahead of the walk of AI. The layoffs we see are largely from past over-hiring – not because AI has taken over jobs yet. It’s just a convenient excuse for companies to shed cost now.
  5. White collar workers – not all but many – have access to capital markets directly or through their 401(k), IRA etc. The wages they might lose in the doomsday scenario will easily be offset by the capital appreciation they get from Nvidia etc. booming, if the scenario indeed plays out with the Ghost GDP. Cash is fungible – it doesn’t matter a lot which way we earn it. It matters for tax etc. but you get the point.
  6. How likely is financial contagion? If it’s Amex and Visa getting disrupted by AI agents who prefer the Solana blockchain – there are plenty of regulations around KYC/AML that will slow that down. A mortgage crisis is a real possibility in my mind – it can only be partially offset by the access to capital markets that some of the mortgage holders have. But because of the spending brake I discussed above – I don’t see a full blown disaster.

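The "spending brake" in reason 2 is really a feedback loop, and a toy simulation makes the self-limiting dynamic easy to see. Everything below is illustrative: the function name, the starting numbers, the layoff rate and the "half the wage savings go to AI" assumption are all made up for the sketch, not drawn from any real data or model.

```python
# Toy model of the "spending brake": firms lay off workers and redirect the
# savings to AI capex, but consumer demand (driven by worker income) caps
# how much AI spending the economy can actually absorb. All parameters are
# invented for illustration; this is not an economic forecast.

def simulate(years=5, workers=100.0, ai_capex=10.0, layoff_rate=0.2):
    """Run the feedback loop for a few years and record each step."""
    history = []
    for year in range(years):
        demand = workers * 1.0            # demand proportional to employed workers
        laid_off = workers * layoff_rate  # firms cut a share of the workforce
        workers -= laid_off
        ai_capex += laid_off * 0.5        # assume half the wage savings fund AI
        ai_capex = min(ai_capex, demand)  # the brake: capex cannot exceed demand
        history.append((year, round(workers, 1), round(ai_capex, 1), round(demand, 1)))
    return history

for row in simulate():
    print(row)
```

In this toy run, AI capex grows for a few years and then collides with shrinking demand, at which point the brake binds and the "fire people, invest more" loop stops feeding itself, which is the point of reason 2.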
So back to who is to blame for this volatility.

Economists need to change their 20th century ways of thinking and start modeling the future. I would like to think the good ones have already started and were perhaps just late in starting. It makes me wonder if the economics education model itself needs urgent change. The world we live in is moving so fast, and picking up speed, that we can no longer be happy with modeling only first order impacts.

Same goes for professional investors. You are playing with the money of people who trust your knowledge and competency. If one doomsday scenario paper is enough to spook you into acting this way – you need to reevaluate your investment thesis from first principles.

AI may or may not change the economy drastically – but I sure hope it does change the practice of economics and professional investing for the better.

Some thoughts on why Khosla’s prediction of the imminent demise of IT and BPO services won’t play out in 5 years


I read this yesterday https://www.hindustantimes.com/india-news/aisummit-2026-it-bpo-services-will-disappear-in-five-years-says-venture-capitalist-khosla-101771182147907.html

Vinod Khosla is a man I deeply admire, and hence when he says something I pay very close attention and spend some time thinking about it. He has seen computing evolve up close and is quite bold with his investments.

His underlying thesis for saying IT and BPO will go extinct is based on AI getting far superior to humans in the next few years. I agree with the rate of progress – with the kind of capital being burnt through, it is only fair to expect that the breathtaking innovation in AI will continue.

That said – I don’t think his predictions will come true in the five year timeline he is putting forward. I have ten reasons in mind that make me believe that to be the case.

As usual, these are strictly my personal views and not of my present or past employers.

  1. Not all BPO and IT deals are FTE based. The FTE based model of course will get disrupted, much like SaaS models based on seat based pricing are already facing existential threats. The difference is that unlike SaaS, where a minority of companies have consumption and outcome based pricing, BPO and IT have quite a lot of outcome based pricing already. Khosla is probably referring to just the FTE based models, which is just a subset of the industry. My thesis is that AI will just help IT and BPO providers shift even faster to a fully outcome based model, which will be awesome for their clients. Business leaders largely care about guaranteed outcomes when push comes to shove.
  2. There is a lesson on agent proliferation that we should keep in mind from microservices. A decade or so ago, we fell in love with microservices and a lot of IT shops embraced their elegance. Within a couple of years, we realised that it’s a real headache to deal with a lot of services, and the scaffolding needed to operate them in an enterprise grade fashion took several more years to build. The basic infrastructure needed for a lot of agents running loose in an enterprise landscape is quite immature today, and such platforms don’t happen overnight.
  3. Enterprise inertia is a real thing. There is hardly a CIO I know in financial services who hasn’t told me about their goal of replacing mainframes and moving everything to public cloud. And yet – mainframes are still alive and thriving, and most companies haven’t moved even half their workloads to public cloud. Change is quite hard in enterprises on all fronts – people, process and technology all usually have secondary and tertiary effects if changes happen quickly, and hence corporate leaders tend to move deliberately. They won’t risk breaking things by moving fast. I am not saying that this is a good thing – trust me, I have been frustrated all my career with this kind of inertia – but I understand why senior leaders are careful.
  4. Enterprise buying models don’t change fast either. FTE models are favorites of most procurement teams because of the ease of managing such contracts – they are easier to negotiate, execute and monitor, even if their value is less than that of outcome based contracts. It will take a lot to switch this behaviour to outcome based models in a mainstream way. Now imagine the challenge of moving to a software license model!
  5. Law making almost never keeps pace with innovation. Laws are written with human actors in mind. If a human accountant commits tax fraud – they go to jail. There isn’t an equivalent way today to keep agents honest. The best solution we have is to keep humans in the loop. Granted, it doesn’t need every human to stay around – but many will be needed to keep AI compliant.
  6. Jevons paradox can’t be forgotten. For the foreseeable future, there are plenty of use cases for AI and I don’t see Jevons paradox failing. So as AI becomes more and more efficient, companies will push even more use cases into production, which will need even more humans around.
  7. GDPR and DPDP type laws won’t allow seamless cross border autonomous workflows. Sovereign AI is important and is here to stay. That essentially means there will be a bunch of cross border workflows that still need humans on either end to make them flow.
  8. BPO has a lot of last mile aspects that are not yet AI friendly. While a lot of the work is repetitive and easy to automate via AI, most enterprise workflows have a last mile part that needs soft skills to navigate the enterprise nuances. Maybe it’s possible to automate some of it over time by reimagining from scratch – but we are not talking about 5 years in this case.
  9. As long as LLMs hallucinate, some human needs to stay in the loop. Yes, you can reduce hallucination to some degree – but at its core an LLM is autoregressive, and unless a very different architecture emerges from the research community, we can’t put too many things into production without humans in the loop. Reducing hallucinations is not cheap – the very big models have high inference costs. RAG type solutions have limitations and are expensive to maintain as enterprises evolve.
  10. LLMs are static learners, unlike humans. LLMs are not sample efficient like humans when it comes to learning – we don’t need thousands of cat pictures to know what a cat looks like. Once they are trained, they don’t learn on the job the way a human does. A human BPO agent can be told that they are wrong and need to do a task another way. Changing an AI agent to act like that is not compute and memory efficient today. When we say an LLM remembered what we said earlier, what we mean is that a layer around it feeds the content of past conversations back to it behind the scenes every time, which is quite inefficient and expensive. Short of fundamental research breakthroughs, we will need to keep coming up with better engineering hacks for efficiency.
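The replay mechanism described in point 10 can be sketched in a few lines. This is a deliberately minimal illustration, not any vendor's SDK: `call_model` is a hypothetical stand-in for a real LLM API call, and the wrapper shows why "memory" around a stateless model gets more expensive as a conversation grows.

```python
# Sketch of how "memory" works around a stateless LLM: the model itself
# learns nothing between calls; a wrapper replays the whole transcript
# into every new prompt. `call_model` is a hypothetical stand-in.

def call_model(prompt: str) -> str:
    # A real deployment would call an LLM endpoint here.
    return f"(model reply to a {len(prompt)}-char prompt)"

class ConversationMemory:
    def __init__(self):
        self.turns = []  # the transcript lives outside the model

    def ask(self, user_message: str) -> str:
        self.turns.append(f"User: {user_message}")
        # The entire history is re-sent on every call. Prompt size, and
        # hence inference cost, grows with each exchange.
        prompt = "\n".join(self.turns) + "\nAssistant:"
        reply = call_model(prompt)
        self.turns.append(f"Assistant: {reply}")
        return reply

memory = ConversationMemory()
memory.ask("Please format dates as DD-MM-YYYY from now on.")
memory.ask("What is today's date?")
print(len(memory.turns))  # transcript grows with every exchange
```

The instruction in the first turn only appears to "stick" because it is replayed verbatim inside the second prompt, which is exactly the inefficiency the point above describes.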