What can we learn from bad managers?


If you work for a reasonable amount of time, the odds are high that you will work for some bad managers. I certainly have had more than my fair share of bad managers (and a couple of absolutely terrible ones). While I have felt anger, frustration, sadness, rage and all that about them for periods of time – I am also strangely grateful for the invaluable experience that taught me a lot about what not to do. I also readily acknowledge that I may not have avoided those pitfalls as a manager myself.

What is a manager’s primary role? At the simplest level – it’s about providing clear direction to the team working under their supervision. In my experience, the one consistent thing I have noticed amongst bad managers is a lack of clarity on where to head. The way they compensated for that lack of clarity was usually why their actions made me think of them as bad managers.

Imagine you are quite a good driver trying to drive a car from Phoenix to LA to visit Disney, but you don’t know where LA is relative to Phoenix or when you need to get there. Since you are a good driver – you drive defensively and at the optimum speed to maximize the mileage of your car. You refuse to ask for directions even after your passengers repeatedly beg you to do so. What are the odds that you reach LA? And would anyone enjoy being in that car? Would they ever trust you to drive them across the street to a McDonald’s after that LA adventure?

Bad managers waste everyone’s time and build frustration because they try to optimize the wrong things and generally don’t accomplish the right results.

From an employee’s perspective – which characteristic most makes them think of their boss as a bad manager?

I would think “micromanagement” wins that prize in a landslide.

In my experience, there are two reasons managers tend to micromanage for extended periods of time:

  1. They don’t have a great vision, and hence the only thing they can manage is the process they are masters of. This is also a major reason why great sellers and engineers don’t always become great sales and engineering managers.
  2. They don’t have the skills to recruit, coach and manage people for the job at hand – and compensate with their own time and expertise. The day only has 24 hours, and they will just run out the clock without having much to show for the effort.

Most employees will try – for at least the initial phase of working for a bad manager – to ask questions and make suggestions. Unfortunately, bad managers are typically poor communicators. Some will give you answers that are vague and confusing, some only talk at you and won’t make it two-way, and some will either not talk at all or will only open their mouth for negative feedback. By wasting the opportunity to course correct and/or gain clarity – they make a bad situation worse.

What’s the long term impact of bad managers?

If I could choose one word – I would say Toxicity!

They tend to take credit and deflect blame – and over time I think it happens less out of malice and more out of ignorance and lack of self-awareness. It doesn’t matter why, though – the effect on team morale is the same.

Being a manager is hard – it’s difficult to balance empathy with business needs. But that’s the job – you can’t just shrug it off.

Unfortunately, toxicity sometimes gets rewarded when business results are great. That happens more often than it should, especially in larger organizations. This is quite simply a leadership failure.

What can we do about this?

Employees with bad managers: First, you need to evaluate whether you are jumping the gun and just blaming the manager. I have done it and realized it later. I have also seen it hundreds of times as an upline manager. Having a network of mentors helps a lot with getting an objective understanding and in many cases will help mitigate the situation. Since the power balance is asymmetrical – if your concerns are not addressed, improve your skills and network with urgency and get the heck away from the toxic manager as quickly as you can. This is also why I am a big fan of constantly improving optionality in life – it helps us face adversity with minimum trouble.

Managers themselves: Learn to listen – with the idea of understanding, not just responding. Ask yourself the hard WHY questions. None of us are super objective about ourselves – so try to get 360-degree feedback to the extent you can. Maybe you are a good manager already – the feedback will make you a great one. There really is no downside to listening, understanding and tweaking what you do.

One of the things that has made me a better manager is a continuous interest in learning more and more about my own manager’s job. That helps me understand why my boss wants me to do certain things, and that in turn helps me give clear direction to my team.

One last thing – becoming a manager is usually motivated by the idea of making more money than you previously did. There is no shame in that at all. But if money is all you need and you hate being a manager – try talking to your upline managers about whether you can be a highly paid individual contributor. Failing that – look outside the company for such options. Life is too short to be miserable doing things you don’t like every day. As an upline manager, I usually ask a lot of questions when people ask me for promotions and career development advice. 90% of the time it’s just a proxy for making more money, and there are many ways to make that money if you have honest conversations with your own managers. You will do yourselves and your teams a big favor.

Upline managers: You really are the biggest culprit if your chain of command has a lot of bad managers. You have the best chance of being objective compared to employees and their bad managers. You need to constantly listen – and proactively find out – how the culture of your team is evolving, and make tweaks. If all you do is watch short-term business results, you will often not realize the long-term damage you do with your inaction. You have to assume that the aggregated info you see probably hides a lot of actual issues, and unless you probe actively – you won’t see what needs to be fixed.

I don’t like senior leaders saying things like “the strategy was perfect and it was just an execution failure”. My strong belief is that a strategy that fails in execution is a bad strategy – it just did not consider the constraints appropriately. How many times have we heard top leaders saying “we are not transforming fast enough” when they have not made the right changes and investments down the line to enable that transformation? I often remind folks, “The big boss is called the Chief Executive Officer for a reason – and the Chief Strategy Officer works for the CEO, not the other way around”. Don’t get me wrong – you do need a strategy, but it needs to be grounded in reality!

Hello 2026 – Let us get used to the excitement and growing pains of Agentic AI!


What do we know so far from the gazillion POCs on GenAI from the last couple of years?

  1. Hallucination is a feature of LLMs, not a bug. Thanks to the POCs – yes, I know no one likes POCs and everyone wants a very quick payback – we now know enough techniques to mitigate hallucination and use LLMs in enterprise workflows: context restrictions (like RAG), verification (CoVe, CoT etc.), specialisation (fine-tuning, SLMs etc.), and of course human in the loop (see the sketch after this list)
  2. Security is a big deal and probably underinvested across the board
  3. Governance is boring and expensive and yet you can’t scale AI without it
  4. AI still appears quite expensive and unpredictable for massive deployments – and much like how FinOps emerged with public cloud scaling, AI needs to make friends with the office of the CFO
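
To make the first point concrete, here is a minimal sketch of two of those mitigations working together: a RAG-style context restriction plus a human-in-the-loop fallback when the context can’t support an answer. Everything here – the knowledge base, the call_llm stub and all the names – is hypothetical; you would wire in your own retriever and model client.

```python
# A minimal sketch (not a production pattern) of two mitigations from point 1:
# RAG-style context restriction plus a human-in-the-loop fallback.
# KNOWLEDGE_BASE, call_llm and all names here are hypothetical.

from typing import List

KNOWLEDGE_BASE = {
    "refund-policy": "Refunds are issued within 14 days of purchase with a receipt.",
    "shipping": "Standard shipping takes 5-7 business days within the US.",
}

def retrieve(query: str, top_k: int = 2) -> List[str]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    scored = [
        (sum(word in text.lower() for word in query.lower().split()), text)
        for text in KNOWLEDGE_BASE.values()
    ]
    scored.sort(reverse=True)
    return [text for score, text in scored[:top_k] if score > 0]

def call_llm(prompt: str) -> str:
    """Placeholder for your actual model client; returns a canned reply so the sketch runs."""
    return "INSUFFICIENT_CONTEXT"

def answer(query: str) -> str:
    context = retrieve(query)
    if not context:
        return "ESCALATE_TO_HUMAN"  # nothing to ground the answer in, so don't guess
    context_block = "\n".join(context)
    prompt = (
        "Answer ONLY from the context below. If the context does not contain "
        "the answer, reply exactly with INSUFFICIENT_CONTEXT.\n\n"
        f"Context:\n{context_block}\n\nQuestion: {query}"
    )
    reply = call_llm(prompt)
    if "INSUFFICIENT_CONTEXT" in reply:
        return "ESCALATE_TO_HUMAN"  # human in the loop instead of a hallucinated answer
    return reply

print(answer("How long does standard shipping take?"))
```

The point is not the toy retrieval – it’s that grounding, verification and escalation are ordinary engineering controls you can compose, not magic.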

So what next?

Duh … agentic AI … what else!

GenAI proved that human workers now have an awesome tool to make them WAY more productive in what they already do. GenAI is still limited in the sense that it can’t act autonomously – so naturally our productivity-obsessed world moved quickly to agentic solutions which can act independently to varying degrees.

Will it work as advertised?

The basic building blocks are all there – and they will keep improving, the way technology has been progressing all these years. But that doesn’t translate to agents ruling the whole world in 2026!

At its most basic level – agents need to talk to other agents and should be able to work with tools to get work done. Protocols like A2A and MCP are already available, and a quarter gazillion POCs have been done on them. We now have a better idea of what needs to get fixed to scale deployments.

For example, MCP standardized how agents talk to tools – which is enough for a POC. But in an enterprise deployment, you also need consistent handling of security. So now we use MCP gateways so that AI models don’t need to worry about raw credentials and the like.
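
As a rough illustration of the gateway idea only (this is not the MCP spec or any vendor’s product), the sketch below keeps secrets on the gateway side and exposes only an allow-listed, credential-free call surface to the agent. ToolGateway, crm_lookup and CRM_API_KEY are all hypothetical names.

```python
# A rough sketch of the gateway idea only (not the MCP spec or any product):
# the agent calls a logical tool name with business arguments, and the gateway
# injects credentials and enforces an allow-list so the model never sees secrets.

import os
from typing import Any, Callable, Dict

def crm_lookup(customer_id: str, api_key: str) -> Dict[str, Any]:
    """Stand-in for a downstream tool that needs a credential."""
    return {"customer_id": customer_id, "status": "active", "auth_used": bool(api_key)}

class ToolGateway:
    def __init__(self) -> None:
        # Secrets live only on the gateway (e.g. from a vault or env var),
        # never in prompts or tool-call payloads visible to the model.
        self._secrets = {"crm_lookup": os.environ.get("CRM_API_KEY", "dummy-key")}
        self._tools: Dict[str, Callable[..., Any]] = {"crm_lookup": crm_lookup}

    def call(self, tool_name: str, args: Dict[str, Any]) -> Dict[str, Any]:
        if tool_name not in self._tools:
            raise PermissionError(f"tool '{tool_name}' is not on the allow-list")
        # Inject the credential server-side; the agent only supplied business args.
        return self._tools[tool_name](**args, api_key=self._secrets[tool_name])

gateway = ToolGateway()
print(gateway.call("crm_lookup", {"customer_id": "C-42"}))
```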

Another example – A2A standardized how an agent can talk to another, which is enough to do a POC. But any respectable enterprise workflow needs a lot of agents talking to each other – which leads to all kinds of orchestration overhead. What’s the point in handing off tasks back and forth without solving them? So now we use registries that can identify the best agent for a given task, and we set a limit on how many hops can happen before a human gets involved. In the same vein, we now have better approaches for scaling performance and handling security.
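
Here is a toy sketch of those two orchestration ideas, under the assumption that agents are plain callables: a registry picks an agent by declared capability, and a hop budget escalates to a human when work keeps getting handed back and forth. A real registry would rank candidates on cost, latency and track record; AgentRegistry, billing_agent and MAX_HOPS are invented for illustration.

```python
# A toy sketch: capability-based agent selection plus a hop budget that
# escalates to a human when hand-offs keep bouncing. All names are made up.

from typing import Callable, Dict

MAX_HOPS = 5  # hand-offs allowed before a human gets involved

class AgentRegistry:
    def __init__(self) -> None:
        self._agents: Dict[str, Callable[[str], str]] = {}

    def register(self, capability: str, agent: Callable[[str], str]) -> None:
        self._agents[capability] = agent

    def best_agent_for(self, capability: str) -> Callable[[str], str]:
        # A real registry would rank candidates on cost, latency and track record.
        return self._agents[capability]

def billing_agent(task: str) -> str:
    return f"billing resolved: {task}"

def run(task: str, capability: str, registry: AgentRegistry) -> str:
    hops = 0
    while hops < MAX_HOPS:
        agent = registry.best_agent_for(capability)
        result = agent(task)
        if result.startswith("handoff:"):  # the agent punted to another capability
            capability = result.split(":", 1)[1]
            hops += 1
            continue
        return result
    return "escalated to human: hop budget exhausted"

registry = AgentRegistry()
registry.register("billing", billing_agent)
print(run("refund order 123", "billing", registry))
```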

Even if all those things worked – we still can’t let agents run loose in enterprise workflows without having a solid audit solution in place. Similar to OpenTelemetry traces, we can now use observability headers to leave a trail and look back at which agents took which actions.
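
A small sketch of what that audit trail could look like – in the spirit of OpenTelemetry-style correlation, though deliberately not using the OTel SDK: one trace id is generated per workflow, and every agent action is appended to a log keyed by that id so you can replay who did what, in what order. The function and field names are illustrative.

```python
# One trace id per workflow; every agent action is appended to an audit log
# keyed by that id. In production this would flow to durable, queryable storage.

import json
import time
import uuid
from typing import Any, Dict, List

AUDIT_LOG: List[Dict[str, Any]] = []

def record(trace_id: str, agent: str, action: str, detail: Dict[str, Any]) -> None:
    """Append one auditable event for later replay."""
    AUDIT_LOG.append({
        "trace_id": trace_id,
        "agent": agent,
        "action": action,
        "detail": detail,
        "ts": time.time(),
    })

def research_agent(trace_id: str, query: str) -> str:
    record(trace_id, "research_agent", "search", {"query": query})
    return "summary of findings"

def writer_agent(trace_id: str, notes: str) -> str:
    record(trace_id, "writer_agent", "draft", {"notes": notes})
    return "draft report"

trace_id = str(uuid.uuid4())  # one id ties the whole workflow together
notes = research_agent(trace_id, "Q3 churn drivers")
report = writer_agent(trace_id, notes)
print(json.dumps(AUDIT_LOG, indent=2))  # who did what, in what order, for this trace
```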

One of the best things that happened last year was that A2A and MCP were both donated to the Linux Foundation – which helps the cross-vendor collaboration that benefits everyone!

What is the bear case here?

The bull case is obvious – there is plenty of breathless commentary on that and I won’t rehash it 🙂

Having been in the tech industry for a while, I am sure we will see some high profile failures and the associated doomsday predictions. That happened for ERP, Mobile, Cloud and so on and I am sure AI will be no exception either.

If I were to predict – my top 3 reasons for AI project failures will be:

  1. High costs – most likely because cost estimation models don’t scale from pilot to production
  2. Poor data quality
  3. Lack of clarity in business case

I also think there will be plenty of massive success stories from the companies that go about their deployments thoughtfully. But of course bad news sells more clicks!

Agentic AI deployment is about complex integration

One thing is clear – there won’t be “one big AI” to rule the enterprise like we see in sci-fi movies. The future is a complex system of highly specialized agents that will be hired and fired as needed for each task – and the primary challenge will be to make that integration work.

Winners and Losers in CIO offices and service providers

Agentic AI and associated technologies will disrupt “business as usual” significantly – that much is a given at this point. The question is who will win and who will lose.

The vast majority of IT work today – whether it’s a CIO team or a service provider – is some type of development activity. The default operating model is labor-based with standardized processes. The skill levels are a mixed bag. Every year, there is some productivity improvement from better automation and tooling – but those are incremental changes.

This is an easy area for LLMs and agents to make drastic changes and create massive productivity gains at low risk. The highly skilled engineers will always be needed and probably will get paid even more – but that is a modest subset of the labor pool. This is where CIOs will find the most budget to implement agentic AI.

So who will thrive?

In the very near term – technical proficiency might be the biggest differentiator. The initial deployments will have a lot of technical challenges like the ones I mentioned above, and perhaps many more, and mitigating them will need really good engineering skills. The use cases will generally be less ambitious till the underlying plumbing is in place.

But then it will change rapidly and the differentiator will be the process and industry knowledge – especially to deal with last mile problems that cannot be solved by great engineering alone.

Post script

AI – and agentic AI – is here to stay. We will probably vastly overestimate its impact in the short term and suffer the disappointment. We will also largely underestimate its impact over the long term. The popular commentary is around AGI and ASI and so on, and those are all very worthy future goals. But we know by now that we don’t need AGI or ASI to get massive ROI in most enterprise use cases.

My hope – and prediction – is that several smart companies will start thoughtfully deploying agentic AI in production this year with realistic business cases justifying them.

Happy new year !

AI and the challenge of executive echo chambers


The story of struggle in every company is the same – the things we do for efficiency have a way of getting in the way of effectiveness. At its simplest level – this is the reason the wise ones say transformation is a journey and not a destination.

The rant that follows is the result of two phone calls I had early in the morning with old friends 🙂 .

Anyone who has spent even a month of their career inside a company will have heard something like “the problem is that we operate in silos”. This was true when I joined the workforce in the 90s and it’s true today. So why do we have silos even though every CEO, every vendor and every thought leader has argued against them all these years? It’s simply because we need silos for competency building, and boundaries are arbitrary. It would be amazing if every marketer also understood company financials in great detail and could make wise choices in marketing spend – but the day only has so many hours, and if you want to be great at marketing, you need to spend more time doing it, which reduces the time available to learn the nuances of finance. That’s the reality. Silos will be here tomorrow as well – because they are a necessary evil. What is achievable is building great interfaces between the silos – be it trusted relationships between people across silos, simpler processes or sensible use of technology.

It’s the same case with corporate hierarchies. No one, including me, likes hierarchies – and we all want them flattened. We are all well aware of the advantages of flat organisations. And yet – we also have to live with the hard reality that to keep ourselves organised and efficient, hierarchy is a necessary evil. What we can do – and rarely get right – is to empower employees, communicate well, build trust and so on. Jensen Huang has something like 50 direct reports at Nvidia – which I am sure helps keep them flatter than most companies. But none of the CEOs I know in person seem to be able to adopt that – so I am not sure if that strategy will become mainstream.

AI “might” kill silos and hierarchy levels a lot more than any of the past approaches – but that has its own pros and cons.

In any case – one of the known challenges with hierarchy in a large-scale org is that the people on top of the pyramid don’t always share the views of the people lower down. Optimism about AI is one such topic – what I hear from the very senior execs and what I hear from the less senior ones rarely match.

When it comes to AI – the most common approach is a top-down mandate on adoption. For example – I know plenty of CIOs who have rolled out AI-based tooling for their large engineering teams. They have PMO teams to track metrics on adoption. Most of the time those dashboards are all green. I talk to the engineers all the time as well – and they have a thousand concerns about embracing AI, including the fear of losing their job to an AI agent in the future. The engineers and the middle managers find ways to keep the big bosses happy – either by doing some minimal work with AI to show they are using the tool, or by showing higher velocity by being smart about how story points are handled. I used IT as an example – it’s not any different in other functions. Everyone declares success – but the enterprise-level ROI needle doesn’t move very much.

On the other hand, I also know tens of cases where AI has delivered great ROI. Not all – but the majority of those cases have some commonality. They were done in employee-centric ways – listening to the ground reality, addressing concerns, not forcefully mandating upfront and being willing to change course based on learning. The challenge though is scaling. These cases tend to have modest ROI in real dollars so far, even though the percentages look impressive. The world we live in is an impatient one, and unfortunately also one that refuses to learn lessons from the past.

When I entered the workforce – ERP was in the hot seat like agentic AI is today. Almost word for word – the hype about its transformative ability was similar. The reality though is that it took a long time to show the kind of ROI that it promised. The technology was evolving fast, assumptions were changing all the time, and consequently estimates were hardly solid. What were initially thought of as best practices had to be changed many times over. If we choose not to be breathless with excitement – we can pause and think through how we got through those times in the 90s and how we accelerated fast after that.

I was an SAP consultant. I remember the first projects used to have workshops with COO and GM level folks while ignoring the input (including big warnings) of the people manning shipping, packing and so on. After spending months of 20-hour days fixing problems after go-live, the whole approach to implementations changed.

Change is quite hard, and it takes both time and money. If you look back at failed ERP projects in the past (also today, for that matter) – they largely failed because the team underestimated the resistance to change. Change management budgets were usually the first to be cut in those programs. The money saved by underinvesting in change management then got spent addressing the delays – usually at an order-of-magnitude increase in spend.

Just as we see with AI today – there were plenty of thoughtful ERP projects early on that delivered ROI, but they were small and hence no one was impressed. Our industry is used to underestimating the long term and overestimating the short term.

The good thing is that we have plenty of history to teach us how to do this well this time around. If we take the human beings involved along – listen, learn, debate and iterate fast – the acceleration in value will happen and the needle will move naturally in the right direction. This needs the senior folks to constantly ask themselves the uncomfortable question of whether they might be operating in an echo chamber.

The AI tech – and its ecosystem – is evolving fast. Even with known issues like hallucinations and the potential for misuse – we know enough now to build impactful solutions for big problems while mitigating the known risks. Let’s do this thoughtfully, set realistic expectations and maybe even have some fun going through this journey!