How do you handle stress at work?


I spoke with about 50 colleagues yesterday – quick calls to thank them for all they do and to wish them happy holidays. 2023 was, in many ways, a stressful year for many of us, and that came up in various forms in our conversations. Some of them asked me how I handle work pressure. After a couple of coffees this morning, I thought I should offer some perspective on the topic.

1. Have safety valves – do not stew in pressure

You need some proven ways – note the plural; one might not be enough – to destress. For me those are long walks, calling one of my trusted friends, and Carnatic music. There is a 6-mile circuit that I reserve for such walks, and it invariably calms me down when I need it.

2. Put boundary conditions on high-risk decisions when you are constrained on time

I have a mental model that tags the consequences of my decisions two ways – what’s the chance of the risk materializing, and what’s the impact if it does. Stress gets in the way only on decisions where the risk has a high chance of materializing and the impact is high if it does play out.
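Purely as an illustration – here is that two-way tagging written down as a toy scoring rule. The 0-to-1 scores and the 0.6 threshold are made-up placeholders for what is really a gut-feel judgment, not a formula I actually run:

```python
# Toy sketch of the two-way risk tagging described above.
# Likelihood and impact are rough gut-feel scores between 0 and 1;
# the 0.6 threshold is an arbitrary illustrative cut-off.

def risk_tag(likelihood: float, impact: float, threshold: float = 0.6) -> str:
    if likelihood >= threshold and impact >= threshold:
        return "high risk: set boundary conditions before deciding"
    return "manageable: decide and move on"

print(risk_tag(likelihood=0.8, impact=0.9))  # the quadrant where stress creeps in
print(risk_tag(likelihood=0.2, impact=0.9))  # unlikely to materialize, so less stressful
```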

Pressure is inevitable when you have to make such decisions with imperfect information in limited time. If I have to take such a decision, I insist on reasonable boundary conditions like “ok, let’s do this now – but here is how we will check if it works, and we will stop in a month if it doesn’t trend with our hypothesis.”

3. Form follows function

I prefer a rigorous debate to a well-structured document as the basis of a decision. I do like documenting decisions once we make them, though.

Similarly – some metrics have a way of hurting decisions as much as they help. People lose sight of the principle behind the metric and sometimes become slaves to ratios and the like when making decisions. So I prefer asking a lot of first-principles questions before I make decisions – and it helps minimize stress because it makes everyone involved think logically.

4. WHY is the critical question, WHAT will follow

Leaders create stress unintentionally when they ask the team for something without explaining why they are asking. The more senior you are, the greater the risk of creating unnecessary stress in the organization.

If you have taken the time to hire, train, and manage the performance of your team, you should have the confidence that if you explain a problem, your team can find solutions. Your job is to find the right question to ask and to explain why solving it is important. If you don’t have that confidence in your team – shift left and solve for the quality of your team as quickly as you can.

5. Eliminate and simplify

Not all problems need a high-quality solution, and they don’t all warrant the same effort. Even a given problem can usually be decomposed to find what’s critical and what can wait. Eliminating the noise and simplifying the problem statement goes a long way in eliminating stress.

Don’t assume that the person asking has thought through all aspects before asking you. You should feel free to validate whether what you think is the crux of the issue is what they also think. I am always grateful when my team challenges me on a question I ask. I am very comfortable asking clarifying questions of my boss as well.

6. Know the audience when you communicate

Even the best decisions can lead to additional stress if you don’t think through the answer from the recipient’s point of view. If I have to convey the same information to both the executives on my team and to new hires, I often need to say it differently. Similarly, the response to a client might look different when it is addressed to the buyer vs the actual user.

I am convinced that more stress is caused by poor communication than poor decisions.

7. Keep shifting left and make as many things a routine as possible

I used to train dogs for high-level competition. Before we started training, my dog and I would go through a certain routine that got us both into the right frame of mind. Routine helps minimize stress and keeps us focused. Elite athletes all have routines they follow.

I have routines in my personal life too – I wake up at the same time most days, make an espresso, do a two-minute training session with my dog, solve the daily Wordle, and then go for my walk. It helps me get into the right frame of mind for the rest of the day.

At work – processes don’t die a natural death. That’s a real curse. They don’t even evolve very much, and people start hiding behind processes as a safety blanket. Even when they criticise these “bureaucratic processes”, they take it for granted that they are sacrosanct. A massive amount of stress happens because people don’t kill irrelevant processes. Your job as a leader is to make sure that every process gets critically evaluated, and to eliminate or work around the ones that don’t add value.

Similarly – the moment you have a stable solution, make it a routine so that people don’t need to waste their time thinking about it deeply every time. All I would caution against is “premature optimization” kicking in. Especially in large firms, there is a tendency to institute processes upfront in a top-down way. It almost never works – the better idea is to experiment, evolve, and then standardize the process.

Shifting left often ends in hard questions about your skills as a leader and whether you have the right team around you. Don’t sit and stew on those – make changes when you are convinced that there is a problem. Bad news doesn’t get better with time without intervention.

8. Learn from everyone around you

There is no monopoly on good ideas. If the marketing team has a good process for recruiting, shamelessly steal it and adapt it for engineering teams. It is much better for one person to solve a problem and others to adapt the solution than for every functional head to stress over solving common problems from scratch.

I am sure there are more things I should call out – but it’s time to drive to the gym, so I am not going to stress over the rest for now 🙂

Why I don’t worry about AGI … yet


The recent OpenAI drama triggered two debates on social media – one on corporate governance and the other on AGI. I was quite surprised – and amused – by the number of people who have jumped to the conclusion that AGI is already here or very close to being here.

I don’t think AGI is a near-term thing at all. Also, to be clear – I am a big fan of AI, but I don’t think AI needs to work exactly like a human (or better than a human) to be of massive value to our everyday life. Similarly, I don’t think we should sit around waiting for AGI before putting some safeguards in place – less sophisticated AI still has a massive chance of causing harm because of how easily software is distributed globally.

There are a few reasons why I don’t think we will get to AGI by doing more of what we already do – bigger foundation models, even more compute, even more training data, and so on.

To begin with – the basic idea of building an AI solution is to feed it a lot of data. For language models, for example, training on all of Wikipedia is a common first step. And that’s not nearly enough – on top of it, these models are fed millions more tokens. Compare that to how a well-educated human learns – no one reads the entirety of Wikipedia to get a PhD. Humans learn from a small amount of data. A high school English teacher often teaches critical thinking and analytical writing based on just one book. We can then extend that to every other source of information we encounter later without needing explicit lessons. When we read a new book, we don’t need to think through every book we have read to form concepts. We are way more efficient in how we learn than machines are. But the way machines are taught doesn’t mimic how humans learn.

One counterargument is that a machine has a cold start while a human has the advantage of a long evolutionary history, so some information is already present in our genes and brains. But even if that’s true, humans still never had access to as much information as the machine readily has. Basically, we assimilate and store information differently from machines – and access it differently when we need it.

Humans can get started quickly with very little information. When my daughter was three years old, she could recognize animals at the zoo based on the cartoons she had watched. She never confused a bear for something else just because the red shirt on Winnie the Pooh was missing on the live bear 🙂 . She knew dogs and cats are animals – and naturally figured out that elephants and lions are animals too.

Also, humans can abstract information across modes of information without special training. Whether I see a sketch of a car, an actual car parked on the street, or a car moving in a high-speed chase in a movie – I know it’s a car and how it generally works. When I throw a ball up and it comes down, I can relate it to the concept of gravity from my middle school lesson, even though the example used was an apple falling on Newton’s head. GenAI has started becoming multi-modal – but not in the way humans are. This is of course a simplistic way of looking at how a human thinks and acts – we have not yet quite figured out the details of how human brains work.

How do we find answers when we are faced with a question? Let’s say you ask me what 121 squared is. I don’t know it off the top of my head – but I know how to calculate it, and I also know how to approximate it without a precise calculation. But if you ask me what 12 squared is, I already know it off the top of my head. AI only knows the latter way, as far as I can tell. An orchestration of several computing techniques could potentially solve these kinds of problems – but learning from a sequence of tokens alone probably won’t get us there.
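To make that distinction concrete – here is a minimal sketch of the two ways of “knowing” an answer: recall for facts we have memorized, and calculation (or approximation) for the rest. The mental arithmetic I mean is 121² = (120 + 1)² = 14400 + 240 + 1 = 14641:

```python
# Recall vs. compute: the two ways of answering "what is n squared?".

memorized = {12: 144}  # facts we know off the top of our head

def square(n: int) -> int:
    """Recall if we can; otherwise calculate via (a + b)^2 = a^2 + 2ab + b^2."""
    if n in memorized:
        return memorized[n]
    a = (n // 10) * 10   # e.g. 121 -> 120
    b = n - a            # e.g. 121 -> 1
    return a * a + 2 * a * b + b * b

def approximate_square(n: int) -> int:
    """Rough estimate: square the nearest multiple of ten."""
    a = round(n / 10) * 10
    return a * a

print(square(12))               # 144   - recalled
print(square(121))              # 14641 - calculated
print(approximate_square(121))  # 14400 - close enough for a sanity check
```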

One last point on what “general” means in the context of intelligence. There are some things that a computer can do faster and more efficiently than a human can. If we can draw a boundary around the problem – as in a game like chess or Go – a computer has a higher chance of figuring out optimal answers than we do.

Where humans excel is in generalizing as context changes. As AI research makes breakthroughs in how machines plan, set goals, and reason about objectives, I am sure we will see massive progress. And at that point, perhaps AGI might become more of a reality. I am not an AI researcher – I am just a curious observer. I will happily change my mind as I get more information. But for now, I am not worried about AGI becoming a thing in the near future.

GenAI will need a whole new look at Data Governance!


There are two areas that I think will be the “make or break” criteria for Generative AI:

1. MLOps and

2. Data governance

And between the two, I think data governance will be the one that gets enterprise attention first – and real quick. This is because the first hurdle will be to make sure enterprise users trust GenAI, and that’s a high bar in itself. I will park my thoughts on MLOps for now.

The size of the model is probably less important for enterprise uses – most tasks that AI can help with in an enterprise context are narrow in scope. This is generally a good thing. Big models are expensive to train and will probably never be used at inference time to the full extent of what they were built to do.

Even if we look at a complex end-to-end process in an enterprise context, it probably makes more sense to have a series of specific models that can work together, instead of one big model that covers everything. We don’t need the model that answers questions on purchase orders to also write an essay on the meaning of life 🙂
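As a rough sketch of what that could look like in practice – a thin router that dispatches each request to a narrow, task-specific model. The model names and the keyword-based routing rule below are hypothetical placeholders, not a recommendation of any particular product:

```python
# Sketch: route each request to a small task-specific model instead of
# one giant do-everything model. Each "model" below is a stand-in for a
# narrow fine-tuned model served behind its own endpoint.

from typing import Callable

def purchase_order_model(query: str) -> str:
    return f"[PO model] {query}"

def contract_model(query: str) -> str:
    return f"[Contract model] {query}"

ROUTES: dict[str, Callable[[str], str]] = {
    "purchase order": purchase_order_model,
    "contract": contract_model,
}

def route(query: str) -> str:
    for keyword, model in ROUTES.items():
        if keyword in query.lower():
            return model(query)
    return "no narrow model matched - escalate to a human or a general model"

print(route("What is the status of purchase order 4711?"))
```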

I am well aware that talking about the cost of a new technology instead of its innovation goodness is uncool – but having lived my whole career in large-enterprise land, I am quite sure that if GenAI is to scale in adoption, it has to have a low cost base. Enterprises might even live with lower-quality responses if the cost is right. I am only half kidding here 🙂

To make smaller models (which are cheaper) really useful, enterprises will need very high-quality data to fine-tune them with. For a narrow scope, enterprises generally have data with enough tokens to make it useful (product manuals, customer complaints, procedures, laws, invoices, etc.). The only question is whether such data is governed in some systematic way so that the information can be trusted to be of high quality.
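What “governed in some systematic way” means in practice will vary, but here is a minimal sketch: run every candidate document through a few quality gates before it is allowed into the fine-tuning corpus, and record why rejects were rejected. The specific checks (dedupe, minimum length, a crude PII flag) are illustrative assumptions, not a standard:

```python
# Minimal sketch of quality gates in front of a fine-tuning corpus.

import hashlib

seen_hashes: set[str] = set()

def quality_gate(doc: str) -> tuple[bool, str]:
    digest = hashlib.sha256(doc.encode()).hexdigest()
    if digest in seen_hashes:
        return False, "duplicate document"
    if len(doc.split()) < 20:
        return False, "too short to carry useful signal"
    if "ssn:" in doc.lower():  # crude stand-in for a real PII scanner
        return False, "possible PII - route to human review"
    seen_hashes.add(digest)
    return True, "accepted"

policy = ("Purchase order approval requires two signatures when the amount "
          "exceeds ten thousand dollars and one signature otherwise, per the "
          "2023 finance policy update.")
for doc in [policy, policy]:  # the second copy should be caught as a duplicate
    print(quality_gate(doc))
```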

Data quality is largely an unsolved problem even in the much simpler world of data warehouses, which have been around for decades now. It has almost never attracted enough budget and time in most companies. A big reason why data lakes didn’t yield the planned business value is also that people didn’t trust the data to be of high quality. We will see what fate awaits lakehouse approaches – but I am always optimistic. These things generally improve over time.

The size of the data available to train and fine-tune with might actually not be as big a problem as its quality. More data that looks the same doesn’t make the models that use it any better. After reading the Chinchilla paper, I am sure we will keep massively improving the ratio of training data to model size. DeepMind’s approach is radically more efficient than the original GPT-3 paper, and it only took a couple of years to get there.
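For a sense of the numbers: the Chinchilla paper’s rule of thumb works out to roughly 20 training tokens per model parameter, versus the roughly 1.7 tokens per parameter that GPT-3 (175B parameters, ~300B tokens) was trained with. A back-of-the-envelope calculation:

```python
# Back-of-the-envelope Chinchilla math: ~20 tokens per parameter for a
# compute-optimal model (Hoffmann et al., 2022). The constant is an
# approximation, not an exact prescription from the paper.

TOKENS_PER_PARAM = 20

def compute_optimal_tokens(n_params: float) -> float:
    return n_params * TOKENS_PER_PARAM

for name, params in [("GPT-3-sized (175B)", 175e9), ("Chinchilla (70B)", 70e9)]:
    print(f"{name}: ~{compute_optimal_tokens(params) / 1e12:.1f}T tokens")

# Chinchilla was in fact trained on ~1.4T tokens - matching the estimate -
# while GPT-3 was trained on only ~0.3T.
```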

There are two complementary approaches I can think of for how an enterprise might source data for fine-tuning (assuming they start from a model that someone else spent money on training): 1. establish a consistent data governance process and tooling, and use high-quality, trusted data to fine-tune the models, and/or 2. depend on the LLM itself to create high-quality data (self-instruct, using one LLM to create data for another, having human users curate LLM-generated data, etc. – as in a chatbot-type use case where a human expert can correct an AI response and let the model learn from it).
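A minimal sketch of that second approach, assuming a human expert in the loop: let an LLM draft candidate training examples and have the expert approve or correct each one before it enters the fine-tuning set. draft_answer and expert_review below are hypothetical stand-ins for a real LLM call and a real review workflow:

```python
# Sketch: human-curated, LLM-generated fine-tuning data.

def draft_answer(prompt: str) -> str:
    # Placeholder for a call to an existing LLM.
    return f"draft answer for: {prompt}"

def expert_review(prompt: str, draft: str) -> str:
    # Placeholder for a human-in-the-loop step: the expert either
    # approves the draft or returns a corrected version.
    return draft

def build_finetuning_set(prompts: list[str]) -> list[dict[str, str]]:
    dataset = []
    for prompt in prompts:
        final = expert_review(prompt, draft_answer(prompt))
        dataset.append({"prompt": prompt, "completion": final})
    return dataset

print(build_finetuning_set(["How do I reverse a purchase order?"]))
```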

Fine-tuning is only one reason I think data governance will get a lot of attention. There is also an “everyday” need that arises whenever the model is used – people (users, auditors, regulators …) will all ask for proof of where the data behind GenAI’s answers came from.

GenAI has an additional headache beyond the data used for training and fine-tuning – users might feed it inappropriate data! That’s another thing that needs to be governed – probably more heavily in regulated industries, and wherever IP, privacy, etc. need to be kept in mind at every step.

There are two things to think about carefully here – the process of data governance itself, and the tooling and automation of it. I am less worried about the tooling part, in relative terms. I am just not sure yet whether enterprises have thought through these “fringe” aspects of GenAI compared to all the cool applications they are excited about. If they don’t find the time and budget to get it right, it will be a lot of grief to deal with.