GenAI in the enterprise – nine themes that I have seen so far


Ever since ChatGPT became a thing – I haven’t had a week pass by without having GenAI conversations with clients. It’s truly been a fascinating time to be a technologist.

There have been three times in the past 25 years when I have seen this kind of massive interest in being a first mover:

1. When ERP helped consolidate applications

2. When data warehousing became mainstream

3. When mobile and cloud converged

I work in Financial Services – which adds its own layer of flavour to make all the opportunities and challenges a bit more spicy 🙂

Here are nine broad themes that I have noticed so far from the conversations I have had with FS companies.

1. Risk mitigation vs First mover

FS companies pride themselves on being the best risk managers in the business (which is a very good thing for consumers). So “what can go wrong” has been front and center in GenAI plans. FS companies also know that their primary competitive advantage is their data, and they want to be the first to capitalize on it. This push/pull tension is common in how they approach mainstream innovation too – but GenAI has taken over as the lead theme for now, with public cloud adoption perhaps a close second.

2. Privacy

All FS companies handle highly sensitive and personal data. There are tight restrictions on what can and cannot be done – and thankfully this industry thinks through this carefully. Between the legal and ethical issues at play, the risk of getting this wrong is apparent to everyone, and hence a lot of thought goes into mitigating it. How they solve it is not consistent across the industry – and a unified approach that is both efficient and effective is much needed. Otherwise a lot of GenAI innovation just won’t happen at scale.
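
To make the mitigation side concrete, here is a minimal sketch of one common pattern: redacting obvious identifiers before a prompt ever leaves the firm's boundary. This is purely illustrative – the regex patterns and labels are my own assumptions, and real programs use far more robust PII detection than a handful of patterns.

```python
# Minimal sketch: strip obvious identifiers from a prompt before it is
# sent to an external model. Patterns below are illustrative only.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "ACCOUNT": re.compile(r"\b\d{10,16}\b"),
}

def redact(prompt: str) -> str:
    """Replace anything matching a known PII pattern with a placeholder."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Customer jane.doe@example.com, SSN 123-45-6789, acct 4111111111111111"))
```

Pattern-based redaction is only a first line of defence, which is exactly why a unified industry approach matters – every firm building its own incomplete version of this is neither efficient nor effective.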

3. Buy vs Build

The larger banks (of all kinds) have hired great tech talent, including in AI. While it is obviously great to have such people, it also means a lot of time and money is spent on building everything in-house. This is less common in insurance – but banking and capital markets companies generally love to build more and buy less. I know companies that have tried and failed to build their own equivalents of commercial CRM systems. Open source software has made building systems much more feasible, and many times that is a good thing for these companies. But again – these debates do take away a lot of time from innovating at scale. And you can’t extrapolate the time and budget of a POC to a full enterprise implementation.

Buy is not an easy option either, given how new the tech is. Every large tech vendor has a platform offering, and evaluating them all takes time and money. The usual checklists for build/rent/buy decisions are not enough for emerging tech and need to be extended – but that extension requires a level of knowledge most companies don’t have today.

4. Skills

To begin with – most companies don’t have enough people with solid knowledge of AI, and GenAI has an even smaller talent pool. Upskilling is totally possible – but it takes a lot of time. I have lost count of how many hours I have spent in the last three months reading papers to get the basics right. I am grateful that my employer has a lot of experts in the field who can clarify concepts for me when I run into confusion, but that’s not a luxury every company has. And it’s not just great AI talent that you need – you need all the usual skills that go with it (architecture, engineering, UX and so on), which means you have to deprioritise other projects. That disruption is not pretty.

5. Intellectual property

One of the offshoots of GenAI is its use for developer productivity – code generation type use cases. Everyone – me especially – got very excited when we saw the possibilities for the first time. But that doesn’t naturally translate to the enterprise world – IP problems come into play very quickly. GenAI is only as good as the training set used in its creation. Have the solution providers done the work to make sure copyleft and copyright issues are addressed before a client generates code? Otherwise it’s a massive risk that the companies carry. I just used code generation as an example – it applies across the board for GenAI (well, for all AI really).

6. Environmental impact

Greenhouse gas emissions are something to think about upfront. GenAI is compute intensive to train given the size of the models – and while inferencing is not similarly intensive on a unit basis, a wide deployment makes sure the units add up. Also remember that GPUs consume more energy than CPUs. Between primary and secondary factors, the environmental impact is something to think through before large scale work happens. Only a subset of companies seem to have made it a tier one criterion, though, in my limited view.
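
A quick back-of-envelope calculation shows how fast the units add up. Every number below is an assumption I picked for illustration – GPU draw, query volume and grid carbon intensity vary enormously in practice – but the arithmetic itself is the point.

```python
# Back-of-envelope: per-query inference energy is small, but enterprise-wide
# volume adds up. All figures are illustrative assumptions, not measurements.

WATTS_PER_GPU = 400           # assumed power draw of one inference GPU
SECONDS_PER_QUERY = 2         # assumed GPU-seconds per generated response
QUERIES_PER_DAY = 5_000_000   # assumed enterprise-wide daily volume
GRID_KG_CO2_PER_KWH = 0.4     # assumed grid carbon intensity

joules_per_query = WATTS_PER_GPU * SECONDS_PER_QUERY
kwh_per_day = joules_per_query * QUERIES_PER_DAY / 3.6e6  # joules -> kWh
co2_kg_per_day = kwh_per_day * GRID_KG_CO2_PER_KWH

print(f"{kwh_per_day:,.0f} kWh/day, roughly {co2_kg_per_day:,.0f} kg CO2/day")
```

Even with these modest assumptions you land in the region of a megawatt-hour per day for inference alone – before counting training runs, cooling and the rest of the secondary factors.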

7. MLOps

While most of the attraction of GenAI is in the actual “generative” aspects, enterprise attention is quite high on operations. There are big problems to tackle – how do you detect and prevent model drift? How do you prevent degradation from AI learning on synthetic data created by AI itself? What are the most trustworthy watermarking approaches? And so on. I think GenAI will be the shining moment for all the research going on in MLOps, which will help across the board.
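
To make the drift question less abstract, here is a minimal sketch of one common detection approach: comparing a model's live output score distribution against a reference window with the Population Stability Index (PSI). The bucket count, thresholds and sample data are illustrative assumptions – this is one technique among many, not a prescription.

```python
# Minimal sketch of drift detection on a model's output scores using the
# Population Stability Index (PSI). Bucket count and the 0.25 threshold
# are conventional but illustrative choices.
import math

def psi(reference, live, buckets=10):
    """PSI between two samples of scores in [0, 1); higher means more drift."""
    edges = [i / buckets for i in range(buckets + 1)]

    def frac(sample, lo, hi):
        count = sum(1 for x in sample if lo <= x < hi) or 1  # avoid log(0)
        return count / len(sample)

    total = 0.0
    for lo, hi in zip(edges, edges[1:]):
        r, l = frac(reference, lo, hi), frac(live, lo, hi)
        total += (l - r) * math.log(l / r)
    return total

reference = [i / 100 for i in range(100)]          # baseline scores (uniform)
drifted = [min(0.99, x * x) for x in reference]    # live scores, skewed low

print(f"PSI = {psi(reference, drifted):.3f}")  # values above ~0.25 often flag drift
```

Detecting drift is the easy half; deciding what to do about it (retrain, roll back, alert a human) is where the real MLOps investment goes.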

An excellent side effect of this attention to ops is that it has highlighted the need for investment in foundational data management which often gets ignored in the enterprise world.

8. Quality control

Similar to the point on MLOps – companies will have to rethink how QA is done. Software is built in layers, and LLMs can affect the quality of the layers above that use them. There is a lot of work going on in academia and at all the big tech companies on improving the accuracy, consistency, performance and so on of LLMs. I have a strong feeling that these studies will probably result in fundamentally different approaches to GenAI. I will write another blog later to expand on my thinking – I am still organizing my thoughts on the matter.

9. Trust

GenAI has rekindled this important topic and put some urgency around scaling it. It’s invariably the first question I hear in every meeting – “can we trust this thing?”. The question is simple – but the answer is quite complex, given the capabilities needed to ensure trust. We need to know how the AI arrived at a decision, what data was used to train it, what has changed over time in both the data and the model, and so on.

Published by Vijay Vijayasankar

Son/Husband/Dad/Dog Lover/Engineer. Follow me on twitter @vijayasankarv. These blogs are all my personal views – and not in any way related to my employer or past employers.
