Microservices – What have we learned?

Yesterday, I shared some of my thoughts on serverless architecture and ended up getting a lot of feedback, much of which went back to SOA and then logically on to microservices. So I thought it might be worth penning some thoughts on this topic as well. Microservices are not a fad – they get used quite a bit today across big, important companies, although perhaps not very consistently. Also, I think we have enough experience by now to calibrate our expectations compared to a few years ago.


What is microservices architecture?

I am sure there is an official definition out there somewhere. But a simple way to describe it is as a way to design a solution as a collection of services, each of which does one thing quite well, independently. Each service is its own thing – its own process, implementation, language and so on. The collection of such services that solves an end-to-end problem is then orchestrated, with a lot of automation, to form an application.
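
To make the "one thing quite well, independently" idea concrete, here is a toy sketch of a single service, using only the Python standard library. The service name ("inventory"), its data, and the port are all hypothetical – a real service would run as its own process with its own persistence.

```python
# A minimal sketch of one microservice, not a production design. The
# "inventory" responsibility, the STOCK data and the port are made up.
# Each service owns exactly one job and is reached only via its network API.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# The "inventory" service: one job only - report stock for an item.
STOCK = {"widget": 12, "gadget": 0}

class InventoryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        item = self.path.strip("/")
        body = json.dumps({"item": item, "in_stock": STOCK.get(item, 0)})
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):  # keep the sketch quiet
        pass

def run_inventory_service(port=8901):
    server = HTTPServer(("127.0.0.1", port), InventoryHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

# Any other service (say "orders") would talk to it only over this API -
# never by sharing code or a database with it.
if __name__ == "__main__":
    server = run_inventory_service()
    with urlopen("http://127.0.0.1:8901/widget") as resp:
        print(resp.read().decode())  # {"item": "widget", "in_stock": 12}
    server.shutdown()
```

The point of the sketch is the boundary: the caller knows the HTTP contract and nothing else, which is what lets each service pick its own process, language and database.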

Why is this a good thing compared to a “monolith” architecture?

Separation of concerns is not a new idea in designing systems. Microservices architecture is built on this principle. The traditional way of building an application includes a front end (HTML/JS), a database (SQL/NoSQL/file/…) and an app server to handle logic. When we hear people criticizing “monolith” apps, they are usually referring to the server-side logic built as a logical whole.

Monoliths are not bad per se – they can be designed and implemented in a modular way, and can scale with the help of good design using load balancers and so on. It is just that when it comes to scaling, testing and the like, you have to deal with the whole even though only a small part needs to change. As cloud increasingly becomes the default deployment option, the flexibility to scale and change quickly becomes a bigger need than in the past. Microservices are a very good way to deal with that. Many monolith systems will co-exist with the world of microservices.

How micro is a microservice?

This is one area where the wisdom from actual projects tends to throw cold water on the theory and philosophy of microservices. The tendency for many engineers is to go super granular in service definition. Almost without exception, everyone I know who started with this approach has regretted it, and agreed it is better to start with fewer services and then break them into smaller chunks over time. The operational overhead is quite significant once you are juggling tens of services – you now have to maintain and monitor all of them, and at some point there is a performance penalty for too much communication across a bunch of services that each do one little thing.

Another interesting aspect is whether your system needs to behave more synchronously or asynchronously. When you break the system into smaller chunks, you are essentially favoring asynchronous communication between them. If you then need it to work synchronously, you may quickly question your granularity decision.
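
The sync/async contrast can be sketched in a few lines, with an in-process queue standing in for whatever messaging infrastructure you would really use (Kafka, SQS, etc.). The service names here are made up:

```python
# Toy contrast between the two styles. The queue stands in for a real
# message broker; "billing" is a hypothetical downstream service.
import queue
import threading

events = queue.Queue()

def billing_worker(processed):
    # A separate "billing" service draining the queue at its own pace.
    while True:
        order = events.get()
        if order is None:
            break
        processed.append(f"billed:{order}")
        events.task_done()

def place_order_async(order_id):
    events.put(order_id)          # fire and forget - caller is not blocked
    return "accepted"             # we only know it was queued, not billed

def place_order_sync(order_id):
    return f"billed:{order_id}"   # caller waits for the actual result

processed = []
t = threading.Thread(target=billing_worker, args=(processed,))
t.start()
print(place_order_async("A1"))    # accepted - but is it billed yet? Unknown.
events.join()                     # only by waiting do we regain certainty
events.put(None)
t.join()
print(processed)                  # ['billed:A1']
print(place_order_sync("A2"))     # billed:A2 - certainty, but the caller blocked
```

If your callers keep needing the `events.join()`-style wait to get an answer, that is the smell that your granularity decision favored async where the business flow is really synchronous.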

What about the product/project team?

I have seen several ways in which teams are organized, and have spoken to folks who worked in such teams where I had no direct involvement. There are a few consistent themes:

  1. The need to communicate frequently and freely is a make-or-break criterion, way more than in traditional approaches. With great flexibility comes great responsibility!
  2. One big advantage that comes with microservices is that each service can be implemented in a different, fit-for-purpose language. And each service might choose a different database for persistence. While that is all great in theory, just because you can does not mean you should. For large projects, too many technology choices lead to diminishing returns. Choose wisely!
  3. There is practically no good way to hand off to an ops team when dev is over. Microservices forces a DevOps culture – or at least DevOps tooling for sure. It’s probably a good idea to get EVERYONE on the team some training in tooling. You need different muscles for this world than for dealing with a Tomcat cluster. The promise of CI/CD needs a highly trained, high-performing team. I may even venture to say that the best practice is to have the team that builds the system continue to support and enhance it. There are just too many moving parts to effectively transition to a completely new team.
  4. There is no substitute for experience. There are not enough highly skilled folks around, so the ones you get need to carry the weight of mentoring their less experienced colleagues. Written standards might not be enough to overcome this. A common observation is two services looking at the same business object – like a vendor object that is of interest to both an accounts payable service and a compliance service – and interpreting the semantics differently. Only with experience can you catch this early and converge.
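
The semantic-drift problem in point 4 is easy to show with a contrived example. Both services below model the "same" vendor and both have an `active` flag, but the flag means different things, so the same raw record yields two different truths. All the names here are hypothetical:

```python
# Contrived illustration: two services, one business object, two meanings
# of "active". The field names and record shape are made up.
from dataclasses import dataclass

raw_vendor = {"id": "V42", "contract_signed": True, "audit_passed": False}

@dataclass
class PayablesVendor:
    id: str
    active: bool  # payables reading: "active" means we may pay invoices

    @classmethod
    def from_raw(cls, rec):
        return cls(rec["id"], rec["contract_signed"])

@dataclass
class ComplianceVendor:
    id: str
    active: bool  # compliance reading: "active" means the vendor passed audit

    @classmethod
    def from_raw(cls, rec):
        return cls(rec["id"], rec["audit_passed"])

p = PayablesVendor.from_raw(raw_vendor)
c = ComplianceVendor.from_raw(raw_vendor)
print(p.active, c.active)  # True False - same vendor, two "truths"
```

Nothing here is a bug in either service; the divergence only shows up when the two answers meet downstream, which is exactly why it takes experience to catch it early.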

Is it truly easy to make changes compared to monoliths?

If you are a microservices fanatic, you probably are well versed in all the backward-compatibility tips and tricks, and hence your answer has to be YES. I will just say that there are some cases where you wish you were working in a monolith, especially when faced with pressing timelines. A good example is the changes many apps will need due to GDPR. When multiple services need new functionality, you need to wrestle with the best approach to get it done. Would you create a new service that others can call? Maybe a common library? Maybe change each service and make local changes? Each has obvious penalties. No silver bullets – decisions taken in designing the app will dictate whether you buy your aspirin in a Walgreens-sized box or a Costco-sized box 🙂

What about monitoring, testing, debugging etc?

All the overhead on these topics that comes from distributed computing is in full force here. This is one area where the difference is significantly more noticeable than in the monolith world. Many of us are fans of doing canary releases. You should have some consistent philosophy agreed upfront for release management. Whether we admit it explicitly or not, lean and fast deployment has a tradeoff with testing effectiveness. Essentially you are relying more on your ability to monitor your app (via all the services and messaging frameworks and redundancies) and make quick changes, versus trusting impeccable test results. This is a significant change management issue for most technology teams, and especially their managers.

So is microservices architecture a safe bet for the future?

There are plenty of public success stories today of microservices implementations – and conferences and tech magazine articles and YouTube videos and all that. All major SIs have expertise in implementing them. So in general, I think the answer is YES. However, I am not sure if microservices over time will be any less frustrating than monoliths in how they evolve. I probably will get some heat from my purist friends for saying this – but perhaps one way to smooth the journey is to start with a monolith as we have done in the past, then as it evolves, have services that call the monolith’s APIs. And as you learn more, break down the monolith into a full set of services. I am not saying this because I am a non-believer – I am basing it strictly on the talent available to do full justice to a pure microservices-based architecture in a mainstream way. Just because Netflix did it does not mean everyone can. In any case, the mainstream pattern in large companies anyway is to start with their old monoliths and roughly follow the approach I mentioned.


Is Serverless for you?

One of the more recent architecture choices we can play with is the idea of serverless, aka FaaS (Function as a Service). Thankfully, it is not hyped like, say, machine learning is. But nevertheless, it is widely misunderstood – often leading to bad design choices. I am just going to list a few questions I have been asked often (or have asked fellow techies), and give my point of view on those. I will try to keep this at a level where it makes sense for people who are not full-time technologists.


Are there really no servers?

To begin with, the name serverless itself is quite misleading. Your code does not execute in vapor – it still needs servers. From a developer point of view, you choose a provider to handle a lot of the things servers do (but not everything), and you can focus on your application tasks. That is not the same as there being no servers. It’s one of those things where my developer friends smile and wink, and my ops friends roll their eyes 🙂

Is it really much simpler than other options?

A very hard question to answer with a YES or NO. If we look back 10 years or so, it was all about service-oriented architecture (SOA). Now think about how many well-designed services were created in the time since then. I personally have seen way more badly designed/implemented services than good ones. My point is that when you try to deconstruct an existing application into smaller ones, it often (not always) becomes more complex, not simpler. I know it is counterintuitive till you think it through, or work on an actual project. The simplicity argument is strongest in favor of FaaS when you eliminate server management from what a developer has to worry about – but even there, you need to be careful about where the server logic goes. Sometimes you implement it in the client, sometimes you move it to the function, and sometimes you need dirty hacks to keep everything working. Simplicity is in the eye of the beholder.

Is it cheaper?

When used for the right scenarios, it is indeed cheaper. The obvious case is bursting – where once in a while you get a lot of traffic to handle. If you are not careful about designing – and especially if you don’t test with production-level data – it’s quite possible that you may end up with a more expensive solution than dealing with full-time server management. That is hardly unique to serverless though. Poor choices have a price to pay now and/or in the future.

What does the developer not have to worry about ?

For the code implemented as a function, the provider (like AWS Lambda) takes care of making sure it will get executed when triggered. So things like horizontal scaling, fault tolerance and high availability are things you don’t need to worry about. Needless to say, you have to make sure your code is friendly for parallel execution. And it is still on your plate how the application as a whole works – Lambda and the like only control what you passed on to them. So developers can enjoy more breaks (or officially, they can work on making their app so much better) 🙂
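
As a concrete sketch, a Python function in the AWS Lambda calling convention is just a plain function taking `(event, context)`. The event field below (`name`) is hypothetical – real triggers (API Gateway, S3, etc.) each have their own payload shape:

```python
# Sketch of a FaaS handler in the Lambda calling convention. The handler
# keeps no state between invocations, which is what makes it safe for the
# provider to run many copies in parallel. The "name" field is made up.
import json

def handler(event, context=None):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Locally you can just call it; in the cloud the provider does this for you,
# scaling out to as many concurrent invocations as the traffic needs.
print(handler({"name": "dev"}))
```

Everything outside the function body – wiring the trigger, retries, the overall flow of the application – is still your problem, which is the “how the application as a whole works” part above.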

Also, if your server code is in Java – which is true in many cases – you save a lot of time because you don’t have to redo the whole codebase. You can lift and shift with low effort. Another thing to gain more coffee breaks!

Also, API gateways become a good friend to developers by mapping parameters to and from functions. This makes development and maintenance more efficient in many cases. API gateways themselves are fairly new too – so there is that. I guess you can also consider authentication to be implemented via the gateway – but my security expert friends may not like this idea. I need to think this one through more.

What is the big difficulty for developers in the FaaS world?

If you are like me, you spend more time debugging than actually writing code. As with all distributed computing scenarios, you have a tradeoff to make here. As we introduced these paradigms in quick succession, the monitoring and debugging tooling has not kept pace. So the first movers typically spend more time debugging across all the layers without useful tools, and it can be quite frustrating and inefficient.

How about testing?

Since computing is distributed, you should plan for a higher quality of testing in general. It’s also about setting expectations with testers and users. Since the server is not always on waiting for a request, it needs to be switched on every time, and then it stays on only for a few minutes. If you have a lot of near-real-time needs, implementing them as FaaS is perhaps not the first option to cross your mind. Also, a lot of dirty hacks get introduced while testing if you are not careful with design. A common one is to keep pinging the service to keep it awake once you realize some tasks take longer. You really need a close approximation of production peaks and valleys in testing to make sure you don’t get a midnight call to debug.

Isn’t FaaS stateless?

Short answer is yes, of course. But often we need some hack to hold state – usually by using a cache or a database. Some session management logic could live on the client side too.
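
A minimal sketch of that pattern, with a plain dict standing in for whatever external store (Redis, DynamoDB, a database) you would actually use. The cart/session example is hypothetical:

```python
# Stateless function + external store sketch. "store" is a stand-in for a
# real cache or database; in FaaS the function instance itself may vanish
# between calls, so anything worth keeping must live outside it.
store = {}  # imagine this is Redis/DynamoDB, keyed by session id

def add_to_cart(session_id, item):
    # Each invocation reads state, mutates it, and writes it back out -
    # the function itself remembers nothing.
    cart = store.get(session_id, [])
    cart = cart + [item]
    store[session_id] = cart
    return cart

print(add_to_cart("s1", "book"))  # ['book']
print(add_to_cart("s1", "pen"))   # ['book', 'pen']
print(add_to_cart("s2", "lamp"))  # ['lamp'] - sessions stay separate
```

The design choice is where the read/write round trips land: every invocation pays the cost of the external store, which is part of the latency and cost math discussed earlier.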

Is AWS Lambda the only option?

Lambda is definitely the most popular and has been in the market the longest. But big players like IBM (OpenWhisk), Microsoft (Azure Functions) and Google (Google Cloud Functions) have offerings too. So you do have choices – they all support different things, but will probably converge over time. I will resist the temptation to talk about standardization 🙂

So what is a good place to start?

Serverless is a newbie in the world of architecture – so proceed with sufficient caution. Since my playground is the large enterprise space, what I have seen the most is large existing apps offloading small parts of their functionality to functions. Companies that have embraced DevOps also consider serverless when they create new apps. At the moment, I don’t expect to see a lot of pure serverless architectures in large enterprises. Some kind of hybrid approach is probably where we are headed. Once the tooling gets strong, I am sure we will see definite patterns emerge.

Hopes and Dreams of a new CTO

On Friday 1/19/2018, I got a new role in IBM services as the CTO for North America.

It was an honor and privilege leading the CBDS business and I am very grateful to our team and our clients for a very fulfilling time. Pat Eskew and Rafi Ezry will lead it to greater heights and I look forward to working with them and cheering on the team every step of the way.

There are a few people to thank explicitly for this new adventure I am embarking on. First, my boss Ismail Amla, who runs services for North America, for his trust in me. Second, my uncle Dr Krish Pillai, who gave me his computer and the Dennis Ritchie book on C when I was in eighth grade. I learned BASIC on that computer to code video games and had a huge collection of custom games on cassettes. And I struggled through the K&R book line by line till C became how I think of logic. Third, Prof Kalyanaraman, who taught me statistics in business school – he bridged the gap between math, computing and business for me. I owe a huge debt of gratitude to my parents, who never questioned or hesitated in finding ways to support my varied interests, even when times were REALLY hard. And it goes without saying – more people than I can list here have helped me and continue to help me. Please know that you have my sincere gratitude and I will continue to seek your guidance.

I have some hopes and dreams about the journey ahead of us.

What I would like us to do for our clients is to be a champion for technology minimalism and simplicity. 

Technology has become incredibly sophisticated over time, and unfortunately also quite complex. On top of that there is the constant noise of hype. Every category of tech is a trillion-dollar opportunity if you believe the analyst reports. This complexity and hype leads to clients not being able to use the sophisticated tech to solve their biggest problems. Instead, best case they get stuck in endless proofs of concept, and worst case they stay still and risk becoming irrelevant to their customers.

It’s very rare that any one technology is going to add value by solving a big problem. It usually takes the convergence of multiple technologies to arrive at meaningful solutions. This comes with the risk of over-engineering, low speed of execution, and a real danger of designing a brilliant solution that can’t change on a dime when the market changes. Striking a balance between all these is where engineering meets art.

I have degrees in engineering and business. And though not by design, I have had a career with one foot each in tech and business. Growing up as a developer and later as an architect, I absolutely enjoy tech for the sake of tech – and I am not ashamed of it in the least. But with roles in delivery, sales and general management, I equally appreciate that in enterprise software, no one cares about tech that does not make or save money for our clients. Bringing biz and tech together – discussing the art of the possible, providing reality checks on emerging tech, the ethics and trust issues that come with tech, connecting clients with each other and with ecosystem partners, building business cases to justify investments, debating usability of code for humans and machines – these are all things I look forward to working with clients on.

At the end of the day, it’s not what we make that is important – it’s what we make possible for our clients!

I would love for us to be known as the team that our clients depend on for solving their unknown unknowns.

We have an amazing team with a multitude of backgrounds, skills and experiences. Thanks to the opportunity to work with clients across several industries and solving a variety of problems, we know several common problems and also the solutions for those. That minimizes the risk of reinventing the wheel, and maximizes the execution speed.

But that is just the starting point – we need to be able to help uncover problems and opportunities that are not well defined yet. For any given problem, I have no doubt we have the skills to solve it. But a problem is only as good as how it is defined – simply because solutions depend on how a question is asked. Given the speed with which the world around our clients is progressing, we need to feel comfortable with unknown unknowns, asking better questions and constantly striving to iterate towards better answers. Technology might not even be the lone answer for many questions – it could be a change in process or people.

This needs us to keep learning, and teaching each other – broadly and deeply. Tomorrow belongs to the polymaths! A very wise leader told me once that learning is like breathing – you just can’t stop. I plan to actively continue with our learning initiatives – both as a student and as a teacher/sponsor. The world of technology consulting is changing quickly, and in quite disruptive ways. I hope and dream for us to be on the right side of this change.

On the personal front, there are two things I am committed to this year. First is to exercise more. And after procrastinating for over a decade, I finally signed up with a personal trainer yesterday. I told him that I will hold him responsible for my success in my new role since I will need a lot of energy and strength. He nodded, and there is a possibility that he may have rolled his eyes 🙂

The second is to teach programming to my daughter, to supplement the class she has started. Today I helped her with some nested conditional logic. She was impressed for about 10 seconds and then started telling me that such complex code is useless because she won’t be able to remember later the reason for writing it, and none of her friends will get it. A part of me is proud that she immediately realized something about the big picture that took me a few years as a developer to get. And the other part of me is wondering if I have it in me to keep up with this despite my resolution. I see a lot of eye rolls in my future 🙂

Ten enterprise technology industry predictions for 2018

As of now, vacation has ended and I am back at work. I am starting a new role at work this year – more on that later. The last couple of weeks gave me some time to think about what is in store for our industry in 2018. Despite my own misgivings about making predictions in general, I thought I would write these down anyway in this blog. As always, these are strictly my personal musings.


1. Data becomes sexy, again, thanks to AI


Customers who have started on the AI journey all realize the same truth – this only works as well as the data that AI has access to. And most companies have less-than-stellar capabilities when it comes to data management. I totally expect 2018 to be the year of data… again! Of course the tooling will change from the last time this “data is sexy” thing happened. Rejoice, my friends in data modeling, ETL and so on! 🙂

2. Data security and privacy becomes mainstream – thanks to GDPR and AI


All major companies have always had to deal with security and privacy. Now with GDPR, this will become a mainstream topic both for software and services – with cost and revenue impact. It’s not just a back-office problem like it was historically treated as. Now front-office functions need to be redesigned to make sure no regulations are broken. Europe started the trend, but obviously everyone else is going to have their version soon too. If history is any indication, we will end up with even more disparate rules and guidelines across the world. I have this feeling that most international tech companies will spend significantly in 2018 to lobby governments across the world.

GDPR is only one reason – the other is Artificial intelligence becoming a reality pretty quickly all around us. There is a lot of fear about privacy and security – some misguided and some very valid – and this will only amplify in 2018 and beyond.

I am tempted to say something about standards too – but the reality in this industry is that if there are two competing standards, people will come together to create a unifying standard, only to see that now we have three standards instead of the two we started with. So – while much needed – I am not holding my breath 🙂

3. Chatbots will get a redo


Everyone seems to have a chatbot these days – but most are useless. I tested at least a dozen over the holidays as a consumer and it was a horrible experience. I think this will start to change in a big way in 2018. To begin with, I think more and more companies that jumped in and created the first generation of rules-based chatbots will now start moving fully or partly to AI-driven chatbots. Instead of answering just short-tail questions, I expect chatbots to answer more and more context-sensitive long-tail questions, and start to learn more from each interaction. This is another reason for data management to get a big boost. 2018 might also be the breakthrough year for voice-to-text capabilities – this is something close to my heart, given my thick Indian accent often confuses existing APIs.

4. AI will start democratizing visualization of data


I grew up in BI. From the time I started as a young BI consultant, I have believed that the best BI experts are more artists than engineers. It took me a long time to become a decent visualization guy. And having been in the field for a long time, I know I am in good company. We have more people who are experts in back-end engineering than people who can make high-impact visualizations. I don’t think the core principles – like making data actionable and making sure it is context sensitive – will ever change. Now the tooling has improved significantly, and that is absolutely a good thing. Unfortunately, the complexity of the data (types of data, their interconnections, the speed of change of data and so on) has also increased a lot, and the challenge of visualizing it has grown with it. I think this year we will start seeing the world of visualization rely more on cognitive technology and try to democratize data visualization for lesser mortals like me. I am not sure if this is a prediction or really a cry for help 🙂

5. Open source starts looking more like proprietary


Everything new eventually starts to look like its predecessors in our industry. It usually takes 15–20 years or more. I think open-source software is now at a stage where no one has any sustained advantage, because there is hardly any barrier to entry for someone else. Also, every popular category – like databases, for example – is way too fragmented. By becoming extremely developer focused, many new companies ignored ops tooling, which adds to the customer headache. At some point it becomes an untenable management overhead for customers to run different software for every unique workload. I think this year we will see a change to this – OSS companies will probably start keeping more of their wares on commercial licenses, some larger companies will buy out a bunch of smaller public and startup companies, and so on. I could be wrong on timing – maybe the status quo will prevail another year or two, but I definitely think this will happen very soon.

6. World starts to come together for better/simpler debugging and monitoring


2016 and 2017 have made sure that containers and microservices are here to stay. Most new development will be cloud native in nature. While my purist friends are still waiting for one public cloud to rule them all, I am still on the commoner bandwagon of hybrid cloud as the only pragmatic option. With every passing day, we will also create more and more sophisticated abstractions. All good things for the “happy flow”. But life in enterprise computing is rarely about happy flows – the effort to debug and monitor across all these layers has become tedious. With all my previously stated misgivings about efforts to standardize, I do think we need thoughtful and simple open standards for debugging and monitoring in the increasingly distributed computing landscape. Given the momentum we are seeing, I am betting on this year forcing the community (perhaps led by the APM gang) to come together and start putting the building blocks in place.

7. A lot of tech M&A in store, and probably more startup exits/IPOs too


Companies are sitting on a lot of cash already. On top of that, the GOP plan has a tax holiday for bringing money from abroad. After giving $1000 bonuses and increasing dividends, there will still be plenty of cash sitting around in big-company bank accounts. Estimates of $1 to $3 trillion have made the rounds on how much cash is stashed abroad by American companies. The sensible move is to use a good amount of this cash to start massive consolidation in the industry. This should happen across all segments – HW, SW, cloud, …

A side effect of this is that startups should get a lift – either via IPO and/or by selling out to someone with deep pockets.



8. Devices/Things will become smarter and more secure


IoT, despite the hype, is already a thing. The fear of man-made calamities like DDoS attacks is also very real. And it is clear that leaving all decision-making logic to the cloud is not viable for the more interesting use cases (say, a self-driving car). I expect to see a lot more logic being executed inside the device itself, and a lot of hardware-level security features added that cannot be changed via a software hack. None of this is new – I just think 2018 will be the tipping point for this to become mainstream. A good starting point, in my opinion, would be the routers on home networks – designed from the ground up with the idea of securing the connected home devices on the network they control.

9. More blockchain-branded companies this year


Last year we saw big data companies pivot to become machine learning companies. They did not want to be known as Hadoop or ETL or NoSQL or anything remotely related to data, but overnight changed into machine learning companies. Those that missed that round probably won’t bother with AI/ML anymore – I expect them to find a way to brand themselves as blockchain companies (hopefully, at least some of them will also engineer something real). Nothing wrong with this per se – no one is really fooled in this industry anymore by branding changes. There will be a temporary headache for real blockchain companies in demystifying things for their customers, on top of the topic of the day, which is cryptocurrencies themselves.

10. ERP companies will yet again start to design their next generation products


ERP has evolved a LOT over the years, and mostly for the better. Just a few years ago, I thought their hardest challenges would be the move to cloud and improving usability (better UI, speed, simplicity and so on). Those challenges have been addressed – admirably in general, compared to where they started. But I think even bigger challenges have now come up for this category.

ERP was fundamentally designed for efficiency and for human users. Now with AI allowing machines to learn and improve, the static nature of ERP is fast becoming a thing of the past. Small AI innovations have been started by pretty much every ERP vendor – but that is not even minimally indicative of how much their world is going to get disrupted. The next generation needs AI at its core – it should be the center of continuous learning for every organization. It means efficiency alone is no longer key; effectiveness becomes the new normal. On top of that, human users won’t be keying in much of the data anymore. That work will be taken over more and more by machines. A lot of the logic associated with screen flows in ERP today will be useless in that world. Even the current sophisticated interfaces built on ERP will be less efficient when it is always a machine talking to them in binary, or a human using voice or text strings. To some degree, I know the internal architecture of the main ERP systems in use today. Barring maybe one exception (not naming anyone, given everyone is a friend), I think rewriting most of their software from the ground up is probably the only way these existing systems will move into the future. If they don’t do it, I am reasonably sure that someone else will disrupt them from outside.






Technology in 2018, through lyrics of popular songs

I have been listening to a bunch of old songs this morning, while also taste-testing different coffee beans I bought over the holidays. I am not sure why, but the lyrics keep giving me hints about technology. So here we go 🙂

1. Artificial Intelligence

I can hear the sound of violins long before it begins

2. Mainframes
Mamma mia, here I go again
My my, how can I resist you?
Mamma mia, does it show again
My my, just how much I’ve missed you?


3. Cryptocurrencies

But there’s a side to you
That I never knew, never knew
All the things you’d say
They were never true, never true
And the games you play
You would always win, always win

4. Internet of Things

Everybody was kung-fu fighting
Those kicks were fast as lightning
In fact it was a little bit frightening
But they did it with expert timing
Keep on, keep on, keep on, keep on

5. ERP

How deep is your love?
I really mean to learn
‘Cause we’re living in a world of fools
Breaking us down when they all should let us be


6. Predictive Analytics

Do you know barbarella, magical barbarella
Mystical fortuneteller
Selling your dreams to you