This is one of those topics where my employer IBM has a vested interest, so I need to state upfront that what I say below is strictly my own view, shaped by my limited vantage point on the large enterprise world. My personal experience over the last couple of decades has been in the very large company space; I have not spent enough time in small and medium companies to know their situation first hand. So please take what I say with a pinch (or pound) of salt. Also, to preserve friendships that I value, I am not going to make any statements about specific cloud vendors.

For once, customers led and vendors followed.
I know several commentators on social media dismiss proponents of multi cloud as snake oil salesmen. What they conveniently forget is that multi cloud is already a reality; vendors woke up to it late and smelled the coffee. In the large company space I am familiar with (granted, it is a small set – but also the set that everyone else eventually tends to follow in enterprise IT), I cannot think of even one company that does not work with several cloud vendors at some scale. The most common pattern I have seen is their own private cloud, one major public cloud, and then a few others at negligible scale (often brought in via developers trying out new things). In a few cases, I have seen two public clouds used at scale – commonly as the result of some corporate M&A activity.
In short, multi cloud is a reality today – not some futuristic idea. As the Eagles crooned – you can stab it with your steely knives, but you just can’t kill the beast. Every company struggles to manage it (most cannot even monitor it, let alone actively manage it), and that is why vendors are on the job trying to solve it.
I don’t know why, but now I am humming We didn’t start the fire 🙂
Multi cloud for bursting / cost arbitrage – no one does it well…yet
In theory, a lot of people would like to be in an ideal world where workloads are sent intelligently to execute wherever it is cheapest. That is still enterprise utopia. Even in the simplest case of your own private cloud and just one public cloud, seamlessly moving most of your apps from one to the other is not easy to pull off. The two common use cases are peak compute – like when month end closing happens, or Thanksgiving sales happen – and disaster recovery, where you want massive failover to a different public cloud. You can do both for well engineered applications today – but those are a very tiny part of the overall landscape, and won’t buy you much.
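To make the arbitrage idea concrete, here is a minimal sketch of what the placement decision amounts to. Everything in it – venue names, prices, capacities – is made up for illustration; a real scheduler would also have to weigh data gravity, egress fees, compliance, and whether the app can actually run in the target cloud.

```python
# Minimal sketch of cost-arbitrage placement: pick the cheapest venue
# that has capacity. All names and prices below are invented.

WORKLOAD = {"vcpus": 64, "cloud_ready": True}

VENUES = [
    {"name": "private-cloud", "usd_per_vcpu_hour": 0.035, "free_vcpus": 16},
    {"name": "public-cloud-a", "usd_per_vcpu_hour": 0.048, "free_vcpus": 10_000},
    {"name": "public-cloud-b", "usd_per_vcpu_hour": 0.042, "free_vcpus": 10_000},
]

def place(workload, venues):
    """Return the cheapest venue with enough capacity, or None."""
    if not workload["cloud_ready"]:
        return None  # most of a typical estate falls out right here
    candidates = [v for v in venues if v["free_vcpus"] >= workload["vcpus"]]
    return min(candidates, key=lambda v: v["usd_per_vcpu_hour"], default=None)

# The private cloud is full, so this bursts to public-cloud-b.
print(place(WORKLOAD, VENUES))
```

Note the very first check: if the app is not cloud ready, no amount of clever scheduling helps – which is exactly why this buys so little across a real portfolio today.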
You can check out any time you like, but you can never leave!
(Sorry – the Eagles song just keeps playing in my brain in a loop).
In an enterprise with multiple clouds, the normal scenario is that data and algorithms don’t sit next to each other; one will need to move to meet the other when needed. Moving data is expensive – mostly because companies love to create data, and no one likes to delete anything. Unfortunately, some data cannot move for legal or cost reasons, and some algorithms may not move because their vendor has no incentive to expose them elsewhere.
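How expensive? A quick back-of-the-envelope helps. The $0.09/GB egress rate below is an assumption in the ballpark of published list prices, not any specific vendor’s number – but even rough math shows why data tends to stay put.

```python
# Back-of-the-envelope cost of moving data between clouds.
# EGRESS_USD_PER_GB is an assumed list-price-style rate; check
# your own provider's pricing before trusting any of this.

EGRESS_USD_PER_GB = 0.09

def one_time_move_cost(terabytes: float) -> float:
    """Egress cost in USD for a single bulk transfer."""
    return terabytes * 1024 * EGRESS_USD_PER_GB

# Moving a 500 TB warehouse once costs roughly $46k...
print(f"${one_time_move_cost(500):,.0f}")

# ...and a daily job shipping 2 TB to where the algorithm lives
# runs to roughly $67k per year in egress alone.
print(f"${one_time_move_cost(2) * 365:,.0f}")
```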
It is already true that some one-off, extremely high business value solutions need multiple clouds, and this will only become more common. As an example, say five years from now (maybe less), you may need a specialized solution that pulls quantum computing from one provider, AI APIs from another, and some good old algorithms from your own private cloud to manage enterprise risk.
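If that sounds abstract, the skeleton of such a solution is just orchestration across providers. The sketch below is purely illustrative – every endpoint, payload shape, and name is hypothetical – but it shows why no single cloud’s tooling can own the whole flow.

```python
# Hypothetical composition across three clouds. None of these
# endpoints exist; they stand in for whatever services you contract.
import requests

def assess_enterprise_risk(portfolio: dict) -> dict:
    # 1. Quantum-backed optimization from provider A (hypothetical).
    scenarios = requests.post(
        "https://quantum.provider-a.example/v1/optimize",
        json=portfolio, timeout=300,
    ).json()

    # 2. AI scoring APIs from provider B (hypothetical).
    scores = requests.post(
        "https://ai.provider-b.example/v1/score",
        json=scenarios, timeout=60,
    ).json()

    # 3. Good old rules engine on the private cloud (hypothetical).
    return requests.post(
        "https://risk.internal.example/v1/aggregate",
        json=scores, timeout=60,
    ).json()
```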
The one-off solutions usually don’t come with a lot of governance, so others will build little apps on all these clouds for other one-off solutions. Proliferation is more or less a given.
That was a long winded way of saying: multi cloud is very useful in adding significant business value (since no single party will ever be the lone innovator), and hence you will use it. But once you are there, you are better off learning to live with it peacefully than fighting it all the time.
Containers are awesome – but only when used as intended
Everyone wants to work like Netflix and Amazon. But the reality is that they all carry a massive baggage of applications that were built when cloud was not a thing. There is a massive fascination in the field when it comes to containers, and I am a big fan myself. But if you look at what the move to containers looks like today, you don’t need to be a genius to realize they are not being used as designed. Most of the time there is no systematic refactoring done. Docker and Kubernetes are not silver bullets. I have lost count of how many times dev teams have overlooked security, performance and so on in the quest to containerize everything. Forget all of that – look at the docker image sizes in modernization projects and you will probably see a lot of them measured in GB and not MB (and I bet there are posters pasted all around the building saying lightweight images are what we are going for). If that is the trend, the end result won’t look much better than just outsourcing your data center.
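If you want to catch that drift early, even a crude audit helps. Here is a minimal sketch that flags bloated images; it assumes the docker CLI is on the PATH, and the 500 MB budget is an arbitrary number chosen for illustration.

```python
# Crude audit: flag container images that quietly grew to gigabytes.
import subprocess

LIMIT_MB = 500  # an arbitrary "lightweight" budget for illustration

def image_size_mb(size: str) -> float:
    """Convert docker's human-readable size ('1.23GB', '800MB') to MB."""
    if size.endswith("GB"):
        return float(size[:-2]) * 1024
    if size.endswith("MB"):
        return float(size[:-2])
    return 0.0  # kB-scale images are not the problem here

out = subprocess.run(
    ["docker", "images", "--format", "{{.Repository}}:{{.Tag}} {{.Size}}"],
    capture_output=True, text=True, check=True,
).stdout

for line in out.splitlines():
    image, size = line.rsplit(" ", 1)
    if image_size_mb(size) > LIMIT_MB:
        print(f"over budget: {image} at {size}")
```

A report like this will not fix anything by itself, but it makes the gap between the posters on the wall and the registry painfully visible.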
Got it – but what if we build our own management layer?
Some companies have the ability to make significant investments in people and tech. They can of course attempt to solve the multi cloud management issues by building their own management layer. While it might help the first wave of hand-picked applications achieve utopian status, it is not without long-term pain.
The most obvious one is talent. There are only so many top engineers in the ecosystem, and you need to recruit and retain them for a long time. And while top engineers love to build great new stuff, they don’t always fancy maintaining those cool things. So you will need to invest additional dollars to keep it running and enhanced to a high standard.
Then there is the problem of each cloud being unique. Over time better standards will evolve – but for now, if you want to build a framework yourself, you need to assume there is only a relatively small set of common functionality you can rely on. The rest you either build into your custom layer, or offload to the applications to handle themselves. I will leave it to your imagination and level of optimism to extrapolate what happens next in each case.
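In code terms, that "small set of common functionality" is an interface that only ever shrinks. A minimal sketch, with made-up method names standing in for whatever your clouds genuinely share:

```python
# Sketch of why a home-grown layer ends up thin: you can only promise
# what every cloud supports. The method names here are invented; real
# adapters would wrap each provider's SDK.
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    """The intersection of what all your clouds can do shrinks fast."""

    @abstractmethod
    def create_vm(self, cpu: int, ram_gb: int) -> str: ...

    @abstractmethod
    def put_object(self, bucket: str, key: str, data: bytes) -> None: ...

    # Anything provider-specific (a managed quantum service, a bespoke
    # AI API, an exotic database) has no home in this interface. It
    # either gets bolted onto your custom layer, or each application
    # calls the provider directly and your abstraction quietly leaks.
```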
So how does one proceed with multi cloud?
1 Invest in refactoring the tech, and be realistic on timelines.
On average, most large companies have only ten or twenty percent of their workloads cloud ready if you ask the question today. And that only means ready – it does not mean optimized for cloud. The best thing you can do is invest the engineering effort to get your applications cloud ready.
2 Refactor the organization for the (multi) cloud world
People are the make or break part of any transformation. Ask the hard questions: Can you manage a DevSecOps practice that spans your current private cloud and one or more public clouds? Have your governance processes been refactored, and will they be audit ready? Have you budgeted for frequent re-skilling and attrition management? Do your platform team and apps team know what each other does, and are there clear rules of engagement? And so on.
3 Be a minimalist and work with the ecosystem
As of today, I don’t think there is a pragmatic way to move to a low chaos multi cloud landscape without close partnerships with the ecosystem – including some partners you may fondly(?) call “legacy”. On the other hand, you cannot work with everyone and get somewhere anytime soon. So the practical way is to choose a couple of partners to go deep with, and try your best to influence their roadmaps and tooling to suit your goals. If you can resist the temptation, I would urge you not to go down the path of building a highly sophisticated management layer of your own.
Parting shot
Every large company CIO that I know personally is a realist when it comes to cloud. Some may not admit their true beliefs in public – which is not uncommon, and not just about cloud either. So I am an optimist when it comes to where multi cloud is headed.