My introduction to SQL was learning Oracle 8.x around 1996 or 97, and I have known of Larry Ellison since that time. Interestingly – that is around the first time I heard of Hasso Plattner too. Until then I only knew of the five IBM engineers who founded SAP. I have deep respect for both gents – they have a lot in common, despite having such different personalities. Of course, I have only seen Hasso at close quarters, not Larry. Having worked at IBM for many years, I also have the greatest respect for the DB engineers and researchers there.
When SAP entered the DB market seriously with Hana – my first thought was that it was a terrible idea. The DB is the stickiest part of the stack at customers. I also remembered being tutored at IBM by the sales leaders that if you own the lowest levels of the stack – HW, OS, DB and middleware – you own the account forever. What I did not know at the time was that SAP’s plan was not to be yet another DB vendor – they wanted to fundamentally change how databases work. SAP wanted to play offense in a game that had moved on to heavy defense as the winning strategy. That is not just a sales and marketing thing – it needs a level of engineering that is extremely sophisticated. I was sufficiently convinced that SAP had a real chance of changing the market – and I bet my livelihood on it.
IBM is a great place to get trained in enterprise software. I learned in my first year there that all competitors have to be respected, but none should ever be feared. Competition is the best thing about this industry – it keeps everyone at their productive best. A few months ago, IBM came up with DB2 BLU, and now Oracle has come out with 12c. I think both are good moves, and both companies will use them to try to negate the impact of SAP Hana. And this is great validation of SAP’s strategy to change how a DB should work. What is also not too surprising is that Oracle and IBM chose incremental steps to find some common ground with Hana – rather than go all out. For one, this fits well with their strong defensive strategy of protecting the existing install base; and two, it needs a level of engineering that takes more time than they have had since realizing SAP made the right bets. But all things said and done – still a good move that is positive for the enterprise software market.
I give Oracle full marks on messaging – I thought it was absolutely brilliant to say “don’t need to change anything, just flip a switch and reap the benefits of in-memory DB”. That is a simple and elegant message. And it is not trivial to come up with such good messages. It is very easy to understand and appreciate at a high level. Nicely done.
Ellison did say many things in his keynote that Hasso and Vishal from SAP have been saying for years – why RAM is faster, why columnar DBs work better, and all of that. All of which are good statements, and he has the credibility to say these things about databases. He definitely was in his element talking about databases – and I enjoyed watching it. It was also a nice change of pace from the opening act from Fujitsu.
SAP Hana is fast, it is in-memory, and it is column based – and it looks like 12c does all of this too in some way. But that is just a fraction of what makes Hana special. SAP views databases very differently from Oracle. HANA is a full-fledged platform – one that supports all types of processing with one copy of data. Not only does it store data in memory and in columns, it pushes processing down closer to the data – and reduces the number of physical layers traditionally needed in an application. It has built-in libraries for predictive and statistical functions. It has a built-in app server and web server. HANA can seamlessly integrate with other big data systems like Hadoop. That is a long-winded way of saying – a database can and should be so much more. SAP showed it can be done in one system without needing to club together many different applications – at multiple productive customer projects, not in some proof-of-concept lab environment.
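To make the pushdown idea concrete, here is a minimal sketch in HANA-flavored SQL. The table and query are hypothetical, invented purely for illustration; CREATE COLUMN TABLE is HANA’s syntax for a column-store table, and the point is simply that the aggregation executes inside the database, next to the data, instead of raw rows being shipped out to an application layer for processing.

```sql
-- Hypothetical example: a columnar sales table (all names are illustrative).
-- CREATE COLUMN TABLE is HANA-specific syntax for a column-store table.
CREATE COLUMN TABLE sales (
  order_id   INTEGER,
  region     NVARCHAR(20),
  amount     DECIMAL(15,2),
  order_date DATE
);

-- The aggregation is pushed down to the database: only the small result
-- set leaves the DB, instead of millions of raw rows being transferred
-- to an application server that loops over them.
SELECT region, SUM(amount) AS total_sales
FROM   sales
WHERE  order_date >= '2013-01-01'
GROUP  BY region;
```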
Oracle could have done a lot more – but chose to do very little. I am sure they have the engineering brilliance to do more, and a sales force who could have made use of such innovation. That is disappointing for the techie in me – and I hope they do way more in the future and push the envelope on what a DB can do.
I was also kind of confused about why Oracle chose to do column and row stores with multiple copies of data (unless I misunderstood what Larry said in the keynote). Enterprises already can’t deal with the many redundant copies of data – why would you add to that problem with a “modern” solution?
I have more questions. Why is OLTP faster now? What is the behavior of the system when it starts up? What happens when there is not enough memory – will it use disk? What happens when an app needs data in both rows and columns? How much DBA effort is needed – how smart is the system in deciding what goes into memory and what does not? How much does this “switch” cost a customer? And many more. I am hoping that more details will be available through the conference, or in the weeks that follow, and that there are good logical answers to all of them.
There were two general types of questions on Twitter after the keynote – will we now see a Hana vs Oracle bake-off on speed? And will 12c slow down Hana deals and put pricing pressure on Hana? I think both are genuine questions worth asking.
If 100X performance is all 12c is capable of, then we probably won’t need any bake-offs. There are enough customers who have seen far more performance gain with Hana than 100X. In any case – raw speed is only one part. What you do with that speed is what matters – and Hana has applications purpose-built to exploit that speed, using its other capabilities like the predictive and geospatial libraries, for example. And given the head start Hana has over 12c, it is hard to imagine Oracle catching up in incremental steps like it seems to be doing.
On slowing down deals and pricing pressure – I have no idea, given I don’t work in sales. Nor do I make sales or pricing decisions for SAP. However, from past sales experience, I think this is a function of how well Oracle and SAP can educate their customers on the technology options. I will definitely be curious to see how it plays out in the market. Customers do not buy on technology merit alone – I know that well. Given Hana grew pretty fast and has thousands of customers in the last two years, I doubt customers will have any issues seeing its value proposition.
Oracle is a great competitor and will not just sit back and watch Hana eat its lunch – as an engineer, I just hope they bring some serious innovation to database technology to back up the messaging. I will be the first to stand up and applaud if they do. And for SAP, I am sure the intention is to continue to try as hard as we can to maintain and increase the lead on the innovation front.
“Flip a switch” is not a simple message, it’s simplistic. If only things in IT were done by flipping a switch (certainly not innovation, when one carries the burden of legacy technologies).
Well said, Vijay, as usual.
One thing that I’m eager to see is whether SAP will start certifying SAP applications on top of Oracle 12c. For example, BW on Oracle 12c might be a serious competitor to one of HANA’s main cash cows (BW on HANA). And I don’t say that in terms of how fast it is – it’s just that it might deliver a considerable gain in lower costs (especially if Oracle 12c can run on low- or mid-range x86 HW – appliance costs are still one of the main cost lines in HANA business cases). It might pressure SAP for lower HANA prices, but I believe, and hope, it will particularly push for faster HW agnosticism and virtualization adoption.
To the best of my knowledge, SAP continues to be open to certifying other databases to run its apps. 12c is now in pre-beta, according to what was mentioned on Twitter. When it matures, I assume Oracle will put it through certification if they feel it is necessary.
I am not sure what HW is required and how price-competitive it is. They usually price by cores – so to get the type of results Larry mentioned, I doubt it will be cheap on HW or SW.
Vijay,
OLTP would be faster in the new design because – as LJE mentioned – in current systems there are 10-20 analytic indexes for every table, in addition to 2-3 indexes for the OLTP side. With the new design, the 10-20 analytic indexes wouldn’t be needed; as a result, an insert operation would take 10-20 fewer index modifications, and OLTP transactions would therefore be faster. If tables in a system have 10-20 analytic indexes on average, then yes, I agree OLTP transactions would be faster. However, in the last 25+ years, I’ve never worked with any system with more than 3-5 indexes per table (on average, counting both OLTP and analytic indexes).
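To make that per-insert cost concrete, here is a minimal generic SQL sketch – table and index names are made up for illustration. Every secondary index on a table is one more structure the database must update on each INSERT, so a table carrying 10-20 analytic indexes pays 10-20 extra write operations per inserted row:

```sql
-- Hypothetical OLTP table (all names are illustrative).
CREATE TABLE orders (
  order_id  INTEGER PRIMARY KEY,
  cust_id   INTEGER,
  region    VARCHAR(20),
  status    VARCHAR(10),
  amount    DECIMAL(15,2)
);

-- Each analytic index is one more structure to maintain on every write.
CREATE INDEX idx_orders_cust   ON orders (cust_id);
CREATE INDEX idx_orders_region ON orders (region);
CREATE INDEX idx_orders_status ON orders (status);
-- ...and so on, if a table really carried 10-20 of these.

-- This single-row insert must also update every index above; dropping
-- the analytic indexes removes that per-row write amplification.
INSERT INTO orders VALUES (1001, 42, 'EMEA', 'OPEN', 99.50);
```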
At any rate, Oracle has gained more time to deliver a working in-memory solution by delivering a simple message: Flip a switch.
Best regards,
Bala Prabahar
You have worked on a lot of SAP systems – how many have you seen with 20 analytic indexes? I can’t remember any. There are some indexes that are needed for batch jobs which you cannot really take away. So in general, I don’t think LJE thought through his answer very well.
Hopefully someone else at ORCL will provide details later.
Yes, we have ~0 analytic indexes (indexes created for batch jobs cannot be taken away, as you said) in an OLTP system. This is true because customers have learned to use BW/DW systems for their analytical requirements.
Best regards,
Bala
There’s a lot to be said for the “switch mode” in Oracle’s offering, and for not requiring rework or re-engineering of application software to achieve 100X – though we’ll have to wait for the license costs to do a value-for-cost assessment (and to see how real these results are). But I do think there’s a lot of sensibility behind their approach. I think that at present, given the cost of HANA, only edgy applications that can really benefit from >100X would consider the switch, no?
It is a viable approach only if the cost justifies the value. They generally price by cores – and I haven’t seen any sizing guidelines yet to see what kind of RAM is required either.
If apps are not re-engineered and the logic stays in the app server, I seriously doubt that the overall UX will be much better. Those kinds of results are probably cheaper to get by throwing more RAM and Fusion-io cards at existing solutions.
Hopefully we will see some details soon – and even better, some customer proof points.