Are you sure you can make use of REAL TIME data?


Several BI vendors insist that real-time data will magically change the enterprise world. They may be right – I have often heard my customers echo that line too. But here is the question that doesn’t get asked often enough – what will you do with the information if it is delivered to you in real time?

When the world moved from handwritten letters to email, there was a definite improvement in the speed of business. But how long did it take before people let emails sit unread in their inboxes without acting on them? Same with Twitter – how many people actually make use of the real-time information that gets bombarded at them via Twitter? Most choose to bookmark it somehow for later consumption. You can only deal with a certain amount of information – the rest is not useful, and often counterproductive.

The problem is much bigger at enterprise scale. Let’s say you find out in real time that there is a surge of demand on the west coast, and you can make a killing by moving inventory from your Midwest stores to the west coast ones. Excellent idea – except: do you have enough people to pick, pack, and ship in real time? Do you have 200 trucks that can show up in the next hour to transfer the goods? Can your store systems handle such a load? Do you have people watching the real-time data stream in – and empowered to act on it? If not, can you realistically automate responses to real-time alerts?

Getting real-time information is just one part of the solution – a small one at that. Your ability to react to it is what matters more. The investment needed to react to real-time information is pretty significant for most companies, which raises the question – how many companies will make significant investments just to get real-time information?

The alternative paradigm is “right time”. Well – it sounds like a better idea to me, except there is no one “right time” for everything. For someone who does day trading, getting up-to-the-second data is important – but for a rank amateur investor like me, it does not matter much at all. Stock market decisions can be abstracted to “buy and sell” at my level, and I can automate that with alerts and automatic orders. I lose some flexibility, but I am comfortable with that level of risk. Evidently the definition of right time is different for me and for a day trader. In the enterprise scene, it depends on multiple factors at various points in time. And as the business evolves, you could get caught up chasing your tail on what the right time is.
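To make that concrete, here is a minimal sketch of what “right time for me” could look like in code. The thresholds and the place_order function are made up for illustration – a real brokerage API would look different – but it shows how a decision can be abstracted to plain buy/sell and run at whatever interval I am comfortable with.

```python
# A toy "right time" automation: check prices at my own interval and fire a
# pre-decided order when a threshold is crossed. The thresholds and
# place_order() are hypothetical stand-ins, not a real brokerage API.

BUY_BELOW = 95.00    # my pre-decided entry price
SELL_ABOVE = 120.00  # my pre-decided exit price

def place_order(side, symbol, price):
    # Stand-in for a real order call; here we just log the decision.
    print(f"{side} {symbol} at {price:.2f}")

def check_and_act(symbol, latest_price):
    """Abstracts my decision to plain buy/sell - no real-time feed required."""
    if latest_price <= BUY_BELOW:
        place_order("BUY", symbol, latest_price)
    elif latest_price >= SELL_ABOVE:
        place_order("SELL", symbol, latest_price)

# Running this once a day (or once an hour) is *my* right time.
check_and_act("XYZ", 93.50)
```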

Real-time information surely has its place – I have no doubt about that – but we need to seriously think about which of a company’s information needs have a “real” need for it. “Actionable BI” is probably a nice way to describe it – but actionable does not always mean you get to act on it. It just means that if you had the ability to act, you could have acted. BI does not dictate whether information is actionable or not – your operational abilities decide that. In our extreme excitement about cool technology, Moore’s law, and all that, I guess we can all be forgiven for not thinking about the surrounding issues 🙂

Random musings on customers and vendors


Many times this week, I had conversations with my buddies on the dynamics between customers and vendors. These are not connected thoughts, but let me type them here for some day in the future, when some of it may make sense – at least to me 🙂


In B2B, customers and vendors are always strategizing to take advantage of each other. CRM and SRM are practically aimed at the same thing – find out as much as you can about your business partner so that you can gain the upper hand in short-term and long-term transactions. B2C is similar – except the SRM is mostly social in nature. I, as a consumer, cannot keep track of all the vendors who will sell to me – so I trust social channels to give me a fair chance.


A common concern from customers is that vendor companies do not share information internally, so the customer has to redundantly provide data in transactions. This led to the big mantra of the 360-degree view of the customer – every company out there wanted it, and very few got there. Customers would really like their vendors to do all they can to prevent redundant data entry, but they will cry bloody murder and sue the vendors if they think the vendor is storing “personal” information about them. The point is – it is not easy to do this, and technology is not always the limiting factor. Example: no salesperson on commission will enter “all” of his account management information into a system where someone else can use it. Even if he did, in some countries he cannot enter certain information that can identify you as a person. In other countries, certain data needs to be stored on a server that resides locally in that country, and so on. So even if the vendor has the best CRM system in the world, there is no way of getting around this without a serious overhaul of culture, human behavior, incentive schemes, and the legal framework. In short, it ain’t happening any time soon in a meaningful way.


Customers will not always upgrade to the latest and greatest version of a product unless they are forced into it somehow. However, they do expect 100% support for what they once bought, and they also want the vendor to continuously innovate. A lot of consumer software can innovate much more easily than enterprise software, strictly because it is not tied to backward compatibility. If enterprise software can get to that paradigm, we have a prayer of seeing real innovation from vendors.


Bottom line – customers want to pay the least possible money and expect the vendor to be there for them forever and ever. Vendors, on the other hand, would like to be constantly paid for what they do, usually via a maintenance fee.


Neither customer nor vendor will let go of their profit motive. But here is the thing – the vendor can quantify the profit better than the customer can. The vendor knows its cost and revenue. The customer knows the cost in most cases – but usually has to guess or approximate the value. Because of this, customers can make a fair guess at a vendor’s margins, whereas the vendor in most cases cannot guess well at the exact value a customer gets, since the customers themselves cannot figure it out easily. In those cases where customers and vendors know each other well, the chance of making a deal is much higher. In my opinion, this is also why some customers and vendors stick by each other even though they both term each other difficult. Known devils are generally better than unknown ones 🙂


And finally – business is always done between people. They represent companies, but they are still people who need to feel comfortable dealing with each other. A lot of the attributes of these people get attributed to the companies they represent. So when you hear things like “that vendor is hard to work with” or “that customer is cheap” or “that company only does on-premises, never on-demand” – most often it refers to one person, or a very small group, within that company. But at an abstract level – especially when used as part of aggregate data – these nuances get lost, and we reach very wrong conclusions.

That is it – end of my rant. It is time to start the weekend.

SAP says HANA is ready for General Availability. What say you?


Today, along with a few other bloggers, I had a chance to listen to Vishal Sikka of SAP. Special thanks to Craig Cmehil for inviting me to this meeting. Vishal told us that HANA is going GA on Monday, June 20th, 2011. Vishal also mentioned some impressive pipeline numbers for HANA. And the coolest news from Vishal was the HANA cloud. In my book, SAP should get a perfect 10 for vision.


At SAPPHIRENOW 2011, we heard several customers on video expressing their satisfaction based on the proof-of-concept-type projects they have done. But beyond that, I have not seen any numbers from SAP on how many clients are actually using HANA in production. Surely SAP has some KPIs to meet before they take HANA to GA. So why is SAP not shouting from the rooftops about how well HANA met these KPIs and how many customers use it in production? Won’t the majority of customers want to know this before they open their wallets to buy a 1.0 product?


The big pitch on HANA is “REAL real time”, and real time is enabled by replication from source systems like ECC. The Sybase Replication Server and BusinessObjects Data Services were the two mechanisms SAP told us about in the past. However, in the past few days, there has been some information coming out on SLT as the replication mechanism from ECC, at least for the near future. I am told it is based on ABAP-based triggers, but I could not find any documentation on SLT; a friend at SAP has promised to dig some out. SAP has clearly explained in the past where the replication server fits and where Data Services fit, but I have not seen any guidance on where and why SLT should be used. And what is the impact? Will it increase the load on the ECC side? Will HANA need some ABAP installed on it? Again, I cannot imagine SAP taking HANA to GA without giving clear guidance on this topic.
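Since I could not find SLT documentation, here is only a rough sketch of the general trigger-based replication pattern as I understand it – a toy Python simulation, not ABAP and not SAP’s actual SLT implementation. The table and function names are invented for illustration: a trigger on the source table writes change records to a logging table, and a replicator job drains that log and applies the deltas to the target.

```python
# Toy illustration of trigger-based replication - NOT SAP's SLT implementation.
# In a real database the "trigger" would be a DB trigger on the source table
# writing to a logging table; a replicator job then applies the deltas.

import time

source_table = {}   # simulated ECC table: key -> row
change_log = []     # simulated logging table, populated by the "trigger"
target_table = {}   # simulated in-memory target, standing in for HANA

def write_to_source(key, row):
    """An application write to the source system; the 'trigger' records it."""
    source_table[key] = row
    change_log.append((time.time(), key, dict(row)))  # what a trigger would capture

def replicate():
    """Replicator job: drain the change log and apply deltas to the target."""
    while change_log:
        _, key, row = change_log.pop(0)
        target_table[key] = row

write_to_source("4711", {"material": "M-01", "qty": 25})
replicate()
print(target_table)  # {'4711': {'material': 'M-01', 'qty': 25}}
```

The open questions above still stand: in a real system the trigger work happens inside the source database, which is exactly why the load on the ECC side is worth asking about.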


If HANA is going to GA, customers will want some guidance on sizing the hardware. I know from before that there is a rule-of-thumb approach to sizing. There is a good comment thread on sizing on my buddy John Appleby’s blog, so let me just give the link here and not rehash it: http://www.bluefinsolutions.com/insights/blog/sap_hana_myth_busting_through_the_market_hype/ . In short, I would expect SAP to give clear instructions on sizing and have a quick-sizer-type tool for this. Without that, I cannot imagine many customers just agreeing to a random rule of thumb.
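To show why a proper tool matters, here is the kind of back-of-the-envelope arithmetic a rule of thumb implies. The compression factor and working-memory multiplier below are purely illustrative assumptions, not SAP’s official numbers – which is exactly the problem with relying on a rule of thumb.

```python
def estimate_hana_ram_gb(source_data_gb,
                         compression_factor=5.0,        # assumed; varies by data
                         working_space_multiplier=2.0): # assumed runtime headroom
    """Back-of-the-envelope RAM estimate - NOT an official SAP sizing formula."""
    compressed_gb = source_data_gb / compression_factor
    return compressed_gb * working_space_multiplier

# Example: 2 TB of uncompressed source data -> ~819 GB of RAM under these assumptions.
print(estimate_hana_ram_gb(2048))
```

Change either assumed factor and the answer swings by hundreds of gigabytes of very expensive memory – hence the need for real sizing guidance.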


Why were releases like 1.0, 1.5, and 2.0 done away with and substituted with SP1, SP2, and so on? What does that mean to customers, if anything? Does SAP get any benefit from doing this? And what is involved in moving from one SP to another?


I am counting on SAP to provide all this information on Monday when HANA is officially in GA, so that customers and partners can make well-informed decisions.


Here is a parting thought on Sales and Operations Planning on the HANA cloud. S&OP is a complex process on many levels. The raw data for S&OP – demand and supply information – is usually stored in multiple on-premises systems, it runs to very high volumes because of the granularity needed, and it is usually very sensitive. Given all that, I am not convinced that S&OP is a good candidate for the HANA cloud. I think HANA on-premises will be excellent for S&OP – I just cannot visualize many customers doing S&OP in the cloud. Vishal disagreed with me when I expressed my doubts today about the viability of this solution, and I am sure he would not have put S&OP on the cloud without doing his homework. So I am waiting to see several customers get onto the HANA cloud for S&OP and to be proven wrong on this one.


If SAP is offering a HANA cloud, will there be anything in it for ecosystem partners? Will developers get access to it to create applications that work on the HANA cloud? And will SAP build its own servers for HANA or depend on existing hardware partners? Dennis Howlett has covered this better than I ever could. Here is his analysis: http://www.zdnet.com/blog/howlett/the-untold-story-of-sap-and-data-centers/3169 . I am not sure how big SAP will go with its data centers. On one hand, with ByDesign, OD, and the HANA cloud, they do need big data centers to cater to all of that. On the other hand, most of the money comes from on-premises systems, so why spend big on the cloud when it is yet to bring in the big bucks? It is an interesting problem for SAP, and I am very keen to see how they go about solving it.


So that is what SAP has to say, and what I have to say. Question is – what say you?