IBM gives predictive analytics a friendly face

One of the big challenges facing the Business Analytics industry is the historical complexity of business intelligence and analytics tools. For years companies have had to rely on their BI experts to do just about anything useful; it isn’t that companies do not see value in putting analytics in the hands of business people, it is that the products have demanded a diploma in statistics and an intimate familiarity with the technology behind the tools.

However, the situation is improving. Products like Spotfire and Tableau have worked hard to deliver data visualization solutions that present data to users in an easy-to-understand business context, and suppliers of broader analytics suites such as Oracle and IBM have been trying to improve other aspects of analytics usability. To be honest, IBM has been somewhat lagging in this area, but over the last year or so it has given clear indications that it has woken up to the advantages of providing tools such as predictive analytics and decision management in a form that the wider business user community can access.

The recent IBM announcement of SPSS Analytic Catalyst is another proof point along the journey to broader access, usage and value. This exciting new development may have been named by a tongue-twisting demon, but the potential it offers companies to create more value from corporate information is huge. In essence, the tool looks at this information and automatically identifies predictive indicators within the data, expressing its discoveries in easy-to-use interactive visuals together with plain-language summaries of what it has found. So, for example, one SPSS Analytic Catalyst (really rolls off the tongue, doesn’t it?) page displays the ‘Top Insights’ it has found, such as the key drivers or influencers of a particular outcome.

The combination of simple visuals with associated plain language conceals all the statistical complexity underneath, making the information easily consumable. Business users can quickly identify the drivers most relevant to corporate key performance measures, for example, and then drill down to gain deeper insight. Removing the need for highly trained BI experts means that the wider business community can create substantially more value for the company.
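
To make the idea concrete, here is a minimal sketch of the kind of key-driver analysis Catalyst automates – ranking which fields most influence an outcome and reporting the top ones in plain language. This is not IBM’s implementation; the churn data, column names and model choice are all invented for illustration.

```python
# A minimal sketch of automated key-driver analysis: rank which input
# fields most influence an outcome, then report them in plain language.
# NOT IBM's implementation; data, columns and model are invented.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

df = pd.DataFrame({
    "tenure_months": [2, 36, 5, 48, 1, 60, 3, 24],
    "monthly_spend": [80, 40, 95, 35, 99, 30, 85, 55],
    "support_calls": [4, 0, 5, 1, 6, 0, 3, 2],
    "churned":       [1, 0, 1, 0, 1, 0, 1, 0],
})

X, y = df.drop(columns="churned"), df["churned"]
model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank the candidate drivers by how much each influences the outcome
drivers = sorted(zip(X.columns, model.feature_importances_),
                 key=lambda kv: kv[1], reverse=True)
for name, score in drivers:
    print(f"'{name}' is a driver of churn (importance {score:.2f})")
```

The point of Catalyst is that this ranking happens automatically, and the user only ever sees the visuals and the plain-language summary, never anything like the code above.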

Morgan Stanley goes SOA

Morgan Stanley has recently been talking about its major SOA investment, which spans the last 18 months.

Implemented in its Global Wealth Management division, the SOA programme was justified as giving advisors and clients an integrated view of more accurate information, more quickly. As with most SOA projects of this scale, SOA isn’t the whole story. Morgan Stanley first had to upgrade its network infrastructure to give the branch network the bandwidth required, then implement SOA on top, and finally use ETL to convert the data flowing through its service bus into the dashboards the advisors look at (and a customer portal).
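
That final ETL step is conceptually simple, even if the Morgan Stanley implementation (built on Informatica) is anything but. A toy Python sketch, with an invented message format and invented table names, might look like this:

```python
# Toy sketch of the final ETL step: pull messages off the service bus,
# reshape them, and load the table the advisor dashboard reads. The
# real project used Informatica; the message format here is invented.
import json
import sqlite3

# Stand-in for position updates arriving over the service bus
bus_messages = [
    '{"client_id": "C001", "asset_class": "equity", "value": 125000.0}',
    '{"client_id": "C001", "asset_class": "bonds",  "value":  40000.0}',
    '{"client_id": "C002", "asset_class": "equity", "value":  98000.0}',
]

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE positions (client_id TEXT, asset_class TEXT, value REAL)")

# Extract each message, transform it into a row, and load it
for raw in bus_messages:
    msg = json.loads(raw)
    db.execute("INSERT INTO positions VALUES (?, ?, ?)",
               (msg["client_id"], msg["asset_class"], msg["value"]))

# The dashboard's integrated view: total holdings per client
for client_id, total in db.execute(
        "SELECT client_id, SUM(value) FROM positions GROUP BY client_id"):
    print(client_id, total)
```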

Interesting aspects of the coverage include:

  • The programme was a strategic initiative to make this division competitive now and into the future. Technology is simply the enabler of that goal.
  • A best of breed approach was taken to the technology: IBM for the SOA layer, Informatica for the ETL.
  • Web Services were used extensively. (In the drive to distinguish between SOA and Web Services, it is easy to dismiss Web Services; in fact they can be, and often are, part of the SOA technology stack – see the sketch after this list.)
  • The project included major rewrites of existing applications – to consolidate where possible and also to update so that they could be plugged into the SOA framework.
  • The common problem of changing the development culture away from write-it-all-ourselves was stressed.
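
As promised above, here is a minimal sketch of a service endpoint of the kind that sits in an SOA stack: one well-defined operation that other applications can call over the network. Morgan Stanley’s Web Services would have been SOAP-based and built on the IBM stack; this Python stand-in uses JSON purely for brevity, and all names and data are invented.

```python
# Minimal sketch of an SOA-style service endpoint: a single, well-defined
# operation exposed to any consumer on the network. The real services
# would be SOAP-based; JSON is used here only to keep the example short.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

CLIENT_SUMMARY = {"client_id": "C001", "total_value": 165000.0}  # invented

class ClientSummaryService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/clientSummary":
            body = json.dumps(CLIENT_SUMMARY).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Any consumer (dashboard, portal, another service) can now call
    # GET http://localhost:8080/clientSummary
    HTTPServer(("localhost", 8080), ClientSummaryService).serve_forever()
```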

Ronan

IBM’s Information on Demand steamroller gains speed with the Princeton Softech acquisition

IBM announced the completion of its acquisition of Princeton Softech – a company which focused on data archiving, classification and discovery software.

All of which sounds quite specialist until it is put into the context of IBM’s Information on Demand (IoD) strategy. Back in March, Ambuj Goyal, who heads up IBM’s Information Management division (with responsibility for the Information on Demand strategy), explained:

“… an inflection point occurred in 1996 when there were many techniques to create Web sites or do Web-based business… We are at a similar inflection point in 2006. We have myriads of techniques – metadata management, ETL (extraction, transformation, and loading) tools, data creation tools, Federation tools, cleansing tools, profiling tools. People use these tools to solve the information challenge.”

To translate: IBM sees a huge opportunity and is putting serious money into it – this acquisition is the latest of 21 made as part of the strategy (to see the list go here). The opportunity is to build an information management platform which allows organisations to create, maintain and (most importantly) extract value from the myriad data sources that flow around the enterprise. Data cleansing, data distribution, data integration and master data management (among other areas) are each expensive activities, but they often have clear budget and value associated with them – and this is even before getting to semi-structured information, which is also within the Information on Demand remit. While there are best-of-breed solutions to different parts of the puzzle, there is no single integrated solution – which is what IBM hopes to offer. Interestingly, IBM has yet to move on Business Intelligence vendors – it appears to have correctly realised that the major task is not creating dashboards; it is ensuring that what goes into the dashboards is correct and timely.
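
To give a flavour of why these activities are so expensive, consider just one sliver of the data cleansing problem: deciding whether two records from different systems describe the same customer. The crude Python sketch below illustrates the idea; real MDM tooling uses far richer matching, and the records, rule and threshold here are invented.

```python
# One sliver of the data cleansing problem: do two records from
# different systems describe the same customer? Real MDM tooling is far
# richer; the records, matching rule and threshold are invented.
from difflib import SequenceMatcher

crm_record     = {"name": "J. Smith",   "postcode": "SW1A 1AA"}
billing_record = {"name": "John Smith", "postcode": "sw1a1aa"}

def normalise(postcode: str) -> str:
    return postcode.replace(" ", "").upper()

name_score = SequenceMatcher(None, crm_record["name"],
                             billing_record["name"]).ratio()
same_postcode = (normalise(crm_record["postcode"])
                 == normalise(billing_record["postcode"]))

# Crude match rule: similar-enough names plus an identical postcode
if name_score > 0.6 and same_postcode:
    print(f"Probable match (name similarity {name_score:.2f})")
```

Multiply that decision across millions of records, dozens of systems and hundreds of attributes, and it becomes clear why an integrated platform is an attractive proposition.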

Anyone familiar with the area of enterprise data management will realise that the challenges inherent in building and deploying such a platform are formidable. At a recent briefing IBM gave Lustratus, the whole area of data governance in particular was highlighted: how do you organise structures and responsibilities to ensure that coherent and consistent data definitions can be used and reused throughout the enterprise? (This should sound very familiar to anybody involved in SOA – just switch the word ‘service’ for ‘data’!) To figure out how to do this right, IBM set up the Data Governance Council back in 2005 with many leading financial services and telecoms companies (among others).
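
In code terms, the governance goal is analogous to the sketch below: one agreed, reusable definition of a business entity that every application builds against, rather than each team inventing its own. This is only an analogy – the entity and its fields are invented, and real data governance is as much organisational as technical.

```python
# The governance idea in code: one agreed, enterprise-wide definition of
# a business entity, reused everywhere. Only an analogy - the entity and
# fields are invented, and real governance is organisational as well.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class Customer:
    """The single, agreed definition of 'customer' for the enterprise."""
    customer_id: str    # agreed format, e.g. "C" followed by six digits
    legal_name: str
    country_code: str   # ISO 3166-1 alpha-2
    onboarded: date

# Every system exchanges customers through the same contract, much as
# SOA reuses a single service rather than many local variants.
print(Customer("C000123", "Acme Holdings Ltd", "GB", date(2005, 6, 1)))
```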

Yet again, getting into detail is beyond the scope of a normal blog – but I would recommend anybody with a passing interest in BI (or indeed enterprise architectures) to take a look at IBM’s website on Information on Demand. Of course, the strategy is not without obvious challenges. The technology comes from many different sources (even if it now all belongs to IBM), and there is a significant amount of complexity associated with solving such a complex problem. Also, where there isn’t a significant regulatory stick (Basel II, for instance), I imagine it could be very hard to sell at a strategic level: while there are clearly valuable uses of Information on Demand, there seems to be no common theme around which business momentum can be built. And finally, its association with the term business intelligence may well count against it – already some analysts are wondering how IBM’s query tools will stack up against Business Objects et al (not a relevant question, as BO and others will sit on top of IoD), and in many cases the proposition is operational efficiency or regulatory compliance, not (to my mind at least) classic BI.

Ronan