SOA success, and what causes it

I was recently pointed to an article in Mainframe Executive magazine written by David Linthicum on the subject of “Mainframe SOA: When SOA Works/When SOA fails”.

I think the friend who suggested I read it was making mischief, knowing my views on the subject of SOA and guessing (correctly) that this article would wind me up.

In summary, the article says that SOA is a large and complex change to your core architecture, working practices and procedures, and that success or failure is dictated by factors such as executive buy-in, resourcing, funding and skills, not by technology selection.

The truth about success with SOA is that it has little to do with the technology you want to drag into the enterprise to make SOA work, and more to do with the commitment to the architectural changes that need to occur.

I have two problems with the opinions stated in this article. The first is to do with changing attitudes to SOA, and the second with the technology comments.

Let me first state that I am well aware that if a company wants to adopt an enterprise-wide SOA strategy designed to take maximum long-term benefit from this new way of leveraging IT investments, then this requires all of the areas brought up in the article to be addressed – skills, management buy-in, political will, funding and a strategic vision coupled with a tactical roadmap. I have no beef with any of this.

But I would contend that the world has changed from two years ago. The financial constraints all companies are experiencing have more or less forced the long-term strategic play onto the back burner for many. Some analysts actually like to claim that SOA is dead, a statement designed to be controversial enough to gain attention but to some extent grounded in the fact that a lot of companies are pulling back from the popular SOA-based business transformation strategies of the past. In fact, SOA is absolutely not dead, but it has changed. Companies are using SOA principles to implement more tactical projects designed to deliver immediate benefits, with the vague thought of one day pulling these projects together under a wider strategic, enterprise-wide SOA banner.

So, as an example, today a company might look at a particular business service such as ‘Create Customer’ or ‘Generate Invoice’, and decide to replace the 27 versions of the service that exist in its silos today with a single shared service. The company might decide to use SOA principles and tools to achieve this, but the planning horizon is definitely short term – deliver a new level of functionality that will benefit all users, and help to reduce the ongoing cost of ownership. While it would have been valid a few years ago to counsel this company to deliver this as part of an overarching shift to an SOA-oriented style of operations, today most companies will say that although this sounds sensible, current circumstances dictate that focus must remain on the near term.
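To make the shared-service idea concrete, here is a minimal sketch (my own illustration, not any particular vendor’s API) of what consolidating those 27 silo copies might look like: one shared implementation of ‘Create Customer’, with thin adapters preserving each silo’s old calling convention while the logic lives in one place.

```python
import uuid

def create_customer(name: str, email: str) -> dict:
    """The single shared 'Create Customer' business service."""
    customer = {"id": str(uuid.uuid4()), "name": name, "email": email}
    # ... persist to the master customer store here ...
    return customer

# Thin adapters let each legacy silo keep its old call signature
# while delegating to the one shared implementation underneath.
def crm_create_customer(full_name, email_addr):
    return create_customer(full_name, email_addr)

def billing_new_account(name, email):
    return create_customer(name, email)["id"]
```

Every caller now exercises the same code path, which is exactly what makes the ongoing cost of ownership drop.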

The other issue I have with this article is the suggestion that SOA success has little to do with the technology choice. Given that the topic here was not just SOA but mainframe SOA, I take particular exception to this. There is a wide range of SOA tools available, but in the mainframe arena the quality and coverage of the tools vary widely. For example, although many SOA tools claim mainframe support, this may in actuality simply be an MQ adapter ‘for getting at the mainframe’. Anyone taking this route is more than likely to fail with SOA, regardless of how well the company has taken on the non-technical issues of SOA. Even among those SOA tools with specific mainframe support, some offer environments alien to mainframe developers, thereby causing considerable problems in terms of skills utilization. It is critical that whatever technology IS chosen, it can be used by CICS- or IMS-knowledgeable folk as well as by distributed specialists. Then there is the question of how intuitive the tools are. Retraining costs can destroy an SOA project before it even gets going.

For anyone interested, there is a free Lustratus report on selecting mainframe SOA tools available from the Lustratus store. However, I can assure companies that, particularly for mainframe SOA, technology selection absolutely IS a key factor for success, and that while all the other transformational aspects of SOA are indeed key to longer-term, enterprise-wide SOA, there are still benefits to be gained with the more short-term view that is appropriate in today’s economic climate.

Steve

The REAL concern over Cloud data security

Recently I have been involved in a discussion in the LinkedIn Integration Consortium group on managing data in a Cloud Computing environment, and the subject has turned to security.

I had maintained that data security concerns may sometimes result in companies preferring to look at some sort of internal Cloud model rather than risk putting their data in the Cloud:

the concept that I find is intriguing to larger companies is the idea of running an INTERNAL cloud – this removes a lot of the concerns over data security, supplier longevity etc.

This generated a reaction from one of the other discussion participants, Tom Gibbs of DiCOM Grid.

I hate to poke at other commentators, but security is an overarching issue for IT and telecom as a whole. No more and probably less of an issue with cloud or SaaS.

It’s almost amusing to watch legacy IT managers whine that b/c it isn’t local it isn’t secure. I’m sorry but this is totally naive.

This brings up an important point. What Tom is saying is that the Cloud provider will almost certainly offer top-notch security tools to protect data from unauthorized access or exposure, and therefore what’s the problem?

The answer is that the executive concern with putting data outside the corporate environment is likely to be more of an emotional than a logical argument. With so many topical examples of confidential information being exposed, and with executives knowing that regulations, legislation and corporate policies often make them PERSONALLY responsible for protecting information such as the personal details of clients, customers and citizens, the whole thing is just too scary.

IT folk may see this as naive, just as Tom says. After all, modern security tools are extremely powerful and rigorous. But of course this depends on the tools being properly applied. In the UK, for example, there have been a number of high-profile incidents of CDs or memory sticks containing confidential citizen information being left on trains and exposed in the media. The argument for allowing data to be taken off-site was based on the fact that policy required all such data to be encrypted, making it useless if it fell into anyone else’s hands. The encryption algorithms were top-notch, and provided almost total protection. BUT the users who downloaded the information in each of these cases did not bother to encrypt it. In other words, if the procedures had been followed there would have been no exposure; because people did not follow them, the data was exposed.
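The failure, in other words, was procedural rather than cryptographic. As a minimal sketch of the step that was skipped (my own illustration, assuming Python and the cryptography package; any strong symmetric cipher would make the same point):

```python
from cryptography.fernet import Fernet  # pip install cryptography

def export_for_removable_media(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt the data BEFORE it leaves the building - the step that was skipped."""
    return Fernet(key).encrypt(plaintext)

key = Fernet.generate_key()  # in practice, issued and held by the organisation
token = export_for_removable_media(b"confidential citizen records", key)

# A memory stick holding only `token` is useless to whoever finds it;
# only the key holder can recover the original data.
assert Fernet(key).decrypt(token) == b"confidential citizen records"
```

One missing function call, and the top-notch algorithm protects nothing.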

These situations have not only proved extremely embarrassing to the data owners involved, but have resulted in heads rolling in a very public fashion. So the concerns of the executive moaning about risk are visceral rather than rational – ‘Moving my data outside of the corporate boundary introduces personal risk to me, and no matter how much the experts try to reassure me I don’t want to take that risk’. Of course less sensitive information will not be so much of a concern, and therefore these worries will not affect every Cloud project. But for some executives the ‘security’ concern with moving data into the Cloud, while not logically and analytically based, is undeniably real.

Steve

Pragmatism is the theme for 2009

I have just returned from a couple of weeks around and about, culminating in the Integration Consortium’s Global Integration Summit (GIS), where I presented the Lustratus ‘BPM Sweet Spots’ paper.

One message seemed to come out loud and clear from the conference – pragmatism is the watchword for 2009.

There were two other analyst presentations apart from the Lustratus one, and I was surprised to see that both presenters pitched a message along the lines of ‘you will never succeed with SOA/Integration/BPM unless you get all the strategic planning and modelling for your enterprise done first’, combined with the suggestion that the presenter was just the resource to ask for help! This contrasted sharply with my own presentation, which advocated choosing tactical targets for BPM rather than going for a strategic, enterprise-wide, fully modelled approach.

I was wondering if I had read the mood wrong in the marketplace, but then the eight or so user case studies all proved to be tactical strikes for specific business benefits rather than the more extensive strategic approach more common a year or so ago. It was nice to be vindicated – it looks like 2009 really IS the year of pragmatism and short-term practical considerations.

Steve

The forgotten SOA benefit – Visibility

There has been a lot of chatter recently about measuring SOA ROI – take a look at Loraine Lawson’s recent blog, for instance, or Gartner’s results of a UK-based survey of SOA adopters. However, one of the benefits that I think a lot of people miss, or at least do not attribute enough importance to, is visibility.

Basically, the visibility story goes like this: because SOA breaks operational components into discrete business services, it becomes easy to monitor entry to and exit from these services, and hence the flow of business operations. This gives a clear picture in business terms of execution and performance – not just what is happening, or how many times, but HOW business is being carried out.
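As a rough illustration of why discrete services make this cheap (a Python sketch of my own, not any particular monitoring product), a single entry/exit wrapper applied at the service boundary is enough to start recording what runs, how often, and for how long:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def visible(service):
    """Log entry and exit for a business service - the natural
    monitoring point that a monolithic silo never exposes."""
    @functools.wraps(service)
    def wrapper(*args, **kwargs):
        logging.info("ENTER %s", service.__name__)
        start = time.perf_counter()
        try:
            return service(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("EXIT  %s (%.1f ms)", service.__name__, elapsed_ms)
    return wrapper

@visible
def generate_invoice(order_id: str) -> str:
    return f"invoice-for-{order_id}"

generate_invoice("ORD-123")
```

Aggregate those log records and you have the flow of business operations, in business terms, essentially for free.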

Gartner did touch upon visibility,

Improved Efficiency in Business Process Execution – Isolating the business logic from the functional application work enables a clearer view of what a process is, and the rules to which it adheres. This can be measured by lower process administrative costs, higher visibility on existing/running business processes, and a reduced number of manual, paper-based steps; better service-level effectiveness; quicker implementation of process iterations or of variants of the same process for different contexts.

However, the Gartner focus was only on visibility as it relates to execution efficiency. In fact, SOA-based visibility offers another benefit which, particularly in today’s tough times, can be a real big hitter for executive management. It enables management to see how processes are being executed – in other words, it provides the ideal tool to monitor compliance against a growing raft of regulatory requirements across just about every industry. In order to demonstrate that your systems comply, it is necessary to be able to see what they are doing and how they are doing it. This is what SOA delivers.

So how does improved compliance management fit into the ROI picture? True, it is very hard to attach a dollar amount to compliance – but it certainly matters. With the amount of public and political scrutiny of corporations today, it is absolutely imperative that executives can show they are faithfully adhering to regulations and guidelines. Failure to do so will not only risk severe penalties, but will probably also lose them their jobs! Now THAT’s a compelling business case….

Steve

Don’t get handcuffed by EA

I have to confess up front that I have never been desperately comfortable with Enterprise Architecture (EA) frameworks and disciplines, and therefore the opinions I am about to express should be taken in that light.

However, I do worry that EA may be handcuffing some companies to the point where potential benefits are strangled at birth.

I was recently reading an interesting article by Nagesh Anipindi, entitled “Enterprise Architecture: Hope or Hype?”, which discusses the lengthy presence of EA as an innovation and considers the reasons for its failure to move into the mainstream of acceptance. As Nagesh writes,

For 2009, Greta has predicted that more than half the existing EA programs are at risk and will be discontinued in the near future. Remaining ones that survive this economy, per Greta, will struggle with framework and information management problems.

Nagesh goes on to postulate that the current pressures on IT budgets will result in a lot of EA efforts being abandoned, not because they are unworthy but because they fall below other more critical areas such as Operations and development of new business solutions. He then goes on to say that once the industry realizes the massive benefits that EA can deliver, he believes this situation will turn around and EA will become an essential part of every corporate IT organization.

I think Nagesh may have missed the point slightly, although I agree with a lot of what he says. Look at one of the many definitions of Enterprise Architecture, as Nagesh records it –

Gartner defines EA as: “Enterprise Architecture is the process of translating business vision and strategy into effective enterprise change by creating, communicating and improving the key requirements, principles and models that describe the enterprise’s future state and enable its evolution.”

This definition epitomizes the problem as far as I am concerned. The basic purpose of EA is there, but clouded with the sort of mumbo-jumbo that frightens off potential decision-makers. What is EA REALLY about? It is about tying the IT architecture and implementation to the business vision and needs, both now and in the future. It is about making sure IT really serves the business. Does this require communication? Of course. Does it require principles and practices? Yes. But the complex phrasing of this definition is typical of ‘EA Experts’. These people talk of EA Frameworks, of EA Models, and of rigid procedures. From an intellectual point of view, this is all absolutely fine. If you were writing a thesis on how to architect an enterprise IT system to match business needs, and to be able to continue to do so, it might be perfectly acceptable to introduce loads of models, a single tool-driven interface, a definition language and frameworks.

However, this is the real world. The danger I see is that this over-enthusiastic approach can tie the hands of professionals and organizations so tightly that they cannot achieve anything. There is also the danger that, over time, this approach introduces a real skills problem, with the need to train new people on all these tools and methods which do not actually contribute to delivering new business value. In effect, the mechanisms meant to deliver an effective enterprise architecture develop a life of their own and start to consume development resources for their own purposes rather than for business needs.

A small example may illustrate my point. In the old days, when I worked with IBM, a purist movement pointed out that because we wrote our design documentation in English, we were impacting quality and accuracy by introducing the potential for misunderstandings as to what a passage of English might actually mean. As a result, IBM worked with Oxford University to develop a mathematically-based specification language to eliminate the problem. This made complete sense at an intellectual level. However, although this language was adopted for a time, there were always new people coming onto the team who didn’t understand it, and training began to be a real overhead. Eventually, the language was dropped. Although it made intellectual sense to use it, it did not work at a practical level.

I am all for Enterprise Architecture – at least in spirit. I believe the role of an Enterprise Architect is exactly to ensure that the technology correctly delivers on the business needs, and is architected in such a way as to enable new business changes to be implemented quickly and effectively. But I don’t think this requires a complex framework of models and requirements tools and so on. In fact, I think a strong EA mandates the minimum, but offers a loose framework that allows departmental innovation. In truth, there is nothing new about EA – it is all about doing things sensibly and remembering that IT is there purely to serve the business and not itself. All the rest of the formal EA clutter is a set of handcuffs that can hold organizations back.

Steve

What is behind SAP’s ‘go-slow’ on SaaS?

There have been many reports recently on the problems with SAP’s Software as a Service (SaaS) offering, Business ByDesign – see for example the article by Timothy Morgan here.

To summarize, SAP is backing off its initial, bullish claims on SAP Business ByDesign, saying that it is now going to proceed at a much slower pace than originally planned. Of course, the SAP trade show Sapphire, which is being held this week, might provide more info, but I somehow doubt it.

So, what is going on? Why the sudden backtrack? After great trumpeting 18 months ago from SAP about Business ByDesign being the magic bullet for SMEs, offering the ability to run remote copies of SAP applications on a per-user basis without having to cough up for a full license, why the hesitation?

I suspect the truth of the matter may be partly political, partly execution-oriented and partly financial. There are those who would argue that SAP does not really WANT a SaaS market for its packages to come rushing into existence. After all, from a supplier point of view, wouldn’t you prefer to sell more expensive licenses that lock the user in rather than a cheap usage-based service that the user can walk away from at any time? So the conspiracy theorists would say SAP deliberately tried to freeze the market for SAP SaaS offerings to discourage competition and slow down the emergence of this market.

On the execution side, perhaps SAP did not realize that selling SaaS solutions is a world away from selling large application suites directly to big companies. SaaS solutions are low-cost/high-volume as opposed to high-cost/low-volume, and hence need much more efficient and layered distribution channels – and SMEs are used to picking up the phone to ask someone whenever they have to change something, which is not a great strength of SAP’s support structure.

Then finally, the financial side. Many SaaS suppliers have discovered an uncomfortable truth – while in a license model the user pays a substantial sum of money up front for the purchase, followed by maintenance, in a SaaS model the risk position is reversed: the supplier has to put the resources in place up front to support the POTENTIAL usage of the infrastructure by all those signed-up users, and then receives revenues in a slow trickle over time. Is it possible that SAP just didn’t like the financial implications of having to invest continually while looking at payback times of years? Did they therefore decide to deliberately throttle the number of new customers, giving themselves a chance to make some money before making further investments?
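To see why the trickle matters, consider a back-of-the-envelope comparison of the supplier’s cumulative cash position under the two models (entirely hypothetical numbers of my own, not SAP’s pricing):

```python
# Hypothetical figures for one customer, chosen only to show the shape
# of the two cash-flow curves.
UPFRONT_LICENSE = 500_000                      # one-off license fee
ANNUAL_MAINTENANCE = 0.20 * UPFRONT_LICENSE    # typical ~20% of license
MONTHLY_SAAS_FEE = 8_000                       # per-customer subscription
SUPPLIER_SETUP_COST = 200_000                  # up-front capacity investment

def license_cumulative(years):
    """Supplier's cumulative receipts under a perpetual-license model."""
    return [UPFRONT_LICENSE + ANNUAL_MAINTENANCE * y for y in range(years + 1)]

def saas_cumulative(years):
    """Under SaaS the supplier pays out first and is repaid in a trickle."""
    return [-SUPPLIER_SETUP_COST + MONTHLY_SAAS_FEE * 12 * y
            for y in range(years + 1)]

for year, (lic, saas) in enumerate(zip(license_cumulative(5), saas_cumulative(5))):
    print(f"year {year}: license {lic:>9,.0f}   saas {saas:>9,.0f}")
```

Even on these generous assumptions the SaaS supplier is under water for roughly the first two years of each customer relationship, while the license vendor is cash-positive on day one. It is not hard to see why SAP might want to throttle the intake.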

Maybe SAP will tell all at Sapphire … or maybe we will just have to keep guessing.

Steve

Does Microsoft ESB Guidance have a future?

As one might have expected, Microsoft tried to ignore the Enterprise Service Bus (ESB) movement for a long time, but eventually it had to do something to answer the demands of a customer base looking for SOA support.

Its response was Microsoft ESB Guidance, a package of

architectural guidance, patterns, practices, and a set of BizTalk Server and .NET components to simplify the development of an Enterprise Service Bus (ESB) on the Microsoft platform

Let’s be honest. This is a typical Microsoft ‘fudge’. Microsoft ESB Guidance is not a Supported Product, but is instead a set of guidelines and one or two components. It is a Microsoft Patterns and Practices offering – in other words, you are on your own. This may be fine if you are a Microsoft development shop, but it is far more worrying if you are a real business user with an extensive Microsoft presence. It has a lot of the disadvantages of Open Source, but you still have to pay for BizTalk etc.

So what does the future hold? Will trying to bring the Microsoft server world into the SOA domain always be a matter of risk and going it alone? Will Microsoft productize Microsoft ESB Guidance? Are there any alternatives other than just consigning the Microsoft platform to run in isolation on the fringes of the enterprise?

Fortunately, the Microsoft model may actually be working here. I do not believe Microsoft will ever productize ESB Guidance – after all, they have had two years and are still maintaining that there are no plans to do this. However, what this position does do is encourage opportunists to jump in and develop products based around the Microsoft technology and guidance materials. An example is Neuron-ESB, from Microsoft specialists Neudesic.

So, while Lustratus strongly cautions users about the effort, cost and risk of using Microsoft’s own ESB Guidance package, the idea of utilizing a Microsoft-based supported ESB product from a specialist vendor is much more attractive. Of course, whether these new Microsoft-based ESBs are any good is yet to be seen….

Steve

BPM is flying off the shelves – at least at Pegasystems

It’s always nice to be proved right. At the end of 2008, when Lustratus published its 2009 predictions for the infrastructure market, we highlighted BPM and predicted that 2009 would (at last) be its year.

In March I discussed the impressive 2008 for Pegasystems in a previous Litebytes post, and now the company has made its 1Q09 earnings announcement.

Briefly, we are talking about revenue increasing 29% year on year to $62.4M for the quarter, and license revenue up a storming 60% to $28M. Recession – what recession? Admittedly the results were skewed a little by a single large deal closing at around 12% of the total, which may put Pega under pressure for the next quarter, but this cannot disguise the point we made in our 2009 predictions – tactical, targeted BPM can deliver the real savings and flexibility to support the broadening customer bases and types that businesses are looking for in the current economic downturn, or respond to specific business challenges such as tracking and reducing fraud.

The other point that these results reaffirm is that companies are looking for solutions geared to their own industry vertical needs – Pegasystems has a strong industry framework philosophy that responds to this need very effectively. The only possible ‘cloud’ on the horizon is Pegasystems’ tentative move towards the dangerous ‘Platform-as-a-Service’ (PaaS) market segment – this area is a minefield at the moment, and it is to be hoped that Pega does not get sucked into the abyss by becoming too wedded to this idea. Just stick to what you do best, guys!

In summary, for all those companies who have heard about BPM and then shied away, put off by the thought of the effort required to deploy BPM across the enterprise for all processes, take another look with a tactical, laser-focused mind-set. BPM really can be selectively applied at a reasonable price, with rapid payback and an attractive ongoing benefit stream.

Steve

SAP takes a hammering in 1Q 2009 results

SAP released its first quarter results today – and they do not make pretty reading at all.

Although overall revenue was only slightly down, it is the software license figures that are so alarming, crashing by a third compared to 2008. This may not seem important, since the software license numbers are only a relatively small part of overall revenues, but in fact it is the software license performance that drives a lot of the other related activities, so weakness here will feed through over time. SAP points to the fact that 1Q08 was before the global problems had really taken hold, and while I think this is partially true, I think there is another problem evident here.

Companies are still investing in IT – there have been enough results in the last few weeks that show great growth for some, with Pegasystems and Sybase being two particular examples. However, the SAP results seem to show a greater weakness in the application package market – and this is only to be expected. The problem is that while companies like Pegasystems and Sybase are looking to help companies get immediate return through doing things differently (using BPM and going mobile respectively), SAP packages are SAP packages. They do what they do, and although it is generally a good idea to keep updating them and spreading them more widely, these tasks are

  • Very time-consuming and costly
  • Not exactly urgent

On this basis, most companies are electing to stick with what they have for the moment on the packages front, while concentrating on infrastructure areas of more immediate return, such as BPM and business events implementations. This gives SAP a real headache in the near term. Eventually, once everyone is spending again, companies may well return to the question of their SAP application package portfolio, but at least in 2009 I suspect this will be put on the back burner. I guess that for SAP, 2010 can’t come soon enough.

Steve

Why do so many SOA adopters moan about low reuse levels?

I was reading a recent post from Joe McKendrick the other day on measuring SOA success…

…and it reminded me of a related issue – that of measuring services reuse. SOA adopters often moan to me that despite having implemented SOA and deployed many services, reuse rates are down at the 1-1.2 level – in other words, virtually no reuse. They seem to want to pick a fight with me, because as an advocate of SOA I have often pointed to reuse as one of the more measurable benefits. After all, achieving a high level of reuse is a clear indicator to business executives that efficiency is increasing, since the implication is that less development is required to do new things.
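For clarity on what a 1-1.2 reuse level means, here is a toy calculation (my own sketch, assuming reuse is measured as the average number of distinct consumers per deployed service, which is how the figure is usually quoted to me):

```python
from collections import defaultdict

# Hypothetical invocation log: (service, consuming application) pairs.
invocations = [
    ("get_customer_history", "web_portal"),
    ("get_customer_history", "call_centre_app"),
    ("get_customer_history", "billing"),
    ("create_customer", "web_portal"),
    ("generate_invoice", "billing"),
]

consumers = defaultdict(set)
for service, consumer in invocations:
    consumers[service].add(consumer)

# Average distinct consumers per service. A figure close to 1 means each
# service has a single caller - in other words, virtually no reuse.
reuse_level = sum(len(c) for c in consumers.values()) / len(consumers)
print(f"reuse level: {reuse_level:.1f}")   # 1.7 for this toy log
```

A portfolio full of single-caller services scores about 1.0 no matter how beautifully the services are built, which is exactly the complaint I keep hearing.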

I am starting to get pretty short now in these conversations. I wish, wish, wish that people would heed my previous advice – don’t think of SOA as delivering reusable services; think of it as a great tool for SHARED services. Obviously reuse will come through services being shared – so what point am I trying to make? The problem is that people are choosing to build ‘reusable services’ with SOA and assuming that others will start reusing them. It is the old ‘build it and they will come’ philosophy. This rarely works – it is worse than a scatter-gun approach. If users instead think first about what services would be good candidates for being shared, and then develop these as SOA services, reuse levels will definitely improve.

So, when getting started with SOA, don’t encourage everyone to start building code into services in the hope that reuse will come as if by magic. Start off by deciding on the logical services to build that will be shared – things like get customer history, or create new customer. Then go ahead and build these shared service candidates, and watch reuse levels climb… hopefully making it easier to justify your SOA investments to the business.

Steve