Calling all integration experts!

Remember the old Universal Translator as modeled here by the late Mr. Spock? One of the first (or perhaps future?) examples of integration solutions, and certainly one of the most fondly remembered! But at its heart, it is also an almost perfect representation of the integration challenges today. Many years ago, there was EAI (Enterprise Application Integration), which was all about integrating homegrown applications with purchased package applications and/or alien applications brought in from mergers and acquisitions activity. The challenge was to find a way to make these applications from different planets communicate with one another, to increase return on assets and provide a complete view of enterprise activity. EAI tools appeared from vendors such as TIBCO, SeeBeyond, IBM, Vitria, Progress Software, Software AG and webMethods, to mention just a few.

Then came the SOA initiative. By building computer systems with applications in the form of reusable chunks of business functionality (called services), the integration challenge could be met by enabling different applications to share common services.

Now the eternal wheel is turning once again, with the integration challenge clothed in yet another disguise. This time it is all about integrating systems with completely different usage and resource characteristics, such as mobile devices, IoT components and traditional servers, but also applications of completely new types, such as mobile apps and cloud-based SaaS solutions. In an echo of the past, lines of business are increasingly going out and buying cloud-based services to solve their immediate business needs, or paying a third-party developer to create the app they want, only to then turn to IT to get them to integrate the new solutions with the corporate systems of record.

Once again the vendors will respond to these user needs, probably extending and redeveloping their existing integration solutions or adding new pieces where required. But as you look for potential partners to help you with this next wave of integration challenges, it is worth keeping in mind possibly the most important fact of all, one that has been evident throughout the decades of integration challenges to date. Every time the integration challenge has surged to the top of the priority list, the key differentiator contributing to eventual success has not been the smarts built into the tools, software and appliances on offer. Rather, it is the advice and guidance you can get from people with extensive experience of integration challenges. Whether they come from vendors or service providers, these skills are absolutely essential. When it comes down to it, the technical challenges of integration are just the tip of the iceberg; the real challenges lie in how you plan what you are going to do, and how you work across disciplines and departments to ensure the solution is right for your company. You don't have the time to learn this – find a partner who has spent years steeped in integration and listen to what they have to say!

Is Cloud lock-in a good thing, or bad?

I am doing a lot of research into Cloud Computing at the moment, and spent an enjoyable morning with Salesforce.com, one of the largest Cloud vendors.

However, one thing that particularly piqued my interest was the discussion on Cloud lock-in. One of the most frequent concerns I hear from companies thinking about Cloud is that they are worried about vendor lock-in. After all, with Cloud being so new, what if you lock into a supplier who does not survive?

The discussions with Salesforce.com highlighted an interesting aspect of this debate. One of its offerings, force.com, provides a 'Platform as a Service' (PaaS) cloud offering, where users are presented with an environment in the cloud, complete with a whole host of useful tools, to build their own applications to run in the cloud or customize existing ones. However, Salesforce.com offers its own programming environment, which is "java-like" in its own words. This immediately raises the lock-in concern. If a company builds applications using this, then these applications are not portable to other Java environments, so the user is either stuck with Salesforce.com or faces a rewrite.

A bad thing, you might think. BUT Salesforce.com claims that the reason it has had to go with a Java-like environment is that this enables it to provide much improved isolation between different cloud tenants (users), and therefore better availability and lower risk. For the uninitiated, the point about Cloud is that lots of customer companies share the same cloud in what the industry calls a multi-tenancy arrangement, and this obviously raises the risk that these tenants might interfere with each other in some way, either maliciously or accidentally. Salesforce.com has mitigated that risk by offering a programming environment that specifically helps to guard against this happening, and hence differs from pure Java.
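
To illustrate the kind of safeguard involved – purely a hypothetical sketch, not a description of how force.com is actually implemented – a multi-tenant runtime can wrap each tenant request in a 'governor' that caps the resources any one tenant may consume:

```python
# Purely hypothetical sketch of per-tenant 'governor' limits in a
# multi-tenant runtime. This is NOT how force.com is implemented;
# all names and limits are invented.

class GovernorLimitExceeded(Exception):
    pass

class TenantGovernor:
    """Caps the work a single tenant's request is allowed to perform."""
    def __init__(self, max_queries=100, max_cpu_ms=10_000):
        self.max_queries = max_queries
        self.max_cpu_ms = max_cpu_ms
        self.queries = 0
        self.cpu_ms = 0

    def charge_query(self):
        self.queries += 1
        if self.queries > self.max_queries:
            raise GovernorLimitExceeded("query limit hit - request aborted")

    def charge_cpu(self, elapsed_ms):
        self.cpu_ms += elapsed_ms
        if self.cpu_ms > self.max_cpu_ms:
            raise GovernorLimitExceeded("CPU limit hit - request aborted")

# Each request runs under its own governor, so a runaway tenant fails
# fast instead of starving the other tenants sharing the infrastructure.
```

The point is that a restricted, Java-like language makes it practical for the platform to meter every statement and query in this fashion, whereas arbitrary Java code would be far harder to police.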

So, is this lock-in a bad thing or good? I don’t know whether Salesforce.com could have achieved its aims a different way, and I have to admit that to a cynic like me the fact that solving this problem ‘unfortunately’ locks you into the supplier seems a bit suspicious. However, this is irrelevant since the vendor is doing the work and has chosen its implementation method, which it is of course free to do. Therefore, the question facing the potential force.com user is simple – the strategic risk of being locked in to the supplier has to be balanced against the operational risk of possible interference from other tenants. Depending on how the user reads this balance, this will determine how good or bad the lock-in option is.

Steve

SOA success, and what causes it

I was recently pointed to an article in Mainframe Executive magazine written by David Linthicum on the subject of “Mainframe SOA: When SOA Works/When SOA fails”.

I think the friend who suggested I read it was making mischief, knowing my views on the subject of SOA and guessing (correctly) that this article would wind me up.

In summary, the article says that SOA is a large and complex change to your core architecture and working practices and procedures, and that the success or failure is dictated by questions such as executive buy-in/resourcing/funding/skills, and not technology selection.

The truth about success with SOA is that it has little to do with the technology you want to drag into the enterprise to make SOA work, and more to do with the commitment to the architectural changes that need to occur.

I have two problems with the opinions stated in this article. The first is to do with changing attitudes to SOA, and the second with the technology comments.

Let me first state that I am well aware that if a company wants to adopt an enterprise-wide SOA strategy designed to take maximum long-term benefit from this new way of leveraging IT investments, then this requires all of the areas brought up in the article to be addressed – skills, management buy-in, political will, funding and a strategic vision coupled with a tactical roadmap. I have no beef with any of this.

But I would contend that the world has changed from two years ago. The financial constraints all companies are experiencing have more or less forced the long-term strategic play onto the back burner for many. Some analysts actually like to claim that SOA is dead, a statement designed to be controversial enough to gain attention but to some extent grounded in the fact that a lot of companies are pulling back from the popular SOA-based business transformation strategies of the past. In fact, SOA is absolutely not dead, but it has changed. Companies are using SOA principles to implement more tactical projects designed to deliver immediate benefits, with the vague thought of one day pulling these projects together under a wider strategic, enterprise-wide SOA banner.

So, as an example, today a company might look at a particular business service such as ‘Create Customer’, or ‘Generate Invoice’, and decide to replace the 27 versions of the service that exist in its silos today with a single shared service. The company might decide to use SOA principles and tools to achieve this, but the planning horizon is definitely on the short term – deliver a new level of functionality that will benefit all users, and help to reduce ongoing cost of ownership. While it would have been valid a few years ago to counsel this company to deliver this as part of an overarching shift to an SOA-oriented style of operations, today most companies will say that although this sounds sensible, current circumstances dictate that focus must remain on the near term.

The other issue I have with this article is the suggestion that SOA success has little to do with the technology choice. Given that the topic here was not just SOA but mainframe SOA, I take particular exception to this. There is a wide range of SOA tools available, but in the mainframe arena the quality and coverage of the tools vary widely. For example, although many SOA tools claim mainframe support, this may in actuality simply be an MQ adapter 'for getting at the mainframe'. Anyone taking this route is more than likely to fail with SOA, regardless of how well the company has taken on the non-technical issues of SOA. Even among those SOA tools with specific mainframe support, some offer environments alien to mainframe developers, causing considerable problems in terms of skills utilization. It is critical that whatever technology IS chosen, it can be used by CICS- or IMS-knowledgeable folk as well as distributed specialists. Then there is the question of how intuitive the tools are. Retraining costs can destroy an SOA project before it even gets going.

For anyone interested, there is a free Lustratus report on selecting mainframe SOA tools available from the Lustratus store. However, I can assure companies that, particularly for mainframe SOA, technology selection absolutely IS a key factor for success, and that while all the other transformational aspects of SOA are indeed key to longer-term, enterprise-wide SOA, there are still benefits to be gained from a more short-term view that is more appropriate in today's economic climate.

Steve

Pragmatism is the theme for 2009

I have just returned from a couple of weeks around and about, culminating in presenting at the Integration Consortium's Global Integration Summit (GIS), where I delivered the Lustratus 'BPM Sweet Spots' paper.

One message seemed to come out loud and clear from the conference – pragmatism is the watchword for 2009.

There were two other analyst presentations apart from the Lustratus one, and I was surprised to see that both presenters pitched a message along the lines of ‘you will never succeed with SOA/Integration/BPM unless you get all the strategic planning and modelling for your enterprise done first’, combined with a suggestion that the presenter was just the resource to ask for help! This contrasted sharply with my own presentation of choosing tactical targets for BPM rather than going for a strategic, enterprise-wide, fully modelled approach.

I was wondering if I had read the mood wrong in the marketplace, but then the eight or so user case studies all proved to be tactical strikes for specific business benefits rather than the more extensive strategic approach more common a year or so ago. It was nice to be vindicated – it looks like 2009 really IS the year of pragmatism and short-term practical considerations.

Steve

The forgotten SOA benefit – Visibility

There has been a lot of chatter recently about measuring SOA ROI – take a look at Loraine Lawson's recent blog, for instance, or Gartner's results of a UK-based survey of SOA adopters. However, one of the benefits that I think a lot of people miss, or at least do not attribute enough importance to, is Visibility.

Basically, the visibility story goes like this: with SOA, since you break operational components up into discrete business services, it becomes easy to monitor entry to and exit from these services, and hence the flow of business operations. This gives a clear picture in business terms of execution and performance – not just what is happening, or how many times, but HOW business is being carried out.
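
As a minimal sketch of the idea (the decorator and service names are invented for illustration, not taken from any particular SOA product), instrumenting service entry and exit is little more than wrapping each business service so it emits business-level events:

```python
# Minimal sketch of SOA-style visibility: wrap each business service so
# every call emits business-level entry/exit events.
import functools
import logging
import time

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("business-monitor")

def monitored_service(service_name):
    """Emit entry/exit events for a discrete business service."""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            log.info("ENTER %s", service_name)
            try:
                result = func(*args, **kwargs)
                elapsed_ms = (time.perf_counter() - start) * 1000
                log.info("EXIT  %s (ok, %.1f ms)", service_name, elapsed_ms)
                return result
            except Exception:
                log.info("EXIT  %s (failed)", service_name)
                raise
        return wrapper
    return decorator

@monitored_service("Create Customer")
def create_customer(name):
    return {"id": 42, "name": name}  # stand-in for the real service logic

create_customer("ACME Corp")  # logs ENTER/EXIT around the business step
```

Aggregate those events across services and you get exactly the business-level picture described above – which steps ran, in what order, and how long each took.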

Gartner did touch upon visibility:

Improved Efficiency in Business Process Execution – Isolating the business logic from the functional application work enables a clearer view of what a process is, and the rules to which it adheres. This can be measured by lower process administrative costs, higher visibility of existing/running business processes, a reduced number of manual, paper-based steps, better service-level effectiveness, and quicker implementation of process iterations or of variants of the same process for different contexts.

However, the Gartner focus was only on visibility as it relates to execution efficiency. In fact, SOA-based visibility offers another benefit which, particularly in today's tough times, can be a real big hitter for executive management. It enables management to see how processes are being executed – in other words, it provides the ideal tool to monitor compliance against a growing raft of regulatory requirements across just about every industry. In order to demonstrate that your systems comply, it is necessary to be able to see what they are doing and how they are doing it. This is what SOA delivers.

So how does improved compliance management fit into the ROI picture? True, it is very hard to attach a dollar amount to compliance – but it certainly matters. With the amount of public and political scrutiny of corporations today, it is absolutely imperative that executives can show they are faithfully adhering to regulations and guidelines. Failure to do so will not only risk severe penalties, but will also probably lose them their jobs! Now THAT's a compelling business case….

Steve

Don’t get handcuffed by EA

I have to confess up front that I have never been desperately comfortable with Enterprise Architecture (EA) frameworks and disciplines, and therefore the opinions I am about to express should be taken in that light.

However, I do worry that EA may be handcuffing some companies to the point where potential benefits are strangled at birth.

I was recently reading an interesting article by Nagesh Anipindi, entitled "Enterprise Architecture: Hope or Hype?", which discusses how long EA has been around as an innovation and considers why it has failed to move into mainstream acceptance. As Nagesh writes,

For 2009, Greta has predicted that more than half the existing EA programs are at risk and will be discontinued in the near future. Remaining ones that survive this economy, per Greta, will struggle with framework and information management problems.

Nagesh goes on to postulate that the current pressures on IT budgets will result in a lot of EA efforts being abandoned, not because they are unworthy but because they fall below other more critical areas such as operations and the development of new business solutions. He then says that once the industry realizes the massive benefits EA can deliver, this situation will turn around and EA will become an essential part of every corporate IT organization.

I think Nagesh may have missed the point slightly, although I agree with a lot of what he says. Look at one of the many definitions of Enterprise Architecture, as Nagesh records it –

Gartner defines EA as: “Enterprise Architecture is the process of translating business vision and strategy into effective enterprise change by creating, communicating and improving the key requirements, principles and models that describe the enterprise’s future state and enable its evolution.”

This definition epitomizes the problem as far as I am concerned. The basic purpose of EA is there, but it is clouded with the sort of mumbo-jumbo that frightens off potential decision-makers. What is EA REALLY about? It is about tying the IT architecture and implementation to the business vision and needs, both now and in the future. It is about making sure IT really serves the business. Does this require communication? Of course. Does it require principles and practices? Yes. But the complex phrasing of this definition is typical of 'EA Experts'. These people talk of EA frameworks, of EA models, and of rigid procedures. From an intellectual point of view, this is all absolutely fine. If you were writing a thesis on how to architect an enterprise IT system to match business needs, and to be able to continue to do so, it might be perfectly acceptable to introduce loads of models, a single tool-driven interface, a definition language and frameworks.

However, this is the real world. The danger I see is that this over-enthusiastic approach can tie the hands of professionals and organizations so tightly that they cannot achieve anything. There is also the danger that, over time, this approach introduces a real skills problem, with the need to train new people on all these tools and methods that do not actually contribute to delivering new business value. In effect, the mechanisms meant to deliver the effective enterprise architecture develop a life of their own and start to consume development resources for their own purposes as opposed to business needs.

A small example may illustrate my point. In the old days, when I worked with IBM, a purist movement pointed out that because we wrote our design documentation in English, we were impacting quality and accuracy by introducing the potential for misunderstandings as to what a passage of English might actually mean. As a result, IBM worked with Oxford University to develop a mathematically-based specification language to eliminate the problem. This made complete sense at an intellectual level. However, although this language was adopted for a time, there were always new people coming onto the team who didn’t understand it, and training began to be a real overhead. Eventually, the language was dropped. Although it made intellectual sense to use it, it did not work at a practical level.

I am all for Enterprise Architecture – at least in spirit. I believe the role of an Enterprise Architect is exactly to ensure that the technology correctly delivers on the business needs, and is architected in such a way as to enable new business changes to be implemented quickly and effectively. But I don't think this requires a complex framework of models and requirements tools and so on. In fact, I think a strong EA mandates the minimum, but offers a loose framework that allows departmental innovation. In truth, there is nothing new about EA – it is all about doing things sensibly and remembering that IT is there purely to serve the business and not itself. All the rest of the formal EA clutter is a set of handcuffs that can hold organizations back.

Steve

What is behind SAP’s ‘go-slow’ on SaaS?

There have been many reports recently on the problems with SAP’s Software as a Service (SaaS) offering, Business ByDesign – see for example the article by Timothy Morgan here.

To summarize, SAP is backing off its initial, bullish claims for SAP Business ByDesign, saying that it is now going to proceed at a much slower pace than originally planned. Of course, the SAP trade show Sapphire, which is being held this week, might provide more info, but I somehow doubt it.

So, what is going on? Why the sudden backtrack? After great trumpeting 18 months ago from SAP about Business ByDesign being the magic bullet for SMEs – offering the ability to run remote copies of SAP applications on a per-user basis without having to cough up for a full license – why the hesitation?

I suspect the truth of the matter may be partly political, partly execution-oriented and partly financial. There are those who would argue that SAP does not really WANT a SaaS market for its packages to come rushing into existence. After all, from a supplier point of view, wouldn't you prefer to sell more expensive licenses that lock the user in, rather than a cheap usage-based service that the user can walk away from at any time? So the conspiracy theorists would say SAP deliberately tried to freeze the market for SAP SaaS offerings, to discourage competition and slow down the emergence of this market.

On the execution side, perhaps SAP did not realize that selling SaaS solutions is a world away from selling large application suites directly to big companies. SaaS solutions are low-cost, high-volume as opposed to high-cost, low-volume, and hence need much more efficient and layered distribution channels – and SMEs are used to picking up the phone to ask someone whenever they have to change something, which is not a great strength of SAP's support structure.

Then, finally, the financial side. Many SaaS suppliers have discovered an uncomfortable truth: while in a license model the user pays a substantial sum of money for the purchase, followed by maintenance, in a SaaS model the risk position is reversed, with the supplier having to put the resources in place up front to support the POTENTIAL usage of the infrastructure by all those signed-up users, and then receiving revenues in a slow trickle over time. Is it possible that SAP just didn't like the financial implications of having to invest continually while looking at payback times of years? Did it therefore decide to deliberately throttle the number of new customers, giving it a chance to make some money before making further investments?
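
To see why the supplier's risk position reverses, consider a deliberately simplified cash-flow sketch. All figures here are invented purely for illustration; they are not SAP's numbers:

```python
# Deliberately simplified cash-flow comparison from the supplier's side.
# All figures are invented for illustration.
license_price = 100_000          # perpetual license, paid up front
annual_maintenance = 20_000      # maintenance fee per year
monthly_subscription = 3_000     # SaaS fee per month
upfront_infrastructure = 60_000  # capacity built before SaaS revenue arrives

for year in range(1, 6):
    license_cash = license_price + annual_maintenance * year
    saas_cash = monthly_subscription * 12 * year - upfront_infrastructure
    print(f"year {year}: license supplier {license_cash:>8,}  "
          f"SaaS supplier {saas_cash:>8,}")
```

With these made-up numbers, the license vendor banks 120,000 in year one while the SaaS supplier is still 24,000 under water, only crawling into the black during year two – the payback-time problem in a nutshell.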

Maybe SAP will tell all at Sapphire … or maybe we will just have to keep guessing.

Steve

Why do so many SOA adopters moan about low reuse levels?

I was reading a recent post from Joe McKendrick the other day on measuring SOA success…

…and it reminded me of a related issue – that of measuring service reuse. SOA adopters often moan to me that despite having implemented SOA and deployed many services, reuse rates are down at the 1-1.2 level – in other words, virtually no reuse. They seem to want to pick a fight with me, because as an advocate of SOA I have often pointed to reuse as one of the more measurable benefits. After all, achieving a high level of reuse is a clear indicator to business executives that efficiency is increasing, since the implication is that less development is required to do new things.
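
To put a number on that, the reuse level quoted here is simply the average count of distinct consumers per deployed service. A quick back-of-envelope calculation (figures invented for illustration) shows how damning 1-1.2 really is:

```python
# Invented example: 50 services deployed, counting how many distinct
# consumers actually call each one.
consumers_per_service = [1] * 44 + [2] * 5 + [3]  # most have a single caller
reuse_level = sum(consumers_per_service) / len(consumers_per_service)
print(f"average reuse level: {reuse_level:.2f}")  # ~1.14 - virtually no reuse
```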

I am starting to get pretty short now in these conversations. I wish, wish, wish that people would heed my previous advice – don't think of SOA as delivering reusable services, think of it as a great tool for SHARED services. Obviously reuse will come through services being shared – so what point am I trying to make? The problem is that people are choosing to build 'reusable services' with SOA and assuming that others will start reusing them. It is the old 'build it and they will come' philosophy. This rarely works – it is worse than a scatter-gun approach. If users instead think first about what services would be good candidates for being shared, and then develop these as SOA services, reuse levels will definitely improve.

So, when getting started with SOA, don't encourage everyone to start building code into services and hope that reuse will come as if by magic. Start off by deciding on the logical services to build that will be shared – things like get customer history, or create new customer. Then go ahead and build these shared-service candidates, and see reuse levels climb… hopefully making it easier to justify your SOA investments to the business.
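
As a minimal sketch of what that looks like in practice (a hypothetical interface with invented names, not a prescription for any particular toolset), the shared-first approach means defining the single service contract that every silo will call, rather than publishing 27 'reusable' variants and hoping for uptake:

```python
# Hypothetical sketch: one shared 'create customer' / 'get customer
# history' contract that every silo calls, instead of 27 near-identical
# 'reusable' variants. All names are invented.
from dataclasses import dataclass

@dataclass
class Customer:
    id: int
    name: str

class CustomerService:
    """The single shared service; all consumers go through this contract."""
    def __init__(self):
        self._store = {}
        self._next_id = 1

    def create_customer(self, name: str) -> Customer:
        customer = Customer(id=self._next_id, name=name)
        self._store[customer.id] = customer
        self._next_id += 1
        return customer

    def get_customer_history(self, customer_id: int) -> list:
        # Stand-in: a real implementation would aggregate orders,
        # invoices and interactions from the systems of record.
        return []
```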

Steve

A practical approach to Open Source (part 4)

So far, these posts have covered the user benefits of OSS, the risks, and the need to understand the various different OSS business models.

This final post in the series looks at tips for getting benefit from OSS implementations.

The first point to remember is that even though OSS software is 'free', its adoption needs to be planned carefully. Surprisingly, this is even true for the lowest-risk area, personal productivity tools. Most people think this is the easiest OSS area, and are happy to let OpenOffice or Firefox or any of the multiplicity of user tools be loaded… but this can lead to trouble. Remember that one characteristic of OSS is that, because it is free, there is no need for a user to gain purchasing approval, and hence adoption can be uncontrolled. If it then turns out that the new software introduces an incompatibility with other systems, this can be a nightmare. Make sure that adoption is controlled and that every employee is aware that OSS software purchasing should go through the same governance procedures as commercial software, even if it doesn't cost anything.

This leads to the next point: in fact, OSS software is never free. The LICENSE may be free, but there are users to train, developers to educate, support to be arranged, risks to be evaluated and countless other tasks. Indeed, at the OSS infrastructure level, as opposed to the end-user productivity tools area, most OSS offerings are of the 'framework' type, where the user is left having to do extensive development and customization before the software can be used productively. So, to succeed with OSS it is necessary to evaluate the business case taking all these resource and service requirements into account, even if license costs are zero.
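
A back-of-envelope sketch of that business case might look like this (all cost lines invented purely for illustration):

```python
# Back-of-envelope OSS business case: the license line is zero, but the
# case still has to carry everything else. All figures are invented.
costs = {
    "license": 0,
    "developer education": 15_000,
    "user training": 10_000,
    "support arrangements": 25_000,
    "framework development / customization": 40_000,
    "risk evaluation and governance": 5_000,
}
print(f"first-year cost despite the free license: {sum(costs.values()):,}")
# -> first-year cost despite the free license: 95,000
```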

The next tip is to think ahead. Ask yourself why the software is free. Is it because the software community, out of the goodness of its heart (!), wants to share its bounties with everyone, or is there some other game plan at work? A number of OSS projects have come about from users wanting to find a way to defray the costs of supporting their home-developed code base – projects such as AMQP and Swordfish spring to mind. The issue here is that if the particular project never really gains popular acceptance, then future updates are at risk of dying off if the original authoring company changes direction. Other OSS projects are offered by vendors that have a 'full-function', priced version of the software. Remember IONA's Artix/Celtix ESBs? Artix was the commercial product, and Celtix the OSS version. Every time IONA added new function, it tended to put it in the commercial one first, and only backfitted it to the OSS version if it didn't get wide acceptance. So, be aware that if you go with an OSS project you may have to take a commercial license in the future.

Watch out for projects that claim massive acceptance but which in truth are only supported and used by a small minority. A typical trick to watch for is the claim of 'millions of downloads'. This is really weak – remember that if something is free to download, every student in the world is likely to download it, at least to play with. Only a tiny fraction of these downloads will ever move to the point of actually trying to use it.

The best tip of all is to wait for clear signs that a particular OSS project has gone mainstream. Mozilla Firefox is so well known as a browser, with so many users, that it is a reasonably safe bet. LINUX has huge industry support and rich backers such as IBM; there is no way it would ever be allowed to fall behind in the technology stakes, and because of its wide acceptance there are hundreds of companies in the LINUX ecosystem now offering support, services, training and other OSS LINUX add-ons. However, if you really want to be a trailblazer, then go ahead with unproven projects… but go in with your eyes wide open.

Steve

A practical approach to Open Source (part 3)

The third post in this short series looks at the need to understand the business model surrounding the OSS offering being considered.

One of the defining qualities of OSS is that, at least at a basic level, the product can be licensed at no charge. This immediately raises the question of how sustainable the model surrounding the OSS project is. The fundamental question is: who is going to keep the code base up to date? Who will apply fixes, develop new function, offer support, provide documentation, and so on?

There are a number of different types of OSS project, and each has different business model implications that affect its likely success and future longevity. At its heart, the OSS movement really got under way based on an 'anti-commercial' theme, where programmers wanted to share their skills and use software that was developed by them, for them. This is fine as far as it goes, but as people's interests change, the exposure is that these developers will move on to something new and the original OSS project will wither away. In the rare situations where this problem is overcome, there is usually a viral element to the project's success, as in the case of Firefox, for example.

The next model is where a commercial company is set up around the OSS project. Usually, these companies sell services around the OSS project, such as documentation and training, as well as offering commercial licenses to cover support, or verified and tested versions of the OSS code base. The success of this approach depends on whether the OSS users are prepared to cross the 'free software' line and accept that there will still be costs incurred. A big problem here, however, is how extensive the support offered can be. The worst threat is that OSS projects often use other OSS offerings to fill out capabilities, and therefore either the commercial support organization has to become expert in all these code bases, or there will be gaps in the support.

The most devious OSS model is where a vendor sponsors an OSS project for its own advantage, regardless of the implications for the user. Typically, a vendor might take a base level of code and make it an OSS project 'for the good of the community', but instead of the project attracting other development partners it remains driven by the single vendor. That vendor then typically produces an 'authentic' version of the project which DOES have a license cost and maintenance fee. The idea is to get users on board thinking the product is free, and then hook them with the professional versions.

Finally, the best OSS model of all from a user point of view is where a number of large vendors decide it is in their interests to back a particular OSS project. This is the case with LINUX, for example, where vendors such as IBM have put in millions of dollars of investment. As a result, a whole ecosystem of LINUX-oriented companies has sprung up, and all of this ensures that LINUX users can have a degree of confidence in its future.

Steve