Calling all integration experts!

Remember the old Universal Translator as modeled here by the late Mr. Spock? One of the first (or perhaps future?) examples of integration solutions, and certainly one of the most fondly remembered! But at its heart, it is also an almost perfect representation of the integration challenges today. Many years ago, there was EAI (Enterprise Application Integration), which was all about integrating homegrown applications with purchased package applications and/or alien applications brought in from Mergers and Acquisitions activity. The challenge was to find a way to make these applications from different planets communicate with one another, to increase return on assets and provide a complete view of enterprise activity. EAI tools appeared from vendors such as TIBCO, SeeBeyond, IBM, Vitria, Progress Software, Software AG and webMethods, to mention just a few.

Then there came the SOA initiative. By building computer systems with applications in the form of reusable chunks of business functionality (called services), the integration challenge could be met by enabling different applications to share common services.

Now the eternal wheel is turning once again, with the integration challenge clothed in yet another disguise. This time it is all about integrating systems with completely different usage and resource characteristics, such as mobile devices, IoT components and traditional servers, but also applications of completely new types such as mobile apps and cloud-based SaaS solutions. In an echo of the past, lines of business are increasingly going out and buying cloud-based services to solve their immediate business needs, or paying a third-party developer to create the app they want, only to turn to IT afterwards to integrate the new solutions with the corporate systems of record.

Once again the vendors will respond to these user needs, probably extending and redeveloping their existing integration solutions or adding new pieces where required. But as you look for potential partners to help you with this next wave of integration challenges, it is worth keeping in mind possibly the most important fact of all, one that has been evident throughout the decades of integration challenges to date. Every time the integration challenge has surged to the top of the priority list, the key differentiator contributing to eventual success has not been the smarts built into the tools, software and appliances on offer. Rather, it is the advice and guidance you can get from people with extensive experience of integration challenges. Whether they come from vendors or service providers, these skills are absolutely essential. When it comes down to it, the technical challenges of integration are just the tip of the iceberg; the real challenges lie in how you plan what you are going to do, and how you work across disciplines and departments to ensure the solution is right for your company. You don’t have the time to learn this the hard way – find a partner who has spent years steeped in integration and listen to what they have to say!

Why enterprise mobile computing needs an mBroker – part 1

Mobile computing is all the rage, with employees, consumers and customers all wanting to use their mobile devices to transact business. But how should an enterprise approach mobile computing without getting into a world of trouble? How can the enterprise future-proof itself so that, as mobile enterprise access explodes, the risks are mitigated?

mBrokers are emerging as the preferred method of building a sustainable, governable and effective enterprise mobile computing architecture. The mBroker brings together ESB, integration broker, service gateway, API management and mobile access technology to provide the glue necessary to bring the mobile and corporate worlds together effectively and efficiently; for a summary of mBroker functionality see this free Lustratus report. In this first post in a series looking at mBrokers, we will look at the fundamental drivers for the basic broking functionality offered by mBrokers.

Integration brokers have been around for many years now. The principle is that when trying to integrate different applications or computing environments, some form of ‘universal translator’ is needed. One application may expect data in one format while another expects a different format, for example. A trivial case might be an international application where some components expect mm/dd/yy dates while others want dd/mm/yy. The broker handles these transformation needs. But it plays another very important role apart from translating between different applications; it provides a logical separation between application components, so that requestors can request services and suppliers can supply services without either knowing anything about the other’s location, environment or technology. In order to achieve this, it provides other functionality such as intelligent routing to find the right service and execution location, once again without the service requestor having to know anything about it.
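
To make the date example concrete, here is a minimal sketch of the kind of transformation a broker applies in flight. It is purely illustrative – the function name and formats are my own, not any vendor’s actual API:

```python
from datetime import datetime

def normalize_date(value: str, source_fmt: str, target_fmt: str) -> str:
    """Translate a date string from the format one component emits
    into the format another component expects."""
    return datetime.strptime(value, source_fmt).strftime(target_fmt)

# A US-format component hands data to a European-format component.
us_value = "03/25/09"                                    # mm/dd/yy
eu_value = normalize_date(us_value, "%m/%d/%y", "%d/%m/%y")
print(eu_value)                                          # 25/03/09
```

The point is that neither component changes; the broker owns the mapping, which is exactly what keeps requestors and suppliers decoupled.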

Enterprise mobile applications face many of the same challenges. When crossing from the mobile device to the corporate business services, the same problems must be addressed. For example, mobile applications often rely on JSON for data notation and use RESTful invocation mechanisms to drive services, but many corporate networks employ an SOA model based around XML data and SOAP-based invocations of services. In addition, the same sort of abstraction layer offered by integration brokers is beneficial, avoiding the mobile device needing to know about the locations of back-end applications. It is therefore not surprising to find that integration broker technology is one source for mBroker technology.
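
As a rough illustration of that mediation, the sketch below accepts the sort of flat JSON payload a mobile app would send and re-expresses it as a minimal SOAP envelope. The service and element names are invented for illustration; a real gateway would work from WSDL and mapping definitions rather than hand-rolled code:

```python
import json
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def json_to_soap(json_body: str, operation: str) -> bytes:
    """Wrap a flat JSON payload in a minimal SOAP 1.1 envelope."""
    envelope = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, operation)
    for key, value in json.loads(json_body).items():
        ET.SubElement(op, key).text = str(value)
    return ET.tostring(envelope)

# The mobile side speaks JSON/REST; the corporate side expects SOAP/XML.
print(json_to_soap('{"customerId": 42}', "GetCustomer").decode())
```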


New Lustratus Research Report – A Competitive Review of SOA Appliances

Just a short note to say that I’ve uploaded a new report to our web store at lustratus.com.

The report, entitled A Competitive Review of SOA Appliances, focuses on Intel’s SOA Expressway, IBM’s DataPower range and Layer7’s SecureSpan SOA Appliance. In the report I compare and contrast the technical and strategic approaches each vendor takes to addressing the task of creating, managing, accelerating and securing a service oriented architecture using appliances.

The report can be found here.

Steve Craggs

IBM acquires Cast Iron

I am currently at IBM’s IMPACT show in Las Vegas, where the WebSphere brand gets to flaunt its wares, and of course one of the big stories was IBM’s announcement that it has acquired Cast Iron.

While Cast Iron may only be a small company, the acquisition has major implications. Over the past few years, Cast Iron has established itself as the prime provider of Cloud-to-Cloud and Cloud-to-on-premise integration, with a strong position in the growing Cloud ecosystem of suppliers. Cast Iron has partnerships with a huge number of players in the Cloud and application package spaces, including companies such as Salesforce.com, SAP and Microsoft, so IBM is not just getting powerful technology; in one move it is also taking control of the linkage between Cloud and everything else.

On the product front, the killer feature of Cast Iron’s offering is its extensive range of pre-built integration templates covering many of the major Cloud and on-premise environments. So, for example, if an organization wants to link invoice information in its SAP system with its Salesforce.com environment, the Cast Iron offering includes prepared templates for the required definitions and configurations. The result is that the integration can be set up in a matter of hours rather than weeks.
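
To give a feel for why templates collapse the timescale, here is a purely hypothetical sketch of what such a template boils down to: a declarative field mapping, so that connecting two systems becomes configuration rather than coding. This is emphatically not Cast Iron’s actual template format, and the field names are only illustrative:

```python
# Hypothetical mapping from SAP invoice fields to Salesforce custom fields.
SAP_TO_SALESFORCE_INVOICE = {
    "BELNR": "Invoice_Number__c",        # document number
    "NETWR": "Amount__c",                # net value
    "KUNNR": "Account_External_Id__c",   # customer number
}

def apply_template(sap_record: dict, mapping: dict) -> dict:
    """Produce a target-system record by applying the field mapping."""
    return {target: sap_record[source]
            for source, target in mapping.items()
            if source in sap_record}

print(apply_template(
    {"BELNR": "90001234", "NETWR": 1250.00, "KUNNR": "CUST-001"},
    SAP_TO_SALESFORCE_INVOICE))
```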

So why is this so important? Well, for one, most people have already realized that Cloud usage must work hand-in-hand with on-premise applications, driven by such things as security needs and prior investments. On top of this, different clouds will serve different needs. So integration between clouds and applications is going to be a fact of life. IBM’s acquisition propels it to the forefront of this area, in both technology and partner terms. But there is a more strategic impact of this acquisition too. No one knows what the future holds and how the Cloud market will develop. Think of the situation with mainframes and distributed solutions. As the attractions of distributed systems grew, doomsayers were quick to predict the end of the mainframe. However, IBM developed a powerful range of integration solutions to allow organizations to leverage the advantages of both worlds WITHOUT having to choose one over the other. This situation almost feels like a repeat – Cloud has a lot of advantages, and some misguided ‘experts’ think that Cloud is the beginning of the end for on-premise systems. Whether you believe this or not, IBM has once again ensured that it has a running start in providing integration options so that users can continue to gain value from both cloud and on-premise investments.

Steve

SOA success, and what causes it

I was recently pointed to an article in Mainframe Executive magazine written by David Linthicum on the subject of “Mainframe SOA: When SOA Works/When SOA fails”.

I think the friend who suggested I read it was making mischief, knowing my views on the subject of SOA and guessing (correctly) that this article would wind me up.

In summary, the article says that SOA is a large and complex change to your core architecture, working practices and procedures, and that its success or failure is dictated by factors such as executive buy-in, resourcing, funding and skills, not technology selection.

The truth about success with SOA is that it has little to do with the technology you want to drag into the enterprise to make SOA work, and more to do with the commitment to the architectural changes that need to occur

I have two problems with the opinions stated in this article. The first is to do with changing attitudes to SOA, and the second with the technology comments.

Let me first state that I am well aware that if a company wants to adopt an enterprise-wide SOA strategy designed to take maximum long-term benefit from this new way of leveraging IT investments, then this requires all of the areas brought up in the article to be addressed – skills, management buy-in, political will, funding and a strategic vision coupled with a tactical roadmap. I have no beef with any of this.

But I would contend that the world has changed from two years ago. The financial constraints all companies are experiencing have more or less forced the long-term strategic play onto the back burner for many. Some analysts actually like to claim that SOA is dead, a statement designed to be controversial enough to gain attention but to some extent grounded in the fact that a lot of companies are pulling back from the popular SOA-based business transformation strategies of the past. In fact, SOA is absolutely not dead, but it has changed. Companies are using SOA principles to implement more tactical projects designed to deliver immediate benefits, with the vague thought of one day pulling these projects together under a wider strategic, enterprise-wide SOA banner.

So, as an example, today a company might look at a particular business service such as ‘Create Customer’ or ‘Generate Invoice’, and decide to replace the 27 versions of the service that exist in its silos today with a single shared service. The company might decide to use SOA principles and tools to achieve this, but the planning horizon is definitely short term – deliver a new level of functionality that will benefit all users, and help to reduce the ongoing cost of ownership. While it would have been valid a few years ago to counsel this company to deliver this as part of an overarching shift to an SOA-oriented style of operations, today most companies will say that although this sounds sensible, current circumstances dictate that focus must remain on the near term.

The other issue I have with this article is the suggestion that SOA success has little to do with the technology choice. Given that the topic here was not just SOA but mainframe SOA, I take particular exception to this. There is a wide range of SOA tools available, but in the mainframe arena the quality and coverage of the tools vary widely. For example, although many SOA tools claim mainframe support, this may in actuality simply be an MQ adapter ‘for getting at the mainframe’. Anyone taking this route is more than likely to fail with SOA, regardless of how well it has taken on the non-technical issues of SOA. Even among those SOA tools with specific mainframe support, some offer environments alien to mainframe developers, thereby causing considerable problems in terms of skills utilization. It is critical that whatever technology IS chosen, it can be used by CICS- or IMS-knowledgeable folk as well as just distributed specialists. Then there is the question of how intuitive the tools are. Retraining costs can destroy an SOA project before it even gets going.

For anyone interested, there is a free Lustratus report on selecting mainframe SOA tools available from the Lustratus store. However, I can assure companies that, particularly for mainframe SOA, technology selection absolutely IS a key factor for success, and that while all the other transformational aspects of SOA are indeed key to longer-term, enterprise-wide SOA, there are still benefits to be gained with the more short-term view that is more appropriate in today’s economic climate.

Steve

The REAL concern over Cloud data security

Recently I have been involved in a discussion in the LinkedIn Integration Consortium group on managing data in a Cloud Computing environment, and the subject has turned to security.

I had maintained that data security concerns may sometimes result in companies preferring to look at some sort of internal Cloud model rather than risk putting their data in the Cloud:

the concept that, I find, is intriguing larger companies is the idea of running an INTERNAL cloud – this removes a lot of the concerns over data security, supplier longevity etc.

This generated a reaction from one of the other discussion participants, Tom Gibbs of DiCOM Grid.

I hate to poke at other commentators but security is an overarching issue for IT and telcom as a whole. No more and probably less of an issue with cloud or SaaS.

It’s almost amusing to watch legacy IT managers whine that b/c it isn’t local it isn’t secure. I’m sorry but this is totally naive.

This brings up an important point. What Tom is saying is that the Cloud provider will almost certainly offer top-notch security tools to protect data from unauthorized access or exposure, and therefore what’s the problem?

The answer is that the executive concern with putting data outside the corporate environment is likely to be more of an emotional than a logical argument. With so many topical examples of confidential information being exposed, and with executives knowing that regulations, legislation and corporate policies often make them PERSONALLY responsible for protecting information such as the personal details of clients, customers or citizens, the whole thing is just too scary.

IT folk may see this as naive, just as Tom says. After all, modern security tools are extremely powerful and rigorous. But of course this depends on the tools being properly applied. In the UK, for example, there have been a number of high-profile incidents of CDs or memory sticks containing confidential citizen information being left on trains and exposed in the media. The argument for allowing data to be taken off-site was that policy required all such data to be encrypted, making it useless if it fell into anyone else’s hands. The encryption algorithms were top-notch and provided almost total protection. BUT the users who downloaded the information in each of these cases did not bother to encrypt it. In other words, had the procedures been followed there would have been no exposure, but because people did not follow them the data was exposed.
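
For the record, the missing step in those incidents is tiny. Here is a minimal sketch, assuming the third-party Python ‘cryptography’ package, of encrypting data before it is ever copied to removable media – everything here is illustrative rather than any organization’s actual procedure:

```python
from cryptography.fernet import Fernet

def export_encrypted(plaintext: bytes, key: bytes) -> bytes:
    """Encrypt data before it leaves the building."""
    return Fernet(key).encrypt(plaintext)

key = Fernet.generate_key()   # the key must travel separately from the media
token = export_encrypted(b"confidential citizen records...", key)
# A CD or memory stick left on a train holding only 'token' is useless
# to whoever finds it, because they do not hold 'key'.
```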

These situations have not only proved extremely embarrassing to the data owners involved, but have resulted in heads rolling in a very public fashion. So the concerns of the executive moaning about risk are visceral rather than rational – ‘Moving my data outside of the corporate boundary introduces personal risk to me, and no matter how much the experts try to reassure me I don’t want to take that risk’. Of course less sensitive information will not be so much of a concern, and therefore these worries will not affect every Cloud project. But for some executives the ‘security’ concern with moving data into the Cloud, while not logically and analytically based, is undeniably real.

Steve

Pragmatism is the theme for 2009

I have just returned from a couple of weeks around and about, culminating in the Integration Consortium’s Global Integration Summit (GIS), where I presented the Lustratus ‘BPM Sweet Spots’ paper.

One message seemed to come out loud and clear from the conference – pragmatism is the watchword for 2009.

There were two other analyst presentations apart from the Lustratus one, and I was surprised to see that both presenters pitched a message along the lines of ‘you will never succeed with SOA/Integration/BPM unless you get all the strategic planning and modelling for your enterprise done first’, combined with a suggestion that the presenter was just the resource to ask for help! This contrasted sharply with my own presentation, which advocated choosing tactical targets for BPM rather than going for a strategic, enterprise-wide, fully modelled approach.

I was wondering if I had read the mood wrong in the marketplace, but then the eight or so user case studies all proved to be tactical strikes for specific business benefits rather than the more extensive strategic approach more common a year or so ago. It was nice to be vindicated – it looks like 2009 really IS the year of pragmatism and short-term practical considerations.

Steve