Cloud computing – balancing flexibility with complexity

In the “Cloud Computing without the hype – an executive guide” Lustratus report, available at no charge from the Lustratus store, one of the trade-offs I touch on is flexibility against complexity.

To be more accurate, flexibility in this case refers to the ability to serve many different use cases as opposed to a specific one.

This is an important consideration for any company looking to start using Cloud Computing. Basically, there are three primary Cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). In really simple terms, an IaaS cloud provides the user with virtual infrastructure (eg storage space, server, etc), PaaS offers a virtual platform where the user can run home-developed applications (eg a virtual server with an application server, database and development tools) and SaaS provides access to third-party supplied applications running in the cloud.
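
To make the split of responsibilities a little more concrete, here is a rough sketch in Python. It is purely illustrative – the layer names and the split are my own simplification, not any provider’s actual model.

    # Illustrative only: a rough model of who manages which layers under each
    # Cloud service model. The layer names are a simplification of my own.

    SERVICE_MODELS = {
        # model: the layers the PROVIDER looks after; the user handles the rest
        "IaaS": {"hardware", "virtualization", "storage", "networking"},
        "PaaS": {"hardware", "virtualization", "storage", "networking",
                 "operating system", "middleware", "development tools"},
        "SaaS": {"hardware", "virtualization", "storage", "networking",
                 "operating system", "middleware", "development tools",
                 "application"},
    }

    ALL_LAYERS = SERVICE_MODELS["SaaS"]

    def user_responsibilities(model):
        """Whatever the provider does not manage is left to the user."""
        return sorted(ALL_LAYERS - SERVICE_MODELS[model])

    for model in ("IaaS", "PaaS", "SaaS"):
        print(model, "-> user manages:", user_responsibilities(model) or "nothing extra")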

The decision of which is the most appropriate choice is often a trade-off. The attraction of SaaS is that it is a turn-key option – the applications are all ready to roll, and the user just uses them. This is pretty simple, but the user can only use those applications supplied. There is no ability to build new applications to do other things. Hence this approach is specific to the particular business problem addressed by the packaged application.

PaaS offers more flexibility of usage. A user builds the applications that will run in the cloud, and can therefore serve many different business needs. However, this requires a lot of development and testing work, and flexibility is restricted by the pre-packaged platform and tools offered by the PaaS provider. So, if the platform is WebSphere with DB2, and the user wants to build a .NET application for Windows, then tough.

IaaS offers the most flexibility, in that it effectively offers the infrastructure pieces and the user can then use them in any way necessary. However, of course, in this option the user is left with all the work. It is like being supplied with the raw hardware and having to develop all the necessary pieces to deliver the project.

So, when companies are looking at their Cloud strategies, it is important to consider how to balance this tradeoff between complexity/effort and flexibility/applicability.

Steve

Introducing Cloud for Executives

At Lustratus we have been doing a lot of research into Cloud Computing, as have many firms.

I must confess the more I have dug into it, the more horrified I have become at the hype, confusion, miscommunication and manipulation of the whole Cloud Computing concept.

In the end, I decided the time was right for an Executive Guide to Cloud – defining it in as simple terms as possible and laying out the Cloud market landscape. Lustratus has just published the report, entitled “Cloud Computing without the hype; an executive guide” and available at no charge from the Lustratus store. Not only does the paper try to lock down the cloud definitions, but it also includes a summary of some 150 or so suppliers operating in the Cloud Computing space.

The paper deals with a number of the most common misunderstandings and confusions over Cloud. I plan to do a series of posts looking at some of these, of which this post is the first. I thought I would start with the Private Cloud vs Internal Cloud discussion.

When the Cloud Computing model first emerged, some were quick to try to define Cloud as a public, off-premise service (eg Amazon EC2), but this position was quickly destroyed as companies worldwide realized that Cloud Computing techniques were applicable in many different on and off premise scenarios. However, there has been a lot of confusion over the terms Private Cloud and Internal Cloud. The problem here is that analysts, media and vendors have mixed up discussions about who has access to the Cloud resources, and where the resources are located. So, when discussing the idea of running a Cloud onsite as opposed to using an external provider such as Amazon, people call one a Public Cloud and the other an Internal Cloud or Private Cloud.

This is the root of the problem. This makes people think that a Private Cloud is the same as an Internal Cloud – the two terms are often used interchangeably. However, these two terms cover two different Cloud characteristics, and it is time the language was tightened up. Clouds may be on-premise or off-premise (Internal or External), which refers to where the resources are. (Actually, this excludes the case where companies are running a mix of clouds, but let’s keep things simple). The other aspect of Cloud usage is who is allowed to use the Cloud resources. This is a Very Important Question for many companies, because if they want to use Cloud for sensitive applications then they will be very worried about who else might be running alongside in the same cloud, or who might get to use the resources (eg disk space, memory, etc) after they have been returned to the cloud.

A Public Cloud is one where access is open to all, and therefore the user has to rely on the security procedures adopted by the cloud provider. A Private Cloud is one that is either owned or leased by a single enterprise, therefore giving the user the confidence that information and applications are locked away from others. Of course, Public Cloud providers will point to sophisticated security measures to mitigate any risk, but this can never feel as safe to a worried executive as ‘owning’ the resources.

Now, it is true that a Public Cloud will always be off-premise, by definition, and this may be why these two Cloud characteristics have become intertwined. However, a Private Cloud does not have to be on-premise – for example, if a client contracts with a third party to provide and run an exclusive cloud which can only be used by the client, then this is a Private Cloud but it is off-premise. It is true that USUALLY a Private Cloud will be on-premise, and hence equate to an Internal Cloud, but the two terms are not equal.
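
One way to see that the two characteristics are independent is to treat them as two separate attributes of a deployment. The little Python sketch below is purely illustrative – the example names are my own.

    # Illustrative only: location (where the resources are) and access (who may
    # use them) are separate characteristics, so Private does not imply Internal.

    from dataclasses import dataclass

    @dataclass
    class CloudDeployment:
        location: str   # "on-premise" (Internal) or "off-premise" (External)
        access: str     # "public" (open to all) or "private" (one enterprise only)

    examples = {
        "Amazon EC2 used in the normal way":            CloudDeployment("off-premise", "public"),
        "A cloud built in your own data centre":        CloudDeployment("on-premise", "private"),
        "An exclusive cloud run for you by a provider": CloudDeployment("off-premise", "private"),
    }

    for name, deployment in examples.items():
        print(f"{name}: {deployment.location}, {deployment.access}")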

The best thing any manager or exec trying to understand the company approach to Cloud can do is to look at these two decisions separately – do I want the resources on or off premise, and do I want to ensure that the resources are exclusively for my use, or am I prepared to share? It is a question of balancing risk against the greater potential for cost savings.

Steve

Is Cloud lock-in a good thing, or bad?

I am doing a lot of research into Cloud Computing at the moment, and spent an enjoyable morning with Salesforce.com, one of the largest Cloud vendors.

However, one thing that particularly piqued my interest was the discussion on Cloud lock-in. One of the most frequent concerns I hear from companies thinking about Cloud is that they are worried about vendor lock-in. After all, with Cloud being so new, what if you lock into a supplier who does not survive?

The discussions with Salesforce.com highlighted an interesting aspect to this debate. One of its offerings, force.com, provides a ‘Platform as a Service’ (PaaS) cloud offering, where users are presented with an environment in the cloud complete with a whole host of useful tools to build their own applications to run in the cloud or customize existing ones. However, Salesforce.com offers its own programming environment which is “java-like” in its own words. This immediately raises the lock-in concern. If a company builds applications using this, then these applications are not portable to other Java environments, so the user is either stuck with Salesforce.com or faces a rewrite.

A bad thing, you might think. BUT Salesforce.com claims that the reason it has had to go with a Java-like environment is that this enables it to provide much improved isolation between different cloud tenants (users) and therefore better availability and lower risk. For the uninitiated, the point about Cloud is that lots of companies share the same cloud in what the industry calls a multi-tenancy arrangement, and this obviously raises a risk that these tenants might interfere with each other in some way, either maliciously or accidentally. Salesforce.com has mitigated that risk by offering a programming environment that specifically helps to guard against this happening, and hence differs from pure Java.
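
For the sake of illustration, one common way a multi-tenant platform guards against one tenant hogging shared resources is to enforce per-tenant limits on every request. The sketch below is my own toy example of that general idea in Python – the limits are invented numbers, and it is emphatically not how Salesforce.com actually implements its isolation.

    # Illustrative only: a toy per-tenant "governor" that caps what any one
    # tenant can consume per request. The limits are invented numbers.

    class GovernorLimitExceeded(Exception):
        pass

    class TenantContext:
        """Tracks one tenant's resource usage and enforces hard caps."""

        LIMITS = {"queries": 100, "cpu_ms": 10_000}

        def __init__(self, tenant_id):
            self.tenant_id = tenant_id
            self.usage = {resource: 0 for resource in self.LIMITS}

        def consume(self, resource, amount=1):
            self.usage[resource] += amount
            if self.usage[resource] > self.LIMITS[resource]:
                # Only the offending tenant's request is stopped; its
                # neighbours in the same cloud carry on unaffected.
                raise GovernorLimitExceeded(
                    f"{self.tenant_id} exceeded the {resource} limit")

    ctx = TenantContext("acme-corp")
    for _ in range(50):
        ctx.consume("queries")        # well within the cap, so no complaint
    print("usage so far:", ctx.usage)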

So, is this lock-in a bad thing or good? I don’t know whether Salesforce.com could have achieved its aims a different way, and I have to admit that to a cynic like me the fact that solving this problem ‘unfortunately’ locks you into the supplier seems a bit suspicious. However, this is irrelevant since the vendor is doing the work and has chosen its implementation method, which it is of course free to do. Therefore, the question facing the potential force.com user is simple – the strategic risk of being locked in to the supplier has to be balanced against the operational risk of possible interference from other tenants. Depending on how the user reads this balance, this will determine how good or bad the lock-in option is.

Steve

At last – Cloud Computing Clarified!

No-one can have missed the marketing and media explosion over Cloud Computing.

Vendors talk of nothing else in an attempt to hook onto this latest hype, and analyst and media firms have stirred the pot. However, although very early in its lifecycle, there really could be value in Cloud Computing in the future.

But the problem is no-one seems to be able to cut through all the conflicting messages to describe what the emerging Cloud Computing market actually looks like – UNTIL NOW! Danny Goodall, marketing strategist and guru at Lustratus, has just published his Cloud Computing market landscape in the Lustratus REPAMA blog. The blog post includes a slide presentation that summarizes in simple terms the different Cloud Computing models, splits these models into easy-to-understand pieces and then helpfully lists a selection of vendors playing in each.

This presentation is well worth a read for anyone interested in Cloud Computing. For myself, I think I would like to be the first to propose a possible extra category to be included – a new Cloud Platform service, B2B. As communities move to Cloud, it is likely there will be more and more need for B2B linkage, and although Danny includes an Integration platform service which is similar in nature to B2B, B2B actually has specific requirements that would not fit in most integration services.

Overall, though, it is a great piece of work that should provide a strong base for understanding the developing Cloud Computing market.

Steve

BAM vs BI

Lustratus recently received a comment to a post I wrote a couple of years back on IBM’s acquisition of Cognos.

The comment asked whether this meant IBM now had two BAM tools, COGNOS and WebSphere Business Monitor, and I thought that rather than respond to the original and now very old post I would create a new post, since this question actually crystallizes a very contemporary confusion over the roles of BAM and BI.

BI (Business Intelligence) is the term that originally emerged to describe the market for tools to find out more about data. Typically, BI tools aggregated data and provided ‘slice and dice’ services to look at data in different ways, correlating it and detecting interesting patterns. So, as a simple example, examining sales information allowed Amazon to identify trends in related customer buying – hence when you buy a DVD Amazon can helpfully pop up and point out that ‘people who bought this DVD also bought….’ to try to accelerate sales based on buying patterns. The key characteristic of BI was that it was typically a static activity, usually carried out against historical data. In modern times, however, it is more and more related to the analysis of any data, whether static or dynamic. COGNOS was a leading supplier of BI tools.
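
As a toy illustration of the kind of static, after-the-fact analysis involved, the Python fragment below aggregates a handful of invented historical orders to find frequently co-purchased items. It has nothing to do with Amazon’s or COGNOS’s actual technology – it is just to show the flavour of BI.

    # Illustrative only: a toy "people who bought this also bought..." analysis
    # over a handful of invented historical orders - static, after-the-fact BI.

    from collections import Counter
    from itertools import combinations

    orders = [
        {"Casablanca", "The Maltese Falcon"},
        {"Casablanca", "The Maltese Falcon", "Notorious"},
        {"Casablanca", "Notorious"},
    ]

    pair_counts = Counter()
    for basket in orders:
        for pair in combinations(sorted(basket), 2):
            pair_counts[pair] += 1

    # The most frequently co-purchased pairs drive the recommendation.
    for pair, count in pair_counts.most_common(2):
        print(pair, "bought together", count, "times")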

BAM (Business Activity Monitoring) was the term coined to describe tools that were primarily focused on analysing behaviour of run-time applications rather than static data. An example here might be monitoring a loans application in order to see how often loan requests are having to be queued up for supervisor approval rather than executed under the authority of the loans advisor. A trigger could then be defined to highlight excessive involvement of the supervisor which might indicate other problems, such as inadequately trained loans advisors.
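
By contrast, a BAM-style monitor watches the events as they occur and raises an alert when a threshold is crossed. Here is a minimal sketch of the loans example, again in Python, with an invented event format and invented thresholds, purely to illustrate the idea.

    # Illustrative only: a minimal BAM-style monitor over a stream of loan
    # request events, alerting when too many need supervisor approval.

    def monitor_loans(events, window=20, threshold=0.3):
        """Yield an alert whenever, over the last `window` requests, more than
        `threshold` of them had to be escalated to a supervisor."""
        recent = []
        for event in events:                      # events arrive as they happen
            recent.append(event["escalated_to_supervisor"])
            recent = recent[-window:]
            if len(recent) == window and sum(recent) / window > threshold:
                yield f"ALERT: {sum(recent) / window:.0%} of recent loans escalated"

    # Invented sample: every other loan request is escalated.
    sample = [{"escalated_to_supervisor": i % 2 == 0} for i in range(40)]
    alerts = list(monitor_loans(sample))
    print(alerts[0] if alerts else "no alerts raised")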

So, to reinforce this distinction, an executive view of BAM might be a dashboard display that shows the operational performance of the business in real time, with a colour-coded scheme to point out areas of concern. In contrast, the BI view might be of a set of charts in a presentation that describe business or buyer trends based on analysis of company performance over the last three months, enabling new initiatives to support competitiveness or access new business opportunities.

Over time, these two markets have tended to overlap. After all, both markets involve the steps of gathering information, and then analysing it. While gathering info will be completely different in the two cases (looking at data files vs monitoring business application and process execution) the analysis may well involve the same procedures, and hence BI analysis technology may well be used to enhance BAM offerings. However, there is a more pressing reason for linking the two areas. More and more companies are looking to ‘BAM’ as a way to optimize and enhance operational execution, and it is foolish to limit its scope to just looking at application performance. The user really wants to take into account all information in order to reflect corporate performance and identify opportunities. This covers both real-time execution AND what is happening to the data files.

However, because of the different smarts required for these two areas, it is unlikely that products are going to merge – in other words IBM is unlikely to replace COGNOS and WebSphere Business Monitor with a single product. This would make little sense. Instead companies are likely to improve the linkage and integration between these two distinct areas of technology.

Steve

Come in Texas East District Court, your time is up

If there is one thing guaranteed to get me gnashing my teeth, it is the role of the Texas Eastern District Court as the bully boy of the crumbling US Software patents world.

For those unfamiliar with this marvelous district court, every major software patent suit has been brought in this court, regardless of where the claimant or defendants are based. My own opinion is that just as the UK is the infamous world capital for divorce settlements because of its apparent unique and extensive bias towards the wife, the Texas Eastern District Court has the same level of notoriety for software patents with its apparent unprecedented bias towards the plaintiffs. Any self-respecting patent troll (if that is not an oxymoron) will be quick to praise the name of the Texas Eastern District Court.

The latest in this long line of cases appears to be a couple of suits raised by a guy called Mitchell Prust, of Minnesota US, against Apple and others, that are threatening to completely derail the Cloud Computing model. These two cases can be taken as the tip of the iceberg – expect more to appear in the same courtroom. Essentially Prust got three patents approved in the area of remote storage management, the earliest in 2000 – these patents basically deal with the virtualization of storage to allow multiple users across the world to carve out their own little space and manage and use it, as Cloud users do.

One thing that has forever confused me is how patents get approved in the US system. Anyone who knows IT will probably be aware that the IBM VM (Virtual Machine) operating system that started in the late 1960s provided this type of storage virtualization. Perhaps the difference with these patents is that each makes a big thing of the client system being attached through ‘a global computer network’. The implication is this means the Internet, which would rule out the IBM VM solution which clearly predates the Internet. However, global access to these systems through global networks was certainly possible in the old days too – when I worked in IBM in the 80s I was able to log on from a remote location across the network, and then continue to interact with my virtualized piece of the greater storage pool. Does this equate to a ‘global computer network’? Seems to me to be pretty damn close.

This brings up an interesting point. One reason this particular court is popular is that it has a habit of taking definitions in the patent claims, and interpreting them in a most eccentric way. In a recent case, still ongoing, the Texas Eastern District court judge decided on a definition of ‘Script’ that was a mile from what most IT people would think, and therefore instead of that particular patent covering software that employed scripts in the IT sense, it now covers a far wider set of products that are in reality nothing to do with scripts. For reference, the definitions for script (and I am indebted here to Vincent McBurney’s painstaking tracking of the case) were as follows (and remember this was a patent to do with data movement):

SCRIPT

  1. Plaintiff: a group of commands to control data movement into and out of the system, and to control data transformation within the system
  2. Defendants: A series of text commands interpretively run by the script processor, such that one command at a time is translated and executed at runtime before the next command is translated and executed, and that control data movement into and out of the system and control data transformation within the system
  3. Judge: a group of commands to control data movement into and out of the system, and to control data transformation within the system

So, according to this definition, any code, for example a GUI or an executing program, that controls data movement based on some sort of input is now classed as a ‘script’.
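
Just to illustrate the gap, what most IT people would recognise as a script – and what the defendants’ definition describes – is a series of text commands interpreted one at a time, each translated and executed before the next is read. Something like this deliberately trivial Python sketch, where the commands and file names are invented:

    # Illustrative only: a "script" in the defendants' sense - text commands
    # interpreted one at a time to control data movement and transformation.

    script = """
    load   input.csv
    filter amount > 100
    save   output.csv
    """

    def run(script_text):
        for line in script_text.strip().splitlines():
            command, _, argument = line.strip().partition(" ")
            # Each command is translated and executed before the next is read.
            print(f"executing {command!r} with argument {argument.strip()!r}")

    run(script)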

If the Court follows the same approach in the case of these remote data storage patents, it could not only derail Cloud Computing but do a fairly comprehensive job of annihilating the virtualization market too.

Somehow, order has to be restored to the much-maligned US software patent system. It is absolutely right and proper that inventors should be properly recompensed for their innovations – this is healthy, and stimulates technology advancement. But to me the clear indication of the failure of the system is that every plaintiff heads to East Texas, presumably because it gives the answer the plaintiff wants to hear. Statistics appear to bear this out. The implication is that any other court in the land would risk a less favourable judgement…dare I say it, perhaps a more just one?

I’ll sign off with the old joke about the soldier marching with his unit past a collection of family members. A spectator turns to a woman watching the march and says, ‘Madam, your son is marching out of step!’ The woman replies, ‘No Sir, he is the only one marching IN step’…

Steve

SOA / ESB confusion

I recently commented on a query in another blog about ESBs, and what they are in relation to SOA.

Since this is a subject that I continue to get asked every now and then, even though I have blogged on it a number of times before, I thought I would reproduce the response I gave.

It starts right at the ESB beginning, and concludes with a few old Lustratus paper references that I believe are still relevant in introducing the ESB and SOA concepts.

When the Enterprise Service Bus concept started off life in the mid 90s it was as an extension of a messaging pipe – that is, message based communications. Prior to the ESB, messages were sent with tools such as IBM’s MQSeries (now WebSphere MQ), Progress Software’s Sonic MQ and a range of others, particularly JMS (Java Message Service) implementations. Users quickly realized that more was needed than just the ability to send a message from A to B – value add capabilities were needed such as data transformation, message enrichment and dynamic routing of messages based on content or other circumstances. This resulted in the emergence of ‘message brokers’ – pieces of code that acted as a ‘hub’, where any required actions could be taken on in-flight messages. This is where the ‘hub and spoke’ concept that was the basis of the EAI market came from. Messages from A to B would go via the hub so that the required intelligence could be applied, with the A and B endpoints requiring little intelligence at all.
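
To picture the hub-and-spoke arrangement, here is a toy Python sketch of a broker hub applying transformation and content-based routing to in-flight messages. The field names, rules and queue names are invented, and this has nothing to do with any actual product.

    # Illustrative only: a toy hub-and-spoke broker. The endpoints just send
    # raw messages; all the intelligence (transformation, content-based
    # routing) lives in the hub.

    def transform(message):
        """In-flight enrichment: ensure the currency field is present and upper case."""
        enriched = dict(message)
        enriched["currency"] = message.get("currency", "gbp").upper()
        return enriched

    def route(message):
        """Content-based routing: large orders go to a priority queue."""
        return "priority_orders" if message["amount"] > 10_000 else "standard_orders"

    def hub(message, queues):
        """Every message passes through the hub on its way from A to B."""
        message = transform(message)
        queues.setdefault(route(message), []).append(message)

    queues = {}
    hub({"order_id": 1, "amount": 25_000, "currency": "usd"}, queues)
    hub({"order_id": 2, "amount": 300}, queues)
    print({name: len(messages) for name, messages in queues.items()})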

However, two things happened that caused the ESB concepts to emerge around 1996-97. Standards activity in the integration marketplace increased and took root, and users wanted to find ways to lower the entry price for integration – having to buy a hub was very expensive, particularly when connections were few in the early stages of integration development. There were also fairly groundless concerns about availability with the hub and spoke model due to the perceived single point of failure. As a result, the ESB emerged.


The ESB did two important things – it leveraged major standards, in particular the web services standards offering a standard way to invoke the messaging capabilities of the bus, and it adopted a ‘bus’ rather than ‘hub and spoke’ architecture which resulted in a much lower deployment cost, at least in the early stages of integration development. The bus concept involves placing more intelligence at each node, so that messages can flow from A to B without going through the hub. In-flight message processing happens at the individual nodes rather than at the hub.


So, an ESB is actually just a ‘smart’ communications pipe, providing not just a way to transfer the messages from A to B but also the in-flight capabilities (often called mediation services) required. In addition, this is all available under a layering of standards. This is why typically ESBs are used with web services invocations, and often utilize JMS servers for the actual transfer mechanism.

SOA is something much bigger. It is a way of architecting your IT programs around a service-oriented concept. The absolute key to this is that an SOA service relates to a BUSINESS piece of functionality as opposed to some programming activity. So, you do not have an SOA service to read a record from a file, but rather an SOA service to ‘Get Customer Details’ which internally will end up reading customer information from files and so on. The secondary characteristic is that this service must be able to be invoked from anywhere, with no requirement to know where the target service will actually run. Therefore, it is clear to see that SOA requires some sort of communications capability, and while this does not have to be an ESB, the ESB fits the role very well particularly with its affinity to standards such as web services.
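
To illustrate the granularity point, here is a bare-bones Python sketch of a business-level service: the exposed operation is ‘Get Customer Details’, and the low-level record reads stay hidden behind it. The class and data are invented for illustration only, not a prescription for how such a service should be built.

    # Illustrative only: an SOA service exposes a BUSINESS operation and hides
    # the low-level record reads behind it. The class and data are invented.

    class CustomerService:
        """Callable from anywhere; the caller neither knows nor cares where it
        runs or how the underlying records are fetched."""

        def get_customer_details(self, customer_id):
            record = self._read_customer_record(customer_id)
            orders = self._read_recent_orders(customer_id)
            return {"customer": record, "recent_orders": orders}

        # The steps below are the programming-level activities a caller never sees.
        def _read_customer_record(self, customer_id):
            return {"id": customer_id, "name": "A. N. Example"}

        def _read_recent_orders(self, customer_id):
            return [{"order_id": 42, "customer_id": customer_id}]

    print(CustomerService().get_customer_details("C123"))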

There are a number of free white papers at lustratus.com that discuss this topic in more detail, particularly ‘SOAs and ESBs’, ‘What is an SOA Service’ and ‘The Year of ESB-ability’.

Steve

Software AG sitting pretty?

Software AG seems to be defying predictions and surprising the market at every turn.

Once seen as a sleepy European software house based largely around legacy system technologies, it has taken major strides to transform itself into a major global software industry player. Its acquisition of webMethods a few years ago surprised the market, with many analysts unconvinced that it could make a go of the move into integration / SOA middleware, but it has done a fair job of building some momentum by tying the webMethods portfolio up with its own CentraSite governance technology, providing service-oriented architecture (SOA) with integrated governance.

Then it once again shocked the market by snatching IDS Scheer, the well-known supplier of modelling tools, from under SAP’s nose. Given that the IDS Scheer technology is used by most of the major SOA suppliers across the world for modelling, and in particular is a key part of the SAP portfolio, this would appear to give Software AG lots of cross-sell opportunities across the two customer bases and throughout the SAP world.

Now it has announced its 2Q09 results, and they make pretty good reading on the surface. A 9% increase in product revenues is particularly noteworthy given that so many companies are struggling to show any year-on-year growth in product sales. However, before getting too carried away it is worth delving a little deeper into the numbers. The product revenue numbers include maintenance as well as license sales. License sales actually fell, as with most other companies. Maintenance revenues jumped by 20% – does this mean that the company has built a much larger maintenance base, or is it actually a reflection of a more aggressive pricing policy? Then there is the split between the legacy business (ETS) and the SOA/BPM business (webMethods). License revenues in this segment were down 15% – not very encouraging since this is the strategic business unit. Also, it is noticeable that maintenance revenue in each segment increased by about 20%, suggesting that this rise does indeed reflect a price hike.

However, taking all this into consideration, Software AG is still looking to have moved forward substantially from a few years ago, and assuming the IDS Scheer acquisition goes through OK there should be lots of opportunities for the company. Of course, a cynic might point out that by adding IDS Scheer to the webMethods portfolio, the company has made itself a highly attractive acquisition target to someone – perhaps SAP?!

Steve

Micro Focus ReUZE misses the point

Micro Focus announced its latest mainframe migration tool, ReUZE yesterday – and once again it has completely missed the point.

The background is that for companies looking to move off the IBM mainframe, Micro Focus has been offering solutions for a number of different target platforms, but in each case the solutions have been based around the old emulation concept. Once again, it seems the company has fallen into the same trap. As the press release states:

Unlike other solutions which insist on rewriting mainframe application data sources for SQL Server, or removing mainframe syntax from programs, the Micro Focus solution typically leaves the source code unchanged, thereby reducing costs, risk, and delivering the highest levels of performance and reliability.

The closing claim in this statement is where I have a problem. Micro Focus seems to think that by offering an emulated environment for mainframe applications, it is reducing risk and delivering the best possible performance and reliability. But this is a load of rubbish. Think about it from the point of view of the mainframe user that has decided to move away from the mainframe – in this case to a Microsoft environment. This is a big step, and the company concerned must be pretty damn sure this is what it wants to do. It has obviously decided that the Microsoft environment is where it wants to be, and as such surely this will include moving to a Microsoft skills set, Microsoft products and tools – database, security, and all the rest. So why settle for an emulation option?

The point Micro Focus has missed is that emulation is a way of propagating the old. After all, it originally stemmed from terminal emulation, where the object was to make sure that end users still saw the same environment even when their workstation technology changed. This was very sensible, because it focused on the right priority – don’t force the end users to have to retrain. But let’s be clear – emulation costs. It provides an extra layer of software, affecting performance and scalability, and puts future development in a straitjacket because it propagates the old way of doing things. However, in that case the cost of retraining end users would far outweigh these implications.

But in the situation where a user is moving off the mainframe to a Microsoft world, why would the user want to propagate the old? Yes, the user wants to reuse the investments in application logic and data structure and content, but surely the user wants to get to the destination – not be stuck in purgatory, neither in one place nor the other. Why restrict the power of .NET by forcing the user to operate through an insulating emulation environment? Why hold the user back from moving into the native .NET database system of SQL Server and thereby leveraging the combined power of the operating system, database and hardware to maximum effect? Why force the user to maintain a skills set in the mainframe applications when one of the reasons for moving may well have been to get to a single, more readily available and cheaper one?

Yes, the Micro Focus approach may end up reducing the risk of the porting process itself, since it tries to leave mainframe code unchanged, but that is a long way from reducing the risk of moving from one world to the other. And as for the comments on leaving everything unchanged to ‘deliver the highest levels of performance and reliability’, that is just laughable. What makes Micro Focus think that the way an application is designed for the mainframe will deliver optimal performance and reliability in a .NET environment? The two environments are completely different, with totally different characteristics. And when has an emulation layer EVER improved performance or reliability?

I see this ReUZE play as like offering someone drugs. If you’ve decided you want to move off the mainframe to .NET, I have a drug here that will reduce the pain. You will feel better …. honest. But the result is you will be left hooked on the drug, and won’t actually get where you want to be. If you have decided this migration is for you, don’t try to cut corners and fall for the drug – do the job properly and focus on the end goal rather than the false appeal of an easy journey. Just Say No.

Steve

SOA success, and what causes it

I was recently pointed to an article in Mainframe Executive magazine written by David Linthicum on the subject of “Mainframe SOA: When SOA Works/When SOA fails”.

I think the friend who suggested I read it was making mischief, knowing my views on the subject of SOA and guessing (correctly) that this article would wind me up.

In summary, the article says that SOA is a large and complex change to your core architecture and working practices and procedures, and that the success or failure is dictated by questions such as executive buy-in/resourcing/funding/skills, and not technology selection.

The truth about success with SOA is that it has little to do with the technology you want to drag into the enterprise to make SOA work, and more to do with the commitment to the architectural changes that need to occur

I have two problems with the opinions stated in this article. The first is to do with changing attitudes to SOA, and the second with the technology comments.

Let me first state that I am well aware that if a company wants to adopt an enterprise-wide SOA strategy designed to take maximum long-term benefit from this new way of leveraging IT investments, then this requires all of the areas brought up in the article to be addressed – skills, management buy-in, political will, funding and a strategic vision coupled with a tactical roadmap. I have no beef with any of this.

But I would contend that the world has changed from two years ago. The financial constraints all companies are experiencing have more or less forced the long-term strategic play onto the back burner for many. Some analysts actually like to claim that SOA is dead, a statement designed to be controversial enough to gain attention but to some extent grounded in the fact that a lot of companies are pulling back from the popular SOA-based business transformation strategies of the past. In fact, SOA is absolutely not dead, but it has changed. Companies are using SOA principles to implement more tactical projects designed to deliver immediate benefits, with the vague thought of one day pulling these projects together under a wider strategic, enterprise-wide SOA banner.

So, as an example, today a company might look at a particular business service such as ‘Create Customer’, or ‘Generate Invoice’, and decide to replace the 27 versions of the service that exist in its silos today with a single shared service. The company might decide to use SOA principles and tools to achieve this, but the planning horizon is definitely on the short term – deliver a new level of functionality that will benefit all users, and help to reduce ongoing cost of ownership. While it would have been valid a few years ago to counsel this company to deliver this as part of an overarching shift to an SOA-oriented style of operations, today most companies will say that although this sounds sensible, current circumstances dictate that focus must remain on the near term.

The other issue I have with this article is the suggestion that SOA success has little to do with the technology choice. Given that the topic here was not just SOA but mainframe SOA, I take particular exception to this. There are a wide range of SOA tools available, but in the mainframe arena the quality and coverage of the tools vary widely. For example, although many SOA tools claim mainframe support, this may in actuality simply be an MQ adapter ‘for getting at the mainframe’. Anyone taking this route is more than likely to fail with SOA, regardless of how well it has taken on the non-technical issues of SOA. Even for those SOA tools with specific mainframe support, some of these offer environments alien to mainframe developers, thereby causing considerable problems in terms of skills utilization. It is critical that whatever technology IS chosen, it can be used by CICS- or IMS-knowledgeable folk as well as just distributed specialists. Then there is the question of how intuitive the tools are. Retraining costs can destroy an SOA project before it even gets going.

For anyone interested, there is a free Lustratus report on selecting mainframe SOA tools available from the Lustratus store. However, I can assure companies that, particularly for mainframe SOA, technology selection absolutely IS a key factor for success, and that while all the other transformational aspects of SOA are indeed key to longer term, enterprise-wide SOA there are still benefits to be gained with a more short-term view that is more appropriate in today’s economic climate.

Steve