IBM LinuxONE: what’s in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice in the next 5 years. As long as IBM doesn’t trip over its own feet to snatch defeat from the jaws of victory…

Let’s just take a moment to reflect on what IBM’s got. The LinuxONE currently comes in two sizes: the full-scale enterprise Linux server (Emperor) and an entry-level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉. LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability, coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose between Red Hat, SuSE and Ubuntu environments; a range of hypervisors such as KVM and PR/SM; familiar languages such as Python, Perl, Ruby, Rails and Node.js; and various databases like Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option being offered. The LinuxONE is even the basis of an IBM community cloud being rolled out now.
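To a developer, the practical point of all this openness is that code written for Linux elsewhere should run unchanged; about the only visible difference is the machine architecture Linux reports (s390x on IBM Z / LinuxONE). A minimal Python sketch of that idea, where the architecture-to-name mapping is purely my own illustration:

```python
import platform

# Map Linux machine architectures to friendly hardware names.
# "s390x" is what Linux reports on IBM Z / LinuxONE; the friendly
# names themselves are illustrative assumptions, not official labels.
ARCH_NAMES = {
    "s390x": "IBM Z / LinuxONE",
    "x86_64": "64-bit x86",
    "aarch64": "64-bit Arm",
}

def describe_arch(machine: str) -> str:
    """Return a friendly name for a Linux machine architecture string."""
    return ARCH_NAMES.get(machine, f"unknown ({machine})")

if __name__ == "__main__":
    # On a LinuxONE guest this would print "IBM Z / LinuxONE";
    # elsewhere it simply reports the local architecture.
    print(describe_arch(platform.machine()))
```

The same script runs anywhere Python runs, which is exactly the portability argument being made for the platform.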

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. To this base IBM has added its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The ‘but’ is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success under the most demanding conditions. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let’s split the world into three camps: mainframe fans, those who do not know about mainframes, and the mainframe ‘haters’. Perhaps ‘haters’ is a bit strong, but there is absolutely no doubt that a significant number of companies across the world, for various reasons, see ‘mainframe’ as almost a derogatory word: old-fashioned, expensive, etc. So how will the three markets react to the LinuxONE? IBM mainframe fans don’t need to be told it is a mainframe; they know, and they will also usually have an IBM rep who will be pointing it out with great frequency! The uninitiated who know nothing of mainframes would see no plus or minus in being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports and so on. But the third category can only see the ‘mainframe’ word as a negative.

I can almost hear some people pointing out that this is a silly argument; that anyone who starts to look at the LinuxONE and who knows anything will quickly work out it is essentially an IBM mainframe. But I would submit that is not the point. The reaction to the mainframe word will put the third group off taking a closer look in the first place. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forwards in their company without overtly exposing the ‘mainframe’ word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove ‘mainframe’ from all the literature. Replace it with ‘server’ or ‘Linux server’ or ‘enterprise Linux server’ or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

webMethods gets MDM with Data Foundations acquisition

Software AG, the owner of the popular webMethods suite of SOA and BPM products, has acquired Data Foundations, the US-based Master Data Management (MDM) vendor. This is a great acquisition, because the single version of the truth provided by MDM technology is often an essential component of business process management applications.

The only issue is that there is an element of catch-up here, since major BPM/SOA vendors like IBM and Oracle have had MDM capabilities for some time. But putting that aside, the fit between Data Foundations, Inc. and Software AG looks very neat. There is no product overlap to worry about, and the Data Foundations solution excels in one of the key areas that is also a strength for Software AG – that of Governance. Software AG offers one of the best governance solutions in the industry, built around its CentraSite technology, and Data Foundations has also made governance a major focus, which should result in a strong and effective marriage between the two technology bases. From a user perspective, MDM brings major benefits to business process implementations controlled through BPM technology, because data accuracy and uniqueness enable more efficient solutions, eliminating duplication of work and effort while avoiding the customer relations disaster of marketing to the same customer multiple times.
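To make the “single version of the truth” point concrete, here is a minimal Python sketch of the kind of duplicate consolidation an MDM hub performs. The matching rule (normalized name plus postcode) is a deliberately naive assumption of mine; real MDM products use far richer matching and survivorship rules.

```python
def normalize(record):
    """Build a naive match key: lowercased name plus squashed postcode."""
    return (record["name"].strip().lower(), record["postcode"].replace(" ", ""))

def consolidate(records):
    """Collapse duplicate customer records into one 'golden' record each."""
    golden = {}
    for rec in records:
        key = normalize(rec)
        # The first record seen wins; later duplicates only fill in gaps.
        merged = golden.setdefault(key, dict(rec))
        for field, value in rec.items():
            merged.setdefault(field, value)
    return list(golden.values())

customers = [
    {"name": "ACME Ltd", "postcode": "SW1A 1AA"},
    {"name": "acme ltd ", "postcode": "SW1A1AA", "phone": "020 7946 0000"},
    {"name": "Widget Co", "postcode": "EC1A 1BB"},
]
# The two ACME records collapse into one golden record, so marketing
# mails ACME once, not twice.
print(len(consolidate(customers)))  # → 2
```

The two ACME entries merge, carrying the phone number from the duplicate into the golden record, which is exactly the duplication-of-effort saving described above.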

Good job Software AG.

IBM reinforces its Appliance strategy with acquisition of Netezza

When IBM acquired DataPower’s range of appliances in 2005, it caused some raised eyebrows; was IBM really serious about getting into the appliances game? Subsequently the silence from IBM was deafening, and people were starting to wonder whether IBM’s foray into the appliances market had fizzled out. However, 2010 has been the year when IBM has made its strategic intent around appliances abundantly clear.

First it acquired Cast Iron, the leading provider of appliances for use in Cloud Computing, and now it is buying Netezza, one of the top suppliers of data warehouse appliances. Netezza has built up an impressive market presence in a very short time, dramatically accelerating time to value for data analytics and business intelligence applications. In addition, IBM has continued to extend its DataPower range, with the addition of a caching appliance and the particularly interesting ‘ESB-in-a-box’ integration appliance in a blade form factor. For any doubters, IBM has clearly stated its intention of making appliances a key element of its strategic business plans.

This just leaves the question of why. Of course the cynical answer is that IBM must see itself making a lot of money from appliances, but behind this lies the fact that appliances must be doing something really useful for users. The interesting thing is that the key benefits are not necessarily the ones you might expect. In the early days of appliances such as firewalls and internet gateways, one key benefit was the security of a hardened device, particularly outside the firewall. The other was commonly performance, with the ability in an appliance to customize hardware and software to deliver a single piece of functionality, for example in low-latency messaging appliances. But the most common driver for appliances today is much broader: appliances reduce complexity. An appliance typically comes preloaded, and can replace numerous different instances of code running on several machines. You bring in an appliance, cable it up and turn it on. It offers a level of uniformity. In short, it makes operations simpler, and therefore cheaper to manage and less susceptible to human error.

Perhaps it is this simplicity argument and its harmonization with current user needs that is the REAL driving force behind IBM’s strategic interest in Appliances.

Was Vision Solutions right to acquire Double-Take?

On May 17th Vision Solutions announced that it was acquiring Double-Take – bringing together its own IBM Power HA/Disaster Recovery business and Double-Take’s strength in the same market with Windows and to a lesser extent Linux. But did it do the right thing?

On the face of it, this was a natural move for Vision. Its strength was in providing availability solutions based on IBM’s 64-bit Power servers. This is a strong and influential market, but nowhere near the size of the Windows market. Also, since there are few players in the more specialist IBM Power market, Vision already holds a reasonable share. For growth, it was essential for Vision to do something. It had choices – it was already partnering with Double-Take to supply Windows availability to its IBM base, and had other partners for Linux. But by choosing to acquire Double-Take, it is definitely buying a Windows availability provider who also does Linux rather than the other way around.

My concern is that my instincts tell me the Linux market is going to be much more interested in availability than the Windows one. UNIX in general is a more secure and robust operating system than Windows, and therefore people using Linux may have greater expectations of availability, provided through file mirrors, backups, network switchovers and disaster recovery scenarios. In contrast, any user of Windows knows that the system is always going to have its availability problems – just think how many times you have to reboot your own Windows laptop or desktop. The question is therefore: how strong and widespread is the demand for availability on Windows?

At some level, there is demand. For example, a lot of people would like their office data backed up so they can get it back quickly if there is a problem. But beyond this basic data level, it seems to me demand may be limited. Indeed, this seems to be backed up by the fact that in the past year or so Double-Take has suffered slowing growth and market pressures on its finances. Maybe Vision would have been wiser to acquire a Linux availability provider, like SteelEye Software for instance.


OK, so when IBM briefed me a few weeks ago on the new announcement about PHP support for CICS, I almost fell off my chair. IBM asked me what I thought, and I said I was horrified… taking something as reliable and trustworthy as CICS and throwing it into the wild, unkempt PHP world just left me filled with dread. But on hearing more, my concerns were largely put to rest, and my message to others with the same initial reaction as me is ‘Don’t Panic’.

The initial description to me was ‘adding PHP support for CICS transactions’. Now I am not so old that I do not understand the power of PHP, and its ability to quickly generate nice, modern interfaces for websites and the like. But my own experience of PHP is playing games on the Internet (“Sorry the server has crashed, the damn PHP code has gone pear-shaped again”) and messing about building pages and making a mess of them. I therefore initially viewed the idea of PHP in CICS as a great way to take reliable applications and make them unreliable and unpredictable, while probably crashing the rest of the innocent CICS apps at the same time.

However, it turns out IBM is not stupid. The biggest point that relieved my fears is that the PHP support is provided in its own address space. Now, CICS is REALLY good at protecting different address spaces from hurting each other – in fact I was part of the team that delivered the multi-region operations (MRO) capabilities, so I can vouch personally that this is the case. So all of a sudden, what had me running screaming for the hills begins to sound like something quite exciting and yet also non-threatening. As I thought about it more (and talked to some people half my age who are PHP fans and really understand the sorts of things it can do) I began to realize how smart IBM has been here. This is a great way to provide a more flexible and rapid way to build jazzy front ends to CICS apps, extending their life substantially. It also offers the modern wave of technical people an environment with which they are intimately familiar.
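The isolation argument is easy to demonstrate in miniature. The sketch below is not CICS, just an analogy in Python using separate OS processes: a crash in the untrusted “scripting” process leaves the parent completely untouched, which is the same principle a separate address space gives the PHP support.

```python
import subprocess
import sys

# Stand-in for unpredictable front-end code running in its own process:
# it simply blows up. The snippet text is purely illustrative.
CRASHY_SNIPPET = 'raise RuntimeError("the PHP code has gone pear-shaped again")'

def run_isolated(snippet: str) -> int:
    """Run a snippet in a separate OS process and report its exit code."""
    result = subprocess.run([sys.executable, "-c", snippet])
    return result.returncode

if __name__ == "__main__":
    code = run_isolated(CRASHY_SNIPPET)
    # The child process died with a non-zero exit code, but this
    # process carries on regardless.
    print("worker exit code:", code)
    print("parent still running")
```

The failure is contained at the process boundary, which is why putting the new runtime in its own address space turns a terrifying idea into a non-threatening one.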

The upshot is, PHP support for CICS looks like a winner. There is no need to panic about disruption to operations, because of IBM’s smart thinking in isolating the PHP functionality; on the other hand, this support offers companies a way to leverage their CICS investments, keep the technology vital and alive, respond far more quickly to the need for more attractive interfaces enabling more effective multi-channel delivery, and get the kids excited and contributing.

Progress Software acquires Savvion

So Progress Software has bought yet another software company; this time a BPM vendor, Savvion. But is this the right move for Progress?

Progress Software has spent most of its life growing through acquisition, making use of the piles of cash generated by its legacy mid-range database product to find new areas of growth. After all, the legacy business may be highly profitable, but its returns are dwindling by the year and Progress desperately needs something else to shore up its balance sheet. Unfortunately its acquisitions have had a patchy record of success. Perhaps it will be different this time.

Savvion is a credible BPM (Business Process Management) software provider, and 2009 was a bumper year for BPM sales. Specialist companies like Pegasystems and Lombardi showed huge growth rates, bucking the downward trend triggered across many technology sectors by the economic upheaval. On top of this, Progress has been trying to establish itself as a viable SOA (Service Oriented Architecture) and business integration vendor ever since it launched the Sonic ESB in the early years of the last decade, and BPM was a glaring hole in its portfolio. For these reasons, it is easy to see why Savvion would seem a good fit.

There seem to be two problems for Progress, however. Firstly, BPM is now rarely a solution bought in its own right – hence the rapid consolidation of the BPM market, with Pegasystems more or less the only major pure-play BPM vendor left standing following IBM’s acquisition of Lombardi. Instead, BPM is deployed more and more as part of a business transformation strategy involving components such as SOA, application and data integration, business rules, business monitoring and business events management. Secondly, the gorillas in the space are now IBM, Oracle and SAP. These companies all offer a full suite of products and, more importantly, services based around BPM and the rest of the modern infrastructure stack. Companies such as Software AG, TIBCO and Axway form a credible second tier, too.

In previous acquisitions, Progress has treated each acquisition purely as a software product purchase. This is not surprising, since selling databases is more about selling products than selling solutions. However, it is this factor that has been at the root of the patchy performance of Progress acquisitions. For instance, the Data Direct division of Progress, where it placed a number of acquisitions in the data space, has fared reasonably well, because it is more of a product business. However, its attempts in areas such as ESBs and SOA governance have suffered from a seeming reluctance to embrace a more industry-specific, services-based solution model.

With its acquisition of Savvion, Progress once again has the chance to show the market that it has learnt from its mistakes. BPM is absolutely an area where companies need to be offered solutions – products together with services and guidance to develop effective and affordable business solutions. It will be hard enough for Progress to cut out a share of the BPM pie with all the big players involved, but it does have one outstanding advantage: a strong and accessible customer base in the mid-range market, where the larger companies struggle. However, if it fails to take on board the need to hire industry-vertical skills and solution-based field and service professionals, then this acquisition could prove to be yet another lost opportunity.


IBM acquires Lombardi to reinforce its BPM solutions

IBM has agreed an acquisition of Lombardi, one of the few remaining pure-play BPM suppliers, with a target of closing the deal in 2010.

IBM has reaffirmed its position of strength in the burgeoning Business Process Management (BPM) space with this acquisition. Lombardi has three assets that IBM is particularly interested in; its human-centric BPM capabilities, its extensive professional services resources and its reputation and success with BPM at the departmental level.

For the uninitiated, business processes tend to span some or all of three distinct areas of usage – human-oriented processes, document-oriented processes and program-oriented processes. Human processes involve such aspects as the task lists people use as they carry out their assigned work, document processes upgrade traditional paper-oriented models, and program-based processes involve the dynamic interaction of applications. IBM has always been most experienced at dealing with program-to-program interaction, delivering its own WebSphere BPM offering. A few years ago it also acquired FileNet, a major player in document-based processing that had document-related BPM products. Now it is making the Lombardi acquisition to strengthen its human-interaction BPM capabilities.

This is an exciting acquisition, closing out the weakest areas of IBM’s BPM solutions. However, the challenge for IBM will be to properly integrate the new product set with its existing BPM offerings. Frankly, IBM has not done a good job of this to date with its previous BPM acquisition, FileNet – IBM marketing collateral exhibits confusion over what are essentially two different product solutions that both claim to be BPM. Hopefully it will handle the Lombardi acquisition better.


Did Teilhard’s JuxtaComm patent wipe out IBM, Microsoft and SAP?

Over the past two years, a little Canadian company called Teilhard has been suing IBM, Microsoft, SAP and many others over a data exchange patent it acquired from JuxtaComm, but all parties have now settled.

I have occasionally blogged in the past on this case, regarding a patent from a company called JuxtaComm (now owned by Teilhard) on a ‘System for transforming and exchanging data between distributed heterogeneous computer systems’ (US patent number 6,195,662). Legal activities were leading up to a November 2009 trial date. Many people have contacted me recently to try to find out the current status, since there is little information available in the public domain and what is available in the blogosphere seems terribly polarized one way or the other. In answer to these frequent contacts, I thought I would post an update.

Firstly, the trial is off. All parties reached mutually acceptable settlement agreements, and these agreements included protection for all parties from future legal action. It should be noted that no information has yet been released by any party on the terms of these settlements. Settlements are exactly what they say – the dispute has been settled between the parties, for good. The only other point to note is that settlement amounts reflect the confidence and risk for each party in continuing to trial – therefore settlements can be large, small or even zero. It all depends how strong each party thought its case was, and how much each was prepared to risk in terms of legal costs and potentially unexpected trial outcomes.

Secondly, as part of the pre-trial legal process, the judge involved formalized definitions for the terms involved in the patent, for example ‘script’. By choosing unusually wide-reaching interpretations for these terms, which programmers would find highly eccentric, the patent’s applicability was increased dramatically from what it had originally covered, that is ETL (extract/transform/load). In the ‘script’ example, for instance, while the defence argued that a script was “a series of text commands that were interpretively run sequentially by the script processor”, the judge decided that a script simply meant “a group of commands”! The reason this is important is that, subsequent to these definition rulings, the US patent office is now re-examining the patent for validity based on the new definitions. This work should be finished in the new year, and will have a bearing on future legal actions against newly perceived infringers.

The big question for some, particularly those who own shares in the private Canadian company that now owns the patent, is how big or small the settlements are. As I said, this will be determined by a combination of risk, confidence and the desire to avoid burning legal costs unnecessarily. On the plus side for the patent holder, the judge’s definitions strengthened the case substantially, making it far more wide-reaching and therefore widening the impact on potential infringers. On the negative side, there were an awful lot of examples of software predating the patent that seemed to do the same thing as the patent describes, although in legal terms this sort of prior art does not necessarily invalidate the patent or the lawsuits, apparently. But in the end, the truth is no-one knows whether the settlements were huge or minuscule.

The only trustworthy sources for this settlement information, in my view, would be formal announcements from any of the parties involved. For example, if a company has had to make a large settlement payment, then depending on how large this payment is as a percentage of its turnover it might be legally required to make a statement about it in its SEC filings. Similarly, if the patent holder announces any substantial dividends then that would be another indication of settlement sizes, since the patent holder makes very little revenue from software sales and therefore any significant profit would have to have come from patent settlements. Perhaps the next six months or so will make things a little clearer, but don’t hold your breath; as a private company, the only people it is required to keep informed are its own shareholders – it is not required to make any statements at all to the wider public.


Introducing Cloud for Executives

At Lustratus we have been doing a lot of research into Cloud Computing, as have many firms.

I must confess the more I have dug into it, the more horrified I have become at the hype, confusion, miscommunication and manipulation of the whole Cloud Computing concept.

In the end, I decided the time was right for an Executive Guide to Cloud – defining it in as simple terms as possible and laying out the Cloud market landscape. Lustratus has just published the report, entitled “Cloud Computing without the hype; an executive guide”, available at no charge from the Lustratus store. Not only does the paper try to lock down the cloud definitions, but it also includes a summary of some 150 or so suppliers operating in the Cloud Computing space.

The paper deals with a number of the most common misunderstandings and confusions over Cloud. I plan to do a series of posts looking at some of these, of which this post is the first. I thought I would start with the Private Cloud vs Internal Cloud discussion.

When the Cloud Computing model first emerged, some were quick to try to define Cloud as a public, off-premise service (eg Amazon EC2), but this position was quickly destroyed as companies worldwide realized that Cloud Computing techniques were applicable in many different on and off premise scenarios. However, there has been a lot of confusion over the terms Private Cloud and Internal Cloud. The problem here is that analysts, media and vendors have mixed up discussions about who has access to the Cloud resources, and where the resources are located. So, when discussing the idea of running a Cloud onsite as opposed to using an external provider such as Amazon, people call one a Public Cloud and the other an Internal Cloud or Private Cloud.

This is the root of the problem. It makes people think that a Private Cloud is the same as an Internal Cloud – the two terms are often used interchangeably. However, these two terms cover two different Cloud characteristics, and it is time the language was tightened up. Clouds may be on-premise or off-premise (Internal or External), which refers to where the resources are located. (Admittedly this ignores the case where companies run a mix of clouds, but let’s keep things simple.) The other aspect of Cloud usage is who is allowed to use the Cloud resources. This is a Very Important Question for many companies, because if they want to use Cloud for sensitive applications then they will be very worried about who else might be running alongside them in the same cloud, or who might get to use the resources (eg disk space, memory) after they have been returned to the cloud.

A Public Cloud is one where access is open to all, and therefore the user has to rely on the security procedures adopted by the cloud provider. A Private Cloud is one that is either owned or leased by a single enterprise, giving the user confidence that information and applications are locked away from others. Of course, Public Cloud providers will point to sophisticated security measures to mitigate any risk, but this can never feel as safe to a worried executive as ‘owning’ the resources.

Now, it is true that a Public Cloud will always be off-premise, by definition, and this may be why these two Cloud characteristics have become intertwined. However, a Private Cloud does not have to be on-premise – for example, if a client contracts with a third party to provide and run an exclusive cloud which can only be used by that client, then this is a Private Cloud but it is off-premise. It is true that USUALLY a Private Cloud will be on-premise, and hence equate to an Internal Cloud, but the two terms are not equal.
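Treating access and location as two independent axes makes the point obvious. A small Python sketch of the taxonomy, where the two boolean attributes and the combined labels are simply my own way of writing down the distinction:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cloud:
    private: bool      # exclusive to one enterprise?
    on_premise: bool   # resources located on the customer's own site?

def describe(c: Cloud) -> str:
    """Label a cloud on both axes: access first, then location."""
    access = "Private" if c.private else "Public"
    location = "Internal" if c.on_premise else "External"
    return f"{access}/{location}"

# The hosted-but-exclusive case from the text: Private, yet NOT Internal.
outsourced_exclusive = Cloud(private=True, on_premise=False)
print(describe(outsourced_exclusive))  # → Private/External

# A Public cloud is always off-premise, so Public/Internal never occurs
# in practice; but Private does not imply Internal - the axes are independent.
```

The third-party-run exclusive cloud comes out as Private/External, which is exactly the case that breaks the “Private equals Internal” assumption.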

The best thing any manager or exec trying to understand their company’s approach to cloud can do is to look at these two decisions separately: do I want the resources on or off premise, and do I want to ensure the resources are exclusively for my use, or am I prepared to share? It is a question of balancing risk against the greater potential for cost savings.


Is Cloud lock-in a good thing, or bad?

I am doing a lot of research into Cloud Computing at the moment, and spent an enjoyable morning with Salesforce.com, one of the largest Cloud vendors.

However, one thing that particularly piqued my interest was the discussion on Cloud lock-in. One of the most frequent concerns I hear from companies thinking about Cloud is that they are worried about vendor lock-in. After all, with Cloud being so new, what if you lock into a supplier who does not survive?

The discussions with Salesforce.com highlighted an interesting aspect of this debate. One of its offerings provides a ‘Platform as a Service’ (PaaS) cloud, where users are presented with an environment in the cloud complete with a whole host of useful tools to build their own applications to run in the cloud, or to customize existing ones. However, the platform offers its own programming environment, which is “Java-like” in its own words. This immediately raises the lock-in concern. If a company builds applications using it, then those applications are not portable to other Java environments, so the user is either stuck with the platform or faces a rewrite.

A bad thing, you might think. BUT the vendor claims that the reason it has had to go with a Java-like environment is that this enables it to provide much improved isolation between different cloud tenants (users), and therefore better availability and lower risk. For the uninitiated, the point about Cloud is that lots of user companies share the same cloud in what the industry calls a multi-tenancy arrangement, and this obviously raises the risk that these tenants might interfere with each other in some way, either maliciously or accidentally. The vendor has mitigated that risk by offering a programming environment that specifically helps to guard against this happening, and hence differs from pure Java.
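One way to picture why a restricted, “Java-like” environment helps with multi-tenancy: the platform can meter everything a tenant’s code does and abort it at a hard cap, so one tenant cannot starve the others. The toy Python “governor” below is purely my own illustration of that idea, not a description of how any vendor actually implements it.

```python
class GovernorLimitExceeded(Exception):
    """Raised when a tenant blows through its resource budget."""

class TenantSandbox:
    """Meters a tenant's resource use and aborts it at a hard cap."""
    def __init__(self, tenant, max_ops):
        self.tenant = tenant
        self.max_ops = max_ops
        self.used = 0

    def charge(self, ops=1):
        self.used += ops
        if self.used > self.max_ops:
            raise GovernorLimitExceeded(
                f"{self.tenant} exceeded {self.max_ops} operations")

def run_report(sandbox, rows):
    """Tenant workload: every row processed is metered by the sandbox."""
    total = 0
    for row in rows:
        sandbox.charge()
        total += row
    return total

well_behaved = TenantSandbox("tenant-a", max_ops=100)
print(run_report(well_behaved, range(50)))   # 50 ops of 100 → 1225

greedy = TenantSandbox("tenant-b", max_ops=100)
try:
    run_report(greedy, range(1000))          # aborted on the 101st op
except GovernorLimitExceeded as e:
    print("aborted:", e)
```

A runtime the vendor controls can insert this kind of metering everywhere; arbitrary pure-Java code could not be governed so easily, which is the essence of the trade-off being debated.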

So, is this lock-in a bad thing or a good one? I don’t know whether the vendor could have achieved its aims a different way, and I have to admit that to a cynic like me the fact that solving this problem ‘unfortunately’ locks you into the supplier seems a bit suspicious. However, this is irrelevant, since the vendor is doing the work and has chosen its implementation method, as it is free to do. Therefore, the question facing the potential user is simple: the strategic risk of being locked in to the supplier has to be balanced against the operational risk of possible interference from other tenants. How the user reads that balance determines how good or bad the lock-in option is.