IBM LinuxONE: what’s in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice in the next 5 years. As long as IBM doesn’t trip over its own feet to snatch defeat from the jaws of victory…

Let’s just take a moment to reflect on what IBM’s got. The LinuxONE currently comes in two sizes, the full-scale enterprise Linux server (Emperor) and an entry level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉 . LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose between Red Hat, SuSE and Ubuntu environments; a range of hypervisors such as KVM and PR/SM; familiar languages such as Python, Perl, Ruby, Rails and Node.js; and various databases like Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option being offered. The LinuxONE is even the basis of an IBM community cloud being rolled out now.

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. To this base, IBM has been able to build on its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The ‘but’ is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success in the most demanding environments. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let’s split the world up into three camps: mainframe fans, those who do not know about mainframes, and the mainframe ‘haters’. Perhaps ‘haters’ is a bit strong, but there is absolutely no doubt that there are a significant number of companies across the world who for various reasons see ‘mainframe’ as almost a derogatory word; old-fashioned, expensive, etc. So how will the three markets react to the LinuxONE? IBM mainframe fans don’t need to be told it is a mainframe; they know, and they will also usually have an IBM rep who will be pointing it out with great frequency! The uninitiated who know nothing of mainframes would not see any plus or minus in being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports, etc. But the third category can only see the ‘mainframe’ word as a negative.

I can almost hear some people pointing out that this is a silly argument; that anyone who starts to look at the LinuxONE and who knows anything will quickly work out it is essentially an IBM mainframe. But I would submit that is not the point. The reaction to the ‘mainframe’ word will put the third group off taking a closer look in the first place. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forwards in their company without overtly exposing the ‘mainframe’ word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove ‘mainframe’ from all the literature. Replace it with ‘server’ or ‘Linux server’ or ‘enterprise Linux server’ or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

A practical approach to Open Source (part 3)

The third post in this short series looks at the need to understand the business model surrounding the OSS offering being considered.

One of the defining qualities of OSS is that, at least at a basic level, the product can be licensed at no charge. This immediately raises the question of how sustainable the model surrounding the OSS project is. The fundamental question is: who is going to keep the code base up to date? Who will apply fixes, develop new function, offer support, provide documentation, etc.?

There are a number of different types of OSS project, and each has different business model implications that affect its likely success and future longevity. At its heart, the OSS movement really got under way based on an ‘anti-commercial’ theme, where programmers wanted to share their skills and use software that was developed by them, for them. This is fine as far as it goes, but as people’s interests change, the exposure is that these developers will move on to something new and the original OSS project will wither away. In the rare situations where this problem is overcome, there is usually a viral element to the project’s success, as in the case of Firefox, for example.

The next model is where a commercial company is set up around the OSS project. Usually, these companies sell services around the OSS project such as documentation and training, as well as offering commercial licenses to cover support, or verified and tested versions of the OSS code base. The success of this approach will depend on whether the OSS users are prepared to cross the ‘free software’ line and accept that there will still be costs incurred. A big question here, however, is how extensive the support on offer is. The biggest risk is that OSS projects often use other OSS offerings to fill out capabilities, and therefore either the commercial support organization has to become expert in all these code bases, or there will be gaps in the support.

The most devious OSS model is where a vendor sponsors an OSS project for its own advantage, regardless of the implications for the user. Typically, a vendor might take a base level of code and make it an OSS project ‘for the good of the community’, but instead of the project attracting other development partners it remains driven by the single vendor. That vendor then typically produces an ‘authentic’ version of the project which DOES have a license cost and maintenance fee. The idea is to get users on board thinking the product is free, and then hook them with the professional versions.

Finally, the best OSS model of all from a user point of view is where a number of large vendors decide it is in their interests to back a particular OSS project. This is the case with Linux, for example, where vendors such as IBM have put in millions of dollars of investment. As a result, a whole ecosystem of Linux-oriented companies has sprung up, and all of this ensures that Linux users can have a degree of confidence in its future.


SOA for the scientific community – a practical example

I was intrigued to discover a discussion paper from the University of Southampton in the UK about how SOA is helping the scientific community.

The paper, entitled ‘A Collaborative Orthopaedic Research Environment’, describes how SOA has been used to enable a Virtual Research Environment for orthopaedic researchers to collaborate in the design, analysis and dissemination of experiments.

What I found most interesting were the reasons for using SOA as the base architecture for this environment. The key objective, based on input from the specialists who will use the system, was to provide an easy way to share scientific data and results from collaborative research. However, it was deemed essential that the system could also evolve based on the changing requirements of the user community. One example of change that the paper gives is that of knowing which of a wide range of protocols will be followed for a particular clinical trial. Not only do these vary considerably, but it appears they are also susceptible to changes in regulations.

The solution uses a coarse-grained set of services, essentially just four – to manage the trial-related data, analyse it, submit and disseminate related research articles, and support discussion forums. However, through the reliance on SOA, these services are flexible and extensible, making it much easier to address the changing needs of this particular scientific discipline. In addition, the services are reusable, and some could therefore be usable in other scientific areas.
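The coarse-grained split described above can be sketched as service interfaces. The class and method names below are purely illustrative assumptions (the paper does not give them); the point is that a consumer of a coarse-grained service sees only the broad capability, not the storage or analysis details behind it.

```python
from abc import ABC, abstractmethod

# Hypothetical interfaces for the four coarse-grained services described in
# the paper; every name here is an illustrative assumption, not taken from
# the paper itself.

class TrialDataService(ABC):
    @abstractmethod
    def store(self, trial_id, record): ...
    @abstractmethod
    def fetch(self, trial_id): ...

class AnalysisService(ABC):
    @abstractmethod
    def analyse(self, trial_id): ...

class PublicationService(ABC):
    @abstractmethod
    def submit(self, article): ...

class DiscussionService(ABC):
    @abstractmethod
    def post(self, forum, message): ...

# A trivial in-memory implementation of one service, just to show that the
# coarse-grained interface hides the storage details from its consumers.
class InMemoryTrialDataService(TrialDataService):
    def __init__(self):
        self._records = {}

    def store(self, trial_id, record):
        self._records.setdefault(trial_id, []).append(record)

    def fetch(self, trial_id):
        return list(self._records.get(trial_id, []))
```

Because consumers depend only on the abstract interface, the in-memory implementation could later be swapped for one backed by a clinical database without changing any calling code – which is exactly the flexibility the paper is after.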

It’s good to see SOA being used to effectively address clear user needs in this way.


BPEL4People: mad or not mad?

I have a lot of sympathy with Steve’s blog item on the WS-madness:

Vendors will sometimes (inevitably) cynically use standards only for differentiation; standards bodies and the professional standards writers will do what they are paid to do: write standards.  And finally, end-users are too disengaged to make sure the standards solve the problems they want solved.  All of which was going through my mind when I was reading the recently published BPEL4People standard:  Is BPEL4People just another piece of WS-madness?  (BPEL4People is an extension of WS-BPEL to handle human involvement in business process modelling and execution.  BPEL is the WS specification for defining business processes.)

To answer my question, I came up with the following simple sanity test (which many WS-standards fail):

  1. Does it address problems that users have today or reasonably expect to have tomorrow?
  2. Is it the unique standard in addressing the problem? Anybody following the web services standards around reliable messaging will remember the complete mess caused by multiple competing standards (WS-ReliableMessaging and WS-Reliability).
  3. Is anybody actually going to implement it? There are too many standards, both WS and earlier, which were never quite implemented.

In fact, BPEL has itself struggled to pass the three tests for sanity:  While business processes sound all pervasive, BPEL addresses only a subset of all business processes:  Those which have a central controller (i.e. not peer-to-peer collaborative processes, which are covered by another standard), where the central controller is in the middleware (i.e. not controlled within your SAP application).  It has also been heavily criticised by BPM specialists who prefer more formalism than BPEL attempts to provide.  Therefore, it struggles to pass points 1 and 2.  These limitations have meant that many users are slow to adopt BPEL and are sometimes happy to stick with non-standard approaches with better capabilities.  All of which led to vendors delaying implementing BPEL for a number of years – which is point 3.  (However, I believe that BPEL does now make the sanity benchmark for those who need it and on that basis would have to disagree with Steve’s exclusion of it from his sane list.)

Turning to BPEL4People, the intention of the standard is to create a “BPEL extension to address human interactions in BPEL as a first-class citizen”.  To put it another way, it attempts to tie human workflow back into the BPEL standard, which is mostly used for system-to-system workflow (or business process flow if you prefer).  This means that BPEL4People contains concepts such as human roles, groups and tasks.
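To make those concepts concrete, here is a rough sketch of a human task with potential-owner groups, expressed as plain data structures. This is emphatically NOT the standard’s XML vocabulary (BPEL4People and WS-HumanTask define their own schemas); it is only an assumed illustration of the roles/groups/tasks idea the standard adds to BPEL.

```python
from dataclasses import dataclass
from typing import Optional

# Conceptual sketch of BPEL4People-style human roles, groups and tasks.
# The field and method names are illustrative assumptions, not the
# standard's own vocabulary.

@dataclass
class HumanTask:
    name: str
    potential_owner_groups: list          # groups whose members may claim it
    actual_owner: Optional[str] = None    # set once a person claims the task

    def claim(self, user, membership):
        """Assign the task to `user` if they belong to an eligible group.

        `membership` maps group name -> set of user names.
        """
        if any(user in membership.get(g, ()) for g in self.potential_owner_groups):
            self.actual_owner = user
            return True
        return False
```

The point of the sketch is the distinction the standard makes: a process step names the *role* (potential owners drawn from groups), while the *actual owner* is only bound at run time when a person claims the task.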

Returning to my sanity tests:  On the first test, BPEL4People certainly addresses a problem I have seen where some human interaction is required in a system-to-system business process.  It is a subset of the set of users of BPEL itself – maybe less than 50% of all BPEL use cases.

A bigger criticism is at point 2:  You could solve the problem BPEL4People addresses with BPEL on its own – and many have.  Yes, BPEL4People provides a standard where previously people made it up as they went along, but it is not clear that the overhead of a new standard justifies its creation.  Finally, is it going to be implemented?  On the face of it, SAP, BEA, Oracle and IBM’s backing makes it a simple answer: “Yes”.  We will have to see.

All of which means that for the moment BPEL4People scrapes a bare pass on the sanity test:  If you are sure that your problem domain needs this type of functionality, I recommend watching how implementations develop and planning to transition to BPEL4People when they are mature.  Otherwise, you already have standards and solutions that you can muddle through with.



OK, I think it is time to stop all the WS-Madness.

Web services standards have become a complete joke. As far as I can see, there are now at least 70 (seventy) web services standards and drafts – more than anyone could humanly want, and enough to create chaos, and completely negate the advantages of standards in the first place. Standards are supposed to increase choice, but with so many the likelihood is it will actually REDUCE choice (do you support standards 27, 39 and 40? no, I support 23, 42 and 63 though, any good?).

So, who is to blame for this debacle? Well, the blame is pretty evenly spread. Perhaps the most obvious target is the vendors, who have unashamedly used WS standards as a battleground to try to create differentiation from the competition. So, by creating a standard that fits in with one’s own design, it is then possible to use this as a reason for rejecting other players. However, vendors are easy targets – but are they really the villains here? After all, you could argue that they are just doing what they have to do – trying to compete, to win business and pay their employees / shareholders. The next obvious choice is the standards bodies themselves. Sadly, there are many ‘professional’ standards body members who get intellectual kudos from defining standards – whether they are worth anything or not. But even this may be missing the point.

Perhaps, instead, blame should be turned on users. The vendors have stepped in because of two things – the opportunity created by the desire for standards in the SOA space, and the vacuum resulting from the failure of users to take an active role. Similarly, standards bodies have leapt into the void because in a way they have to generate standards to have any worth in the world. But the real accusation is that the standards are rubbish – most are immature, many are useless or pedantic. In other words, they do not add value for SOA users and implementers. And isn’t this a case of users getting what they asked for? If users don’t want to get involved to ensure the RIGHT standards are created, that really mean something to users, then they cannot complain when others jump in to fill the vacuum.

My advice to users on web services standards is to ignore all but the important ones – SOAP, WSDL, WS-Security, and maybe WS-Addressing although this is not as mature as the other three. Then take a more active role to ensure these standards mature into what you actually NEED.
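For readers who have never looked under the covers of the “important” standards, a SOAP request is just an XML envelope with a body. The sketch below builds a minimal SOAP 1.1 envelope using only the Python standard library; the operation name, target namespace and parameter are made-up examples, not a real service.

```python
import xml.etree.ElementTree as ET

# Minimal sketch of a SOAP 1.1 request envelope. The SOAP envelope
# namespace is the real one from the SOAP 1.1 spec; the operation,
# target namespace and parameter are invented for illustration.

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_envelope(operation, target_ns, params):
    env = ET.Element(f"{{{SOAP_NS}}}Envelope")
    body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")
    op = ET.SubElement(body, f"{{{target_ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{target_ns}}}{name}").text = str(value)
    return ET.tostring(env, encoding="unicode")

envelope = build_envelope("GetQuote", "http://example.com/stock", {"symbol": "IBM"})
```

In practice a WSDL document describes which operations and parameters a service accepts, and a toolkit generates calls like this for you – which is precisely why SOAP and WSDL are the two standards worth mastering before worrying about the other sixty-odd.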