IBM LinuxONE: what’s in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice in the next 5 years. As long as IBM doesn’t trip over its own feet and snatch defeat from the jaws of victory…

Let’s just take a moment to reflect on what IBM’s got. The LinuxONE currently comes in two sizes: the full-scale enterprise Linux server (Emperor) and an entry-level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉. LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability, coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose between Red Hat, SuSE and Ubuntu environments, a range of hypervisors such as KVM and PR/SM, familiar languages such as Python, Perl, Ruby, Rails and Node.js, and various databases like Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option being offered. The LinuxONE is even the basis of an IBM community cloud being rolled out now.
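As a trivial illustration of that openness, existing Python code should run unchanged on LinuxONE. Here is a minimal sketch (assuming nothing beyond a stock Python 3 install on one of the supported distributions) that simply reports the architecture it finds itself on:

```python
import platform

# On a LinuxONE guest this reports 's390x'; on a commodity Linux
# box it reports 'x86_64'. The same script runs unchanged on both.
arch = platform.machine()

print(f"Architecture: {arch}")
print(f"Platform:     {platform.platform()}")
if arch == "s390x":
    print("Running on IBM Z / LinuxONE hardware")
```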

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. To this base IBM has added its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The ‘but’ is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success under the most demanding conditions. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let’s split the world into three camps: mainframe fans, those who do not know about mainframes, and the mainframe ‘haters’. Perhaps ‘haters’ is a bit strong, but there is absolutely no doubt that a significant number of companies across the world, for various reasons, see ‘mainframe’ as almost a derogatory word; old-fashioned, expensive, etc. So how will the three markets react to the LinuxONE? IBM mainframe fans don’t need to be told it is a mainframe; they know, and they will usually have an IBM rep who points it out with great frequency! The uninitiated who know nothing of mainframes would see no plus or minus in being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports and so on. But the third category can only see the ‘mainframe’ word as a negative.

I can almost hear some people pointing out that this is a silly argument; that anyone who starts to look at the LinuxONE and who knows anything will quickly work out it is essentially an IBM mainframe. But I would submit that is not the point. The reaction to the ‘mainframe’ word is what puts the third group off from taking a closer look at all. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forward in their company without overtly exposing the ‘mainframe’ word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove ‘mainframe’ from all the literature. Replace it with ‘server’ or ‘Linux server’ or ‘enterprise Linux server’ or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

Was Vision Solutions right to acquire Double-Take?


On May 17th Vision Solutions announced that it was acquiring Double-Take, bringing together its own IBM Power HA/disaster recovery business and Double-Take’s strength in the same market on Windows and, to a lesser extent, Linux. But did it do the right thing?

On the face of it, this was a natural move for Vision. Its strength was in providing availability solutions based on IBM’s 64-bit Power servers. This is a strong and influential market, but it is nowhere near the size of the Windows market. Also, since there are few players in the more specialist IBM Power market, Vision already holds a reasonable share. For growth, it was essential for Vision to do something. It had choices – it was already partnering with Double-Take to supply Windows availability solutions to its IBM base, and had other partners for Linux. But by choosing to acquire Double-Take, it is definitely buying a Windows availability provider that also does Linux, rather than the other way around.

My concern is that my instincts tell me the Linux market is going to be much more interested in availability than the Windows one. Linux, like UNIX in general, is a more secure and robust operating system than Windows, and therefore people using Linux may have greater expectations of availability, provided through file mirrors, backups, network switchovers and disaster recovery scenarios. In contrast, any user of Windows knows that the system is always going to have its availability problems – just think how many times you have to reboot your own Windows laptop or desktop. The question is therefore: how strong and widespread is the demand for availability on Windows?
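To make the switchover idea concrete, here is a deliberately toy Python sketch of the kind of liveness-check-and-redirect logic that availability products automate (the hostnames and port are invented for illustration; real products add replication, quorum, monitoring and much more):

```python
import socket

PRIMARY = "db-primary.example.com"   # hypothetical hosts
STANDBY = "db-standby.example.com"
PORT = 5432

def is_alive(host: str, port: int, timeout: float = 2.0) -> bool:
    """Crude liveness check: can we open a TCP connection in time?"""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Direct traffic to the standby if the primary stops responding.
active = PRIMARY if is_alive(PRIMARY, PORT) else STANDBY
print(f"Routing connections to {active}")
```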

At some level, there is demand. For example, a lot of people would like their office data backed up so they can get it back quickly if there is a problem. But beyond this basic data level, it seems to me demand may be limited. Indeed, this seems to be backed up by the fact that in the past year or so Double-Take has suffered slowing growth and market pressures on its finances. Maybe Vision would have been wiser to acquire a Linux availability provider, like SteelEye Software for instance.

IBM acquires Cast Iron

I am currently at IBM’s IMPACT show in Las Vegas, where the WebSphere brand gets to flaunt its wares, and of course one of the big stories was IBM’s announcement that it has acquired Cast Iron.

While Cast Iron may only be a small company, the acquisition has major implications. Over the past few years, Cast Iron has established itself as the prime provider of Cloud-to-Cloud and Cloud-to-on-premise integration, with a strong position in the growing Cloud ecosystem of suppliers. Cast Iron has partnerships with a huge number of players in the Cloud and application package spaces, including companies such as Salesforce.com, SAP and Microsoft, so IBM is not just getting powerful technology; in one move it is also taking control of the linkage between Cloud and everything else.

On the product front, the killer feature of Cast Iron’s offering is its extensive range of pre-built integration templates covering many of the major Cloud and on-premise environments. So, for example, if an organization wants to link invoice information in its SAP system with its Salesforce.com sales force environment, the Cast Iron offering includes prepared templates for the required definitions and configurations. The result is that the integration can be set up in a matter of hours rather than weeks.
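Cast Iron’s templates are actually built and configured in its own graphical tooling, but conceptually each one captures a pre-defined source-to-target mapping. A rough Python sketch of the idea (the field names below are purely illustrative, not Cast Iron’s, SAP’s or Salesforce.com’s actual definitions):

```python
# Hypothetical illustration only: the value of a pre-built template
# is that a mapping like this ships ready-made instead of being
# hand-built over weeks.
SAP_TO_SALESFORCE = {
    "InvoiceNumber": "Invoice_Number__c",
    "Amount":        "Amount__c",
    "PostingDate":   "Posting_Date__c",
    "CustomerId":    "Account_External_Id__c",
}

def transform(sap_invoice: dict) -> dict:
    """Apply the pre-defined mapping to one SAP invoice record."""
    return {target: sap_invoice[source]
            for source, target in SAP_TO_SALESFORCE.items()}

record = {"InvoiceNumber": "90001234", "Amount": "1500.00",
          "PostingDate": "2010-05-01", "CustomerId": "CUST-042"}
print(transform(record))
```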

So why is this so important? Well, for one, most people have already realized that Cloud usage must work hand-in-hand with on-premise applications, based on such things as security needs and prior investments. On top of this, different clouds will serve different needs. So integration between clouds and applications is going to be a fact of life. IBM’s acquisition vaults it to the forefront of this area, in both technology and partner terms. But there is a more strategic impact of this acquisition too. No one knows what the future holds, and how the Cloud market will develop. Think of the situation with mainframes and distributed solutions. As the attractions of distributed systems grew, doomsayers were quick to predict the end of the mainframe. However, IBM developed a powerful range of integration solutions to allow organizations to leverage the advantages of both worlds WITHOUT having to choose one over the other. This situation almost feels like a repeat – Cloud has a lot of advantages, and some misguided ‘experts’ think that Cloud is the beginning of the end for on-premise systems. However, whether you believe this or not, IBM has once again ensured that it has a running start in providing integration options so that users can continue to gain value from both cloud and on-premise investments.

Steve

Did Teilhard’s JuxtaComm patent wipe out IBM, Microsoft and SAP?

Over the past two years, a little Canadian company called Teilhard has been suing IBM, Microsoft, SAP and many others over a data exchange patent it acquired from JuxtaComm, but all parties have now settled.

I have occasionally blogged in the past on this case, regarding a patent from a company called JuxtaComm (now owned by Teilhard, which is in turn owned by Shopplex.com) on a ‘System for transforming and exchanging data between distributed heterogeneous computer systems’ (US patent number 6,195,662). Legal activities were leading up to a November 2009 trial date. Many people have contacted me recently to try to find the current status, since there is little information available in the public domain and what is available in the blogosphere seems terribly polarized one way or the other. In answer to these frequent contacts, I thought I would post an update.

Firstly, the trial is off. All parties reached mutually acceptable settlement agreements, and these agreements included protection for all parties from future legal action. It should be noted that no information has yet been released by any party on the terms of these settlements. Settlements are exactly what they say – the dispute has been settled between the parties, for good. The only other point to note is that settlement amounts reflect the confidence and risk for each party in continuing to trial – therefore settlements can be large, small or even zero. It all depends how strong each party thought its case was, and how much each was prepared to risk in terms of legal costs and potentially unexpected trial outcomes.

Secondly, as part of the pre-trial legal process, the judge involved formalized definitions for the terms used in the patent, for example ‘script’. By choosing unusually wide-reaching interpretations for these terms, which programmers would find highly eccentric, the patent’s applicability was increased dramatically beyond what it had originally covered, that is ETL (extract/transform/load). In the ‘script’ example, for instance, while the defence argued that a script was “a series of text commands that were interpretively run sequentially by the script processor”, the judge decided that a script meant simply “a group of commands”! The reason this is important is that, subsequent to these definition rulings, the US patent office is now re-examining the patent for validity based on the new definitions. This work should be finished in the new year, and will have a bearing on future legal actions against new perceived infringers.
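To see why programmers found the wider definition eccentric, consider what each wording covers. A hypothetical Python illustration (the commands themselves are invented):

```python
# Under the defence's definition - "a series of text commands
# interpretively run sequentially" - only something like this
# classic ETL-style script would qualify:
script_text = """\
extract orders.csv
transform orders.csv staging/
load staging/ warehouse
"""

# Under the judge's definition - "a group of commands" - even an
# unordered collection arguably qualifies, sweeping in far more
# software than ETL scripting:
group_of_commands = {"extract", "transform", "load"}

print(script_text)
print(group_of_commands)
```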

The big question for some, particularly those who own shares in the private Canadian company that now owns the patent, is how big or small the settlements are. As I said, this will be determined by a combination of risk, confidence and the desire to avoid burning legal costs unnecessarily. On the plus side for the patent holder, the judge’s definitions strengthened the case substantially, making it far more wide-reaching and therefore widening the impact on potential infringers. On the negative side, there were an awful lot of examples of software predating the patent that seemed to do the same thing as the patent describes, although in legal terms this sort of prior art does not necessarily invalidate the patent or the lawsuits, apparently. But in the end, the truth is no one knows whether the settlements were huge or minuscule.

The only trustworthy sources for this settlement information, in my view, would be formal announcements from any of the parties involved. For example, if a company has had to make a large settlement payment, then depending on how much this payment is as a percentage of its turnover it might be legally required to make a statement about it in its SEC filings. Similarly, if the patent holder announces any substantial dividends, that would be another indication of settlement sizes, since the patent holder makes very little revenue in software sales and therefore any significant profit would have to have come from patent settlements. Perhaps the next six months or so will make things a little clearer, but don’t hold your breath; as a private company, the only people Shopplex.com is required to keep informed are its own shareholders – it is not required to make any statements at all to the wider public.

Steve

Microsoft and ESBs – what a shame!

I was recently doing some research into the latest state of play in the Enterprise Service Bus (ESB) market, and decided to take a look at Microsoft’s ESB – or rather its pretend ESB.

I had never been sure about Microsoft and SOA – it tends to focus instead on BizTalk and the Microsoft world. However, recently I have heard a lot of encouraging noises from Microsoft about its belief in SOA, and how it sees BizTalk as a key component in an SOA architecture for application design and deployment. But I must admit I had not realized that Microsoft gave any credence to the ESB concept.

With an element of hope I delved into Microsoft’s ESB material – only to be disappointed to discover it is not an ESB product at all, but ‘ESB Guidance’, a collection of samples, templates and artifacts to deliver ESB functionality. In essence, Microsoft does not yet acknowledge the existence of the ESB class of product, preferring instead to take the old IBM line of a few years back, pretending that an ESB is a style of implementation rather than a product. However, I thought, this doesn’t really matter as long as Microsoft offers ESB functionality, however it packages it.

But then sad reality dawned. Microsoft ESB Guidance is not even supported. It is a collection of samples and pieces offered on an ‘AS IS’ basis, take it or leave it. Use it if you like, but don’t come to us with any issues or problems. How disappointing. See the Microsoft Guidance notes:

The Microsoft ESB Guidance for BizTalk Server R2 is a guidance offering, designed to be reused, customized, and extended. It is not a Microsoft product. Code-based guidance is shipped “as is” and without warranties.

So, it looks like Microsoft isn’t really on the ESB bandwagon yet. The new release of BizTalk Server this year may introduce a ‘real’ ESB, but at this point in time Microsoft appears to be paying lip service to SOA compliance without actually doing much about it.

Steve

BPM’s time has come

Could 2009 finally be the year BPM comes into its own? My own opinion is – YES!

This may seem a bit odd – after all, in previous years I have been a bit hesitant about BPM adoption, finding instead that many users were working on lower-level integration problems first and then ‘backing into’ BPM. On top of this, with all the trading uncertainty around, surely no-one will be rushing to BPM?

In fact, Lustratus thinks that the current economic environment is EXACTLY the right time for BPM. My worries in the past have been to do with people trying to move completely over to a BPM model. This requires a heck of a lot of effort, thought, maturity in process engineering and resources, and can take some time to generate a payback, although the eventual gains are admittedly great. However, the current economic situation is forcing people to be much more pragmatic, and it is here that BPM really starts to deliver.

Lustratus recently produced a paper discussing the Lustratus BPM Sweet Spots – five potential targeted uses of BPM technology, sorted in terms of speed of return, ease of implementation and overall benefit. A number of these sweet spots represent quick ways to improve a particular process, increasing automation and hence providing the opportunity to reduce people costs. It is this improved efficiency and productivity that attracts companies in the current economic downturn – anything that makes use of what is already there but cuts the staffing bill is almost a no-brainer. In addition, the visibility BPM brings into process execution is of enormous use when trying to implement responsible risk and compliance management measures, something greatly desired in the current circumstances.

So, 2009 should be the year when companies turn to BPM – but note the distinction of pragmatic, targeted BPM as opposed to grand BPM strategies that will make everything better ‘sometime’.

Steve

Open source hiatus in 2009

It’s that time of year again, and Lustratus has just produced its annual predictions for the software infrastructure marketplace.

The Lustratus Insight containing the 2009 predictions is available free of charge here.

As might be expected, the predictions this year are heavily influenced by the current economic downturn and projections that it will continue throughout the year. It may be surprising, therefore, that one of our predictions is that there will be a hiatus in the open source (OSS) marketplace. At first glance, this seems counter-intuitive. After all, if companies are desperate to cut costs, then surely open source products that have no license fees must be an attractive option? Won’t this drive OSS demand in 2009?

The Lustratus reading is slightly different. While it may be true that on the face of it free software would be great in today’s constrained environment, the problems stem from the nature of OSS combined with the likely mandates under which companies are operating at the moment – that is, the need to reduce staffing wherever possible.

Most open source software is by its very nature collaborative, and this tends to lead to software that is made available in kit form – that is, although the open source software may offer a framework or the basis for the desired functionality, the user is expected to put effort into customizing and extending that functionality for his or her own needs. Typically, therefore, embarking on an open source initiative involves a heavy investment of IT resources, at least at the beginning. While this will result in lower license costs, the problem is that in today’s climate companies are shying away from anything that demands more than a minimal resource investment. Users are looking for products that work out of the box, or more likely are trying to generate more value from what is already in place.

As a result, Lustratus believes that although interest will remain in OSS, new OSS projects will be put on hold until economic conditions relax.

Steve

Linux v z/OS on IBM mainframes

Five or ten years ago, this sort of question would have been unthinkable, but now mainframe users increasingly face a choice between Linux on System z and z/OS when hosting new mainframe workloads.

These new workloads may be the result of a consolidation project, or simply taking advantage of flexible architectures like SOA to utilize spare mainframe capacity, but the decision is not an obvious one in either case.

On the one hand, long-time mainframe guys will say that z/OS has grown up with the mainframe and therefore must be the best choice. But IBM has done a lot of work on its version of Linux for the mainframe, and Linux bigots will be quick to point out that the license costs will be cheaper and that there are strong advantages in standardizing on a portable and flexible operating system enterprise-wide. Worst of all, given the polarized nature of IT in general, decision makers find it hard to get unbiased advice on such a divisive question.

In the end, the answer to the question of whether z/OS or Linux on System z is better is not surprising – “it depends”. This subject is discussed in much more detail in a free Lustratus report, “Choosing the right SOA platform on IBM System z”, available from the Lustratus web store. While this paper focuses particularly on developing or moving SOA workloads onto System z, the analysis applies to any new mainframe workload. Summarizing the arguments in the paper, the major differences that affect the decision are that Linux is designed to offer a common environment across many platforms, and is thus by definition less attuned to individual platform capabilities, and that whereas Linux has been designed for the ‘server’ model, where it is used to operate one type of workload, z/OS has been built to handle multiple subsystems from the start.

The common environment aspect of Linux offers flexibility, helps to drive license costs down and leverages widely available skills. The multi-system capabilities of z/OS combined with its close linkage to the System z platform offer the greatest exploitation of System z facilities. But as always the devil is in the details.

Steve

Mistakes marketeers make – and how NOT to make them

I was reminded how easy it is to get marketing completely wrong today when I saw (on UK television) an advertisement for New York Bagels.

Bagels are not as heavily embedded in the UK psyche as in the US, I admit. Brits think of them as something you see people eating on US cop shows and sitcoms, often in New York. So what did the marketing company do? It spent the entire time showing the British market how delicious bagels could be … and then finished with a picture of three packs of New York Bagels.

The point here is that the advertising company had missed its mark. Yes, there was a need to educate the British audience, but we know so little about bagels that the advert seems to be for ‘bagels’ in general rather than for a particular brand. When it refers to New York Bagels, I, and many Brits like me, thought that was the name of the food – like you might say Scottish salmon. My reaction was to pop down to my local store and buy ITS OWN brand of bagels – the advert said ‘eat bagels’, not ‘buy New York Bagels’. I guess in the US the advertisement, or should I say commercial, would work OK because the audience would be very familiar with bagels and realize that New York Bagels is a brand name.

So the marketing failed at one of the first hurdles – it had not attuned the message for the target audience.

In the software world, marketing is often extremely poor. I have always thought the main reason is that software companies have often grown up around a particular technology and are run by technology people, to whom it is completely obvious why someone should buy their products – because they are technically wonderful! However, the same general marketing principles apply – you must know your target audiences, understand your key value proposition, know how you are positioned in the software firmament, and watch what your competition is up to. But a short look at the marketing for virtually any software offering will quickly show this is rarely the case!

Lustratus uses its REPAMA Strategic Marketing analysis tools with clients to look at how products are being marketed, and the results can be quite eye-opening. So, in order to increase awareness and understanding of this topic, Lustratus has started a new blog to discuss issues around software marketing and its effectiveness. While this will be of obvious interest to software marketeers, I recommend it to buyers of software too – it is always useful to see how potential suppliers are tuning their messages and what markets they are really interested in.

Steve

SOA market surveys: Buyers beware

Eric Knorr commented in InfoWorld a couple of days ago on Forrester’s latest market adoption estimates for SOA, which showed much lower rates of adoption than the same survey suggested a year ago.

In my opinion, the key problem is probably not SOA adoption itself but the difficulty of surveying end users about hot technology areas. This is an area I have recently covered in an Insight called ‘Interpreting Market Sizings’. It is a serious issue for organisations attempting to figure out what the real market take-up is and when, or even if, they should follow a particular technology trend.

Coming back to the article, Eric states that:

“SOA sounds great, but boy, is it hard. Especially on a wide scale, because doing it right generally requires rethinking how IT is organized.”

While I have some sympathy with the author’s sentiment, Eric’s view really relates more to when and whether you decide to go for the wide-scale big bang approach as opposed to the more achievable incremental build-up.

More importantly, the evidence for this statement is the discrepancy between the 14% of respondents surveyed last year who said that they would implement SOA in 2006 (not including the 39% who already had) and the 2% who actually had done so a year later. This is a great example of the “Everybody’s doing it apart from me” effect. Asking aspirational questions about what we hope to do next year will always give misleading and optimistic answers. It is as true for healthy eating as it is for SOA! This effect is particularly strong with anything that is generally considered a hot topic (like SOA). This is of course not to say that the respondents didn’t genuinely think they were going to do a SOA project. But must-do projects appeared, other projects overran, and the wished-for SOA project never got started.

What does all of this mean? It probably does mean that SOA take-up is slower than the wilder claims suggest – no surprise there. It probably also means that people aren’t doing SOA as fast as they would like to – things always take longer anyway. However, it almost certainly doesn’t mean that there is a radical slowdown in actual adoption.

Finally… it is good fun to knock this survey and others (such as the one last year which claimed that 90% of companies would exit 2006 with “SOA planning, design and programming experience”, based on a sample of a massive 120 respondents). However, as I said at the start, the whole area of market surveys is a serious topic, as it distorts perceptions both at the start and later on when more general adoption does happen. All of which means: with surveys – buyer beware!

Ronan