IBM LinuxONE: what's in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice in the next 5 years. As long as IBM doesn't trip over its own feet and snatch defeat from the jaws of victory…

Let's just take a moment to reflect on what IBM's got. The LinuxONE currently comes in two sizes: the full-scale enterprise Linux server (Emperor) and an entry-level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉. LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability, coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose between Red Hat, SuSE and Ubuntu environments, a range of hypervisors such as KVM and PR/SM, familiar languages and frameworks such as Python, Perl, Ruby, Rails and Node.js, and various databases such as Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option on offer. The LinuxONE is even the basis of an IBM community cloud being rolled out now.

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. On this base IBM has been able to build its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The 'but' is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success in the most demanding environments. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let's split the world up into three camps: mainframe fans, those who do not know about mainframes, and the mainframe 'haters'. Perhaps 'haters' is a bit strong, but there is absolutely no doubt that there are a significant number of companies across the world who for various reasons see 'mainframe' as almost a derogatory word; old-fashioned, expensive, etc. So how will the three camps react to the LinuxONE? IBM mainframe fans don't need to be told it is a mainframe; they know, and they will also usually have an IBM rep who will be pointing it out with great frequency! The uninitiated who know nothing of mainframes would see no plus or minus in being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports, and so on. But the third category can only see the 'mainframe' word as negative.

I can almost hear some people pointing out that this is a silly argument; that anyone who starts to look at the LinuxONE and who knows anything will quickly work out that it is essentially an IBM mainframe. But I would submit that is not the point. The reaction to the mainframe word will put the third group off taking a closer look in the first place. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forward in their company without overtly exposing the 'mainframe' word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove 'mainframe' from all the literature. Replace it with 'server' or 'Linux server' or 'enterprise Linux server' or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

Calling all integration experts!

Remember the old Universal Translator as modeled here by the late Mr. Spock? One of the first (or perhaps future?) examples of integration solutions, and certainly one of the most fondly remembered! But at its heart, it is also an almost perfect representation of the integration challenges today. Many years ago, there was EAI (Enterprise Application Integration), which was all about integrating homegrown applications with purchased package applications and/or alien applications brought in through mergers and acquisitions activity. The challenge was to find a way to make these applications from different planets communicate with one another to increase return on assets and provide a complete view of enterprise activity. EAI tools appeared from vendors such as TIBCO, SeeBeyond, IBM, Vitria, Progress Software, Software AG and webMethods, to mention just a few.

Then there came the SOA initiative. By building computer systems with applications in the form of reusable chunks of business functionality (called services), the integration challenge could be met by enabling different applications to share common services.

Now the eternal wheel is turning once again, with the integration challenge clothed in yet another disguise. This time it is all about integrating systems with completely different usage and resource characteristics, such as mobile devices, IoT components and traditional servers, but also applications of completely new types such as mobile apps and cloud-based SaaS solutions. In an echo of the past, lines of business are increasingly going out and buying cloud-based services to solve their immediate business needs, or paying a third-party developer to create the app they want, only to then turn to IT to get them to integrate the new solutions with the corporate systems of record.

Once again the vendors will respond to these user needs, probably extending and redeveloping their existing integration solutions or adding new pieces where required. But as you look for potential partners to help you with this next wave of integration challenges, it is worth keeping in mind possibly the most important fact of all, a fact that has been evident throughout the decades of integration challenges to date. Every single time the integration challenge has surged to the top of the priority list, the key differentiator contributing to eventual success has not been the smarts built into the tools, software and appliances on offer. Rather, it is all about the advice and guidance you can get from people with extensive experience of integration challenges. Whether from vendors or service providers, these skills are absolutely essential. When it comes down to it, the technical challenges of integration are just the tip of the iceberg; the real challenges are in how you plan what you are going to do and how you work across disciplines and departments to ensure the solution is right for your company. You don't have the time to learn this – find a partner who has spent years steeped in integration and listen to what they have to say!

IBM gives predictive analytics a friendly face

One of the big challenges facing the Business Analytics industry is the historical complexity of business intelligence and analytics tools. For years companies have had to rely on their BI experts to do just about anything useful; it isn't that companies do not see value in putting analytics in the hands of business people, it is that the products needed a Diploma in Statistics and intimate familiarity with the technology behind the tools.

However, the situation is improving. Products like Spotfire and Tableau have worked hard to deliver data visualization solutions that give users easy-to-understand data in a business context, and suppliers of broader analytics suites such as Oracle and IBM have been trying to improve other aspects of analytics usability. To be honest, IBM has been somewhat lagging in this area, but over the last year or so it has given clear indications that it has woken up to the advantages of providing tools such as predictive analytics and decision management in a form that the wider business user community can access.

The recent IBM announcement of SPSS Analytic Catalyst is another proof point along the journey to broader access, usage and value. This exciting new development may have been named by a tongue-twisting demon, but the potential it offers companies to create more value from corporate information is huge. In essence, the tool looks at this information and automatically identifies predictive indicators within the data, expressing its discoveries in easy-to-use interactive visuals TOGETHER WITH plain-language summaries of what it has found. So for example, one SPSS Analytic Catalyst (really rolls off the tongue, doesn't it) page displays the 'Top Insights' it has found, such as the key drivers or influencers of a particular outcome.

The combination of simple visuals with associated plain language conceals all the statistical complexity underneath, making the information easily consumable. Business users can quickly identify the drivers of most interest related to corporate key performance measures, for example, and then drill down to gain deeper insight. Removing the need for highly trained BI experts means that the wider business community can create substantially more value for the company.
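To make the 'key drivers' idea concrete, here is a minimal sketch in Python of how a tool might rank candidate drivers of an outcome and express the result in plain language. To be clear, this is not IBM's algorithm; the column names and the correlation-based scoring are purely illustrative assumptions.

```python
# Toy illustration of 'key driver' analysis: rank candidate drivers of an
# outcome and express the strongest ones in plain language. This is NOT the
# SPSS Analytic Catalyst algorithm, just a sketch of the general idea.
import pandas as pd

# Hypothetical customer data; 'churned' is the outcome of interest.
data = pd.DataFrame({
    "tenure_months": [3, 40, 2, 28, 60, 5, 14, 48],
    "support_calls": [7, 1, 9, 2, 0, 6, 4, 1],
    "monthly_spend": [20, 90, 25, 70, 110, 30, 45, 95],
    "churned":       [1, 0, 1, 0, 0, 1, 1, 0],
})

outcome = "churned"
drivers = [c for c in data.columns if c != outcome]

# Score each driver by its correlation with the outcome (a real product
# would use far more robust statistics and significance testing).
correlations = {d: data[d].corr(data[outcome]) for d in drivers}
ranked = sorted(correlations.items(), key=lambda kv: abs(kv[1]), reverse=True)

print("Top insights:")
for name, corr in ranked:
    direction = "increases" if corr > 0 else "decreases"
    print(f"- Higher '{name}' {direction} the likelihood of '{outcome}' "
          f"(strength {abs(corr):.2f})")
```

The point of tools like Analytic Catalyst is that the business user sees only something like the last few lines of output, phrased in business terms, while the statistics stay hidden underneath.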

IBM reinforces its Appliance strategy with acquisition of Netezza

When IBM acquired DataPower's range of appliances in 2005, it caused some raised eyebrows: was IBM really serious about getting into the appliances game? Subsequently the silence from IBM was deafening, and people were starting to wonder whether IBM's foray into the appliances market had fizzled out. However, 2010 has been the year when IBM has made its strategic intent around appliances abundantly clear.

First it acquired Cast Iron, the leading provider of appliances for use in Cloud Computing, and now it is buying Netezza, one of the top suppliers of data warehouse appliances. Netezza has built up an impressive market presence in a very short time, dramatically accelerating time to value for data analytics and business intelligence applications. In addition, IBM has continued to extend its DataPower range, with the addition of a caching appliance and the particularly interesting 'ESB-in-a-box' integration appliance in a blade form factor. For any doubters, IBM has clearly stated its intention of making appliances a key element of its strategic business plans.

This just leaves the question of why. Of course the cynical answer is that IBM must see itself making a lot of money from appliances, but behind this is the fact that appliances must be doing something really useful for users. The interesting thing is that the key benefits are not necessarily the ones you might expect. In the early days of appliances such as firewalls and internet gateways, one key benefit was the security of a hardened device, particularly outside the firewall. The other was commonly performance, with the ability in an appliance to customize hardware and software to deliver a single piece of functionality, for example in low-latency messaging appliances. But the most common driver for appliances today is much broader: appliances reduce complexity. An appliance typically comes preloaded, and can replace numerous different instances of code running on several machines. You bring in an appliance, cable it up and turn it on. It offers a level of uniformity. In short, it makes operations simpler, and therefore cheaper to manage and less susceptible to human error.

Perhaps it is this simplicity argument, and its harmonization with current user needs, that is the REAL driving force behind IBM's strategic interest in appliances.

IBM acquires Cast Iron

I am currently at IBM's IMPACT show in Las Vegas, where the WebSphere brand gets to flaunt its wares, and of course one of the big stories was IBM's announcement that it has acquired Cast Iron.

While Cast Iron may only be a small company, the acquisition has major implications. Over the past few years, Cast Iron has established itself as the prime provider of Cloud-to-Cloud and Cloud-to-on-premise integration, with a strong position in the growing Cloud ecosystem of suppliers. Cast Iron has partnerships with a huge number of players in the Cloud and application package spaces, including companies such as Salesforce.com, SAP and Microsoft, so IBM is not just getting powerful technology; in one move it is also taking control of the linkage between Cloud and everything else.

On the product front, the killer feature of Cast Iron's offering is its extensive range of pre-built integration templates covering many of the major Cloud and on-premise environments. So, for example, if an organization wants to link invoice information in its SAP system with the Salesforce.com sales force environment, then the Cast Iron offering includes prepared templates for the required definitions and configurations. The result is that the integration can be set up in a matter of hours rather than weeks.
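To illustrate what a pre-built template buys you (this is not Cast Iron's actual template format, and the SAP-style and Salesforce-style field names below are assumptions invented for the example), the idea is essentially a declarative field mapping applied by a generic engine, so connecting two systems becomes configuration rather than coding:

```python
# Illustrative sketch of a declarative integration 'template': a mapping from
# one system's record layout to another's, applied by a generic engine.
# The SAP-style and Salesforce-style field names are invented for the example.
from typing import Any, Callable, Dict, Optional, Tuple

# Template: target field -> (source field, optional transform)
INVOICE_TEMPLATE: Dict[str, Tuple[str, Optional[Callable[[Any], Any]]]] = {
    "InvoiceNumber__c": ("BELNR", None),
    "Amount__c":        ("WRBTR", float),
    "Currency__c":      ("WAERS", str.upper),
    "AccountId":        ("KUNNR", None),
}

def apply_template(source_record: Dict[str, Any],
                   template: Dict[str, Tuple[str, Optional[Callable]]]) -> Dict[str, Any]:
    """Map a source record onto a target record using the template."""
    target: Dict[str, Any] = {}
    for target_field, (source_field, transform) in template.items():
        value = source_record[source_field]
        target[target_field] = transform(value) if transform else value
    return target

# A made-up SAP-style invoice record becomes a Salesforce-style record.
sap_invoice = {"BELNR": "90001234", "WRBTR": "199.50", "WAERS": "usd", "KUNNR": "ACME01"}
print(apply_template(sap_invoice, INVOICE_TEMPLATE))
```

A vendor-supplied library of such templates is what collapses an integration project from weeks of mapping work down to hours of configuration.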

So why is this so important? Well, for one, most people have already realized that Cloud usage must work hand-in-hand with on-premise applications, based on such things as security needs and prior investments. On top of this, different clouds will serve different needs. So integration between clouds and applications is going to be a fact of life. IBM's acquisition leaps it to the forefront of this area, in both technology and partner terms. But there is a more strategic impact of this acquisition too. No one knows what the future holds, or how the Cloud market will develop. Think of the situation with mainframes and distributed solutions. As the attractions of distributed systems grew, doomsayers were quick to predict the end of the mainframe. However, IBM developed a powerful range of integration solutions to allow organizations to leverage the advantages of both worlds WITHOUT having to choose one over the other. This situation almost feels like a repeat: Cloud has a lot of advantages, and some misguided 'experts' think that Cloud is the start of the end for on-premise systems. However, whether you believe this or not, IBM has once again ensured that it has a running start in providing integration options so that users can continue to gain value from both cloud and on-premise investments.

Steve

Unlocking more value from legacy CICS applications

IBM's acquisition of ILOG has resulted in a great new opportunity to unlock the business value of CICS applications by turning the COBOL logic into easy-to-read/edit 'business rules'.

IBM has taken the ILOG JRules Business Rules Management System (BRMS) and made it part of the WebSphere family. But even better for CICS users, IBM has made this business rules capability available for CICS applications too. This whole subject is discussed in more detail in a new and free Lustratus Report, downloadable from the Lustratus web store, entitled “Using business rules with CICS for greater flexibility and control”. But why is this capability of interest?

The answer is that many of the key business applications in the corporate world are still CICS COBOL mainframe applications, and although these applications are highly effective and reliable, they can be lacking in flexibility and adaptability. Not unreasonably, companies are loath to go to the expense and risk of rewriting these essential programs, and are instead looking for some technology-based answer to their needs for greater agility and control. The BRMS idea provides just that. Basically, the logic implementing the business decisions in the operational CICS applications is extracted and turned into plain-speaking, non-technical business rules, such as 'If this partner has achieved GOLD certification, then apply a 10% discount to all transactions' (see the sketch after the list below). This has a number of benefits:

  • It becomes easy for rules to be changed
  • It becomes easy for a business user to verify the rules are correctly implemented
  • If desired, business users can edit operational rules directly
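For illustration, here is a minimal sketch of the GOLD-discount rule mentioned above once it has been lifted out of the application logic. This is plain Python rather than the actual ILOG JRules rule language, and the field names are assumptions for the example; the point is simply that the condition and action become something a business user can inspect and change outside the core program.

```python
# Toy sketch of a business rule externalized from application code.
# This is not ILOG JRules syntax; it just illustrates the principle that
# the decision logic lives outside the core (COBOL) program.
RULES = [
    {
        "name": "Gold partner discount",
        "condition": lambda txn: txn["partner_level"] == "GOLD",
        "action": lambda txn: {**txn, "amount": round(txn["amount"] * 0.90, 2)},
    },
]

def apply_rules(transaction):
    """Apply every rule whose condition matches the transaction."""
    for rule in RULES:
        if rule["condition"](transaction):
            transaction = rule["action"](transaction)
    return transaction

print(apply_rules({"partner_level": "GOLD", "amount": 100.00}))
# -> {'partner_level': 'GOLD', 'amount': 90.0}
```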

While BRMS is a technology with a lot to offer in many scenarios, it seems particularly well suited to legacy environments, providing a way to unlock increased potential and value from existing investments.

Steve

IBM acquires Lombardi to reinforce its BPM solutions

IBM has agreed to acquire Lombardi, one of the few remaining pure-play BPM suppliers, with the aim of closing the deal in 2010.

IBM has reaffirmed its position of strength in the burgeoning Business Process Management (BPM) space with this acquisition. Lombardi has three assets that IBM is particularly interested in: its human-centric BPM capabilities, its extensive professional services resources and its reputation and success with BPM at the departmental level.

For the uninitiated, business processes tend to span some or all of three distinct areas of usage: human-oriented processes, document-oriented processes and program-oriented processes. Human processes involve such aspects as the task lists that people use as they carry out their assigned work, document processes upgrade traditional paper-oriented models, and program-based processes involve the dynamic interaction of applications. IBM has always been most experienced at dealing with program-to-program interaction, delivering its own WebSphere BPM offering. A few years ago it also acquired FileNet, a major player in document-based processing that had document-related BPM products. Now it is making the Lombardi acquisition to strengthen its human-interaction BPM capabilities.

This is an exciting acquisition, closing out the weakest area of IBM's BPM solutions. However, the challenge for IBM will be to properly integrate the new product set with its existing BPM offerings. Frankly, IBM has not done a good job to date on this with its previous BPM acquisition, FileNet – IBM marketing collateral exhibits confusion over what are essentially two different product solutions that both claim to be BPM. Hopefully it will handle the Lombardi acquisition better.

Steve

Micro Focus ReUZE misses the point

Micro Focus announced its latest mainframe migration tool, ReUZE, yesterday – and once again it has completely missed the point.

The background is that for companies looking to move off the IBM mainframe, Micro Focus has been offering solutions for a number of different target platforms, but in each case the solutions have been based around the old emulation concept. Once again, it seems the company has fallen into the same trap. As the press release states:

Unlike other solutions which insist on rewriting mainframe application data sources for SQL Server, or removing mainframe syntax from programs, the Micro Focus solution typically leaves the source code unchanged, thereby reducing costs, risk, and delivering the highest levels of performance and reliability.

The final claim in this statement is where I have a problem. Micro Focus seems to think that by offering an emulated environment for mainframe applications, it is reducing risk and delivering the best possible performance and reliability. But this is a load of rubbish. Think about it from the point of view of the mainframe user that has decided to move away from the mainframe – in this case to a Microsoft environment. This is a big step, and the company concerned must be pretty damn sure this is what it wants to do. It has obviously decided that the Microsoft environment is where it wants to be, and as such surely this will include moving to a Microsoft skill set, and Microsoft products and tools – database, security and all the rest. So why settle for an emulation option?

The point Micro Focus has missed is that emulation is a way of propagating the old. After all, it originally stemmed from terminal emulation, where the object was to make sure that end users still saw the same environment even when their workstation technology changed. This was very sensible, because it focused on the right priority – don't force the end users to have to retrain. But let's be clear – emulation costs. It provides an extra layer of software, affecting performance and scalability, and puts future development in a straitjacket because it propagates the old way of doing things. In the terminal case, however, the cost of retraining end users far outweighed these implications.

But in the situation where a user is moving off the mainframe to a Microsoft world, why would the user want to propagate the old? Yes, the user wants to reuse the investments in application logic and data structure and content, but surely the user wants to get to the destination – not be stuck in purgatory, neither in one place nor the other. Why restrict the power of .NET by forcing the user to operate through an insulating emulation environment? Why hold the user back from moving to the native .NET database system, SQL Server, and thereby leveraging the combined power of the operating system, database and hardware to maximum effect? Why force the user to maintain a skill set in the mainframe applications when one of the reasons for moving may well have been to get to a single, more readily available and cheaper one?

Yes, the Micro Focus approach may end up reducing the risk of the porting process itself, since it tries to leave mainframe code unchanged, but that is a long way from reducing the risk of moving from one world to the other. And as for the claim that leaving everything unchanged will 'deliver the highest levels of performance and reliability', that is just laughable. What makes Micro Focus think that the way an application is designed for the mainframe will deliver optimal performance and reliability in a .NET environment? The two environments are completely different, with totally different characteristics. And when has an emulation layer EVER improved performance or reliability?

I see this ReUZE play as akin to offering someone drugs. If you've decided you want to move off the mainframe to .NET, I have a drug here that will reduce the pain. You will feel better... honest. But the result is that you will be left hooked on the drug, and won't actually get where you want to be. If you have decided this migration is for you, don't try to cut corners and fall for the drug – do the job properly and focus on the end goal rather than the false appeal of an easy journey. Just Say No.

Steve

SOA success, and what causes it

I was recently pointed to an article in Mainframe Executive magazine written by David Linthicum on the subject of “Mainframe SOA: When SOA Works/When SOA fails”.

I think the friend who suggested I read it was making mischief, knowing my views on the subject of SOA and guessing (correctly) that this article would wind me up.

In summary, the article says that SOA is a large and complex change to your core architecture and working practices and procedures, and that the success or failure is dictated by questions such as executive buy-in/resourcing/funding/skills, and not technology selection.

"The truth about success with SOA is that it has little to do with the technology you want to drag into the enterprise to make SOA work, and more to do with the commitment to the architectural changes that need to occur."

I have two problems with the opinions stated in this article. The first is to do with changing attitudes to SOA, and the second with the technology comments.

Let me first state that I am well aware that if a company wants to adopt an enterprise-wide SOA strategy designed to take maximum long-term benefit from this new way of leveraging IT investments, then this requires all of the areas brought up in the article to be addressed – skills, management buy-in, political will, funding and a strategic vision coupled with a tactical roadmap. I have no beef with any of this.

But I would contend that the world has changed from two years ago. The financial constraints all companies are experiencing have more or less forced the long-term strategic play onto the back burner for many. Some analysts actually like to claim that SOA is dead, a statement designed to be controversial enough to gain attention but to some extent grounded in the fact that a lot of companies are pulling back from the popular SOA-based business transformation strategies of the past. In fact, SOA is absolutely not dead, but it has changed. Companies are using SOA principles to implement more tactical projects designed to deliver immediate benefits, with the vague thought of one day pulling these projects together under a wider strategic, enterprise-wide SOA banner.

So, as an example, today a company might look at a particular business service such as 'Create Customer' or 'Generate Invoice', and decide to replace the 27 versions of the service that exist in its silos today with a single shared service. The company might decide to use SOA principles and tools to achieve this, but the planning horizon is definitely short term – deliver a new level of functionality that will benefit all users, and help to reduce the ongoing cost of ownership. While it would have been valid a few years ago to counsel this company to deliver this as part of an overarching shift to an SOA-oriented style of operations, today most companies will say that although this sounds sensible, current circumstances dictate that focus must remain on the near term.
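As a purely illustrative sketch of that consolidation (nothing to do with any specific SOA toolset, and with invented field names), the essence is one shared implementation of 'Create Customer' behind a stable interface that every channel calls, instead of 27 divergent copies:

```python
# Illustrative sketch: a single shared 'Create Customer' service that all
# channels reuse. The fields and validation are invented for the example.
import uuid

_CUSTOMERS = {}  # stand-in for the real system of record

def create_customer(name: str, country: str, email: str) -> dict:
    """The one shared implementation every channel calls."""
    if not name or "@" not in email:
        raise ValueError("invalid customer details")
    record = {"id": str(uuid.uuid4()), "name": name,
              "country": country, "email": email}
    _CUSTOMERS[record["id"]] = record
    return record

# Web front end, branch application and batch load all call the same service
# rather than each maintaining its own divergent version.
print(create_customer("ACME Ltd", "UK", "ops@acme.example"))
```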

The other issue I have with this article is the suggestion that SOA success has little to do with the technology choice. Given that the topic here was not just SOA but mainframe SOA, I take particular exception to this. There is a wide range of SOA tools available, but in the mainframe arena the quality and coverage of the tools vary widely. For example, although many SOA tools claim mainframe support, this may in actuality simply be an MQ adapter 'for getting at the mainframe'. Anyone taking this route is more than likely to fail with SOA, regardless of how well it has taken on the non-technical issues of SOA. Even among those SOA tools with specific mainframe support, some offer environments alien to mainframe developers, thereby causing considerable problems in terms of skills utilization. It is critical that whatever technology IS chosen, it can be used by CICS- or IMS-knowledgeable folk as well as just distributed specialists. Then there is the question of how intuitive the tools are. Retraining costs can destroy an SOA project before it even gets going.

For anyone interested, there is a free Lustratus report on selecting mainframe SOA tools available from the Lustratus store. However, I can assure companies that, particularly for mainframe SOA, technology selection absolutely IS a key factor for success, and that while all the other transformational aspects of SOA are indeed key to longer-term, enterprise-wide SOA, there are still benefits to be gained with the more short-term view that is appropriate in today's economic climate.

Steve

IBM 1Q09 results implications

When I posted last week on looking ahead to the IBM first quarter results, I put my head on the block by stating that I felt the results would hold up pretty well.

The formal results were announced yesterday, and I am pleased to say I live to look into my crystal ball another day, at least once the effects of swinging currency markets are discounted.

Firstly, I had suggested that the IBM services arm would probably benefit from users wanting to cut costs and looking for help to do it. In fact, IBM claims that overall signings were up 10% at constant currency, and up 27% in the larger projects category. This bodes well for future revenue recognition as these projects flow through. I had also pointed to the desire for quick-hit benefits driving the IBM WebSphere-based SOA offerings such as BPM, and indeed while overall IBM software was down 6% (up 2% at constant currency), WebSphere revenues grew 5% (14% at constant currency). My forecast was that hardware would take a bit of a hit, but that this shouldn't damage the overall numbers too much. Once again this seems to be borne out in the IBM announcements, which point to a 23% drop (18% at constant currency) in its Systems and Technology segment, where the hardware products live. However, overall this had little adverse impact on IBM's figures, as predicted, because IBM has swung its business model much more heavily in favour of software and services.

Looking ahead, these results can only be good news for IBM, even though revenue at constant currency was down 4%. From a global market perspective this should also prove encouraging to other IT vendors, particularly those with investments in the high-growth enterprise middleware area and those providing advisory professional services. However, companies reliant on hardware revenues will probably suffer most.

The final interesting point was that IBM claims it is sitting on $12B cash in hand... I wonder what it plans to do with all that money at a time when assets are cheap and it has just missed out on Sun...

Steve