IBM LinuxONE; what’s in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice in the next 5 years. As long as IBM doesn’t trip over its own feet to snatch defeat from the jaws of victory…

Let’s just take a moment to reflect on what IBM’s got. The LinuxONE currently comes in two sizes, the full-scale enterprise Linux server (Emperor) and an entry level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉. LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose between Red Hat, SuSE and Ubuntu environments, a range of hypervisors such as KVM and PR/SM, familiar languages and frameworks such as Python, Perl, Ruby, Rails and Node.js, and various databases like Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option being offered. The LinuxONE is even the basis of an IBM community cloud being rolled out now.

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. To this base, IBM has been able to build on its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The ‘but’ is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success under the most demanding environments. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let’s split the world up into three camps: mainframe fans, those who do not know about mainframes, and the mainframe ‘haters’. Perhaps ‘haters’ is a bit strong, but there is absolutely no doubt that there are a significant number of companies across the world who for various reasons see ‘mainframe’ as almost a derogatory word; old-fashioned, expensive, etc. So how will the three markets react to the LinuxONE? IBM mainframe fans don’t need to be told it is a mainframe; they know, and they will also usually have an IBM rep who will be pointing it out with great frequency! The uninitiated who know nothing of mainframes would not see any plus or minus from being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports and so on. But the third category can only see the ‘mainframe’ word as negative.

I can almost hear some people pointing out that this is a silly argument; that anyone who starts to look at the LinuxONE and who knows anything will quickly work out it is essentially an IBM mainframe. But I would submit that is not the point. Reaction to the ‘mainframe’ word will put the third group off from taking a closer look in the first place. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forwards in their company without overtly exposing the ‘mainframe’ word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove ‘mainframe’ from all the literature. Replace it with ‘server’ or ‘Linux server’ or ‘enterprise Linux server’ or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

Unlocking more value from legacy CICS applications

IBM’s acquisition of ILOG has resulted in a great new opportunity to unlock the business value of CICS applications by turning the COBOL logic into easy-to-read/edit ‘business rules’.

IBM has taken the ILOG JRules Business Rules Management System (BRMS) and made it part of the WebSphere family. But even better for CICS users, IBM has made this business rules capability available for CICS applications too. This whole subject is discussed in more detail in a new and free Lustratus Report, downloadable from the Lustratus web store, entitled “Using business rules with CICS for greater flexibility and control”. But why is this capability of interest?

The answer is that many of the key business applications in the corporate world are still CICS COBOL mainframe applications, and although these applications are highly effective and reliable, they sometimes fall short in terms of flexibility and adaptability. Not unreasonably, companies are loath to go to the expense and risk of rewriting these essential programs, but are instead looking for some technology-based answer to their needs for greater agility and control. The BRMS idea provides just that. Basically, the logic implementing the business decisions in the operational CICS applications is extracted and turned into plain-speaking, non-technical business rules, such as ‘If this partner has achieved GOLD certification, then apply a 10% discount to all transactions’. This has a number of benefits:

  • It becomes easy for rules to be changed
  • It becomes easy for a business user to verify the rules are correctly implemented
  • If desired, business users can edit operational rules directly
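To make the idea concrete, here is a minimal sketch (in Python rather than an actual BRMS rule language) of what externalizing such decision logic looks like. The partner levels, thresholds and rates are purely illustrative, not taken from any real JRules deployment:

```python
# Hypothetical sketch: a pricing decision extracted from buried COBOL
# logic and expressed as an ordered list of business-readable rules.
# The rule text sits next to the executable condition, so a business
# user can verify (or change) the policy without reading program code.

def apply_discount(partner_level, amount):
    """Evaluate the rules in order; the first matching rule wins."""
    rules = [
        # (condition, discount rate, human-readable rule text)
        (lambda level, amt: level == "GOLD", 0.10,
         "If this partner has achieved GOLD certification, "
         "then apply a 10% discount to all transactions"),
        (lambda level, amt: amt > 50_000, 0.05,
         "If a transaction exceeds 50,000, apply a 5% discount"),
    ]
    for condition, rate, _text in rules:
        if condition(partner_level, amount):
            return amount * (1 - rate)
    return amount  # no rule matched: charge full price

print(apply_discount("GOLD", 1000))      # 900.0
print(apply_discount("SILVER", 60_000))  # 57000.0
```

Changing a rate or adding a rule is then an edit to the rules table, not a recompile of the transaction program, which is precisely the flexibility benefit claimed above.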

While BRMS is a technology with a lot to offer in many scenarios, it seems particularly well suited to legacy environments, providing a way to unlock increased potential and value from existing investments.

Steve

Software AG sitting pretty?

Software AG seems to be defying predictions and surprising the market at every turn.

Once seen as a sleepy European software house based largely around legacy system technologies, it has taken major strides to transform itself into a major global software industry player. Its acquisition of webMethods a few years ago surprised the market, with many analysts unconvinced that it could make a go of the move into integration / SOA middleware, but it has done a fair job of building some momentum by tying the webMethods portfolio up with its own CentraSite governance technology, providing service-oriented architecture (SOA) with integrated governance.

Then it once again shocked the market by snatching IDS Scheer, the well-known supplier of modelling tools, from under SAP’s nose. Given that the IDS Scheer technology is used by most of the major SOA suppliers across the world for modelling, and in particular is a key part of the SAP portfolio, this would appear to give Software AG lots of cross-sell opportunities across the two customer bases and throughout the SAP world.

Now it has announced its 2Q09 results, and they make pretty good reading on the surface. A 9% increase in product revenues is particularly noteworthy given that so many companies are struggling to show any year-on-year growth in product sales. However, before getting too carried away it is worth delving a little deeper into the numbers. The product revenue numbers include maintenance as well as license sales. License sales actually fell, as with most other companies. Maintenance revenues jumped by 20% – does this mean that the company has built a much larger maintenance base, or is it actually a reflection of a more aggressive pricing policy? Then there is the split between the legacy business (ETS) and the SOA/BPM business (webMethods). License revenues in this segment were down 15% – not very encouraging since this is the strategic business unit. Also, it is noticeable that maintenance revenue in each segment increased by about 20%, suggesting that this rise does indeed reflect a price hike.

However, taking all this into consideration, Software AG is still looking to have moved forward substantially from a few years ago, and assuming the IDS Scheer acquisition goes through OK there should be lots of opportunities for the company. Of course, a cynic might point out that by adding IDS Scheer to the webMethods portfolio, the company has made itself a highly attractive acquisition target to someone – perhaps SAP?!

Steve

Micro Focus ReUZE misses the point

Micro Focus announced its latest mainframe migration tool, ReUZE yesterday – and once again it has completely missed the point.

The background is that for companies looking to move off the IBM mainframe, Micro Focus has been offering solutions for a number of different target platforms, but in each case the solutions have been based around the old emulation concept. Once again, it seems the company has fallen into the same trap. As the press release states:

Unlike other solutions which insist on rewriting mainframe application data sources for SQL Server, or removing mainframe syntax from programs, the Micro Focus solution typically leaves the source code unchanged, thereby reducing costs, risk, and delivering the highest levels of performance and reliability.

The highlighted end to this statement is where I have a problem. Micro Focus seems to think that by offering an emulated environment for mainframe applications, it is reducing risk and delivering the best possible performance and reliability. But this is a load of rubbish. Think about it from the point of view of the mainframe user that has decided to move away from the mainframe – in this case to a Microsoft environment. This is a big step, and the company concerned must be pretty damn sure this is what it wants to do. It has obviously decided that the Microsoft environment is where it wants to be, and as such surely this will include moving to a Microsoft skills set, Microsoft products and tools – database, security, and all the rest. So why settle for an emulation option?

The point Micro Focus has missed is that emulation is a way of propagating the old. After all, it originally stemmed from terminal emulation, where the object was to make sure that end users still saw the same environment even when their workstation technology changed. This was very sensible, because it focused on the right priority – don’t force the end users to have to retrain. But let’s be clear – emulation costs. It provides an extra layer of software, affecting performance and scalability, and puts future development in a straitjacket because it propagates the old way of doing things. However, in that case the cost of retraining end users would far outweigh these implications.

But in the situation where a user is moving off the mainframe to a Microsoft world, why would the user want to propagate the old? Yes, the user wants to reuse the investments in application logic and data structure and content, but surely the user wants to get to the destination – not be stuck in purgatory, neither in one place nor the other. Why restrict the power of .NET by forcing the user to operate through an insulating emulation environment? Why hold the user back from moving into the native .NET database system of SQL Server and thereby leveraging the combined power of the operating system, database and hardware to maximum effect? Why force the user to have to maintain a skills set in the mainframe applications when one of the reasons for moving may well have been to get to a single, available and cheaper one?

Yes, the Micro Focus approach may end up reducing the risk of the porting process itself, since it tries to leave mainframe code unchanged, but that is a long way from reducing the risk of moving from one world to the other. And as for the comments on leaving everything unchanged to ‘deliver the highest levels of performance and reliability’, that is just laughable. What makes Micro Focus think that the way an application is designed for the mainframe will deliver optimal performance and reliability in a .NET environment? The two environments are completely different, with totally different characteristics. And when has an emulation layer EVER improved performance or reliability?

I see this ReUZE play as like offering someone drugs. If you’ve decided you want to move off the mainframe to .NET, I have a drug here that will reduce the pain. You will feel better… honest. But the result is you will be left hooked on the drug, and won’t actually get where you want to be. If you have decided this migration is for you, don’t try to cut corners and fall for the drug – do the job properly and focus on the end goal rather than the false appeal of an easy journey. Just Say No.

Steve

SOA success, and what causes it

I was recently pointed to an article in Mainframe Executive magazine written by David Linthicum on the subject of “Mainframe SOA: When SOA Works/When SOA fails”.

I think the friend who suggested I read it was making mischief, knowing my views on the subject of SOA and guessing (correctly) that this article would wind me up.

In summary, the article says that SOA is a large and complex change to your core architecture and working practices and procedures, and that the success or failure is dictated by questions such as executive buy-in/resourcing/funding/skills, and not technology selection.

The truth about success with SOA is that it has little to do with the technology you want to drag into the enterprise to make SOA work, and more to do with the commitment to the architectural changes that need to occur

I have two problems with the opinions stated in this article. The first is to do with changing attitudes to SOA, and the second with the technology comments.

Let me first state that I am well aware that if a company wants to adopt an enterprise-wide SOA strategy designed to take maximum long-term benefit from this new way of leveraging IT investments, then this requires all of the areas brought up in the article to be addressed – skills, management buy-in, political will, funding and a strategic vision coupled with a tactical roadmap. I have no beef with any of this.

But I would contend that the world has changed from two years ago. The financial constraints all companies are experiencing have more or less forced the long-term strategic play onto the back burner for many. Some analysts actually like to claim that SOA is dead, a statement designed to be controversial enough to gain attention but to some extent grounded in the fact that a lot of companies are pulling back from the popular SOA-based business transformation strategies of the past. In fact, SOA is absolutely not dead, but it has changed. Companies are using SOA principles to implement more tactical projects designed to deliver immediate benefits, with the vague thought of one day pulling these projects together under a wider strategic, enterprise-wide SOA banner.

So, as an example, today a company might look at a particular business service such as ‘Create Customer’ or ‘Generate Invoice’, and decide to replace the 27 versions of the service that exist in its silos today with a single shared service. The company might decide to use SOA principles and tools to achieve this, but the planning horizon is definitely on the short term – deliver a new level of functionality that will benefit all users, and help to reduce ongoing cost of ownership. While it would have been valid a few years ago to counsel this company to deliver this as part of an overarching shift to an SOA-oriented style of operations, today most companies will say that although this sounds sensible, current circumstances dictate that focus must remain on the near term.

The other issue I have with this article is the suggestion that SOA success is little to do with the technology choice. Given that the topic here was not just SOA but mainframe SOA, I take particular exception to this. There is a wide range of SOA tools available, but in the mainframe arena the quality and coverage of the tools vary widely. For example, although many SOA tools claim mainframe support, this may in actuality simply be an MQ adapter ‘for getting at the mainframe’. Anyone taking this route is more than likely to fail with SOA, regardless of how well it has taken on the non-technical issues of SOA. Even for those SOA tools with specific mainframe support, some of these offer environments alien to mainframe developers, thereby causing considerable problems in terms of skills utilization. It is critical that whatever technology IS chosen, it can be used by CICS- or IMS-knowledgeable folk as well as just distributed specialists. Then there is the question of how intuitive the tools are. Retraining costs can destroy an SOA project before it even gets going.

For anyone interested, there is a free Lustratus report on selecting mainframe SOA tools available from the Lustratus store. However, I can assure companies that, particularly for mainframe SOA, technology selection absolutely IS a key factor for success, and that while all the other transformational aspects of SOA are indeed key to longer-term, enterprise-wide SOA, there are still benefits to be gained with a more short-term view that is more appropriate in today’s economic climate.

Steve

Is the time right for Progress Software to be bought?

In the course of my ongoing analysis of software infrastructure vendors I was intrigued by the recent earnings release from Progress Software…

…and it caused me to dig a bit deeper. Basically, Progress is holding its revenue stream although not growing it, and I guess in today’s environment that is OK. But when the performance of the company over the last few years is considered, a different picture starts to build up.

Basically, Progress made a lot of money from its OpenEdge database product, and this business is still providing a rich ‘cash-cow’ revenue stream. However, not only has this stagnated but it is starting to decay, with Q109 showing a sharp drop. Admittedly this is probably in part due to currency movements, but the trend is clear – this is not a growing business and the writing is on the wall, at least in the longer term. Progress knows this, and so over the past few years it has been on the acquisition trail, trying desperately to find a new business that can grow sufficiently to become the new OpenEdge. It has tried the area of data, with its DataDirect division growing through acquisition, but this business has reached a steady state with little or no growth. It tried the area of messaging, being the company that brought the term ESB (Enterprise Service Bus) to the world through its SONIC line of business, but having got a great mindshare and market position it lost focus and this business is now fatally damaged, with others such as IBM, Oracle and Microsoft taking up the mantle. Recently it acquired the APAMA complex event processing business, Actional (SOA management) and IONA (a dated integration business based in Ireland). It has since found some success with the excellent APAMA offering in the heartland of financial market data processing, but has struggled to replicate this success in other industries and use cases. Actional has also had some success, but it is immutably tied to the SOA star, which is having its own problems. And IONA, similarly to Progress, has a nice legacy integration business based around Orbix but has failed utterly over the years to create anything else worthwhile.

The result is that although the IONA purchase has increased revenues in the Progress ‘integration infrastructure’ business unit, this is likely to be a one-off improvement and once again Progress is going to be stuck with an aging cash-cow and no clear rising star to take over responsibility for driving growth.

This might seem a recipe for Progress itself to be acquired. Up to now, this has been unattractive due to the share price, but in the current climate the acquisition looks a lot more interesting. My view is that there are probably two strong candidate acquirers for Progress:

  • Companies looking for attractive maintenance businesses where profit can be maximized by cutting expenses and taking the money until the product line sunsets
  • Companies not currently in the integration space but wanting to get into this lucrative area and looking for a ready-made product set (perhaps to underpin a professional services business)

Who knows what will happen in the current turmoil? I may be way off the mark, but if I was a company fitting either of these two categories, and I had the money, I think now would be a good time to strike. After so many false dawns, I suspect the Progress management team might not resist too hard…

Steve

What use is technology without flexibility?

I was reading a post today from mainframe integration vendor GT Software…

…about its support of IBM’s mainframe speciality engines, and I was suddenly hit by the realization that in order to really add value for users, technology almost always has to be accompanied by flexibility. The two need to go hand in hand if returns are to be maximized and business risk minimized.

The specific example discussed relates to an IBM mainframe invention called a speciality engine. For the uninitiated, think of a logical processing box within the overall mainframe environment where processing is much cheaper, with different boxes being aligned to specific activities such as running Linux operations, data access or Java-type activities. What this basically means is that if part of your workload is doing something that is supported by one of the speciality engine types, then you can choose to run it more cheaply by moving it into this engine, and in fact this can often improve performance too.

This is neat technology, offering the opportunity to reduce costs and improve effectiveness, and various mainframe software suppliers have jumped on the opportunity this offers by moving eligible workloads onto these speciality engines. However, as with any new technology development, things are not quite as simple as they seem. In the IT industry there is a terrible tendency to jump for a new technology and push everything onto it, without appreciating the implications. But, in this example, as pointed out in the referenced post,

There are many use cases where it is much more efficient to NOT shift workload to a specialty engine. Why — because, there is overhead associated with moving workload

This is typical with just about any new technology. It is great in SOME circumstances, but loses out in others. iPods are great for listening to pop music, sounding little different to CDs and being very much more convenient, but try them on classical symphonies and you will wonder what has happened to the color and magic of the piece. The key is to use new technology for WHAT MAKES SENSE, as opposed to what is possible. There is another angle to this flexibility too. IT vendors often ignore the fact that users are not starting from a clean sheet of paper; they have existing investments and technologies that cannot just be written off. Therefore, it is important to have the flexibility to operate with whatever is in place rather than demand a specific new technology component. This is not a static need, but a dynamic one – it may be that a company might change its approach further down the line, and a rigid, inflexible technology implementation can cause terrible future headaches.
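The trade-off can be sketched as a toy cost model: the specialty engine is cheaper per unit of work, but each move carries a fixed overhead, so small workloads are better left where they are. All the numbers below are purely illustrative and bear no relation to actual IBM specialty-engine pricing:

```python
# Toy cost model for the offload decision. A specialty engine is
# cheaper per unit of work, but shifting work onto it has a fixed
# overhead, so the saving must be large enough to cover that cost.
# All figures are illustrative, not real mainframe economics.

GP_COST_PER_UNIT = 1.0   # general processor cost, normalized
SE_COST_PER_UNIT = 0.3   # specialty engine: cheaper per unit of work
MOVE_OVERHEAD    = 5.0   # fixed cost of shifting a piece of work

def should_offload(work_units):
    """Offload only when the per-unit saving outweighs the overhead."""
    saving = work_units * (GP_COST_PER_UNIT - SE_COST_PER_UNIT)
    return saving > MOVE_OVERHEAD

print(should_offload(100))  # True: large job, overhead is amortized
print(should_offload(5))    # False: small job, overhead dominates
```

The point is exactly the one made above: the sensible policy is a per-workload decision, not a blanket "move everything" rule.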

While new technology may promise a lot, it is only when coupled with flexibility over which technologies to use, for what, and when, that technology can REALLY deliver its full value.

Steve

Message-driven SOA – what goes around?

Starting from when I was running IBM’s MQSeries business in the 1990s, I learnt a big lesson about seeing things from the user point of view.

We had a great messaging product, and it started the EAI (Enterprise Application Integration) market rolling. Soon, vendors were pitching the wonders of business integration through an all-encompassing EAI framework… and users started moaning about it being complicated and too hard. Vendors brushed off these concerns and just shouted louder, and I was an evangelist in this… and then I started actually listening to users. I remember pitching for all I was worth on the strategic value of EAI, and then a user saying to me, “Steve, we believe you. But we can’t get there in one jump – at the moment, what we really need is to hook this application up with that one, that’s all”.

For a moment my strategic eye was offended. How could you take this wonderful, clever, strategic software and then just hook two applications together? What a waste! But of course, I then learnt the practicalities of life, and the imperative to focus on the business need. If the business needs Application A to talk to Application B, then that is what it will fund, and that is what it wants to achieve. Sweeping frameworks are all very well, but for most companies practical considerations come first.

Now I am having déjà vu, all over again. I believe in SOA – I am an evangelist. I can see the huge benefits it promises as a strategic platform for business agility, business visibility and cost-efficiency. And yet, talking to users it has finally sunk in that while some of the luckier companies have the funding and resources to go the whole hog with SOA, there are a large number of users who ‘just want to link A to B’, but want to do so in a way that is consistent with a goal of enterprise-wide SOA some time in the future.

The new Lustratus report, free from the Lustratus web store, discusses a more tactical approach to SOA – “message-driven SOA”. It points out that even for those companies who are terrified by the prospect of having to work out their process implementations and flows, change the way they work and deal with business transformation issues, there is a way to leverage SOA ideas in a tactical, simple way that is at least a step on the road to overall SOA adoption. Message-driven SOA is almost a reprise of the tactical use of messaging in the 1990s, but with an SOA spin on it. So, message-based flows loosely couple applications and programs together, delivering the benefits of business integration without necessarily having to get tangled up in full-scale process re-engineering and modelling. And yet, the reuse concept of SOA is also leveraged, together with the ability to expose these message-based integrations as SOA services.
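A minimal sketch of the idea, with an in-process queue standing in for real messaging middleware: the two applications never call each other directly, they only exchange messages through a named flow that could later be exposed as an SOA service. The ‘CreateCustomer’ flow name and message shapes here are hypothetical, chosen only to mirror the earlier example:

```python
import queue

# Sketch of message-driven loose coupling. Application A and
# Application B share nothing but the named message flow, so either
# side can be replaced, or the flow exposed as a reusable service,
# without the other side changing.

flows = {"CreateCustomer": queue.Queue()}  # named, reusable flow

def app_a_submit(name):
    """Producer: drop a request message and return immediately."""
    flows["CreateCustomer"].put({"customer": name})

def app_b_consume():
    """Consumer: process whatever messages have arrived so far."""
    processed = []
    q = flows["CreateCustomer"]
    while not q.empty():
        msg = q.get()
        processed.append(f"created {msg['customer']}")
    return processed

app_a_submit("ACME Corp")
app_a_submit("Globex")
print(app_b_consume())  # ['created ACME Corp', 'created Globex']
```

This is the 1990s ‘hook A up to B’ pattern; the SOA spin is simply that the flow is named, shared and reusable rather than a one-off point-to-point link.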

Message-driven SOA may not be the answer to every problem. As a rule of thumb, it will be most attractive for integrations that are primarily of the application-to-application kind, where human interaction is limited and tasks are of short duration. But it is well worth a look to see if this simpler approach to getting tactical SOA benefits might be useful.

Steve

Secure mainframe SOA-in-a-box

I was reading the announcement from Layer7 about its ‘SOA-in-a-box’ for IBM mainframe users, and a number of things struck me.

First, I am SO PLEASED to see someone remembering that CICS is not the only mainframe transaction processing environment in use today. A significant number of large enterprises, particularly in the finance industry, use IBM’s IMS transaction processing system instead. With the strength and penetration of CICS in mainframe enterprises, it sometimes seems like these users have become the forgotten tribe, but investments in IMS are still huge in anyone’s numbers and it is a smart move to cater to them. I am sure that the fact that this solution serves IMS as well as CICS users will be a big plus.

The other point that struck me was that I have felt for some time that, with the security/intrusion detection/firewall/identity management market seeing such a shift to security appliances, it was time vendors thought of piggy-backing functionality onto these platforms. Of course, one reason for having an appliance is to provide a dedicated environment to address issues such as security, but in truth these appliances are rarely used to anywhere near capacity. Therefore it makes a lot of sense to optimize the use of the available processing power rather than slavishly locking it away where it can’t help anyone.

Finally, I have to admit my first reaction to this announcement was to worry about how good connectivity would be to the mainframe. Dealing with mainframes is an arcane area, and I was not aware that Layer7 had any special expertise or credentials here, but I see that GT Software is apparently providing the mainframe integration piece. This makes me a lot happier, since this company has been dealing with mainframes for 20 years. In fact, Lustratus did a review recently on GT Software’s Ivory mainframe SOA tool, which is apparently what is included in the Layer7 box.

Anyway, on behalf of all those IMS users out there, thanks Layer7!

Steve

User experience with mainframe SOA provides interesting pointers

Yesterday, I had the pleasure to host an Integration Consortium webinar on the topic of mainframe SOA.

The user experiences were provided by SunTrust, a major US bank, and I found them most illuminating.

One point that struck me was to do with ownership of the new services, from an organizational point of view. The issue here is that, although services representing mainframe transactions clearly fall into the domain of the mainframe programmer, the concepts of SOA and often the related tools can be quite alien to these programmers – having to worry about SOAP messages, XML, WSDL and web services standards, for example. SunTrust cleverly selected a mainframe SOA toolset that masked much of this complexity, offering a development environment that COBOL programmers felt comfortable with. As a result, mainframe services are built and owned within the mainframe team, which is where they belong, to be honest. To complete the picture, the tool transparently handles the service deployment, creating WSDL and exporting to UDDI registries as required, ensuring that SOA users will see familiar services.

The lesson seems to be: be clear on who you want to own what, and then choose a toolset that supports that decision. The alternative is a confused mishmash where no-one knows who is responsible for what.

Steve