IBM LinuxONE: what’s in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice over the next five years. As long as IBM doesn’t trip over its own feet and snatch defeat from the jaws of victory…

Let’s just take a moment to reflect on what IBM’s got. The LinuxONE currently comes in two sizes: the full-scale enterprise Linux server (Emperor) and an entry-level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉. LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability, coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose between Red Hat, SuSE and Ubuntu environments, a range of hypervisors such as KVM and PR/SM, familiar languages such as Python, Perl, Ruby, Rails and Node.js, and various databases such as Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option being offered. The LinuxONE is even the basis of an IBM community cloud being rolled out now.

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. To this base IBM adds its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The ‘but’ is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success under the most demanding conditions. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let’s split the world into three camps: mainframe fans, those who do not know about mainframes, and the mainframe ‘haters’. Perhaps ‘haters’ is a bit strong, but there is absolutely no doubt that there are a significant number of companies across the world who for various reasons see ‘mainframe’ as almost a derogatory word; old-fashioned, expensive, etc. So how will the three markets react to the LinuxONE? IBM mainframe fans don’t need to be told it is a mainframe; they know, and they will usually have an IBM rep who points it out with great frequency! The uninitiated who know nothing of mainframes would see no plus or minus in being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports, and so on. But the third category can only see the ‘mainframe’ word as a negative.

I can almost hear some people pointing out that this is a silly argument; that anyone who starts to look at the LinuxONE and who knows anything will quickly work out that it is essentially an IBM mainframe. But I would submit that is not the point. The reaction to the ‘mainframe’ word is what puts the third group off taking a closer look in the first place. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forward in their company without overtly exposing the ‘mainframe’ word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove ‘mainframe’ from all the literature. Replace it with ‘server’ or ‘Linux server’ or ‘enterprise Linux server’ or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

Why enterprise mobile computing needs an mBroker – part 1

Mobile computing is all the rage, with employees, consumers and customers all wanting to use their mobile devices to transact business. But how should an enterprise approach mobile computing without getting into a world of trouble? How can the enterprise future-proof itself so that, as mobile enterprise access explodes, the risks are mitigated?

mBrokers are emerging as the preferred method of building a sustainable, governable and effective enterprise mobile computing architecture. The mBroker brings together ESB, integration broker, service gateway, API management and mobile access technology to provide the glue necessary to bring the mobile and corporate worlds together effectively and efficiently; for a summary of mBroker functionality see this free Lustratus report. In this first post in a series looking at mBrokers, we will look at the fundamental drivers for the basic broking functionality offered by mBrokers.

Integration brokers have been around for many years now. The principle is that when trying to integrate different applications or computing environments, some form of ‘universal translator’ is needed. One application may expect data in one format while another expects a different format, for example. A trivial example might be an international application where some components expect dates as mm/dd/yy while others want dd/mm/yy. The broker handles these transformation needs. But it plays another very important role apart from translating between different applications; it provides a logical separation between application components, so that requestors can request services and suppliers can supply services without either knowing anything about the other’s location, environment or technology. In order to achieve this, it provides other functionality such as intelligent routing to find the right service and execution location, once again without the service requestor having to know anything about it.
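
To make the broker’s two roles concrete, here is a minimal Python sketch; all names, the registry and the endpoint URL are invented for illustration. It shows a transformation step that translates a date field between the formats two applications expect, and a routing step that resolves a logical service name so the requestor never sees the physical endpoint.

```python
from datetime import datetime

# Hypothetical registry mapping logical service names to physical endpoints;
# requestors only ever see the logical name, never the location or technology.
SERVICE_REGISTRY = {
    "billing.create_invoice": "https://billing-eu.internal/api/invoices",
}

def transform_date(value: str, from_fmt: str, to_fmt: str) -> str:
    """Translate a date between the formats two applications expect,
    e.g. US-style mm/dd/yy to European dd/mm/yy."""
    return datetime.strptime(value, from_fmt).strftime(to_fmt)

def route(service_name: str) -> str:
    """Resolve a logical service name to a physical endpoint, keeping the
    requestor ignorant of where and how the service actually runs."""
    return SERVICE_REGISTRY[service_name]

# One component emits US-style dates; its partner expects European style.
print(transform_date("08/26/09", "%m/%d/%y", "%d/%m/%y"))  # -> 26/08/09
print(route("billing.create_invoice"))
```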

Enterprise mobile applications face many of the same challenges. When crossing from the mobile device to the corporate business services end, the same problems must be addressed. For example, mobile applications often rely on JSON for data notation and use RESTful invocation mechanisms to drive services, while many corporate networks employ an SOA model based around XML data and SOAP-based invocations of services. In addition, the same sort of abstraction layer offered by integration brokers is beneficial, to avoid the mobile device needing to know about the locations of back-end applications. It is therefore not surprising to find that integration broker technology is one source of mBroker technology.
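
As a rough illustration of the kind of mediation involved, here is a toy Python sketch (the payload and field names are hypothetical) that re-presents a RESTful client’s JSON as the XML a SOAP-oriented back end would expect; a real mBroker would of course also handle SOAP envelopes, security and error mapping.

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(json_payload: str, root_tag: str = "request") -> str:
    """Convert a flat JSON document from a RESTful mobile client into the
    XML body a SOAP-based corporate service expects. A full broker would
    wrap this in a SOAP envelope and map the REST verb to an operation."""
    data = json.loads(json_payload)
    root = ET.Element(root_tag)
    for key, value in data.items():
        child = ET.SubElement(root, key)
        child.text = str(value)
    return ET.tostring(root, encoding="unicode")

# A mobile client posts JSON; the broker re-presents it as XML.
mobile_request = '{"accountId": "12345", "amount": 250.00, "currency": "GBP"}'
print(json_to_xml(mobile_request))
# -> <request><accountId>12345</accountId><amount>250.0</amount>...</request>
```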


Lustratus sees 2011 as big year for Business Rules

Every year Lustratus digs out its crystal ball to identify the key trends in the global infrastructure marketplace for the next twelve months.

The latest set of predictions for 2011 can be found in this Lustratus Insight, available at no charge from the Lustratus store. However, one in particular deserves further mention. Lustratus predicts that in 2011 the use of Business Rules software (BRMS) will continue to grow rapidly.

To me, business rules represent the peak of business/IT alignment. For the uninitiated, the idea of Business Rules and Business Rules Management Systems is to enable a repository of rules to be created that controls how the IT implementation of the business operates. These rules are written in non-technical (or at least non-IT-technical) language, and can be authored and edited by business professionals. As a simple example, a bank might have a business rule governing how it charges its customers for their bank payments activities. This rule might say something along the lines of

“If payee is a personal customer, charge x per transaction. If payee is a business customer, charge y per transaction”.

Now, suppose the bank decides that it wants to run a marketing campaign to try to encourage more small businesses to start using its services. It might decide that as an incentive it will offer free payments processing for any business payments of less than £5,000. Most larger business clients would probably far exceed this number. Changing the IT systems to support this new initiative would involve no more than a business user editing the rule setting payment charges, and modifying it to

“If payee is a personal customer, charge x per transaction. If payee is a business customer and the amount is > £5,000, then charge y per transaction. If payee is a business customer and the amount is <= £5,000, then set charge to zero.”

When the rule is altered, the BRMS translates the change into the necessary technical implementation to achieve the desired aims.
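
To see why this is powerful, consider a deliberately toy sketch in Python (not any particular BRMS product; the charges standing in for x and y are invented). The rules live as data a business user could edit, and the payment-processing code itself never changes:

```python
# Toy rule set standing in for a BRMS repository. The business user edits
# this data; the payment-processing code below never changes.
PAYMENT_CHARGE_RULES = [
    # (condition, charge): the first matching condition wins.
    (lambda p: p["type"] == "personal", 0.50),                        # charge x
    (lambda p: p["type"] == "business" and p["amount"] > 5000, 0.75), # charge y
    (lambda p: p["type"] == "business" and p["amount"] <= 5000, 0.00),# campaign
]

def charge_for(payment: dict) -> float:
    """Apply the first rule whose condition matches the payment."""
    for condition, charge in PAYMENT_CHARGE_RULES:
        if condition(payment):
            return charge
    raise ValueError("no rule matched payment")

print(charge_for({"type": "business", "amount": 3200}))  # 0.0: campaign rate
print(charge_for({"type": "business", "amount": 9000}))  # 0.75: standard rate
```

The marketing campaign above then becomes an edit to the rule data rather than an IT project.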

This is the root of Business Rules’ popularity. They provide the ultimate means for business users to change and adapt their business approach without having to involve heavy IT investment each time a change is made – efficient agility, if you like. However, this business rules-based approach to IT implementation has another extremely useful by-product: it becomes much easier to demonstrate compliance with corporate or external policies and regulations. A compliance officer can review the easily understandable business rules to validate that the company is correctly implementing regulatory requirements, without needing an IT translator.

I expect to see a lot of activity in 2011 in the area of Business Rules.

2010 crystal ball gazing

Lustratus has just published the 2010 edition of its popular infrastructure software market predictions. This year, highlighted areas include BPM, BRMS, Cloud Computing, SOA Appliances, Integration, Security and even software patent litigation.

Every year Lustratus goes through this exercise, trying to identify the key trends for the year. Perhaps the most traumatic part of the forecast is the scoring of the predictions from the previous year – always an opportunity for embarrassment. Fortunately, Lustratus has had a pretty good record over the years.

This year Lustratus is highlighting trends such as the continuing success of business alignment software like BPM, the effects that Cloud Computing is likely to have on the market, and the resurgence of interest in good old integration. The Lustratus predictions can be downloaded at no charge from the Lustratus web store.

Steve

Progress Software acquires Savvion

So Progress Software has bought yet another software company; this time a BPM vendor, Savvion. But is this the right move for Progress?

Progress Software has spent most of its life growing through acquisition, making use of the piles of cash generated by its legacy mid-range database product to find new areas of growth. After all, the legacy business may be highly profitable, but its returns are dwindling by the year and Progress desperately needs something else to shore up its balance sheet. Unfortunately its acquisitions have had a bit of a patchy record of success. Perhaps it will be different this time.

Savvion is a credible BPM (Business Process Management) software provider, and 2009 was a bumper year for BPM sales. Specialist companies like Pegasystems and Lombardi showed huge growth rates, bucking the downward trend triggered across many technology sectors by the economic upheaval. On top of this, Progress has been trying to establish itself as a viable SOA (Service Oriented Architecture) and business integration vendor ever since it launched the Sonic ESB in the early years of the last decade, and BPM was a glaring hole in its portfolio. For these reasons, it is easy to see why Savvion would seem a good fit.

There seem to be two problems for Progress, however. Firstly, BPM is now rarely a solution bought in its own right – hence the rapid consolidation of the BPM market, with Pegasystems more or less the only major pure-play BPM vendor left standing following IBM’s acquisition of Lombardi. Instead, BPM is deployed more and more as part of a business transformation strategy involving components such as SOA, application and data integration, business rules, business monitoring and business events management. Secondly, the gorillas in the space are now IBM, Oracle and SAP. These companies all offer a full suite of products and, more importantly, services based around BPM and the rest of the modern infrastructure stack. Companies such as Software AG, TIBCO and Axway form a credible second tier, too.

In previous acquisitions, Progress has treated each acquisition as a purely product play. This is not surprising, since selling databases is more about selling products than selling solutions. However, it is this factor that has been at the root of the patchy performance of Progress acquisitions. For instance, the DataDirect division of Progress, where it placed a number of acquisitions in the data space, has fared reasonably well, because it is more of a product business. However, its attempts in areas such as ESBs and SOA governance have suffered due to a seeming reluctance to embrace a more industry-specific, services-based solution model.

With its acquisition of Savvion, Progress once again has the chance to show the market that it has learnt from its mistakes. BPM is absolutely an area where companies need to be offered solutions – products together with services and guidance to develop effective and affordable business solutions. It will be hard enough for Progress to cut a share of the BPM pie with all the big players involved, but it does have one outstanding advantage: it has a strong and accessible customer base in the mid-range market, where the larger companies struggle. However, if it fails to take on board the need to hire industry-vertical skills and solution-based field and service professionals, then this acquisition could prove to be yet another lost opportunity.

Steve

Unlocking more value from legacy CICS applications

IBM’s acquisition of ILOG has resulted in a great new opportunity to unlock the business value of CICS applications by turning the COBOL logic into easy-to-read/edit ‘business rules’.

IBM has taken the ILOG JRules Business Rules Management System (BRMS) and made it part of the WebSphere family. But even better for CICS users, IBM has made this business rules capability available for CICS applications too. This whole subject is discussed in more detail in a new and free Lustratus Report, downloadable from the Lustratus web store, entitled “Using business rules with CICS for greater flexibility and control”. But why is this capability of interest?

The answer is that many of the key business applications in the corporate world are still CICS COBOL mainframe applications, and although these applications are highly effective and reliable, they can lack flexibility and adaptability. Not unreasonably, companies are loath to go to the expense and risk of rewriting these essential programs, and are instead looking for some technology-based answer to their needs for greater agility and control. The BRMS idea provides just that. Basically, the logic implementing the business decisions in the operational CICS applications is extracted and turned into plain-speaking, non-technical business rules, such as ‘If this partner has achieved GOLD certification, then apply a 10% discount to all transactions’ (see the sketch after the list below). This has a number of benefits:

  • It becomes easy for rules to be changed
  • It becomes easy for a business user to verify the rules are correctly implemented
  • If desired, business users can edit operational rules directly
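
A minimal sketch of the idea, assuming a toy rule format rather than IBM’s actual ILOG/WebSphere implementation (all names and values here are invented): decision logic that was buried inline in COBOL becomes data that a business user can read and change.

```python
# Hypothetical externalized rules, extracted from inline COBOL decision
# logic and stored where a business user can read and edit them.
RULES = [
    {"if": {"certification": "GOLD"}, "then": {"discount_pct": 10}},
    {"if": {},                        "then": {"discount_pct": 0}},  # default
]

def apply_discount(partner: dict, amount: float) -> float:
    """Return the transaction amount after the first matching rule's discount."""
    for rule in RULES:
        if all(partner.get(k) == v for k, v in rule["if"].items()):
            return amount * (1 - rule["then"]["discount_pct"] / 100)
    return amount

print(apply_discount({"certification": "GOLD"}, 1000.0))    # 900.0
print(apply_discount({"certification": "SILVER"}, 1000.0))  # 1000.0
```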

While BRMS is a technology with a lot to offer in many scenarios, it seems particularly well suited to legacy environments, providing a way to unlock increased potential and value from existing investments.

Steve

Is Cloud lock-in a good thing, or bad?

I am doing a lot of research into Cloud Computing at the moment, and spent an enjoyable morning with Salesforce.com, one of the largest Cloud vendors.

However, one thing that particularly piqued my interest was the discussion on Cloud lock-in. One of the most frequent concerns I hear from companies thinking about Cloud is that they are worried about vendor lock-in. After all, with Cloud being so new, what if you lock into a supplier who does not survive?

The discussions with Salesforce.com highlighted an interesting aspect to this debate. One of its offerings, force.com, provides a ‘Platform as a Service’ (PaaS) cloud offering, where users are presented with an environment in the cloud, complete with a whole host of useful tools, to build their own applications to run in the cloud or customize existing ones. However, Salesforce.com offers its own programming environment, which is “Java-like” in its own words. This immediately raises the lock-in concern. If a company builds applications using this, then these applications are not portable to other Java environments, so the user is either stuck with Salesforce.com or faces a rewrite.

A bad thing, you might think. BUT Salesforce.com claims that the reason it has had to go with a Java-like environment is that this enables it to provide much improved isolation between different cloud tenants (users), and therefore better availability and lower risk. For the uninitiated, the point about Cloud is that lots of user companies share the same cloud in what the industry calls a multi-tenancy arrangement, and this obviously raises the risk that these tenants might interfere with each other in some way, either maliciously or accidentally. Salesforce.com has mitigated that risk by offering a programming environment that specifically helps to guard against this happening, and hence differs from pure Java.
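
Salesforce.com’s actual mechanism is proprietary, but the isolation idea itself is easy to sketch. In this toy Python illustration (the class names and limits are all invented), the platform meters each tenant’s resource use and aborts any request that would let one tenant starve the others:

```python
# Toy sketch of multi-tenant isolation: the platform meters each tenant's
# resource consumption and refuses work beyond its quota, so one tenant
# cannot interfere with the others. All limits here are invented.
class TenantQuotaExceeded(Exception):
    pass

class TenantMeter:
    def __init__(self, max_queries: int = 100, max_cpu_ms: int = 10_000):
        self.max_queries, self.max_cpu_ms = max_queries, max_cpu_ms
        self.queries = 0
        self.cpu_ms = 0

    def charge(self, queries: int = 0, cpu_ms: int = 0) -> None:
        """Record resource use; abort the request once a limit is crossed."""
        self.queries += queries
        self.cpu_ms += cpu_ms
        if self.queries > self.max_queries or self.cpu_ms > self.max_cpu_ms:
            raise TenantQuotaExceeded("request aborted to protect other tenants")

meter = TenantMeter(max_queries=3)
for _ in range(3):
    meter.charge(queries=1)        # within quota: fine
try:
    meter.charge(queries=1)        # fourth query breaches the quota
except TenantQuotaExceeded as e:
    print(e)
```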

So, is this lock-in a bad thing or a good one? I don’t know whether Salesforce.com could have achieved its aims a different way, and I have to admit that to a cynic like me, the fact that solving this problem ‘unfortunately’ locks you into the supplier seems a bit suspicious. However, this is irrelevant, since the vendor is doing the work and has chosen its implementation method, which it is of course free to do. Therefore, the question facing the potential force.com user is simple: the strategic risk of being locked in to the supplier has to be balanced against the operational risk of possible interference from other tenants. How the user reads this balance will determine how good or bad the lock-in option is.

Steve

Software AG sitting pretty?

Software AG seems to be defying predictions and surprising the market at every turn.

Once seen as a sleepy European software house based largely around legacy system technologies, it has taken major strides to transform itself into a major global software industry player. Its acquisition of webMethods a few years ago surprised the market, with many analysts unconvinced that it could make a go of the move into integration / SOA middleware, but it has done a fair job of building some momentum by tying the webMethods portfolio up with its own CentraSite governance technology, providing service-oriented architecture (SOA) with integrated governance.

Then it once again shocked the market by snatching IDS Scheer, the well-known supplier of modelling tools, from under SAP’s nose. Given that the IDS Scheer technology is used by most of the major SOA suppliers across the world for modelling, and in particular is a key part of the SAP portfolio, this would appear to give Software AG lots of cross-sell opportunities across the two customer bases and throughout the SAP world.

Now it has announced its 2Q09 results, and they make pretty good reading on the surface. A 9% increase in product revenues is particularly noteworthy given that so many companies are struggling to show any year-on-year growth in product sales. However, before getting too carried away it is worth delving a little deeper into the numbers. The product revenue numbers include maintenance as well as license sales. License sales actually fell, as with most other companies. Maintenance revenues jumped by 20% – does this mean that the company has built a much larger maintenance base, or is it actually a reflection of a more aggressive pricing policy? Then there is the split between the legacy business (ETS) and the SOA/BPM business (webMethods). License revenues in the webMethods segment were down 15% – not very encouraging, since this is the strategic business unit. Also, it is noticeable that maintenance revenue in each segment increased by about 20%, suggesting that this rise does indeed reflect a price hike.
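
The arithmetic behind that suspicion is easy to check. With purely illustrative numbers (not Software AG’s actual figures), a large maintenance base growing 20% can comfortably outweigh falling license sales:

```python
# Illustrative numbers only, NOT Software AG's actual 2Q09 figures: they show
# how total product revenue can rise ~9% even while license sales fall.
license_prev, maint_prev = 40.0, 60.0           # assumed prior-year split, EUR m
license_now = license_prev * 0.925              # license sales down 7.5%
maint_now = maint_prev * 1.20                   # maintenance up 20%

prev_total = license_prev + maint_prev          # 100.0
now_total = license_now + maint_now             # 37.0 + 72.0 = 109.0
growth = (now_total - prev_total) / prev_total
print(f"product revenue growth: {growth:.0%}")  # -> 9%
```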

However, taking all this into consideration, Software AG is still looking to have moved forward substantially from a few years ago, and assuming the IDS Scheer acquisition goes through OK there should be lots of opportunities for the company. Of course, a cynic might point out that by adding IDS Scheer to the webMethods portfolio, the company has made itself a highly attractive acquisition target to someone – perhaps SAP?!

Steve

Micro Focus ReUZE misses the point

Micro Focus announced its latest mainframe migration tool, ReUZE, yesterday – and once again it has completely missed the point.

The background is that for companies looking to move off the IBM mainframe, Micro Focus has been offering solutions for a number of different target platforms, but in each case the solutions have been based around the old emulation concept. Once again, it seems the company has fallen into the same trap. As the press release states:

Unlike other solutions which insist on rewriting mainframe application data sources for SQL Server, or removing mainframe syntax from programs, the Micro Focus solution typically leaves the source code unchanged, thereby reducing costs, risk, and delivering the highest levels of performance and reliability.

The final claim in this statement is where I have a problem. Micro Focus seems to think that by offering an emulated environment for mainframe applications, it is reducing risk and delivering the best possible performance and reliability. But this is a load of rubbish. Think about it from the point of view of the mainframe user that has decided to move away from the mainframe – in this case to a Microsoft environment. This is a big step, and the company concerned must be pretty damn sure this is what it wants to do. It has obviously decided that the Microsoft environment is where it wants to be, and as such surely this will include moving to a Microsoft skill set and Microsoft products and tools – database, security and all the rest. So why settle for an emulation option?

The point Micro Focus has missed is that emulation is a way of propagating the old. After all, it originally stemmed from terminal emulation, where the object was to make sure that end users still saw the same environment even when their workstation technology changed. This was very sensible, because it focused on the right priority – don’t force the end users to have to retrain. But let’s be clear – emulation costs. It adds an extra layer of software, affecting performance and scalability, and puts future development in a straitjacket because it propagates the old way of doing things. In that case, however, the cost of retraining end users far outweighed these implications.

But in the situation where a user is moving off the mainframe to a Microsoft world, why would the user want to propagate the old? Yes, the user wants to reuse the investments in application logic and data structure and content, but surely the user wants to get to the destination – not be stuck in purgatory, neither in one place nor the other. Why restrict the power of .NET by forcing the user to operate through an insulating emulation environment? Why hold the user back from moving to the native .NET database system, SQL Server, and thereby leveraging the combined power of the operating system, database and hardware to maximum effect? Why force the user to maintain a skill set in the mainframe applications when one of the reasons for moving may well have been to get to a single, more readily available and cheaper one?

Yes, the Micro Focus approach may end up reducing the risk of the porting process itself, since it tries to leave mainframe code unchanged, but that is a long way from reducing the risk of moving from one world to the other. And as for the claim that leaving everything unchanged will ‘deliver the highest levels of performance and reliability’, that is just laughable. What makes Micro Focus think that the way an application is designed for the mainframe will deliver optimal performance and reliability in a .NET environment? The two environments are completely different, with totally different characteristics. And when has an emulation layer EVER improved performance or reliability?

I see this ReUZE play as like offering someone drugs. If you’ve decided you want to move off the mainframe to .NET, I have a drug here that will reduce the pain. You will feel better… honest. But the result is that you will be left hooked on the drug, and won’t actually get where you want to be. If you have decided this migration is for you, don’t try to cut corners and fall for the drug – do the job properly and focus on the end goal rather than the false appeal of an easy journey. Just Say No.

Steve

SOA success, and what causes it

I was recently pointed to an article in Mainframe Executive magazine written by David Linthicum on the subject of “Mainframe SOA: When SOA Works/When SOA fails”.

I think the friend who suggested I read it was making mischief, knowing my views on the subject of SOA and guessing (correctly) that this article would wind me up.

In summary, the article says that SOA is a large and complex change to your core architecture and working practices and procedures, and that the success or failure is dictated by questions such as executive buy-in/resourcing/funding/skills, and not technology selection.

The truth about success with SOA is that it has little to do with the technology you want to drag into the enterprise to make SOA work, and more to do with the commitment to the architectural changes that need to occur

I have two problems with the opinions stated in this article. The first is to do with changing attitudes to SOA, and the second with the technology comments.

Let me first state that I am well aware that if a company wants to adopt an enterprise-wide SOA strategy designed to take maximum long-term benefit from this new way of leveraging IT investments, then this requires all of the areas brought up in the article to be addressed – skills, management buy-in, political will, funding and a strategic vision coupled with a tactical roadmap. I have no beef with any of this.

But I would contend that the world has changed from two years ago. The financial constraints all companies are experiencing have more or less forced the long-term strategic play onto the back burner for many. Some analysts actually like to claim that SOA is dead, a statement designed to be controversial enough to gain attention but to some extent grounded in the fact that a lot of companies are pulling back from the popular SOA-based business transformation strategies of the past. In fact, SOA is absolutely not dead, but it has changed. Companies are using SOA principles to implement more tactical projects designed to deliver immediate benefits, with the vague thought of one day pulling these projects together under a wider strategic, enterprise-wide SOA banner.

So, as an example, today a company might look at a particular business service such as ‘Create Customer’ or ‘Generate Invoice’, and decide to replace the 27 versions of the service that exist in its silos today with a single shared service. The company might decide to use SOA principles and tools to achieve this, but the planning horizon is definitely short term – deliver a new level of functionality that will benefit all users, and help to reduce ongoing cost of ownership. While it would have been valid a few years ago to counsel this company to deliver this as part of an overarching shift to an SOA-oriented style of operations, today most companies will say that although this sounds sensible, current circumstances dictate that focus must remain on the near term.

The other issue I have with this article is the suggestion that SOA success has little to do with the technology choice. Given that the topic here was not just SOA but mainframe SOA, I take particular exception to this. There is a wide range of SOA tools available, but in the mainframe arena the quality and coverage of the tools vary widely. For example, although many SOA tools claim mainframe support, this may in actuality be simply an MQ adapter ‘for getting at the mainframe’. Anyone taking this route is more than likely to fail with SOA, regardless of how well it has taken on the non-technical issues of SOA. Even among those SOA tools with specific mainframe support, some offer environments alien to mainframe developers, thereby causing considerable problems in terms of skills utilization. It is critical that whatever technology IS chosen, it can be used by CICS- or IMS-knowledgeable folk as well as just distributed specialists. Then there is the question of how intuitive the tools are. Retraining costs can destroy an SOA project before it even gets going.

For anyone interested, there is a free Lustratus report on selecting mainframe SOA tools, available from the Lustratus store. However, I can assure companies that, particularly for mainframe SOA, technology selection absolutely IS a key factor for success, and that while all the other transformational aspects of SOA are indeed key to longer-term, enterprise-wide SOA, there are still benefits to be gained with a more short-term view that is more appropriate in today’s economic climate.

Steve