IBM LinuxONE; what’s in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice in the next 5 years. As long as IBM doesn’t trip over its own feet to snatch defeat from the jaws of victory…

Let’s just take a moment to reflect on what IBM’s got. The LinuxONE currently comes in two sizes, the full-scale enterprise Linux server (Emperor) and an entry-level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉. LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability, coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose between Red Hat, SuSE and Ubuntu environments, a range of hypervisors such as KVM and PR/SM, familiar languages and frameworks such as Python, Perl, Ruby, Rails and Node.js, and various databases like Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option being offered. The LinuxONE is even the basis of an IBM community cloud being rolled out now.

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. To this base IBM adds its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The ‘but’ is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success under the most demanding environments. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let’s split the world up into three camps: mainframe fans, those who do not know about mainframes, and the mainframe ‘haters’. Perhaps ‘haters’ is a bit strong, but there is absolutely no doubt that there are a significant number of companies across the world who for various reasons see ‘mainframe’ as almost a derogatory word; old-fashioned, expensive, etc. So how will the three markets react to the LinuxONE? IBM mainframe fans don’t need to be told it is a mainframe; they know, and they will also usually have an IBM rep who will be pointing it out with great frequency! The uninitiated who know nothing of mainframes would not see any plus or minus from being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports and so on. But the third category can only see the ‘mainframe’ word as negative.

I can almost hear some people pointing out that this is a silly argument; that anyone who starts to look at the LinuxONE and who knows anything will quickly work out it is essentially an IBM mainframe. But I would submit that is not the point. The reaction to the mainframe word puts the third group off from taking a closer look in the first place. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forwards in their company without overtly exposing the ‘mainframe’ word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove ‘mainframe’ from all the literature. Replace it with ‘server’ or ‘Linux server’ or ‘enterprise Linux server’ or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

Why enterprise mobile computing needs an mBroker – part 1

Mobile computing is all the rage, with employees, consumers and customers all wanting to use their mobile devices to transact business. But how should an enterprise approach mobile computing without getting into a world of trouble? How can the enterprise future-proof itself so that as mobile enterprise access explodes the risks are mitigated?

mBrokers are emerging as the preferred method of building a sustainable, governable and effective enterprise mobile computing architecture. The mBroker brings together ESB, integration broker, service gateway, API management and mobile access technology to provide the glue necessary to bring the mobile and corporate worlds together effectively and efficiently; for a summary of mBroker functionality see this free Lustratus report. In this first post in a series looking at mBrokers, we will look at the fundamental drivers for the basic broking functionality offered by mBrokers.

Integration brokers have been around for many years now. The principle is that when trying to integrate different applications or computing environments, some form of ‘universal translator’ is needed. One application may expect data in one format while another expects a different format, for example. A trivial example might be an international application where some components expect mm/dd/yy dates while others want dd/mm/yy. The broker handles these transformation needs. But it plays another very important role apart from translating between different applications; it provides a logical separation between application components, so that requestors can request services and suppliers can supply services without either knowing anything about the other’s location/environment/technology. In order to achieve this, it provides other functionality such as intelligent routing to find the right service and execution location, once again without the service requestor having to know anything about it.
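
To make the two roles concrete, here is a minimal sketch (in Python, with entirely hypothetical service and field names, not any vendor’s API) of a broker that both transforms a date field between the formats each side expects and routes the request by logical service name, so the requestor never learns where or how the supplier runs:

```python
from datetime import datetime

# Minimal, illustrative broker sketch; service and field names are hypothetical.
# It shows the two roles described above: transforming data between the formats
# each application expects, and routing by logical name so the requestor never
# needs to know where or how the supplier runs.

SERVICES = {}  # logical service name -> (handler, date format the supplier expects)

def register(name, handler, date_format):
    """Suppliers register a service under a logical name."""
    SERVICES[name] = (handler, date_format)

def request(name, payload, caller_date_format="%m/%d/%y"):
    """Requestors call by logical name; the broker transforms and routes."""
    handler, target_format = SERVICES[name]
    # Transformation step: convert the date field to what the supplier expects.
    parsed = datetime.strptime(payload["date"], caller_date_format)
    payload = {**payload, "date": parsed.strftime(target_format)}
    # Routing step: the broker, not the caller, decides which handler to invoke.
    return handler(payload)

# Example supplier that expects dd/mm/yy dates.
register("billing.create_invoice",
         lambda p: "invoice raised for %s on %s" % (p["amount"], p["date"]),
         "%d/%m/%y")

# A US-format caller stays unaware of the supplier's location or date format.
print(request("billing.create_invoice", {"amount": 120, "date": "08/31/25"}))
```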

Enterprise mobile applications face a lot of the same challenges. When crossing from the mobile device end to the corporate business services end, the same problems must be addressed. For example, mobile applications often rely on JSON for data format notation and use RESTful invocation mechanisms to drive services, but many corporate networks employ an SOA model based around XML data and SOAP-based invocations of services. In addition, the same sort of abstraction layer offered by integration brokers is beneficial to avoid the mobile device needing to know about the locations of back-end applications. It is therefore not surprising to find that integration broker technology is one source for mBroker technology.
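
As an illustration of the kind of mediation involved (the operation and element names are invented, not any specific mBroker product’s API), the sketch below turns the JSON body a RESTful mobile client might send into the SOAP/XML request a corporate SOA service would expect:

```python
import json
import xml.etree.ElementTree as ET

# Illustrative mediation step only; operation and element names are invented.
# The mBroker accepts the JSON body a RESTful mobile client sends and builds
# the SOAP/XML request the corporate SOA service expects.

def json_to_soap(json_body, operation="GetAccountBalance"):
    data = json.loads(json_body)
    envelope = ET.Element("soap:Envelope",
                          {"xmlns:soap": "http://schemas.xmlsoap.org/soap/envelope/"})
    body = ET.SubElement(envelope, "soap:Body")
    op = ET.SubElement(body, operation)
    for key, value in data.items():
        ET.SubElement(op, key).text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# What a mobile client might POST ...
mobile_request = '{"accountId": "12345", "currency": "USD"}'
# ... and the XML the back-end SOAP service receives after mediation.
print(json_to_soap(mobile_request))
```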


Ultramatics works with IBM to defuse SOA security threat

Ultramatics has just announced its SOA SafeGuard product, which is designed to close one of the major SOA security holes – the opportunity to inject viruses and other malware threats through XML file sharing.

This is good news for SOA implementers, but also introduces an interesting new stress point for IBM. Back in 2007 I was on a podcast where I identified the five SOA security traps, one of which was the XML problem. To summarize, most virus and other threat detection solutions look at the datastreams coming into the system and identify threat signatures that indicate the presence of some noxious code, but unfortunately they cannot see inside the XML wrapper, so to all intents and purposes the contents of any attached XML file are invisible. This offers the opportunity for malicious agencies to slip some nasty code into the XML content and smuggle it through the security gates to the enterprise. Of course, it is not immediately obvious how this would help, in that getting this code executed might not be so easy, but hackers are smart… so it is best to close this exposure.

One way to close the window is simply to forbid any XML file sharing, but since industries such as healthcare now more or less rely on this to conform to industry standards and regulations, this is not really practical. The new Ultramatics product claims to be able to protect from these types of intruders. It runs on the IBM DataPower XI50 Integration Appliance, providing a hardware-based shield that can see into the XML files and weed out anything unpleasant. This solution will be very valuable to many SOA companies worried about security.
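
Conceptually, the difference is between scanning the raw byte stream and scanning the content carried inside the XML, including encoded attachments. The toy scanner below is not how SOA SafeGuard or the DataPower appliance work internally; it just illustrates the idea, using a harmless made-up test signature:

```python
import base64
import xml.etree.ElementTree as ET

# Conceptual illustration only -- not how SOA SafeGuard or DataPower work
# internally. The point is that the scanner looks *inside* the XML payload
# (including base64-encoded attachments) rather than only at the raw byte
# stream. The signature list holds one harmless, made-up test pattern.

THREAT_SIGNATURES = [b"FAKE-MALWARE-TEST-PATTERN"]

def scan_xml_payload(xml_text):
    findings = []
    for element in ET.fromstring(xml_text).iter():
        content = (element.text or "").strip()
        if not content:
            continue
        # Check the visible text and, where it decodes, any base64 attachment.
        candidates = [content.encode()]
        try:
            candidates.append(base64.b64decode(content, validate=True))
        except Exception:
            pass
        if any(sig in blob for blob in candidates for sig in THREAT_SIGNATURES):
            findings.append(element.tag)
    return findings

attachment = base64.b64encode(b"...FAKE-MALWARE-TEST-PATTERN...").decode()
doc = "<claim><attachment>" + attachment + "</attachment></claim>"
print(scan_xml_payload(doc))  # ['attachment']
```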

But there is something else interesting in the product details. The datasheet for the product says it can be used (in conjunction with IBM’s MQSeries) to:

Create a SOA ESB that can perform routing, transformation and protocol mediation functions

This is intriguing. Of course, the idea of an ESB appliance is not new, but the interesting point is that IBM is supplying this capability through the Ultramatics product… I wonder if the other IBM ESBs, WebSphere ESB and WebSphere Message Broker, see this as encroachment?

Steve

A practical approach to Open Source (part 2)

Following my first post in this short series, in which I looked at the benefits of going with an open source project, this post focuses on the risks that counterbalance those benefits.

The main risks for users to consider fall into four areas:

  1. Support
  2. Longevity
  3. Governance and control
  4. The 80/20 rule

The first is perhaps the most obvious risk of OSS. The principle behind OSS is that the code is developed by a team of interested parties, and support is often initially offered in the same light. So, problem resolution is based upon someone else giving time and effort to work out what is wrong and fixing it. Similarly with documentation, education etc. – they all depend on someone else’s good will. Some proponents of OSS will point to the fact that commercial companies have sprung up that provide fee-based support for various projects, but this has a number of big drawbacks. One is that it is fee-based and therefore erodes one of the open source advantages (free licenses), and the other is down to the nature of most OSS projects. OSS developers usually operate on a shared-code mentality, and frequently use many other OSS components in their own solutions. While one of these groups might form a company to offer support contracts for the code they have developed, they usually have to rely on ‘best can do’ support for the other components they have embedded. All of this attaches a great deal of risk to using OSS. The one exception is where a company has decided it wants the OSS specifically because it gets the source and can staff its own maintenance and support team.

Longevity relates to the support question. OSS projects need their driving teams of developers, and if for some reason these people decide to move on to something more exciting or interesting, the user could be left using a code base that has no one left who is interested in it. This means future upgrades are extremely unlikely, and leaves the user stranded. A special case to watch out for here is those crafty vendors who start an open source project around some new area they are interested in, supplying a very basic code base, and then, when enough users have bitten on the bait, announce that future development is going to be supplied in the shape of a traditionally licensed software offering, while the original code base remains at a basic level.

The governance / control risk is a tricky one. The problem is that by definition anyone interested in a particular OSS project can download and use the code. A company might decide to use OSS for specific needs, only to discover that it has proliferated without control due to the ease with which people can download it. These downloaders may think the code base is supported because the company has decided to use it in other places, but they may have downloaded different versions entirely. Related to this is the control of quality – as other OSS users develop updates, these are supplied back into the OSS project, but who, if anyone, is policing these changes to make sure they do not alter functionality or break compatibility?

The final risk area relates to the nature of people. The driving force behind any OSS project, at least in the early days, tends to be a fanatical group of developers who really enjoy making something work. However, anyone familiar with software development will be aware of the 80/20 rule – that 80% of the code takes 20% of the effort to write, and vice versa. The natural inclination for any OSS team is to do all the fun stuff (the easy 80%), but to shy away from the really heavy-duty complicated stuff (the hard 20%). So, areas such as high availability, fault tolerance, maintainability and documentation are the sorts of areas that may suffer, at least until the project gains some heavyweight backers. This can severely limit the attraction of open source software for commercial, mission-critical needs at least.

It is for many of the reasons given above that the area of greatest attraction for new OSS projects tends to be in the personal productivity area, where there are alternatives to mitigate the risks. So, Firefox is great, and has super usability features because it has been created more or less by users, but if something went sour with the latest level of the code the users could switch to IE without too much pain. Deciding to use OSS in a production, mission-critical environment is much more dangerous, and generally only acceptable when there is a cadre of heavyweight underwriters, such as in the case of LINUX. In that case, IBM would never let anything go wrong with LINUX because it is too reliant on its continued success.

Part 3 will look at this area more closely as it discusses the OSS project business model.

Steve

Increasing payments fraud highlights rules-based processing benefits

The recent AFP report (Association for Financial Professionals) on payments fraud in 2008 makes grim reading for financial institutions…

…especially compared to the previous 2007 one. 30% of respondents said they experienced more payments fraud attempts in 2008 than in 2007, and incidence accelerated throughout the year, with 38% saying that fraud increased in the second half of the year, probably due to worsening economic conditions.

From an IT point of view, this just reinforces the need to have systems that are easy and cost-effective to change. Lustratus developed a detailed report in 2008 about the shift in the area of IT payments processing solutions, where it outlined the key elements of the new, ‘2nd generation’ approach to payments processing as follows:

…the 2nd generation payments processing model needs to be based on the following key design points:-

- An extensible, service-oriented approach featuring plug-in capabilities

- Enterprise-wide, consolidated and centralized, real-time visibility and control of all payments processing activities, from anywhere to anywhere, based on a generic, standards-based payments model

- A business rules-based architecture governing payments processing functionality

- And, of course, the continued provision of reliable, efficient and repeatable payments handling as provided by 1st generation systems

The two key features that apply to this payments fraud example are the extensible approach with plug-in capabilities, and in particular the rules-based concept. The idea behind having a business rules-driven payments processing infrastructure is that when changes are needed in the way payments are handled, these can be implemented quickly, safely and verifiably with rules, as opposed to having to change program internals with all the associated implications. For example, rules could be used to enforce the use of separate accounts for handling checks or ACH payments, or to require different accounts for every payment type. In addition, rules combined with events processing can provide an easy way to detect out-of-line situations and raise a red flag.
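
As a rough sketch of what ‘rules-driven’ means in practice (the rule names and payment fields are hypothetical, not any particular payments product), the example below externalizes the routing and red-flag decisions into a rules table that can be changed without touching the payment-handling code:

```python
# Minimal sketch of rules-driven payments handling; rule names and payment
# fields are hypothetical. Changing behaviour means editing the rules table,
# not the payment-handling code.

RULES = [
    # (rule name, condition on a payment, action to take)
    ("route_checks_to_dedicated_account",
     lambda p: p["type"] == "check",
     lambda p: {**p, "settlement_account": "CHK-SETTLE-01"}),
    ("route_ach_to_dedicated_account",
     lambda p: p["type"] == "ach",
     lambda p: {**p, "settlement_account": "ACH-SETTLE-01"}),
    ("flag_out_of_line_amount",          # raise a red flag on unusual amounts
     lambda p: p["amount"] > 50_000,
     lambda p: {**p, "review_required": True}),
]

def apply_rules(payment):
    """Run every matching rule over the payment, in order."""
    for name, condition, action in RULES:
        if condition(payment):
            payment = action(payment)
    return payment

print(apply_rules({"type": "ach", "amount": 75_000}))
# {'type': 'ach', 'amount': 75000, 'settlement_account': 'ACH-SETTLE-01', 'review_required': True}
```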

Payments fraud is only one of many examples of the need for systems to be able to respond quickly to change – and rules-based processing combined with an extensible, service-based approach to delivering functionality provides the ideal environment to tolerate that change quickly, safely, cheaply and effectively.

Steve

What use is technology without flexibility?

I was reading a post today from mainframe integration vendor GT Software…

…about its support of IBM’s mainframe speciality engines, and I was suddenly hit by the realization that in order to really add value for users, technology almost always has to be accompanied by flexibility. The two need to go hand in hand if returns are to be maximized and business risk minimized.

The specific example discussed relates to an IBM mainframe invention called a speciality engine. For the uninitiated, think of a logical processing box within the overall mainframe environment where processing is much cheaper, with different boxes being aligned to specific activities such as running LINUX operations, data access or Java-type activities. What this basically means is that if part of your workload is doing something that is supported by one of the speciality engine types, then you can choose to run it more cheaply by moving it into this engine, and in fact this can often improve performance too.

This is neat technology, offering the opportunity to reduce costs and improve effectiveness, and various mainframe software suppliers have jumped on the opportunity this offers by moving eligible workloads onto these speciality engines. However, as with any new technology development, things are not quite as simple as they seem. In the IT industry there is a terrible tendency to jump for a new technology and push everything onto it, without appreciating the implications. But, in this example, as pointed out in the referenced post,

There are many use cases where it is much more efficient to NOT shift workload to a specialty engine. Why — because, there is overhead associated with moving workload

This is typical of just about any new technology. It is great in SOME circumstances, but loses out in others. iPods are great for listening to pop music, sounding little different to CDs and being very much more convenient, but try them on classical symphonies and you will wonder what has happened to the color and magic of the piece. The key is to use new technology for WHAT MAKES SENSE, as opposed to what is possible. There is another angle to this flexibility too. IT vendors often ignore the fact that users are not starting from a clean sheet of paper; they have existing investments and technologies that cannot just be written off. Therefore, it is important to have the flexibility to operate with whatever is in place rather than demand a specific new technology component. This is not a static need but a dynamic one – it may be that a company changes its approach further down the line, and a rigid, inflexible technology implementation can cause terrible future headaches.
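
A toy cost model, with entirely invented numbers, makes the point from the quoted post: offloading only pays when the cheaper processing outweighs the fixed overhead of moving the work across.

```python
# Toy cost model with entirely invented numbers; it is not how IBM prices
# speciality engines. It simply shows the trade-off named in the quote:
# offloading wins only when the cheaper processing outweighs the fixed
# overhead of moving the work across.

def worth_offloading(cpu_seconds, calls=1,
                     gp_cost_per_sec=1.00,        # general processor rate (assumed)
                     specialty_cost_per_sec=0.25, # speciality engine rate (assumed)
                     switch_overhead_sec=0.002):  # cost of each hand-off (assumed)
    stay_cost = cpu_seconds * gp_cost_per_sec
    move_cost = (cpu_seconds * specialty_cost_per_sec
                 + calls * switch_overhead_sec * gp_cost_per_sec)
    return move_cost < stay_cost, round(stay_cost, 6), round(move_cost, 6)

# A long-running batch step: offloading clearly wins.
print(worth_offloading(cpu_seconds=30.0, calls=1))        # (True, 30.0, 7.502)
# A million tiny requests: the switching overhead eats the saving.
print(worth_offloading(cpu_seconds=1.0, calls=1_000_000)) # (False, 1.0, 2000.25)
```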

While new technology may promise a lot, it is only when coupled with flexibility over which technologies to use, for what, and when that technology can REALLY deliver its full value.

Steve

Secure mainframe SOA-in-a-box

I was reading the announcement from Layer7 about its ‘SOA-in-a-box’ for IBM mainframe users, and a number of things struck me.

First, I am SO PLEASED to see someone remembering that CICS is not the only mainframe transaction processing environment in use today. A significant number of large enterprises, particularly in the finance industry, use IBM’s IMS transaction processing system instead. With the strength and penetration of CICS in mainframe enterprises, it sometimes seems like these users have become the forgotten tribe, but investments in IMS are still huge in anyone’s numbers and it is a smart move to cater to them. I am sure that the fact that this solution serves IMS as well as CICS users will be a big plus.

The other point that struck me was that I have felt for some time that, with the security/intrusion detection/firewall/identity management market seeing such a shift to security appliances, it was time vendors thought of piggy-backing functionality onto these platforms. Of course, one reason for having an appliance is to provide a dedicated environment to address issues such as security, but in truth these appliances are rarely used to anywhere near capacity. Therefore it makes a lot of sense to optimize the use of the available processing power rather than slavishly locking it away where it can’t help anyone.

Finally, I have to admit my first reaction to this announcement was to worry about how good connectivity would be to the mainframe. Dealing with mainframes is an arcane area, and I was not aware that Layer7 had any special expertise or credentials here, but I see that GT Software is apparently providing the mainframe integration piece. This makes me a lot happier, since this company has been dealing with mainframes for 20 years. In fact, Lustratus did a review recently on GT Software’s Ivory mainframe SOA tool, which is apparently what is included in the Layer7 box.

Anyway, on behalf of all those IMS users out there, thanks Layer7!

Steve

Will mashups mash up your infrastructure?

One of the forecasts in the Lustratus predictions for 2008 Insight, available free of charge from the Lustratus web store, deals with the emergence and adoption of mashups.

At this moment it is unclear how fast mashups will be adopted, but Lustratus thinks that any serious adoption will place massive strain on enterprise infrastructures, causing the unwary to buckle and collapse.

Mashups seem great. The user is suddenly in a position to create his or her own page layout with all the business applications needed to carry out that user’s activities. A great productivity boost, perhaps, but what are the impacts on the enterprise? Basically, as Lustratus points out, every desktop becomes an application. Instead of an IT department having to worry about 10 or 20 applications, all of a sudden there are hundreds or even thousands. Worse still, while traditional IT-controlled applications are usually managed fairly rigorously with procedures, policies and management practices, the world of mashups could well be more akin to anarchy.

Fundamental to a productive mashup will be the need to drive the different business services required by the particular user, and therefore services will suddenly become tools used by hundreds of people in many different ways. All of this activity could create a huge increase in traffic as well as a generally uncoordinated style of operations, causing major difficulties for the infrastructure software trying to hold everything together.

Well, OK, maybe this is a little negative – but the point is, enterprise architects and management should start considering these issues now. Trying to sort this out when the genie is out of the bottle will be a lot more difficult…..

Steve

Can SOA be bad for your health?

Recently I featured in a podcast and wrote an article on the 5 SOA Security traps, and one particularly sticks in my mind.

The issue is about flexibility – a good thing, most people agree, but in security / governance terms it can be a two-edged sword, and so it proves to be in the case of SOA.

The problem comes down to security domains. IT implementations can be thought of as a group of structures with varying levels of security – all the way from a community village where anyone can wander in anywhere, up to castles with moats, drawbridges and even boiling oil! Imagine for example a company with a particular silo application which is highly sensitive and must be absolutely secure. This could be implemented on a high-availability cluster with hardware encryption, and even have physical access controlled by putting it in a room with locks on the door and a guard! Well, OK, this might be a little over the top, but the point is the company can take whatever measures it sees fit to implement a high-level security domain – think castle.

Now along comes SOA, with its philosophy of flexibility and shared, reusable services. Instead of running silos, applications become a linked set of services and logic, and the wonderful flexibility of SOA means these services could be running anywhere across the enterprise, on any platform and in any technology environment. So supposing there is a shared ‘create customer’ service, and the high-security application switches to using this service instead of its own redundant create customer code. Now, since the security is only as good as the weakest link, the security domain is broken. Someone just drilled a hole in the castle wall.
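
To illustrate the weakest-link point (with invented domain levels and service names, not a real governance product), a policy check like the sketch below would catch the ‘hole in the castle wall’ before the call is ever made:

```python
# Illustrative only: invented domain levels and service names. The check
# enforces the weakest-link rule described above: a high-security application
# must not call a service hosted in a lower-security domain.

DOMAIN_LEVEL = {"castle": 3, "office": 2, "village": 1}

SERVICE_HOST_DOMAIN = {
    "create_customer": "village",   # the shared, reusable service
    "post_ledger_entry": "castle",  # stays inside the hardened silo
}

def call_allowed(caller_domain, service):
    """Allow the call only if the service's domain is at least as secure."""
    return DOMAIN_LEVEL[SERVICE_HOST_DOMAIN[service]] >= DOMAIN_LEVEL[caller_domain]

print(call_allowed("castle", "post_ledger_entry"))  # True
print(call_allowed("castle", "create_customer"))    # False: a hole in the castle wall
```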

Of course, companies can take measures to ensure this disaster does not befall their critical apps. Procedures can be put in place to protect the integrity of the security domains, restricting changes to these applications and blocking them from SOA-based distribution. But many people are unaware of the exposure, and sometimes programmers, with the best intentions, might accidentally end up compromising operations. In the end, it is up to management to put in place any education programs, working practices and policies and then to enforce them. But at least forewarned is forearmed.

Steve

Beware of the OSS Trojan Horse

Open source software (OSS) seems a great idea, particularly for segments of the market like SOA where there are lots of standards – some would even say too many.

After all, the software is free, isn’t it? Of course, in reality, the decision whether to go with an open source approach to SOA is a lot more complicated than that. The key thing when considering SOA is to be realistic about the business case, as discussed by Ronan in his recent post. Evaluating the value proposition for open source SOA is a non-trivial exercise. This is a subject that Ronan goes into in much greater detail in his recent Lustratus Report, “The open source value proposition for SOA”, available from the Lustratus web store, where he considers a wide range of factors affecting the final decision.

One point that jumped out at me from the report related to the need to be sure that the chosen OSS solution is not a Trojan horse. The problem is that some open source projects are actually being used as test-beds by commercial vendors, as a way of gaining valuable input and experience that can then be used as part of a future commercial offering. On the one hand, this can be attractive – after all, if a vendor is driving the project then it is likely that skilled resources will be available to ensure its vitality. But on the other hand, if the vendor plans a commercial offering, then what functions will be reserved for the ‘full function’ offering? Will these be needed in the future? As Ronan states in the report,

It is common with the larger vendors in particular to promote OSS as a light weight alternative to their full strength closed source products. For these vendors, it is essential that due diligence verifies that the OSS solution will be sufficient for all current and future requirements. If this is not the case, the cost of the closed source product must be factored into the business case.

This doesn’t mean to say that these projects should be avoided – just that it is wise to consider the gifts the Greeks are bringing, and what’s in it for them….

Steve