IBM LinuxONE; what’s in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice in the next 5 years. As long as IBM doesn’t trip over its own feet to snatch defeat from the jaws of victory…

Let’s just take a moment to reflect on what IBM’s got. The LinuxONE currently comes in two sizes, the full-scale enterprise Linux server (Emperor) and an entry-level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉. LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability, coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose between Red Hat, SuSE and Ubuntu environments, a range of hypervisors such as KVM and PR/SM, familiar languages such as Python, Perl, Ruby, Rails and Node.js, and various databases like Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option being offered. The LinuxONE is even the basis of an IBM community cloud being rolled out now.

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. To this base IBM has added its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The ‘but’ is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success under the most demanding environments. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let’s split the world up into three camps: mainframe fans, those who do not know about mainframes, and the mainframe ‘haters’. Perhaps ‘haters’ is a bit strong, but there is absolutely no doubt that there are a significant number of companies across the world who for various reasons see ‘mainframe’ as almost a derogatory word; old-fashioned, expensive, etc. So how will the three camps react to the LinuxONE? IBM mainframe fans don’t need to be told it is a mainframe; they know, and they will also usually have an IBM rep who will be pointing it out with great frequency! The uninitiated who know nothing of mainframes would see no plus or minus in being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports and so on. But the third category can only see the ‘mainframe’ word as negative.

I can almost hear some people pointing out that this is a silly argument, and that anyone who starts to look at the LinuxONE and who knows anything will quickly work out that it is essentially an IBM mainframe. But I would submit that is not the point. The reaction to the ‘mainframe’ word puts the third group off taking a closer look in the first place. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forwards in their company without overtly exposing the ‘mainframe’ word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove ‘mainframe’ from all the literature. Replace it with ‘server’ or ‘Linux server’ or ‘enterprise Linux server’ or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

Cloud computing – balancing flexibility with complexity

In the “Cloud Computing without the hype – an executive guide” Lustratus report, available at no charge from the Lustratus store, one of the trade-offs I touch on is flexibility against complexity.

To be more accurate, flexibility in this case refers to the ability to serve many different use cases as opposed to a specific one.

This is an important consideration for any company looking to start using Cloud Computing. Basically, there are three primary Cloud service models; Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). In really simple terms, an IaaS cloud provides the user with virtual infrastructure (eg storage space, server, etc), PaaS offers a virtual platform where the user can run home-developed applications (eg a virtual server with an application server, database and development tools) and SaaS provides access to third-party supplied applications running in the cloud.
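To make the three models concrete, here is a minimal illustrative sketch in Python; the layer names and the split of responsibilities are my own choices for illustration, not a formal standard.

    # Illustrative sketch only: who manages which layer under each Cloud service model.
    # Layer names and assignments are assumptions for illustration, not a formal standard.
    LAYERS = ["facilities", "virtual servers/storage", "OS + middleware + tools", "applications"]

    RESPONSIBILITY = {
        "IaaS": {"provider": ["facilities", "virtual servers/storage"],
                 "user":     ["OS + middleware + tools", "applications"]},
        "PaaS": {"provider": ["facilities", "virtual servers/storage", "OS + middleware + tools"],
                 "user":     ["applications"]},
        "SaaS": {"provider": LAYERS,   # everything, including the applications
                 "user":     []},      # the user simply consumes the application
    }

    def user_work(model: str) -> list[str]:
        """Return the layers the customer must still build and manage under a given model."""
        return RESPONSIBILITY[model]["user"]

    if __name__ == "__main__":
        for model in ("IaaS", "PaaS", "SaaS"):
            print(f"{model}: user responsible for {user_work(model) or 'nothing beyond using the app'}")

The less the user is responsible for, the simpler life is but the narrower the range of problems the service can address; that is the trade-off discussed next.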

The decision of which is the most appropriate choice is often a trade-off. The attraction of SaaS is that it is a turn-key option – the applications are all ready to roll, and the user just uses them. This is pretty simple, but the user can only use those applications supplied. There is no ability to build new applications to do other things. Hence this approach is specific to the particular business problem addressed by the packaged application.

PaaS offers more flexibility of usage. A user builds the applications that will run in the cloud, and can therefore serve many different business needs. However, this requires a lot of development and testing work, and flexibility is restricted by the pre-packaged platform and tools offered by the PaaS provider. So, if the platform is WebSphere with DB2, and the user wants to build a .NET application for Windows, then tough.

IaaS offers the most flexibility, in that it effectively offers the infrastructure pieces and the user can then use them in any way necessary. However, of course, in this option the user is left with all the work. It is like being supplied with the raw hardware and having to develop all the necessary pieces to deliver the project.

So, when companies are looking at their Cloud strategies, it is important to consider how to balance this tradeoff between complexity/effort and flexibility/applicability.

Steve

Introducing Cloud for Executives

At Lustratus we have been doing a lot of research into Cloud Computing, as have many firms.

I must confess the more I have dug into it, the more horrified I have become at the hype, confusion, miscommunication and manipulation of the whole Cloud Computing concept.

In the end, I decided the time was right for an Executive Guide to Cloud – defining it in as simple terms as possible and laying out the Cloud market landscape. Lustratus has just published the report, entitled “Cloud Computing without the hype; an executive guide” and available at no charge from the Lustratus store. Not only does the paper try to lock down the cloud definitions, but it also includes a summary of some 150 or so suppliers operating in the Cloud Computing space.

The paper deals with a number of the most common misunderstandings and confusions over Cloud. I plan to do a series of posts looking at some of these, of which this post is the first. I thought I would start with the Private Cloud vs Internal Cloud discussion.

When the Cloud Computing model first emerged, some were quick to try to define Cloud as a public, off-premise service (eg Amazon EC2), but this position was quickly destroyed as companies worldwide realized that Cloud Computing techniques were applicable in many different on and off premise scenarios. However, there has been a lot of confusion over the terms Private Cloud and Internal Cloud. The problem here is that analysts, media and vendors have mixed up discussions about who has access to the Cloud resources, and where the resources are located. So, when discussing the idea of running a Cloud onsite as opposed to using an external provider such as Amazon, people call one a Public Cloud and the other an Internal Cloud or Private Cloud.

This is the root of the problem. It makes people think that a Private Cloud is the same as an Internal Cloud – the two terms are often used interchangeably. However, these two terms cover two different Cloud characteristics, and it is time the language was tightened up. Clouds may be on-premise or off-premise (Internal or External), which refers to where the resources are. (Actually, this excludes the case where companies are running a mix of clouds, but let’s keep things simple.) The other aspect of Cloud usage is who is allowed to use the Cloud resources. This is a Very Important Question for many companies, because if they want to use Cloud for sensitive applications then they will be very worried about who else might be running alongside them in the same cloud, or who might get to use the resources (eg disk space, memory, etc) after they have been returned to the cloud.

A Public Cloud is one where access is open to all, and therefore the user has to rely on the security procedures adopted by the cloud provider. A Private Cloud is one that is either owned or leased by a single enterprise, therefore giving the user the confidence that information and applications are locked away from others. Of course, Public Cloud providers will point to sophisticated security measures to mitigate any risk, but this can never feel as safe to a worried executive as ‘owning’ the resources.

Now, it is true that a Public Cloud will always be off-premise, by definition, and this may be why these two Cloud characteristics have become intertwined. However, a Private Cloud does not have to be on-premise – for example, if a client contracts with a third party to provide and run an exclusive cloud which can only be used by the client, then this is a Private Cloud but it is off-premise. It is true that USUALLY a Private Cloud will be on-premise, and hence equate to an Internal Cloud, but the two terms are not equal.
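To keep the two characteristics from getting tangled up again, here is a minimal sketch in Python; the type and value names are my own, purely for illustration, but it shows access and location as two independent attributes, so a private, off-premise cloud is a perfectly valid combination.

    # Illustrative sketch only: access and location are two independent cloud characteristics.
    from dataclasses import dataclass
    from enum import Enum

    class Access(Enum):
        PUBLIC = "open to all users of the provider"
        PRIVATE = "owned or leased by a single enterprise"

    class Location(Enum):
        INTERNAL = "on-premise"
        EXTERNAL = "off-premise"

    @dataclass
    class CloudDeployment:
        name: str
        access: Access
        location: Location

    # A Private Cloud need not be Internal: a third party can run an exclusive,
    # off-premise cloud for a single client.
    examples = [
        CloudDeployment("Amazon EC2 style service", Access.PUBLIC, Location.EXTERNAL),
        CloudDeployment("Own data centre cloud", Access.PRIVATE, Location.INTERNAL),
        CloudDeployment("Hosted, single-client cloud", Access.PRIVATE, Location.EXTERNAL),
    ]

    for c in examples:
        print(f"{c.name}: {c.access.name} / {c.location.name}")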

The best thing any manager or exec trying to understand the company approach to cloud can do is to look at these two decisions separately – do I want the resources on or off premise, and do I want to ensure that the resources are exclusively for my use or am I prepared to share? It is a question of balancing risk against the greater potential for cost savings.

Steve

Is Cloud lock-in a good thing, or bad?

I am doing a lot of research into Cloud Computing at the moment, and spent an enjoyable morning with Salesforce.com, one of the largest Cloud vendors.

However, one thing that particularly piqued my interest was the discussion on Cloud lock-in. One of the most frequent concerns I hear from companies thinking about Cloud is that they are worried about vendor lock-in. After all, with Cloud being so new, what if you lock into a supplier who does not survive?

The discussions with Salesforce.com highlighted an interesting aspect to this debate. One of its offerings, force.com, provides a ‘Platform as a Service’ (PaaS) cloud offering, where users are presented with an environment in the cloud complete with a whole host of useful tools to build their own applications to run in the cloud or customize existing ones. However, Salesforce.com offers its own programming environment which is “Java-like” in its own words. This immediately raises the lock-in concern. If a company builds applications using this, then these applications are not portable to other Java environments, so the user is either stuck with Salesforce.com or faces a rewrite.

A bad thing, you might think. BUT Salesforce.com claims that the reason it has had to go with a Java-like environment is that this enables it to provide much improved isolation between different cloud tenants (users) and therefore better availability and lower risk. For the uninitiated, the point about Cloud is that lots of user companies share the same cloud in what the industry calls a multi-tenancy arrangement, and this obviously raises a risk that these tenants might interfere with each other in some way, either maliciously or accidentally. Salesforce.com has mitigated that risk by offering a programming environment that specifically helps to guard against this happening, and hence differs from pure Java.
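Purely for illustration, the sketch below shows the kind of guard a multi-tenant platform needs; this is emphatically not how force.com is implemented, just a hypothetical Python example of metering each tenant’s work so that one tenant cannot starve the others.

    # Hypothetical illustration of a multi-tenant 'governor': each tenant's request gets
    # a fixed budget of work units, so a runaway tenant cannot monopolise shared resources.
    class GovernorLimitExceeded(Exception):
        pass

    class TenantContext:
        def __init__(self, tenant_id: str, max_operations: int = 10_000):
            self.tenant_id = tenant_id
            self.max_operations = max_operations
            self.used = 0

        def charge(self, operations: int = 1) -> None:
            """Account for work done on behalf of this tenant; stop it if over budget."""
            self.used += operations
            if self.used > self.max_operations:
                raise GovernorLimitExceeded(
                    f"tenant {self.tenant_id} exceeded its budget of {self.max_operations} operations")

    def run_tenant_job(ctx: TenantContext, rows: int) -> int:
        processed = 0
        for _ in range(rows):
            ctx.charge()          # every unit of work is metered
            processed += 1
        return processed

    ctx = TenantContext("acme", max_operations=100)
    try:
        run_tenant_job(ctx, rows=1_000)   # would hog the shared platform
    except GovernorLimitExceeded as e:
        print("stopped:", e)

A platform-controlled, restricted language makes this sort of enforcement far easier than it would be with arbitrary Java code.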

So, is this lock-in a bad thing or good? I don’t know whether Salesforce.com could have achieved its aims a different way, and I have to admit that to a cynic like me the fact that solving this problem ‘unfortunately’ locks you into the supplier seems a bit suspicious. However, this is irrelevant since the vendor is doing the work and has chosen its implementation method, which it is of course free to do. Therefore, the question facing the potential force.com user is simple – the strategic risk of being locked in to the supplier has to be balanced against the operational risk of possible interference from other tenants. How the user reads this balance will determine how good or bad the lock-in option is.

Steve

At last – Cloud Computing Clarified!

No-one can have missed the marketing and media explosion over Cloud Computing.

Vendors talk of nothing else in an attempt to hook onto this latest hype, and analyst and media firms have stirred the pot. However, although very early in its lifecycle, there really could be value in Cloud Computing in the future.

But the problem is no-one seems to be able to cut through all the conflicting messages to describe what the emerging Cloud Computing market actually looks like – UNTIL NOW! Danny Goodall, marketing strategist and guru at Lustratus, has just published his Cloud Computing market landscape in the Lustratus REPAMA blog. The blog post includes a slide presentation that summarizes in simple terms the different Cloud Computing models, splits these models into easy-to-understand pieces and then helpfully lists a selection of vendors playing in each.

This presentation is well worth a read for anyone interested in Cloud Computing. For myself, I think I would like to be the first to propose a possible extra category to be included – a new Cloud Platform service, B2B. As communities move to Cloud, it is likely there will be more and more need for B2B linkage, and although Danny includes an Integration platform service which is similar in nature to B2B, B2B actually has specific requirements that would not fit in most integration services.

However, a great piece of work that should provide a strong base for understanding the developing Cloud Computing market.

Steve

Come in Texas East District Court, your time is up

If there is one thing guaranteed to get me gnashing my teeth, it is the role of the Texas Eastern District Court as the bully boy of the crumbling US software patents world.

For those unfamiliar with this marvelous district court, every major software patent suit has been brought in this court, regardless of where the claimant or defendants are based. My own opinion is that just as the UK is the infamous world capital for divorce settlements because of its apparent unique and extensive bias towards the wife, the Texas Eastern District Court has the same level of notoriety for software patents with its apparent unprecedented bias towards the plaintiffs. Any self-respecting patent troll (if that is not an oxymoron) is quick to praise the name of the Texas Eastern District Court.

The latest in this long line of cases appears to be a couple of suits raised by a guy called Mitchell Prust, of Minnesota, US, against Apple and others, that are threatening to completely derail the Cloud Computing model. These two cases can be taken as the tip of the iceberg – expect more to appear in the same courtroom. Essentially Prust got three patents approved in the area of remote storage management, the earliest in 2000 – these patents basically deal with the virtualization of storage to allow multiple users across the world to carve out their own little space and manage and use it, as Cloud users do.

One thing that has forever confused me is how patents get approved in the US system. Anyone who knows IT will probably be aware that the IBM VM (Virtual Machine) operating system that started in the late 1960s provided this type of storage virtualization. Perhaps the difference with these patents is that each makes a big thing of the client system being attached through ‘a global computer network’. The implication is that this means the Internet, which would rule out the IBM VM solution, which clearly predates the Internet. However, global access to these systems through global networks was certainly possible in the old days too – when I worked in IBM in the 80s I was able to log on from a remote location across the network, and then continue to interact with my virtualized piece of the greater storage pool. Does this equate to a ‘global computer network’? Seems to me to be pretty damn close.

This brings up an interesting point. One reason this particular court is popular is that it has a habit of taking definitions in the patent claims, and interpreting them in a most eccentric way. In a recent case, still ongoing, the Texas Eastern District Court judge decided on a definition of ‘script’ that was a mile from what most IT people would think, and therefore instead of that particular patent covering software that employed scripts in the IT sense, it now covers a far wider set of products that in reality have nothing to do with scripts. For reference, the definitions proposed for ‘script’ (and I am indebted here to Vincent McBurney’s painstaking tracking of the case) were as follows (and remember this was a patent to do with data movement):

SCRIPT

  1. Plaintiff: a group of commands to control data movement into and out of the system, and to control data transformation within the system
  2. Defendants: A series of text commands interpretively run by the script processor, such that one command at a time is translated and executed at runtime before the next command is translated and executed, and that control data movement into and out of the system and control data transformation within the system
  3. Judge: a group of commands to control data movement into and out of the system, and to control data transformation within the system

So, according to this definition, any code, for example a GUI or an executing program, that controls data movement based on some sort of input is now classed as a ‘script’.
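To see how wide that is, consider a made-up fragment of perfectly ordinary application code (written here in Python for brevity); no IT person would call it a script, there is no script processor and no command-at-a-time interpretation, yet it is literally ‘a group of commands to control data movement into and out of the system’.

    # Made-up example: an ordinary application routine, not a 'script' in the everyday IT
    # sense, yet it is "a group of commands to control data movement into and out of the system".
    import shutil

    def archive_report(source_path: str, archive_dir: str) -> str:
        """Copy a report file into an archive directory and return the new location."""
        destination = shutil.copy(source_path, archive_dir)   # data movement out of the system
        return destination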

If the Court follows the same approach in the case of these remote data storage patents, it could not only derail Cloud Computing but do a fairly comprehensive job of annihilating the virtualization market too.

Somehow, order has to be restored to the much-maligned US software patent system. It is absolutely right and proper that inventors should be properly recompensed for their innovations – this is healthy, and stimulates technology advancement. But to me the clear indication of the failure of the system is that every plaintiff heads to East Texas, presumably because it gives the answer the plaintiff wants to hear. Statistics appear to bear this out. The implication is that any other court in the land would risk a less favourable judgement…dare I say it, perhaps a more just one?

I’ll sign off with the old joke about the soldier marching with his unit past a collection of family members. A spectator turns to a woman watching the march and says, ‘Madam, your son is marching out of step!’. The woman replies, ‘No Sir, he is the only one marching IN step’…

Steve

The REAL concern over Cloud data security

Recently I have been involved in a discussion in the LinkedIn Integration Consortium group on managing data in a Cloud Computing environment, and the subject has turned to security.

I had maintained that data security concerns may sometimes result in companies preferring to look at some sort of internal Cloud model rather than risk putting their data in the Cloud:

the concept that I find intrigues larger companies is the idea of running an INTERNAL cloud – this removes a lot of the concerns over data security, supplier longevity etc.

This generated a reaction from one of the other discussion participants, Tom Gibbs of DiCOM Grid.

I hate to poke at other commentators but security is an overarching issue for IT and telcom as a whole. No more and probably less of an issue with cloud or SaaS.

It’s almost amusing to watch legacy IT managers whine that b/c it isn’t local it isn’t secure. I’m sorry but this is totally naive.

This brings up an important point. What Tom is saying is that the Cloud provider will almost certainly offer top-notch security tools to protect data from unauthorized access or exposure, and therefore what’s the problem?

The answer is that the executive concern with putting data outside the corporate environment is likely to be more of an emotional than a logical argument. With so many topical examples of confidential information being exposed, and with executives knowing that regulations/legislation/corporate policies often make them PERSONALLY responsible for protecting information such as personal details of clients/customers/citizens, the whole thing is just too scary.

IT folk may see this as naive, just as Tom says. After all, modern security tools are extremely powerful and rigorous. But of course this depends on the tools being properly applied. In the UK, for example, there have been a number of high-profile incidents of CDs or memory sticks containing confidential citizen information being left on trains and exposed to the media. The argument for allowing data to be taken off-site was based on the fact that policy required all such data to be encrypted, making it useless if it fell into anyone else’s hands. These encryption algorithms were top-notch, and provided almost total protection. BUT the users who downloaded the information in each of these cases did not bother to encrypt it – in other words, if the procedures had been followed there would have been no exposure, but because people did not implement the procedures the data was exposed.
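The technical step that was skipped is not hard. As a purely illustrative sketch, something like the following, using the widely available Python cryptography package (assuming it is installed; the file names are hypothetical), is all that stood between those incidents and a non-story.

    # Minimal sketch: encrypt a file before it leaves the building, using the
    # 'cryptography' package's Fernet recipe (pip install cryptography).
    # Tools like this only protect data when people actually run this step.
    from cryptography.fernet import Fernet

    def encrypt_file(plain_path: str, encrypted_path: str, key: bytes) -> None:
        """Encrypt plain_path with the supplied key and write the result to encrypted_path."""
        with open(plain_path, "rb") as f:
            ciphertext = Fernet(key).encrypt(f.read())
        with open(encrypted_path, "wb") as f:
            f.write(ciphertext)

    if __name__ == "__main__":
        key = Fernet.generate_key()   # in practice the key must be managed and kept separate
        # encrypt_file("citizen_records.csv", "citizen_records.enc", key)  # hypothetical filenames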

These situations have not only proved extremely embarrassing to the data owners involved, but have resulted in heads rolling in a very public fashion. So the concerns of the executive moaning about risk are visceral rather than rational – ‘Moving my data outside of the corporate boundary introduces personal risk to me, and no matter how much the experts try to reassure me I don’t want to take that risk’. Of course less sensitive information will not be so much of a concern, and therefore these worries will not affect every Cloud project. But for some executives the ‘security’ concern with moving data into the Cloud, while not logically and analytically based, is undeniably real.

Steve