IBM LinuxONE: what’s in a name?

So the new IBM LinuxONE has now been officially launched. And not to put too fine a point on it, the Lustratus opinion is that it is pretty much the best Linux server around. In fact, to really stick my neck out, the LinuxONE could become the premier Linux server of choice in the next 5 years. As long as IBM doesn’t trip over its own feet to snatch defeat from the jaws of victory…

Let’s just take a moment to reflect on what IBM’s got. The LinuxONE currently comes in two sizes: the full-scale enterprise Linux server (Emperor) and an entry-level server (Rockhopper). Cunning use of penguins to stress the link to Linux 😉. LinuxONE offers a range (if two is a range) of Linux servers with outstanding reliability, security and non-disruptive scalability, coupled with probably the best data and transaction handling facilities in the world. Bold words, but there is proof (see later).

But the LinuxONE also offers the openness and productivity support expected in the Linux world. Customers can choose from Red Hat, SuSE and Ubuntu environments, a range of hypervisors such as KVM and PR/SM, familiar languages and frameworks such as Python, Perl, Ruby, Rails and Node.js, and various databases like Oracle, DB2, MongoDB and MariaDB. In addition, LinuxONE adopts open technologies extensively, including OpenStack, Docker, Chef and Puppet. Even the financing for the LinuxONE is more aligned with Linux and Cloud expectations, with a usage-based fixed monthly charge or even a rental option being offered. The LinuxONE is also the basis of an IBM community cloud being rolled out now.

So how can anything go wrong? And anyway, how can I make those claims about reliability, security and so on? Well, of course, the secret is that the IBM LinuxONE is based on the IBM mainframe, arguably the most proven server the world has ever known for reliability, availability, data and I/O handling, transaction processing and enterprise serving. To this base IBM adds its extensive experience over the last few years of running Linux workloads and serving Linux needs with z/Linux, providing the ideal launchpad for delivering the ultimate Linux servers. Fortunately, IBM has not tried to resist the march of open technologies, taking the opportunity to bring open, non-IBM and IBM offerings together with the aim of delivering the premier Linux server environment.

The ‘but’ is that IBM cannot manage to tear itself away from its pride in the mainframe. Rightly, IBM is very proud of its mainframe technology and its long history of success in the most demanding environments. Perfectly understandable. And so I suppose it is only natural that IBM would want to refer in all its marketing literature to the fact that the LinuxONE is an enterprise Linux mainframe, and to stress that it IS a mainframe, albeit with significant Linux and open technology support added. But from the outside, this makes no sense. Let’s split the world up into three camps: mainframe fans, those who do not know about mainframes, and the mainframe ‘haters’. Perhaps ‘haters’ is a bit strong, but there is absolutely no doubt that a significant number of companies across the world, for various reasons, see ‘mainframe’ as almost a derogatory word: old-fashioned, expensive, and so on. So how will the three markets react to the LinuxONE? IBM mainframe fans don’t need to be told it is a mainframe; they know, and they will usually have an IBM rep who points it out with great frequency! The uninitiated who know nothing of mainframes would see no plus or minus in being told the LinuxONE is a mainframe; they will simply want to look at what the LinuxONE can do for them, what tools and environments it supports, and so on. But the third category can only see the ‘mainframe’ word as a negative.

I can almost hear some people pointing out that this is a silly argument – that anyone who starts to look at the LinuxONE and who knows anything will quickly work out that it is essentially an IBM mainframe. But I would submit that is not the point. The reaction to the ‘mainframe’ word is enough to put the third group off taking a closer look in the first place. Once they do look, as long as the server has the tools and offers the capabilities they need, and they can carry it forward in their company without overtly exposing the ‘mainframe’ word, the strength of the LinuxONE offering will carry it through.

So I make this plea to IBM. Please, please, remove ‘mainframe’ from all the literature. Replace it with ‘server’ or ‘Linux server’ or ‘enterprise Linux server’ or whatever. LinuxONE should be associated with being the best, most reliable, most productive, most scalable, most effective and safest range of Linux servers in the world, not with being a Linux-enabled mainframe.

Calling all integration experts!

Remember the old Universal Translator, as modeled by the late Mr. Spock? One of the first (or perhaps future?) examples of integration solutions, and certainly one of the most fondly remembered! But at its heart, it is also an almost perfect representation of the integration challenges today. Many years ago, there was EAI (Enterprise Application Integration), which was all about integrating homegrown applications with purchased package applications and/or alien applications brought in from mergers and acquisitions activity. The challenge was to find a way to make these applications from different planets communicate with one another to increase return on assets and provide a complete view of enterprise activity. EAI tools appeared from vendors such as TIBCO, SeeBeyond, IBM, Vitria, Progress Software, Software AG and webMethods, to mention just a few.

Then there came the SOA initiative. By building computer systems with applications in the form of reusable chunks of business functionality (called services) the integration challenge could be met by enabling different applications to share common services.

Now the eternal wheel is turning once again, with the integration challenge clothed in yet another disguise. This time it is all about integrating systems with completely different usage and resource characteristics, such as mobile devices, IoT components and traditional servers, but also applications of completely new types, such as mobile apps and cloud-based SaaS solutions. In an echo of the past, lines of business are increasingly going out and buying cloud-based services to solve their immediate business needs, or paying a third-party developer to create the app they want, only to then turn to IT to get them to integrate the new solutions with the corporate systems of record.

Once again the vendors will respond to these user needs, probably extending and redeveloping their existing integration solutions or adding new pieces where required. But as you look for potential partners to help you with this next wave of integration challenges, it is worth keeping in mind possibly the most important fact of all, a fact that has been evident throughout the decades of integration challenges to date. Every single time the integration challenge has surged to the top of the priority list, the key differentiator contributing to eventual success has not been the smarts built into the tools, software or appliances on offer. Rather, it is the advice and guidance you can get from people with extensive experience of integration challenges. Whether from vendors or service providers, these skills are absolutely essential. When it comes down to it, the technical challenges of integration are just the tip of the iceberg; the real challenges lie in how you plan what you are going to do and how you work across disciplines and departments to ensure the solution is right for your company. You don’t have the time to learn this – find a partner who has spent years steeped in integration and listen to what they have to say!

Heroku versus AppEngine and Amazon EC2 – where does it fit in?

I’ve just had a really pleasant experience looking at Heroku – the ‘cloud application platform’ from Salesforce.com but it’s left me wondering where it fits in.

A mate of mine who works for Salesforce.com suggested I look at Heroku after I told him that I’d had some good and bad experiences with Google’s AppEngine and Amazon’s EC2. I’d been looking for somewhere to host some Python code that I’d written in my spare time, and I had looked at both AppEngine and EC2 and found pros and cons with each.

As it turns out it was a good suggestion, because Heroku’s approach is very good for the spare-time developer like me. That’s not to say that it’s only an entry level environment – I’m sure it will scale with my needs, but getting up and running with it is very easy.

Having had some experience of the various platforms, I’m wondering where Heroku fits in. My high-level thoughts…

Amazon’s EC2 – A Linux prompt in the sky

Starting with EC2, I found it the simplest concept to get to grips with but by far the most complex to configure. For the uninitiated, EC2 provides you with a machine instance in the cloud, which is a very simple concept to understand. Every time you start a machine instance you effectively get a Linux prompt, of varying degrees of power and capacity, in the sky. What this means is that you have to manually configure the OS, database, web infrastructure, caching and so on. This is excellent in that it gives unrivalled flexibility, and after all, we’ve all had to configure our development and test environments anyway, so we should understand the technology.
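
To make ‘a Linux prompt in the sky’ concrete, here is a minimal sketch of launching a single instance programmatically with boto3, the AWS SDK for Python; the AMI ID, key pair and security group names are placeholders, and everything beyond the launch call – OS packages, database, web tier – is still yours to configure by hand.

# Minimal sketch: launch one EC2 instance with boto3 (the AWS SDK for Python).
# The AMI ID, key pair and security group names are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder Linux AMI
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",              # placeholder key pair
    SecurityGroups=["my-web-sg"],      # placeholder security group
)

instance = instances[0]
instance.wait_until_running()          # block until the instance is running
instance.reload()                      # refresh attributes such as the public DNS name
print("Instance", instance.id, "running at", instance.public_dns_name)

From here you would still SSH in and install the OS packages, database and web tier yourself, exactly as described above.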

But imagine that you’ve architected your system to have multiple machines hosting the database, multiple machines processing logic and multiple web servers managing user load; you have to configure each of these instances yourself. This is non-trivial, and if you want to be able to flexibly scale each of the machine layers then you own that problem yourself (although there are aftermarket solutions to this too).

But what it does mean is that if you’re taking a system that is currently deployed on internal infrastructure and deploying it to the cloud, you can mimic the internal configuration in the cloud. This in turn means that the application itself does not necessarily need to be re-architected.

The sheer amount of additional infrastructure that Amazon makes available to cloud developers (queuing, cloud storage, MapReduce farms, caching, etc.), coupled with its experience of managing both the infrastructure and the associated business models, makes Amazon an easy choice for serious cloud deployments.

Google AppEngine – Sandbox deployment dumbed down to the point of being dumb?

So I’m a fan of Google, in the same way that I might say I’m a fan of oxygen. It’s omnipresent, and it turns out that it’s easier to use a Google service than not – for pretty much all of Google’s services. They really understand the “giving crack cocaine free to school kids” model of adoption. They also like Python (my drug of choice), so using AppEngine was a natural choice for me. AppEngine presents you with an abstracted view of a machine instance that runs your code and supports Java, Python or Google’s new Go language. With such language restrictions it’s clear to see that, unlike EC2, Google is presenting developers with a cosseted, language-aware, sandboxed environment in which to run code. The fact that Google tunes the virtual machines to host and scale code optimally is, depending on your mindset, either a very good thing or close to being the end of the world. For me – not wanting, not knowing how, or not needing to push the bounds of the language implementation – I found the AppEngine environment intuitive and easy. It’s Google, right?

But some of the Python restrictions, such as not being able to use modules that contain C code, are just too restrictive. Google also doesn’t present the developer with a standard SQL database interface, which adds another layer of complexity because you have to use Google’s high replication datastore. Google would argue, with some justification I’m sure, that you can’t use a standard SQL database in an environment where the infrastructure that happens to be running your code at any given moment could be anywhere in Google’s data centres worldwide. But it meant that my code wouldn’t port without a little bit of attention.
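
To give a flavour of the attention needed, here is a minimal sketch of what storage looks like on the classic Python App Engine runtime, using the webapp2 framework and the ndb datastore API in place of a SQL interface; the Note model and handler are purely illustrative.

# Minimal sketch: persisting data on classic Python App Engine. Instead of a
# SQL table you define an ndb model and let the high replication datastore
# handle storage. The Note model and MainPage handler are illustrative only.
from google.appengine.ext import ndb
import webapp2


class Note(ndb.Model):
    """A trivial datastore entity standing in for a SQL row."""
    text = ndb.StringProperty()
    created = ndb.DateTimeProperty(auto_now_add=True)


class MainPage(webapp2.RequestHandler):
    def get(self):
        # Write one entity, then read the most recent few back.
        Note(text="hello from the datastore").put()
        notes = Note.query().order(-Note.created).fetch(5)
        self.response.headers["Content-Type"] = "text/plain"
        self.response.write("\n".join(n.text for n in notes))


app = webapp2.WSGIApplication([("/", MainPage)], debug=True)

Porting code that assumed SQL means re-expressing its queries in this datastore model, which is exactly the ‘little bit of attention’ referred to above.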

The other issue I had with Google is that the pricing model works from quotas for various internal resources. Understanding how your application is likely to use these resources and therefore arriving at a projected cost is pretty difficult. So whilst Google has made getting code into the cloud relatively easy, it’s also put in place too many restrictions to make it of serious value.

Heroku – Goldilocks’ porridge: too hot, too cold or just right?

It would be tempting, and not a little symmetrical, to place Heroku squarely between the two other PaaS environments above. And whilst that is sort of where it fits in my mind, it would also be too simplistic. Heroku does avoid the outright complexity of EC2 and seems to also avoid some of the terminal restrictions (although it’s early days) of AppEngine. But the key difference with EC2 lies in how Heroku manages Dynos (Heroku’s name for an executing instance). To handle scale and to maximise use of its own resources, Heroku runs your code only while it is actually being executed. After that, the code, the machine instance and any data it contained are forgotten. This means that things like a persistent file system, or having a piece of your code always running, cannot be relied upon.

These problems are pretty easily surmountable. Amazon’s S3 can be used as a persistent file store, and Heroku apps can also launch a worker process that can be relied upon not to be restarted in the same way as the other Dyno web processes.
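
As an illustration of the first workaround, here is a minimal sketch of a Heroku app using Amazon S3 as its persistent file store via boto3; the bucket name and object key are placeholders, and the AWS credentials are assumed to arrive through environment variables (Heroku config vars).

# Minimal sketch: use S3 instead of the Dyno's ephemeral filesystem.
# Bucket name and object key are placeholders; credentials come from
# environment variables set as Heroku config vars.
import os

import boto3

s3 = boto3.client(
    "s3",
    aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
    aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
)

def save_report(data, key="reports/latest.txt"):
    """Write bytes to S3 so they survive a Dyno restart."""
    s3.put_object(Bucket="my-app-files", Key=key, Body=data)

def load_report(key="reports/latest.txt"):
    """Read the object back from any Dyno, even after a restart."""
    return s3.get_object(Bucket="my-app-files", Key=key)["Body"].read()

Anything written to the Dyno’s local disk would vanish on restart; anything written through save_report survives and is visible to every web and worker process.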

Scale is managed intelligently by Heroku in that you simply increase the number of web and worker processes that your application has access to – obviously this also has an impact on the cost. Finally there is an apparently thriving add-on community that provides (at additional monthly cost) access to caching, queuing and in fact any type of additional service that you might otherwise have installed for free on your Amazon EC2 instance.

Conclusion

I guess the main conclusion of this simple comparison is that whilst Heroku does make deploying web apps simple, you can’t simply take code already deployed on internal servers and git push it to Heroku. Heroku forces you to think about the interactions your application will have with its new deployment environment, because if it didn’t, your app wouldn’t scale. This is also true of Google’s AppEngine, but the restrictions that AppEngine places on the type of code you can run make it of limited value to my mind. These restrictions do not appear to be there with Amazon EC2. You can simply take an internally hosted system and build a deployment environment in the cloud that mimics the current environment. But at some point down the line, you’re going to have to think about making the code a better cloud citizen. With EC2, you’re simply able to defer the point of re-architecture. And the task of administering EC2 is a full-time job in itself and should not be underestimated. Heroku is amazingly simple by comparison.

Anyway, those are my top of mind thoughts on the relative strengths and weaknesses of the different cloud hosting solutions I’ve personally looked at. Right now I have to say that Heroku really does strike an excellent balance between ease and capability. Worth a look.

Danny Goodall

Cloud gives ESBs a new lease of life

ESBs have become the cornerstone of many SOA deployments, providing a reliable and flexible integration backbone across enterprises. However, the Cloud Computing model has given ESBs a new lease of life as the link between the safe, secure world behind the firewall and the great unknown of the Cloud.

As ESB vendors look for more reasons for users to buy their products, the Cloud model has emerged at just the right time. Companies looking to take advantage of Cloud Computing quickly discover that, because of key inhibitors like data location, they are forced to run applications that are spread between the Cloud and the Enterprise. But the idea of hooking up the safe, secure world of the enterprise, hiding behind its firewall, with the Cloud, which lies out in the big, wide and potentially hostile world, is frightening to many. Step forward the ESB: multi-platform integration with security and flexibility, able to hook up different types of applications and platforms efficiently and securely.

More and more ESB vendors are now jumping on the ‘Cloud ESB’ bandwagon. Cast Iron, now part of IBM, made a great name for itself as the ESB for hooking up Salesforce.com with in-house applications; open source vendor MuleSource has been quick to point to the advantages of its Mule ESB as a cost-effective route to cloud integration; and Fiorano has tied its flag to the Cloud bandwagon too, developing some notable successes. Recently, for instance, Fiorano announced that Switzerland’s Ecole hôtelière de Lausanne (EHL) had adopted the Fiorano Cloud ESB to integrate 70 on-premise applications with its Salesforce.com CRM system.

Over the next few months, we expect to see a growing number of these ‘cloud ESB’ implementations as more companies realize the potential benefits of combining ESBs and Cloud.

If you want to fly in the Cloud, check the exits first

While Cloud adoption may be very cautious for core business systems, desktop clouds have seen a high take-up. But if you want to fly in the Clouds, you really should check your nearest emergency exit before you take off.

The cost advantages of putting all your desktop files and storage into the cloud are very persuasive, not to mention the attractions of access anywhere. But as Lustratus has pointed out before, there is a concern here. There are LOTS and LOTS of cloud suppliers – not unexpected when a new and radical idea comes along. But remember the real problem with cloud from the supplier’s side: the supplier has to put in all the investment in infrastructure up front, and then receives income in small per-user usage charges. This might look like a great plan on a five-year basis with a rapidly expanding user base, but when year one or year two is tied to a period of tighter credit conditions it is easy to get over-extended. Look at G.host, for instance, which went bust two or three months ago because it found its cloud no longer economical. Not a nice situation for all the people who had files and data living in it, although to be fair they got reasonable warning from the company.

The sensible thing to do is check the escape routes before you go in. Perhaps your cloud vendor will be fine, growing into the market leader with oodles of cash to invest in new infrastructure to sustain the huge number of users, but just maybe it might be one of the ones that doesn’t make it. Look at your back-up procedures, and put in place an emergency plan to avoid any disruption if the worst happens. And make sure above all that you do your due diligence before selecting your cloud.

IBM acquires Cast Iron

I am currently at IBM’s IMPACT show in Las Vegas, where the WebSphere brand gets to flaunt its wares, and of course one of the big stories was IBM’s announcement that it has acquired Cast Iron.

While Cast Iron may only be a small company, the acquisition has major implications. Over the past few years, Cast Iron has established itself as the prime provider of Cloud-to-Cloud and Cloud-to-on-premise integration, with a strong position in the growing Cloud ecosystem of suppliers. Cast Iron has partnerships with a huge number of players in the Cloud and application package spaces, including companies such as Salesforce.com, SAP and Microsoft, so IBM is not just getting powerful technology – in one move it is also taking control of the linkage between Cloud and everything else.

On the product front, the killer feature of Cast Iron’s offering is its extensive range of pre-built integration templates covering many of the major Cloud and on-premise environments. So, for example, if an organization wants to link invoice information in its SAP system with its Salesforce.com CRM environment, the Cast Iron offering includes prepared templates for the required definitions and configurations. The result is that the integration can be set up in a matter of hours rather than weeks.

So why is this so important? Well, for one, most people have already realized that Cloud usage must work hand-in-hand with on-premise applications, because of such things as security needs and prior investments. On top of this, different clouds will serve different needs. So integration between clouds and applications is going to be a fact of life. IBM’s acquisition leaps it to the forefront of this area, in both technology and partner terms. But there is a more strategic impact of this acquisition too. No one knows what the future holds, or how the Cloud market will develop. Think of the situation of mainframes and distributed solutions. As the attractions of distributed systems grew, doomsayers were quick to predict the end of the mainframe. However, IBM developed a powerful range of integration solutions in order to allow organizations to leverage the advantages of both worlds WITHOUT having to choose one over the other. This situation almost feels like a repeat – Cloud has a lot of advantages, and some misguided ‘experts’ think that Cloud is the beginning of the end for on-premise systems. However, whether you believe this or not, IBM has once again ensured that it has a running start in providing integration options so that users can continue to gain value from both cloud and on-premise investments.

Steve

Platform Computing takes on the Cloud

I was on a call this week with Platform Computing, a well-known software vendor in the high-performance computing (HPC) world of grids and clusters that is now trying to make the leap to the Cloud Computing market.

Platform Computing has a strong reputation in the HPC world, selling software that helps manage these multi-processing environments, but it is keen to expand its market coverage and open up new opportunities in more general areas of IT, and it has selected the Cloud Computing marketplace to help it achieve these diversification aims. At first this may seem odd, but a little thought quickly shows that it is not nearly as big a leap for Platform as it might appear. After all, internal clouds almost always involve virtualization, and handling the management needs of a virtualized environment is very much up Platform Computing’s street.

But for me, the real nugget that came out of this briefing was an interesting distinction that helps improve understanding of Cloud Computing and its relationship to Virtualization. I meet a growing number of people who have heard about Cloud, but do not see the distinction between Cloud and virtualization. While there are a number of ways to look at this distinction, as I discussed in my Executive Overview to Cloud, which Lustratus offers at no charge from its web store, the discussions with Platform brought out another one that I think is an interesting take. The Platform position is that virtualization solutions by definition only make virtualized resources available for usage. Its Cloud management software differentiates itself from virtualization by offering heterogeneous access to resources – that is, Cloud-based access to resources that have already been virtualized AND ones that haven’t. I think this is a useful distinction to keep in mind when looking at data centre strategies.

Steve

Cloud computing – balancing flexibility with complexity

In the “Cloud Computing without the hype – an executive guide” Lustratus report, available at no charge from the Lustratus store, one of the trade-offs I touch on is flexibility against complexity.

To be more accurate, flexibility in this case refers to the ability to serve many different use cases as opposed to a specific one.

This is an important consideration for any company looking to start using Cloud Computing. Basically, there are three primary Cloud service models: Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). In really simple terms, an IaaS cloud provides the user with virtual infrastructure (eg storage space, server, etc), PaaS offers a virtual platform where the user can run home-developed applications (eg a virtual server with an application server, database and development tools), and SaaS provides access to third-party supplied applications running in the cloud.

The decision of which is the most appropriate choice is often a trade-off. The attraction of SaaS is that it is a turn-key option – the applications are all ready to roll, and the user just uses them. This is pretty simple, but the user can only use those applications supplied. There is no ability to build new applications to do other things. Hence this approach is specific to the particular business problem addressed by the packaged application.

PaaS offers more flexibility of usage. A user builds the applications that will run in the cloud, and these can therefore serve many different business needs. However, this requires a lot of development and testing work, and flexibility is restricted by the pre-packaged platform and tools offered by the PaaS provider. So, if the platform is WebSphere with DB2, and the user wants to build a .NET application for Windows, then tough.

IaaS offers the most flexibility, in that it effectively offers the infrastructure pieces and the user can then use them in any way necessary. However, of course, in this option the user is left with all the work. It is like being supplied with the raw hardware and having to develop all the necessary pieces to deliver the project.

So, when companies are looking at their Cloud strategies, it is important to consider how to balance this trade-off between complexity/effort and flexibility/applicability.

Steve

Introducing Cloud for Executives

At Lustratus we have been doing a lot of research into Cloud Computing, as have many firms.

I must confess the more I have dug into it, the more horrified I have become at the hype, confusion, miscommunication and manipulation of the whole Cloud Computing concept.

In the end, I decided the time was right for an Executive Guide to Cloud – defining it in as simple terms as possible and laying out the Cloud market landscape. Lustratus has just published the report, entitled “Cloud Computing without the hype; an executive guide” and available at no charge from the Lustratus store. Not only does the paper try to lock down the cloud definitions, but it also includes a summary of some 150 or so suppliers operating in the Cloud Computing space.

The paper deals with a number of the most common misunderstandings and confusions over Cloud. I plan to do a series of posts looking at some of these, of which this post is the first. I thought I would start with the Private Cloud vs Internal Cloud discussion.

When the Cloud Computing model first emerged, some were quick to try to define Cloud as a public, off-premise service (eg Amazon EC2), but this position was quickly destroyed as companies worldwide realized that Cloud Computing techniques were applicable in many different on and off premise scenarios. However, there has been a lot of confusion over the terms Private Cloud and Internal Cloud. The problem here is that analysts, media and vendors have mixed up discussions about who has access to the Cloud resources, and where the resources are located. So, when discussing the idea of running a Cloud onsite as opposed to using an external provider such as Amazon, people call one a Public Cloud and the other an Internal Cloud or Private Cloud.

This is the root of the problem. It makes people think that a Private Cloud is the same as an Internal Cloud – the two terms are often used interchangeably. However, these two terms cover two different Cloud characteristics, and it is time the language was tightened up. Clouds may be on-premise or off-premise (Internal or External), which refers to where the resources are located. (Actually, this excludes the case where companies are running a mix of clouds, but let’s keep things simple.) The other aspect of Cloud usage is who is allowed to use the Cloud resources. This is a Very Important Question for many companies, because if they want to use Cloud for sensitive applications then they will be very worried about who else might be running alongside them in the same cloud, or who might get to use the resources (eg disk space, memory, etc) after they have been returned to the cloud.

A Public Cloud is one where access is open to all, and therefore the user has to rely on the security procedures adopted by the cloud provider. A Private Cloud is one that is either owned or leased by a single enterprise, giving the user the confidence that information and applications are locked away from others. Of course, Public Cloud providers will point to sophisticated security measures to mitigate any risk, but this can never feel as safe to a worried executive as ‘owning’ the resources.

Now, it is true that a Public Cloud will always be off-premise, by definition, and this may be why these two Cloud characteristics have become intertwined. However, a Private Cloud does not have to be on-premise – for example, if a client contracts with a third party to provide and run an exclusive cloud which can only be used by the client, then this is a Private Cloud but it is off-premise. It is true that USUALLY a Private Cloud will be on-premise, and hence equate to an Internal Cloud, but the two terms are not equal.

The best thing any manager or exec trying to understand the company approach to cloud can do is to look at these two decisions separately – do I want the resources on or off premise, and do I want to ensure that the resources are exclusively for my use or am I prepared to share? It is a question of balancing risk against the greater potential for cost savings.

Steve