The REAL concern over Cloud data security

Recently I have been involved in a discussion in the LinkedIn Integration Consortium group on managing data in a Cloud Computing environment, and the subject has turned to security.

I had maintained that data security concerns may sometimes result in companies preferring to look at some sort of internal Cloud model rather than risk putting their data in the Cloud:

The concept that I find intriguing to larger companies is the idea of running an INTERNAL cloud – this removes a lot of the concerns over data security, supplier longevity etc.

This generated a reaction from one of the other discussion participants, Tom Gibbs of DiCOM Grid.

I hate to poke at other commentators but security is an overarching issue for IT and telecom as a whole. No more and probably less of an issue with cloud or SaaS.

It’s almost amusing to watch legacy IT managers whine that b/c it isn’t local it isn’t secure. I’m sorry but this is totally naive.

This brings up an important point. What Tom is saying is that the Cloud provider will almost certainly offer top-notch security tools to protect data from unauthorized access or exposure, and therefore what’s the problem?

The answer is that the executive concern with putting data outside the corporate environment is likely to be more of an emotional than a logical argument. With so many topical examples of confidential information being exposed, and executives knowing that regulations/legislation/corporate policies often make them PERSONALLY responsible for protecting information such as personal details of clients/customers/citizens, the whole thing is just too scary.

IT folk may see this as naive, just as Tom says. After all, modern security tools are extremely powerful and rigorous. But of course this depends on the tools being properly applied. In the UK, for example, there have been a number of high-profile incidents of CDs or memory sticks containing confidential citizen information being left on trains and exposed in the media. The argument for allowing data to be taken off-site was based on the fact that policy required all such data to be encrypted, making it useless if it fell into anyone else’s hands. The encryption algorithms were top-notch, and provided almost total protection. BUT the users who downloaded the information in each of these cases did not bother to encrypt it – in other words, if the procedures had been followed there would have been no exposure, but because people did not follow them the data was exposed.
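As a purely illustrative aside (this is my own sketch, not the tooling used in those incidents), encrypting such an export before it ever touches removable media is only a few lines of code – assuming, say, Python and the widely used cryptography package, with invented file names:

```python
# Illustrative sketch only: encrypt a sensitive export before copying it to removable media.
# Assumes Python 3 and the third-party 'cryptography' package; the file names are hypothetical.
from cryptography.fernet import Fernet

# The key must be generated and kept on-site (in a proper key store), never copied
# alongside the data - otherwise the encryption is pointless.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("citizen_records.csv", "rb") as f:
    plaintext = f.read()

# Only the encrypted form goes onto the CD or memory stick.
with open("citizen_records.csv.enc", "wb") as f:
    f.write(cipher.encrypt(plaintext))

# Anyone finding the stick sees only ciphertext; recovery requires the key held on-site:
# plaintext = cipher.decrypt(open("citizen_records.csv.enc", "rb").read())
```

The point is not the algorithm, which is the easy part – it is that in the incidents above this simple step was skipped altogether.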

These situations have not only proved extremely embarrassing to the data owners involved, but have resulted in heads rolling in a very public fashion. So the concerns of the executive moaning about risk are visceral rather than rational – ‘Moving my data outside of the corporate boundary introduces personal risk to me, and no matter how much the experts try to reassure me I don’t want to take that risk’. Of course less sensitive information will not be so much of a concern, and therefore these worries will not affect every Cloud project. But for some executives the ‘security’ concern with moving data into the Cloud, while not logically and analytically based, is undeniably real.

Steve

Pragmatism is the theme for 2009

I have just returned from a couple of weeks around and about, culminating in the Integration Consortium’s Global Integration Summit (GIS), where I presented the Lustratus ‘BPM Sweet Spots’ paper.

One message seemed to come out loud and clear from the conference – pragmatism is the watchword for 2009.

There were two other analyst presentations apart from the Lustratus one, and I was surprised to see that both presenters pitched a message along the lines of ‘you will never succeed with SOA/Integration/BPM unless you get all the strategic planning and modelling for your enterprise done first’, combined with a suggestion that the presenter was just the resource to ask for help! This contrasted sharply with my own presentation, which advocated choosing tactical targets for BPM rather than going for a strategic, enterprise-wide, fully modelled approach.

I was wondering if I had read the mood of the marketplace wrongly, but then the eight or so user case studies all proved to be tactical strikes for specific business benefits rather than the more extensive strategic approach that was common a year or so ago. It was nice to be vindicated – it looks like 2009 really IS the year of pragmatism and short-term practical considerations.

Steve

Don’t get handcuffed by EA

I have to confess up front that I have never been desperately comfortable with Enterprise Architecture (EA) frameworks and disciplines, and therefore the opinions I am about to express should be taken in that light.

However, I do worry that EA may be handcuffing some companies to the point where potential benefits are strangled at birth.

I was recently reading an interesting article by Nagesh Anipindi, entitled “Enterprise Architecture: Hope or Hype?”, which discusses how long EA has been around as an innovation and considers why it has failed to move into mainstream acceptance. As Nagesh writes,

For 2009, Greta has predicted that more than half the existing EA programs are at risk and will be discontinued in the near future. Remaining ones that survive this economy, per Greta, will struggle with framework and information management problems.

Nagesh goes on to postulate that the current pressures on IT budgets will result in a lot of EA efforts being abandoned, not because they are unworthy but because they rank below other more critical areas such as operations and the development of new business solutions. He then argues that once the industry realizes the massive benefits that EA can deliver, this situation will turn around and EA will become an essential part of every corporate IT organization.

I think Nagesh may have missed the point slightly, although I agree with a lot of what he says. Look at one of the many definitions of Enterprise Architecture, as Nagesh records it –

Gartner defines EA as: “Enterprise Architecture is the process of translating business vision and strategy into effective enterprise change by creating, communicating and improving the key requirements, principles and models that describe the enterprise’s future state and enable its evolution.”

This definition epitomizes the problem as far as I am concerned. The basic purpose of EA is there, but clouded with the sort of mumbo-jumbo that frightens off potential decision-makers. What is EA REALLY about? It is about tying the IT architecture and implementation to the business vision and needs, both now and in the future. It is about making sure IT really serves the business. Does this require communication? Of course. Does it require principles and practices? Yes. But the complex phrasing of this definition is typical of ‘EA Experts’. These people talk of EA Frameworks, of EA Models, and of rigid procedures. From an intellectual point of view, this is all absolutely fine. If you were writing a thesis on how to architect an enterprise IT system to match business needs, and to continue to do so over time, it might be perfectly acceptable to introduce loads of models, a single tool-driven interface, a definition language and frameworks.

However, this is the real world. The danger I see is that this over-enthusiastic approach can tie the hands of professionals and organizations so tightly that they cannot achieve anything. There is also the danger that, over time, this approach introduces a real skills problem, with the need to train new people on all these tools and methods that do not actually contribute to delivering new business value. In effect, the mechanisms meant to deliver the effective enterprise architecture develop a life of their own and start to consume development resources for their own purposes rather than for business needs.

A small example may illustrate my point. In the old days, when I worked with IBM, a purist movement pointed out that because we wrote our design documentation in English, we were impacting quality and accuracy by introducing the potential for misunderstandings as to what a passage of English might actually mean. As a result, IBM worked with Oxford University to develop a mathematically based specification language to eliminate the problem. This made complete sense at an intellectual level. However, although the language was adopted for a time, there were always new people coming onto the team who didn’t understand it, and training began to be a real overhead. Eventually, the language was dropped. Although it made intellectual sense to use it, it did not work at a practical level.
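For anyone who has never seen a formal specification, a tiny, purely illustrative example (my own, not taken from the IBM/Oxford work, and written in Python rather than a specification language) shows the kind of ambiguity it removes. The English requirement ‘a discount applies to orders over £100’ leaves open whether exactly £100 qualifies; a formal predicate cannot dodge the question:

```python
# Illustrative only - not the IBM/Oxford notation, just the same idea expressed in Python.
# English: "a discount applies to orders over 100 pounds."
# Does an order of exactly 100 qualify? The prose does not say; the predicate must decide.

def discount_applies(order_total: float) -> bool:
    # The formal statement forces a decision: here, strictly greater than 100.
    return order_total > 100.00

assert discount_applies(100.01) is True
assert discount_applies(100.00) is False  # the ambiguity is resolved, one way or the other
```

Multiply that by thousands of requirements and both the intellectual appeal and the training burden become obvious.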

I am all for Enterprise Architecture – at least in spirit. I believe the role of an Enterprise Architect is exactly to ensure that the technology correctly delivers on the business needs, and is architected in such a way as to enable new business changes to be implemented quickly and effectively. But I don’t think this requires a complex framework of models and requirements tools and so on. In fact, I think a strong EA mandates the minimum, but offers a loose framework that allows departmental innovation. In truth, there is nothing new about EA – it is all about doing things sensibly and remembering that IT is there purely to serve the business and not itself. All the rest of the formal EA clutter is a set of handcuffs that can hold organizations back.

Steve

What is behind SAP’s ‘go-slow’ on SaaS?

There have been many reports recently on the problems with SAP’s Software as a Service (SaaS) offering, Business ByDesign – see for example the article by Timothy Morgan here.

To summarize, SAP is backing off its initial, bullish claims on SAP Business ByDesign, saying that it is now going to proceed at a much slower pace than originally planned. Of course, the SAP trade show Sapphire, which is being held this week, might provide more info, but I somehow doubt it.

So, what is going on? Why the sudden backtrack? After great trumpeting 18 months ago from SAP about Business ByDesign being the magic bullet for SMEs, offering the ability to run remote copies of SAP applications on a per user basis without having to cough up for a full license, why the hesitation?

I suspect the truth of the matter may be partly political, partly execution-oriented and partly financial. There are those who would argue that SAP does not really WANT a SaaS market for its packages to come rushing into existence. After all, from a supplier point of view wouldn’t you prefer to sell more expensive licenses that lock the user in rather than a cheap usage-based service that the user can walk away from at any time? So the conspiracy theorists would say SAP deliberately tried to freeze the market for SAP SaaS offerings to discourage competition and slow down the emergence of this market.

On the execution side, perhaps SAP did not realize that selling SaaS solutions is a world away from selling large application suites directly to big companies. SaaS solutions are low-cost high-volume as opposed to high-cost low-volume, and hence need much more efficient and layered distribution channels – and SMEs are used to picking up the phone to ask someone whenever they have to change something, which is not a great strength of SAP’s support structure.

Then finally, the financial side. Many SaaS suppliers have discovered an uncomfortable truth – while in a license model the user pays a substantial sum of money for the purchase followed by maintenance, in a SaaS model the risk position is reversed, with the supplier having to put the resources in place up front to support the POTENTIAL usage of the infrastructure by all those signed-up users and then receiving revenues in a slow trickle over time. Is it possible that SAP just didn’t like the financial implications of having to continually invest while looking at payback times of years? Did they therefore decide to deliberately throttle the number of new customers, giving them a chance to make some money before making more investments?
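A rough back-of-the-envelope sketch makes the point – the numbers below are invented for illustration and have nothing to do with SAP’s actual pricing. It compares cumulative vendor revenue from a single customer under a perpetual license plus maintenance versus a per-user monthly subscription:

```python
# Hypothetical numbers for illustration only - not SAP pricing.
# Compare cumulative vendor revenue from one customer over 36 months.

MONTHS = 36

# Traditional license model: a large payment up front, then ~20% annual maintenance.
LICENSE_FEE = 500_000
ANNUAL_MAINTENANCE = 0.20 * LICENSE_FEE

# SaaS model: 100 users at 150 per user per month, nothing up front from the customer,
# but the vendor must provision infrastructure before month 1.
USERS, PRICE_PER_USER = 100, 150
UPFRONT_INFRASTRUCTURE_COST = 200_000  # vendor spend, not revenue

license_total = saas_total = 0.0
checkpoints = {}
for month in range(1, MONTHS + 1):
    if month == 1:
        license_total += LICENSE_FEE
    elif month % 12 == 1:                  # maintenance billed at the start of years 2 and 3
        license_total += ANNUAL_MAINTENANCE
    saas_total += USERS * PRICE_PER_USER
    if month in (12, 36):
        checkpoints[month] = (license_total, saas_total)

for month, (lic, saas) in checkpoints.items():
    print(f"After {month} months: license {lic:,.0f} vs SaaS {saas:,.0f}")
print(f"...and the SaaS vendor was {UPFRONT_INFRASTRUCTURE_COST:,.0f} out of pocket before month 1")
```

On those invented numbers the subscription takes roughly three years just to approach the license revenue, while the supplier carries the infrastructure cost from day one – exactly the reversal of risk described above.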

Maybe SAP will tell all at Sapphire … or maybe we will just have to keep guessing.

Steve

Does Microsoft ESB Guidance have a future?

As one might have expected, Microsoft tried to ignore the Enterprise Service Bus (ESB) movement for a long time, but eventually it had to do something to answer the demands of its customer base looking for SOA support.

Its response was Microsoft ESB Guidance, a package of

architectural guidance, patterns, practices, and a set of BizTalk Server and .NET components to simplify the development of an Enterprise Service Bus (ESB) on the Microsoft platform

Let’s be honest. This is a typical Microsoft ‘fudge’. Microsoft ESB Guidance is not a Supported Product, but is instead a set of guidelines and one or two components. It is a Microsoft Patterns and Practices offering – in other words, you are on your own. This may be fine if you are a Microsoft development shop, but it is far more worrying if you are a real business user with an extensive Microsoft presence. It has a lot of the disadvantages of Open Source, but you still have to pay for BizTalk etc.

So what does the future hold? Will trying to bring the Microsoft server world into the SOA domain always be a matter of risk and going it alone? Will Microsoft productize Microsoft ESB Guidance? Are there any alternatives other than just consigning the Microsoft platform to run in isolation on the fringes of the enterprise?

Fortunately, the Microsoft model may actually be working here. I do not believe Microsoft will ever productize ESB Guidance – after all, they have had two years and are still maintaining there are no plans to do this. However, what this position does do is encourage opportunists to jump in and develop products based around the Microsoft technology and guidance materials. An example is Neuron-ESB, from Microsoft specialists Neudesic.

So, while Lustratus strongly cautions users about the effort, cost and risk of using Microsoft’s own ESB Guidance package, the idea of using a Microsoft-based, supported ESB product from a specialist vendor is much more attractive. Of course, whether these new Microsoft-based ESBs are any good remains to be seen…

Steve

BPM is flying off the shelves – at least at Pegasystems

It’s always nice to be proved right. At the end of 2008, when Lustratus published its 2009 predictions for the infrastructure market, we highlighted BPM and predicted that 2009 would (at last) be its year.

In March I discussed the impressive 2008 for Pegasystems in a previous Litebytes post, and now the company has made its 1Q09 earnings announcement.

Briefly, we are talking about revenue increasing 29% YOY to $62.4M for the quarter, and license revenue up a storming 60% to $28M. Recession – what recession? Admittedly the results were skewed a little by a single large deal closing at around 12% of the total, which may put Pega under pressure next quarter, but this cannot disguise the point we made in our 2009 predictions – tactical, targeted BPM can deliver the real savings and flexibility businesses are looking for in the current economic downturn to support broadening customer bases and types, or it can respond to specific business challenges such as tracking and reducing fraud.

The other point that these results reaffirm is that companies are looking for solutions that are geared to their own industry vertical needs – Pegasystems has a strong industry framework philosophy that responds to this need very effectively. The only possible ‘cloud’ on the horizon seems to be Pegasystems’ tentative move towards the dangerous ‘Platform-as-a-Service’ (PaaS) market segment – this area is a minefield at the moment and it is to be hoped that Pega do not find themselves sucked into the abyss by getting too wedded to this idea. Just stick to what you do best, guys!

In summary, for all those companies who have heard about BPM and then shied away, put off by the thought of the effort required to deploy BPM across the enterprise for all processes, take another look with a tactical, laser-focused mind-set. BPM really can be selectively applied at a reasonable price, with rapid payback and an attractive ongoing benefit stream.

Steve

SAP takes a hammering in 1Q 2009 results

SAP released its first quarter results today – and they do not make pretty reading at all.

Although overall revenue was only slightly down, it is the software license figures that are so alarming, crashing by a third compared to 2008. This may not seem important, since software license numbers are only a relatively small part of overall revenues, but in fact it is software license performance that drives a lot of the other related activities, so weakness here will feed through over time. SAP points to the fact that 1Q08 was before the global problems had really taken hold, and while I think this is partially true, I think there is another problem evident here.

Companies are still investing in IT – there have been enough results in the last few weeks showing great growth for some, with Pegasystems and Sybase being two particular examples. However, the SAP results seem to show a greater weakness in the application package market – and this is only to be expected. The problem is that while companies like Pegasystems and Sybase are looking to help customers get an immediate return by doing things differently (using BPM and going mobile respectively), SAP packages are SAP packages. They do what they do, and although it is generally a good idea to keep updating them and spreading them more widely, these tasks are

  • Very time-consuming and costly
  • Not exactly urgent

On this basis, most companies are electing to stick with what they have on the packages front for the moment, while concentrating on infrastructure areas with more immediate returns, such as BPM and Business Events implementations. This gives SAP a real headache in the near term. Eventually, once everyone is spending again, companies may well return to the question of their SAP application package portfolio, but in 2009 at least I suspect this will be put on the back burner. I guess that for SAP, 2010 can’t come soon enough.

Steve

IBM 1Q09 results implications

When I posted last week on looking ahead to the IBM first quarter results, I put my head on the block by stating that I felt the results would hold up pretty well.

The formal results were announced yesterday, and I am pleased to say I live to look into my crystal ball another day, at least when discounting the effects of swinging currency markets.

Firstly, I had suggested that the IBM services arm would probably benefit from users wanting to cut costs and looking for help to do it. In fact, IBM claims that overall signings were up 10% at constant currency, and up 27% in the larger projects category. This bodes well for future revenue recognition as these projects flow through. I had also pointed to the desire for quick-hit benefits driving the IBM WebSphere-based SOA offerings such as BPM, and indeed while overall IBM software was down 6% (up 2% at constant currency), WebSphere revenues grew 5% (14% at constant currency). My forecast was that hardware would take a bit of a hit, but that this shouldn’t damage the overall numbers too much. Once again this seems to be borne out in the IBM announcements, which point to a 23% drop (18% at constant currency) in its Systems and Technology segment, where the hardware products live. However, this had little adverse impact on IBM’s overall figures, as predicted, because IBM has swung its business model much more heavily in favour of software and services.
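For anyone unfamiliar with the ‘constant currency’ qualifier that peppers these figures, the arithmetic is simple; here is a hypothetical sketch (invented numbers, not IBM’s) of how restating this year’s non-dollar revenue at last year’s exchange rates separates underlying growth from currency swings:

```python
# Hypothetical illustration of reported vs constant-currency growth - not IBM's actual figures.
# Last year's quarter was worth 100 dollars; this year the business booked 80 euros,
# but the euro has weakened from 1.50 to 1.25 dollars over the year.

prior_year_revenue_usd = 100.0
this_year_revenue_eur = 80.0
prior_fx, current_fx = 1.50, 1.25   # dollars per euro: last year vs this year

reported = this_year_revenue_eur * current_fx          # 100.0 -> looks flat
constant_currency = this_year_revenue_eur * prior_fx   # 120.0 -> underlying growth

def growth(now: float, then: float) -> float:
    return (now - then) / then * 100

print(f"Reported growth:          {growth(reported, prior_year_revenue_usd):+.0f}%")
print(f"Constant-currency growth: {growth(constant_currency, prior_year_revenue_usd):+.0f}%")
```

That gap between the reported and constant-currency figures is exactly why the dollar’s swings make IBM’s headline numbers hard to read at the moment.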

Looking ahead, these results can only be good news for IBM, even though revenue at constant currency was down 4%. From a global market perspective this should also prove encouraging to other IT vendors, particularly those with investments in the high-growth enterprise middleware area and those providing advisory professional services. However, companies reliant on hardware revenues will probably suffer most.

The final interesting point was that IBM claims it is sitting on $12B cash in hand….I wonder what it plans to do with all that money at a time when assets are cheap and it has just missed out on SUN….

Steve

Looking ahead to IBM 1Q09 results

IT market watchers are eagerly awaiting IBM’s 1Q09 results, to be announced in the next few days, anxious to see how IBM is finding the current global market conditions.

Putting my own neck on the block, I suspect the results will look pretty good despite the economic downturn. There are a number of reasons for this.

Firstly, Lustratus is seeing a lot of users looking for professional services assistance in reducing IT costs and increasing flexibility and agility. This is pretty natural in a downturn. Doing more with less is obvious, but also companies are looking to expand their customer bases into new markets with new offerings as quickly as possible to shore revenues up, driving the need for better agility and adaptability. This plays into IBM’s hands with its extensive services experience, so services revenue could well hold up OK.

Secondly, one thing users ARE looking for at the moment seems to be quick hits – do something that isn’t too costly and is not a major architectural shift, to get a fast return. As I have blogged about before, BPM (Business Process Management) and Business Events processing are two areas that fit beautifully with this need – and note this is not the BPM where a company sets about rewriting all its processes, but BPM targeted on fast-return, pragmatic sweet spots. Both BPM and Events will tend to drag in SOA requirements (although again at the pragmatic rather than ‘change the world’ level), which is another strong area for IBM. Although other companies such as Oracle and SAP offer technology in these areas, the ability to link the products to engagements from IBM’s massive services arm, helping aim the investment most effectively, is a big advantage for IBM. Given that IBM also has a large portion of software revenue on a ‘contract’ basis, software revenues should hold up well too.

Hardware may have taken a bit of a ding in 1Q09, but this is unlikely to do too much damage to the overall numbers.

So, a reasonable set of results to come from IBM? We shall see…..

Steve

A practical approach to Open Source (part 2)

Following my first post in this short series, in which I looked at the benefits of going with an open source project, this post focuses on the risks that counterbalance those benefits.

The main risks for users to consider fall into four areas:

  1. Support
  2. Longevity
  3. Governance and control
  4. The 80/20 rule

The first is perhaps the most obvious risk of OSS. The principle behind OSS is that the code is developed by a team of interested parties, and support is often initially offered in the same spirit. So problem resolution depends on someone else giving time and effort to work out what is wrong and fix it. Similarly with documentation, education etc. – they all depend on someone else’s goodwill. Some proponents of OSS will point to the fact that commercial companies have sprung up to provide fee-based support for various projects, but this has a number of big drawbacks. One is that it is fee-based and therefore erodes one of the open source advantages (free licenses); the other is down to the nature of most OSS projects. OSS developers usually operate with a shared-code mentality, and frequently use many other OSS components in their own solutions. While one of these groups might form a company to offer support contracts for the code it has developed, it usually has to rely on ‘best can do’ support for the other components it has embedded. All of this adds a great deal of risk to using OSS. The one exception is where a company has decided it wants the OSS specifically because it gets the source and can staff its own maintenance and support team.

Longevity relates to the support question. OSS projects need their driving teams of developers, and if for some reason these people decide to move on to something more exciting or interesting, the user could be left with a code base in which no one has any remaining interest. This means future upgrades are extremely unlikely, and leaves the user stranded. A special case to watch out for here is those crafty vendors who start an Open Source project around some new area they are interested in, supplying a very basic code base, and then, when enough users have bitten on the bait, announce that future development will be supplied in the shape of a traditionally licensed software offering, while the original code base remains at a basic level.

The governance/control risk is a tricky one. The problem is that by definition anyone interested in a particular OSS project can download and use the code. A company might decide to use OSS for specific needs, only to discover that it has proliferated without control because of the ease with which people can download it. These downloaders may think the code base is supported because the company has decided to use it elsewhere, but they may have downloaded different versions entirely. Related to this is the control of quality – as other OSS users develop updates, these are supplied back into the OSS project, but who, if anyone, is policing these changes to make sure they do not alter functionality and that they remain compatible?

The final risk area relates to the nature of people. The driving force behind any OSS project, at least in the early days, tends to be a fanatical group of developers who really enjoy making something work. However, anyone familiar with software development will be aware of the 80/20 rule – that 80% of the code takes 20% of the effort to write, and vice versa. The natural inclination of any OSS team is to do all the fun stuff (the 80%), but to shy away from the really heavy-duty complicated stuff. So areas such as high availability, fault tolerance, maintainability and documentation are the sorts of areas that may suffer, at least until the project gains some heavyweight backers. This can severely limit the attraction of open source software, at least for commercial, mission-critical needs.

It is for many of the reasons given above that the area of greatest attraction for new OSS projects tends to be personal productivity, where there are alternatives to mitigate the risks. So Firefox is great, and has super usability features because it has been created more or less by its users, but if something went sour with the latest level of the code, users could switch to IE without too much pain. Deciding to use OSS in a production, mission-critical environment is much more dangerous, and generally only acceptable when there is a cadre of heavyweight underwriters, as in the case of Linux. IBM would never let anything go wrong with Linux because it is too reliant on its continued success.

Part 3 will look at this area more closely as it discusses the OSS project business model.

Steve