Why enterprise mobile applications need an mBroker – part 2

This is the second in a series of posts about the mBroker, an essential component of enterprise mobile application deployments.

The previous post discussed the general need for broking services to handle the differences between mobile and corporate application environments. In this post we will look more closely at the security issues that mBrokers address.

Mobile applications are often written in the REST style using JSON as the data format, because these mechanisms are simple, lightweight and well suited to the limited resources available to mobile devices. However, when these applications need to use corporate applications and APIs, this can open a number of security holes. For starters, in the corporate SOA world integration is usually addressed through SOAP-based messages and web services. SOAP messages are usually encrypted, and there are extensive security protocols built into the web services standards specifications such as WS-Security. The REST style of programming, by contrast, has little in the way of built-in security protection; after all, REST is essentially a matter of calling URLs, much as a browser does when surfing the net. This means that data may be ‘in the open’ and therefore exposed to prying eyes, and that intercepting the data or injecting malicious content is relatively easy.

The mBroker security services address these issues. Policies can be put in place so that sensitive information is detected and secured, and the traffic can be scrutinized on entry to the corporate network for any injected threats or attacks. For example, content might be restricted to a small set of QueryString parameters, headers may be inspected to confirm the type of data expected, and so on.
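To make this a little more concrete, here is a minimal sketch, in Python, of the sort of whitelist-style check an mBroker policy might apply at the edge of the corporate network. The parameter names and rules are purely hypothetical; real mBroker products express this as configurable policy rather than hand-written code.

```python
# Hypothetical policy: which query parameters and content types the broker will accept.
ALLOWED_PARAMS = {'customerId', 'accountType'}
EXPECTED_CONTENT_TYPE = 'application/json'

def check_inbound_request(query_params, headers):
    """Reject REST requests carrying unexpected parameters or payload types
    before they are passed on to corporate back-end services."""
    unexpected = set(query_params) - ALLOWED_PARAMS
    if unexpected:
        raise ValueError('Unexpected query parameters: %s' % ', '.join(sorted(unexpected)))
    content_type = headers.get('Content-Type', '').split(';')[0].strip()
    if content_type != EXPECTED_CONTENT_TYPE:
        raise ValueError('Unexpected content type: %s' % content_type)
    return True

# Example: this request would be rejected because of the rogue 'debug' parameter.
# check_inbound_request({'customerId': '123', 'debug': 'true'},
#                       {'Content-Type': 'application/json'})
```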

The other tricky aspect of securing enterprise mobile applications is the authentication and identity management area. As touched on in Part 1, OAuth is a loose standard providing a mechanism for delivering a level of authorization in the mobile world. In essence, resource owners authorize other services to use only that set of resources required for the task. The idea is that instead of having to log in everywhere, exposing your userid and password to different third-party systems, the OAuth mechanism enables you to share a token with the service providers that restricts access. However, OAuth is quite new. It began as a typical web-based, user-driven project and has since been developed, with OAuth 2.0, into a wider-reaching standard specification. Not all of the web community is in favour of this wider direction, and the fact that OAuth 2.0 is not backward compatible with the original OAuth has not helped the situation at all. As a result, different third-party environments may not support OAuth at all, or may support it to differing levels.

Again, this is ideal territory for the mBroker. The mBroker can provide consistent OAuth implementation across all services, as well as bridging between OAuth and non-OAuth forms of authentication as required.
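As a rough illustration of the bridging idea, the Python sketch below shows how a broker might validate an incoming OAuth bearer token and exchange it for the legacy credentials an internal, non-OAuth service expects. The token store and header names are entirely hypothetical; a production mBroker would delegate this to its security infrastructure.

```python
# Hypothetical store mapping issued OAuth access tokens to corporate identities.
TOKEN_STORE = {
    'abc123': {'user': 'jsmith', 'scope': 'accounts:read'},
}

def bridge_authentication(http_headers):
    """Validate an OAuth bearer token and return the internal headers
    a legacy (non-OAuth) back-end service understands."""
    auth = http_headers.get('Authorization', '')
    if not auth.startswith('Bearer '):
        raise PermissionError('Missing bearer token')
    grant = TOKEN_STORE.get(auth[len('Bearer '):])
    if grant is None:
        raise PermissionError('Unknown or expired token')
    # Map to whatever the back end expects, e.g. a WS-Security UsernameToken
    # or, as here, illustrative internal headers.
    return {'X-Internal-User': grant['user'], 'X-Internal-Scope': grant['scope']}

print(bridge_authentication({'Authorization': 'Bearer abc123'}))
```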

So mBrokers provide the mechanism to ensure that mobile enterprise applications do not compromise your corporate security goals.

IBM gives predictive analytics a friendly face

One of the big challenges facing the Business Analytics industry is the historical complexity of business intelligence and analytics tools. For years companies have had to rely on their BI experts to do just about anything useful; it isn’t that companies do not see value in putting analytics in the hands of business people, it is that the products have required a Diploma in Statistics and an intimate familiarity with the technology behind the tools.

However, the situation is improving. Products like Spotfire and Tableau have worked hard to deliver data visualization solutions that present data to users in an easy-to-understand business context, and suppliers of broader analytics suites such as Oracle and IBM have been trying to improve other aspects of analytics usability. To be honest, IBM has been somewhat lagging in this area, but over the last year or so it has given clear indications that it has woken up to the advantages of providing tools such as predictive analytics and decision management in a form that the wider business user community can access.

The recent IBM announcement of SPSS Analytic Catalyst is another proof point along the journey to broader access, usage and value. This exciting new development may have been named by a tongue-twisting demon, but the potential it offers companies to create more value from corporate information is huge. In essence, the tool looks at this information and automatically identifies predictive indicators within the data, expressing its discoveries in easy-to-use interactive visuals together with plain-language summaries of what it has found. So, for example, one SPSS Analytic Catalyst (really rolls off the tongue, doesn’t it) page displays the ‘Top Insights’ it has found, such as the key drivers or influencers of a particular outcome.

The combination of simple visuals with associated plain language conceals all the statistical complexity underneath, making the information easily consumable. The business users can quickly identify the drivers of most interest related to corporate key performance measures, for example, and then drill down to gain a deeper insight. Removing the need for highly trained BI experts means that the wider business community can create substantially more value for the company.

Why enterprise mobile computing needs an mBroker – part 1

Mobile computing is all the rage, with employees, consumers and customers all wanting to use their mobile devices to transact business. But how should an enterprise approach mobile computing without getting into a world of trouble? How can the enterprise future-proof itself so that as mobile enterprise access explodes the risks are mitigated?

mBrokers are emerging as the preferred method of building a sustainable, governable and effective enterprise mobile computing architecture. The mBroker brings together ESB, integration broker, service gateway, API management and mobile access technology to provide the glue necessary to bring the mobile and corporate worlds together effectively and efficiently; for a summary of mBroker functionality see this free Lustratus report. In this first post in a series looking at mBrokers, we will look at the fundamental drivers for the basic broking functionality offered by mBrokers.

Integration brokers have been around for many years now. The principle is that when trying to integrate different applications or computing environments, some form of ‘universal translator’ is needed. One application may expect data in one format while another expects a different format, for example. A trivial example might be an international application where some components expect mm/dd/yy while others want dd/mm/yy. The broker handles these transformation needs. But it plays another very important role apart from translating between different applications; it provides a logical separation between application components, so that requestors can request services and suppliers can supply services without either knowing anything about the other’s location, environment or technology. In order to achieve this, it provides other functionality such as intelligent routing to find the right service and execution location, once again without the service requestor having to know anything about it.
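That date example is trivial for a broker to handle in flight. Purely for illustration, here is what the transformation amounts to in Python; in a real integration broker it would be a declarative mapping rather than code.

```python
from datetime import datetime

def us_to_uk_date(value):
    """Translate a US-style mm/dd/yy date into the dd/mm/yy form
    expected by other components."""
    return datetime.strptime(value, '%m/%d/%y').strftime('%d/%m/%y')

print(us_to_uk_date('07/04/13'))  # '04/07/13'
```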

Enterprise mobile applications face a lot of the same challenges. When crossing from the mobile device end to the corporate business services end, the same problems must be addressed. For example, mobile applications often rely on JSON for format notation and use RESTful invocation mechanisms to drive services. But many corporate networks employ an SOA model based around XML data and SOAP-based invocations of services.  In addition, the same sort of abstraction layer offered by integration brokers is beneficial to avoid the mobile device needing to know about locations of back end applications. It is therefore not surprising to find that integration broker technology is one source for mBroker technology.
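To give a flavour of the bridging involved, the sketch below flattens a simple JSON payload into XML on its way to a SOAP-oriented back end. The payload and element names are hypothetical, and a real mBroker would drive this from configurable mappings and add the SOAP envelope, security headers and so on.

```python
import json
import xml.etree.ElementTree as ET

def json_to_xml(json_payload, root_tag='request'):
    """Convert a flat JSON object from a mobile client into a simple XML document."""
    root = ET.Element(root_tag)
    for name, value in json.loads(json_payload).items():
        child = ET.SubElement(root, name)
        child.text = str(value)
    return ET.tostring(root, encoding='unicode')

print(json_to_xml('{"customerId": "12345", "action": "getBalance"}'))
# <request><customerId>12345</customerId><action>getBalance</action></request>
```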

 

Oracle BPM improving

I sat in on an Oracle webinar yesterday to hear about the latest developments with its Oracle BPM offering. I have to say I was pleasantly surprised. I have been a little harsh in the past about Oracle BPM, but it seems Oracle is finally getting its BPM act together. Process Composer (the Oracle ‘business user’ environment) now offers Oracle BPM Web Forms, an intuitive and easy-to-use tool that allows user forms to be assembled quickly. The business analyst or architect can assemble whatever user form makes the most sense for each step of the workflow, using a palette of handy selections for such elements as phone numbers, addresses, text input boxes and so on. The mechanism for adding a rule into a process flow is also pretty simple, although of course it assumes a developer has already set up the relevant options for rule specification. Oracle has even started to add Process Accelerators to provide process templates for a small selection of business needs, for example employee onboarding.

I did get one surprise though – this may not be new, but it certainly was to me. Apparently, as well as offering the ability to run Oracle BPM on Oracle’s WebLogic Suite, Oracle also supports IBM WebSphere as the application server layer 😮

Heroku Versus AppEngine and Amazon EC2 – Where Does It Fit In?

I’ve just had a really pleasant experience looking at Heroku – the ‘cloud application platform’ from Salesforce.com – but it’s left me wondering where it fits in.

A mate of mine who works for Salesforce.com suggested I look at Heroku after I told him that I’d had some good and bad experiences with Google’s AppEngine and Amazon’s EC2. I’d been looking for somewhere to host some Python code that I’d written in my spare time and I had looked at both AppEngine and EC2 and found pros and cons with both of them.

As it turns out, it was a good suggestion, because Heroku’s approach is very good for the spare-time developer like me. That’s not to say that it’s only an entry-level environment – I’m sure it will scale with my needs – but getting up and running with it is very easy.

Having had some experience of the various platforms, I’m wondering where Heroku fits in. My high-level thoughts…

Amazon’s EC2 – A Linux prompt in the sky

Starting with EC2, I found it the simplest concept to get to grips with but by far the most complex to configure. For the uninitiated, EC2 provides you with a machine instance in the cloud, which is a very simple concept to understand. Every time you start a machine instance you effectively get a Linux prompt, of varying degrees of power and capacity, in the sky. What this means is that you have to manually configure the OS, database, web infrastructure, caching, etc. This is excellent in that it gives unrivalled flexibility, and after all, we’ve all had to configure our development and test environments anyway, so we should understand the technology.
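For a flavour of that ‘Linux prompt in the sky’, here is a minimal sketch of starting an instance programmatically with Amazon’s boto3 Python SDK. The AMI ID and key pair name are placeholders, and everything after the instance boots (OS packages, database, web server) is still entirely down to you.

```python
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Launch a single small instance; ImageId and KeyName are placeholders.
response = ec2.run_instances(
    ImageId='ami-xxxxxxxx',   # hypothetical AMI ID
    InstanceType='t2.micro',
    KeyName='my-keypair',     # hypothetical key pair name
    MinCount=1,
    MaxCount=1,
)
instance_id = response['Instances'][0]['InstanceId']
print('Launched instance', instance_id)
# From here on, configuring the OS, database and web stack is your problem.
```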

But imagine that you’ve architected your system to have multiple machines hosting the database, multiple machines processing logic and multiple web servers managing user load; you have to configure each of these instances yourself. This is non-trivial, and if you want to be able to flexibly scale each of the machine layers then you own that problem yourself (although there are aftermarket solutions to this too).

But what it does mean is that if you’re taking a system that is currently deployed on internal infrastructure and deploying it to the cloud, you can mimic the internal configuration in the cloud. This in turn means that the application itself does not necessarily need to be re-architected.

The sheer amount of additional infrastructure that Amazon makes available to cloud developers (queuing, cloud storage, MapReduce farms, caching, etc.), coupled with its experience of managing both the infrastructure and the associated business models, makes Amazon an easy choice for serious cloud deployments.

Google AppEngine – Sandbox deployment dumbed down to the point of being dumb?

So I’m a fan of Google, in the same way that I might say I’m a fan of oxygen. It’s omnipresent, and it turns out that it’s easier to use a Google service than not – for pretty much all of Google’s services. They really understand the “giving crack cocaine free to school kids” model of adoption. They also like Python (my drug of choice), and so using AppEngine was a natural choice for me. AppEngine presents you with an abstracted view of a machine instance that runs your code and supports Java, Python or Google’s new Go language. With such language restrictions it’s clear to see that, unlike EC2, Google is presenting developers with a cosseted, language-aware, sandboxed environment in which to run code. The fact that Google tunes the virtual machines to host and scale code optimally is, depending on your mindset, either a very good thing or close to being the end of the world. For me, not wanting, knowing how, or needing to push the bounds of the language implementation, I found the AppEngine environment intuitive and easy. It’s Google, right?
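To show just how little code a cosseted AppEngine app needs, here is a minimal handler sketch, assuming the Python 2.7 runtime and webapp2 framework that AppEngine offered at the time.

```python
import webapp2

class MainHandler(webapp2.RequestHandler):
    """Responds to GET / with a plain-text greeting."""
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.write('Hello from App Engine')

# AppEngine routes requests to this WSGI application (mapped via app.yaml).
app = webapp2.WSGIApplication([('/', MainHandler)], debug=True)
```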

But some of the Python restrictions, such as not being able to use modules that contain C code, are just too restrictive. Google also doesn’t present the developer with a standard SQL database interface, which adds another layer of complexity as you have to use Google’s High Replication Datastore. Google would argue, with some justification I’m sure, that you can’t use a standard SQL database in an environment where the infrastructure that happens to be running your code at any given moment could be anywhere in Google’s data centres worldwide. But it meant that my code wouldn’t port without a little bit of attention.
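The datastore point is worth a small illustration. Instead of SQL tables and queries, code has to use datastore models and query idioms, which is why my code needed a little attention to port. A minimal sketch, assuming the ndb library and a purely hypothetical Customer model:

```python
from google.appengine.ext import ndb

class Customer(ndb.Model):
    """Hypothetical entity; there is no CREATE TABLE and no SQL schema."""
    name = ndb.StringProperty(required=True)
    signup_date = ndb.DateTimeProperty(auto_now_add=True)

# Writes and reads use datastore idioms rather than SQL statements.
key = Customer(name='Acme Ltd').put()
recent = Customer.query().order(-Customer.signup_date).fetch(10)
```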

The other issue I had with Google is that the pricing model works from quotas on various internal resources. Understanding how your application is likely to use these resources, and therefore arriving at a projected cost, is pretty difficult. So whilst Google has made getting code into the cloud relatively easy, it’s also put in place too many restrictions for it to be of serious value.

Heroku – Goldilocks’ porridge: too hot, too cold or just right?

It would be tempting, and not a little symmetrical, to place Heroku squarely between the two other PaaS environments above. And whilst that is sort of where it fits in my mind, it would also be too simplistic. Heroku does avoid the outright complexity of EC2 and seems to also avoid some of the terminal restrictions (although it’s early days) of AppEngine. But the key difference with EC2 lies in how Heroku manages Dynos (Heroku’s name for an executing instance). To handle scale and to maximise use of its own resources, Heroku runs your code only for as long as it is actually being executed. After that, the code, the machine instance and any data it contained are forgotten. This means that things like a persistent file system or having a piece of your code always running cannot be relied upon.

These problems are pretty easily surmountable. Amazon’s S3 can be used as a persistent file store, and Heroku apps can also launch a worker process that can be relied upon not to be restarted in the same way as the web Dynos.
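As an example of the S3 workaround, a web Dyno can push anything it writes to local disk out to S3 before the Dyno is recycled, and pull it back later. A minimal sketch using the boto3 SDK follows; the bucket and key names are placeholders, and credentials are assumed to come from the environment.

```python
import boto3

s3 = boto3.client('s3')  # reads AWS credentials from environment variables

def save_to_s3(local_path, bucket, key):
    """Copy a file the Dyno has written locally into durable S3 storage."""
    s3.upload_file(local_path, bucket, key)

def restore_from_s3(bucket, key, local_path):
    """Fetch a previously saved file back onto the (ephemeral) local filesystem."""
    s3.download_file(bucket, key, local_path)

# save_to_s3('/tmp/report.csv', 'my-app-bucket', 'reports/report.csv')
```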

Scale is managed intelligently by Heroku in that you simply increase the number of web and worker processes that your application has access to – obviously this also has an impact on the cost. Finally, there is an apparently thriving add-on community that provides (at additional monthly cost) access to caching, queuing and in fact any type of additional service that you might otherwise have installed for free on your Amazon EC2 instance.

Conclusion

I guess the main conclusion of this simple comparison is that whilst Heroku does make deploying web apps simple, you can’t simply take code already deployed on internal servers and git commit it to Heroku.com. Heroku forces you to think about the interactions your application will have with its new deployment environment, because if it didn’t, your app wouldn’t scale. This is also true of Google’s AppEngine, but the restrictions that AppEngine places on the type of code you can run make it of limited value to my mind. These restrictions do not appear to be there with Amazon EC2. You can simply take an internally hosted system and build a deployment environment in the cloud that mimics the current environment. But at some point down the line, you’re going to have to think about making the code a better cloud citizen. With EC2, you’re simply able to defer the point of re-architecture. And the task of administering EC2 is a full-time job in itself and should not be underestimated. Heroku is amazingly simple by comparison.

Anyway, those are my top of mind thoughts on the relative strengths and weaknesses of the different cloud hosting solutions I’ve personally looked at. Right now I have to say that Heroku really does strike an excellent balance between ease and capability. Worth a look.

Danny Goodall

Cloud gives ESBs a new lease of life

ESBs have become the cornerstone of many SOA deployments, providing a reliable and flexible integration backbone across enterprises. However, the Cloud Computing model has given ESBs a new lease of life as the link between the safe, secure world behind the firewall and the great unknown of the Cloud.

As ESB vendors look for more reasons for users to buy their products, the Cloud model has emerged at just the right time. Companies looking to take advantage of Cloud Computing quickly discover that because of key inhibitors like data location, they are forced to run applications that are spread between the Cloud and the Enterprise. But the idea of hooking up the safe, secure world of the enterprise, hiding behind its firewall, and the Cloud which lies out in the big, wide and potentially hostile world is frightening to many. Step forward the ESB – multi-platform integration with security and flexibility, able to hook up different types of applications and platforms efficiently and securely.

More and more ESB vendors are now jumping on the ‘Cloud ESB’ bandwagon. Cast Iron, now part of IBM, made a great name for itself as the ESB for hooking Salesforce.com up with in-house applications; open source vendor MuleSource has been quick to point to the advantages of its Mule ESB as a cost-effective route to cloud integration; and Fiorano has hitched itself to the Cloud too, developing some notable successes. Recently, for instance, Fiorano announced that Switzerland’s Ecole hôtelière de Lausanne (EHL) had adopted the Fiorano Cloud ESB to integrate 70 on-premise applications with its Salesforce.com CRM system.

Over the next few months, we expect to see a growing number of these ‘cloud ESB’ implementations as more companies realize the potential benefits of combining ESBs and Cloud.

Lustratus sees 2011 as big year for Business Rules

Every year Lustratus digs out its crystal ball to identify the key trends in the global infrastructure market place for the next twelve months.

The latest set of predictions for 2011 can be found in this Lustratus Insight, available at no charge from the Lustratus store. However, one in particular deserves further mention. Lustratus predicts that in 2011 the use of Business Rules Management Systems (BRMS) will continue to grow rapidly.

To me, business rules represent the peak of business/IT alignment. For the uninitiated, the idea of Business Rules and Business Rules Management Systems is to enable a repository of rules to be created that controls how the IT implementation of the business operates. These rules are written in non-technical (or at least non-IT-technical) language, and can be authored and edited by business professionals. As a simple example, a bank might have a business rule governing how it charges its customers for their bank payments activities. This rule might say something along the lines of

“If payee is a personal customer, charge x per transaction. If payee is a business customer, charge y per transaction”.

Now, suppose the bank decides that it wants to have a marketing campaign to try to encourage more small businesses to start using its services. It might decide that as an incentive it will offer free payments processing for any business payments of less than £5,000. Most larger business clients would probably far exceed this figure. Changing the IT systems to support this new initiative would involve no more than a business user editing the rule setting payment charges, and modifying it to

“If payee is a personal customer, charge x per transaction. If payee is a business customer and the amount is > £5000 then charge y per transaction. If payee is a business customer and the amount is <= £5,000 then set charge to zero.”

When the rule is altered, the BRMS would interpret this change into the necessary technical implementation to achieve the desired aims.
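To see why this is attractive, here is a rough Python sketch of the kind of logic the BRMS ends up generating behind the scenes from that plain-language rule. The function and parameter names are purely illustrative; the point is that the business user never touches this level.

```python
def payment_charge(payee_type, amount, personal_rate, business_rate):
    """Evaluate the bank's (hypothetical) payment-charging rule."""
    if payee_type == 'personal':
        return personal_rate
    if payee_type == 'business' and amount > 5000:
        return business_rate
    if payee_type == 'business' and amount <= 5000:
        return 0  # promotional free processing for small business payments
    raise ValueError('Unknown payee type: %s' % payee_type)

print(payment_charge('business', 4500, personal_rate=0.30, business_rate=0.25))  # 0
```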

This is the root of Business Rules’ popularity. They provide the ultimate means for business users to change and adapt their business approach without having to involve heavy IT investment each time a change is made – efficient agility, if you like. However, this business rules-based approach to IT implementation has another extremely useful by-product; it becomes much easier to demonstrate compliance with corporate or external policies and regulations. A compliance officer can review the easily understandable business rules to validate that the company is correctly implementing regulatory requirements, without needing an IT translator.

I expect to see a lot of activity in 2011 in the area of Business Rules.

webMethods gets MDM with Data Foundations acquisition

Software AG, the owner of the popular webMethods suite of SOA and BPM products, has acquired Data Foundations, the US-based Master Data Management (MDM) vendor. This is a great acquisition, because the single version of the truth provided by MDM technology is often an essential component of business process management applications.

The only issue is that there is an element of catch-up here, since major BPM/SOA vendors like IBM and Oracle have had MDM capabilities for some time. But putting that aside, the fit between Data Foundations, Inc. and Software AG looks very neat. There is no product overlap to worry about, and the Data Foundations solution excels in one of the key areas that is also a strength for Software AG – that of governance. Software AG offers one of the best governance solutions in the industry, built around its CentraSite technology, and Data Foundations has also made governance a major focus, which should result in a strong and effective marriage between the two technology bases. From a user perspective, MDM brings major benefits to business process implementations controlled through BPM technology, because improved data accuracy and uniqueness enable more efficient solutions, eliminating duplication of work and effort while avoiding the customer relations disaster of marketing to the same customer multiple times.

Good job Software AG.

IBM reinforces its Appliance strategy with acquisition of Netezza

When IBM acquired DataPower’s range of appliances in 2005, it caused some raised eyebrows; was IBM really serious about getting into the appliances game? Subsequently the silence from IBM was deafening, and people were starting to wonder whether IBM’s foray into the appliances market had fizzled out. However, 2010 has been the year when IBM has made its strategic intent around appliances abundantly clear.

First it acquired Cast Iron, the leading provider of appliances for use in Cloud Computing, and now it is buying Netezza, one of the top suppliers of data warehouse appliances. Netezza has built up an impressive market presence in a very short time, dramatically accelerating time to value for data analytics and business intelligence applications. In addition, IBM has continued to extend its DataPower range, with the addition of a caching appliance and the particularly interesting ‘ESB-in-a-box’ integration appliance in a blade form factor. For any doubters, IBM has clearly stated its intention of making appliances a key element of its strategic business plans.

This just leaves the question of why. Of course the cynical answer is that IBM must see itself making a lot of money from appliances, but behind this is the fact that appliances must be doing something really useful for users. The interesting thing is that the key benefits are not necessarily the ones you might expect. In the early days of appliances such as firewalls and internet gateways, one key benefit was the security of a hardened device, particularly outside the firewall. The other was commonly performance, with the ability in an appliance to customize hardware and software to deliver a single piece of functionality, for example in low-latency messaging appliances. But the most common driver for appliances today is much broader – appliances reduce complexity. An appliance typically comes preloaded, and can replace numerous different instances of code running on several machines. You bring in an appliance, cable it up and turn it on. It offers a level of uniformity. In short, it makes operations simpler, and therefore cheaper to manage and less susceptible to human error.

Perhaps it is this simplicity argument and its harmonization with current user needs that is the REAL driving force behind IBM’s strategic interest in Appliances.