Does the IT patent system deliver justice?

For some time now I have been following the ETL patent case where a small IT company called Juxtacomm is suing a whole host of IT companies for patent infringement.

Mark Madsen recently blogged on this, concluding that

What is most annoying about these types of patent lawsuits is that software companies don’t work together to reform the broken intellectual property systems in the US and Europe.

I have to say I am personally in full agreement with Mark that the patent system is badly broken, at least for IT-related patents. There are two main problems, nicely illustrated by the Juxtacomm case Mark cites and reinforced by countless other IT patent infringement actions. The first is somewhat endemic in litigation – those who know how to work the system can bend it to their advantage, even to the point of ‘legally extorting’ money from others. The second is more specific to IT patents: relying on courts with a layman’s understanding of IT, rather than on IT experts, can severely compromise ‘justice’.

Of course, it is a little difficult to discuss justice in this context. From the patent author’s point of view, justice is being fairly recompensed for the invention, while from the software vendor’s point of view it is not being persecuted unfairly. However, it seems to me that there are many examples of the two failings mentioned above, and these lead to the obvious conclusion that there really is something badly wrong with the system for IT-related patents today.

Take the first point. Mark points out that the plaintiff

..filed the patent infringement claim late last year in the Texas Eastern District Court, a favorite of patent trolls because the courts there favor patent trolls.

While the use of the term ‘patent troll’ is somewhat emotive, the real point is that statistics show this particular court has a history of coming down on the side of the plaintiff. The ability to select, for a trial of companies from all over the world, the venue most likely to favour the plaintiff seems a completely unfair bias against the defendants. My very limited understanding of the legal process in general is that a defendant accused of a crime usually has the right to object to being tried in a particular location, but as far as I am aware a patent infringement defendant has no such recourse.

The next point backing up the advantages of ‘playing the system’ is particularly evident in the specific ETL case Mark cites. The lawyers for the plaintiff are Akin Gump, a massive and fearsome Texas law firm with hugely deep pockets. The list of alleged infringers is extensive, spanning behemoths like IBM all the way down to tiny companies like Fiorano. The inclusion of Fiorano is interesting and indicative – anyone with any background in ETL and wider integration would quickly point out that the patent requires the use of a ‘script processor’ to manipulate the data transferred, and that Fiorano (which is not even an ETL company) does not have a script processor at all. So why include a company that appears not even to be infringing? Of course, I do not know the answer, but I can hazard a guess. There is no way a tiny company can go to trial – it simply cannot afford the expense and the cash flow, even if it might later recoup its costs. Once the gun is pointed at its head, it has no choice but to cave in, provided the settlement is a bit less than the cost of going to trial. Notice that this has nothing to do with whether the company infringes or not: on the day the company is named in the suit its outcome is written in stone – it is going to have to cough up a few hundred thousand whether guilty of infringement or not. Perhaps there is another reason for this play, too – having a group of smaller companies settle helps to convince a potential jury that ‘there is no smoke without fire’. When the trial comes around, the plaintiff’s lawyers can point to the fact that others have settled (the terms are of course confidential, so the jury will not know the conditions), and this helps to ‘prove’ the guilt of the big boys. But is this justice?

The other problem is the use of a lay judge and jury in IT patent infringement cases. Information Technology is a complicated area – it is extremely hard for a lay person to grasp the implied meanings of words frequently used in the IT world, such as ‘script’ and ‘metadata’. As a result, both legal teams have a bun-fight to push their own definitions, and the poor judge has to rule on which ones are to be used. Quite naturally, the judge usually goes for the definition that is simplest to understand, but the problem is that this will often be the wrong one. IT terms are often quite specific, and taking a simple interpretation often broadens the scope of a patent well beyond the more accurate and precise IT interpretation. This is not just a problem for IT but for other specialist areas too – there are numerous examples of fraud trials that have collapsed because the lay jury simply could not understand the complexities of the financial trickery involved.

This post is absolutely not intended to discuss the merits or otherwise of the specific patent case in Mark’s post. Instead, I am trying to highlight what seems obvious to me – that the patent system is badly broken, at least for IT patents. While I absolutely agree that a patent owner has the right to go after companies that are benefiting unfairly from his or her invention, it seems to me that rather than leaning towards the defendant as criminal cases do, the current system hands power to the plaintiff as long as a legal giant can be found who knows how to work the system. That is certainly not what I would call justice – but then, I am no lawyer!

Steve

Microsoft and ESBs – what a shame!

I was recently doing some research into the latest state of play in the Enterprise Service Bus (ESB) market, and decided to take a look at Microsoft’s ESB – or rather its pretend ESB.

I had never been sure about Microsoft and SOA – it tends to focus instead on BizTalk and the Microsoft world. However, recently I have heard a lot of encouraging noises from Microsoft about its belief in SOA, and how it sees BizTalk as a key component of an SOA architecture for application design and deployment. But I must admit I had not realized that Microsoft gave any credence to the ESB concept.

With an element of hope I delved into Microsoft’s ESB offering – only to be disappointed to discover it is not an ESB product at all, but ‘ESB Guidance’, a collection of samples, templates and artifacts for delivering ESB functionality. In essence, Microsoft does not yet acknowledge the existence of the ESB class of product, preferring instead the old IBM line of a few years back: pretending that an ESB is a style of implementation rather than a product. However, I thought, this doesn’t really matter as long as Microsoft offers ESB functionality, however it packages it.

But then sad reality dawned. Microsoft ESB Guidance is not even supported. It is a collection of samples and pieces offered on an ‘as is’ basis – take it or leave it. Use it if you like, but don’t come to us with any issues or problems. How disappointing. See the Microsoft Guidance notes –

The Microsoft ESB Guidance for BizTalk Server R2 is a guidance offering, designed to be reused, customized, and extended. It is not a Microsoft product. Code-based guidance is shipped “as is” and without warranties.

So it looks like Microsoft isn’t really on the ESB bandwagon yet. The new release of BizTalk Server this year may introduce a ‘real’ ESB, but at this point in time Microsoft appears to be paying lip-service to SOA compliance rather than actually doing much about it.

Steve

BPM’s time has come

Could 2009 finally be the year BPM comes into its own? My own opinion is – YES!

This may seem a bit odd – after all, in previous years I have been hesitant about BPM adoption, finding instead that many users were tackling lower-level integration problems first and then ‘backing into’ BPM. On top of this, with all the trading uncertainty around, surely no-one will be rushing to BPM?

In fact, Lustratus thinks the current economic environment is EXACTLY the right time for BPM. My worries in the past have concerned people trying to move completely over to a BPM model. That requires a heck of a lot of effort, thought, maturity in process engineering and resources, and it can take some time to generate a payback, although the eventual gains are admittedly great. However, the current economic situation is forcing people to be much more pragmatic, and it is here that BPM really starts to deliver.

Lustratus recently produced a paper discussing the Lustratus BPM Sweet Spots – five potential targeted uses of BPM technology, ranked in terms of speed of return, ease of implementation and overall benefit. A number of these sweet spots represent quick ways to improve a particular process, increasing automation and hence providing the opportunity to reduce people costs. It is this improved efficiency and productivity that attracts companies in the current economic downturn – anything that makes use of what is already there but cuts the staffing bill is almost a no-brainer. In addition, the visibility into process execution that BPM brings is of enormous use when trying to implement responsible risk and compliance management measures, something greatly desired in the current circumstances.

So, 2009 should be the year when companies turn to BPM – but note the distinction: pragmatic, targeted BPM, as opposed to grand BPM strategies that will make everything better ‘sometime’.

Steve

Open source hiatus in 2009

It’s that time of year again, and Lustratus has just produced its annual predictions for the software infrastructure marketplace.

The Lustratus Insight containing the 2009 predictions is available free of charge here.

As might be expected, the predictions this year are heavily influenced by the current economic downturn and projections that it will continue throughout the year. It may be surprising, therefore, that one of our predictions is a hiatus in the open source (OSS) marketplace. At first glance this seems counter-intuitive. After all, if companies are desperate to cut costs, then surely open source products with no license fees must be an attractive option? Won’t this drive OSS demand in 2009?

The Lustratus reading is slightly different. While on the face of it free software would be great in today’s constrained environment, the problems stem from the nature of OSS combined with the mandates under which companies are likely operating at the moment – that is, the need to reduce staffing wherever possible.

Most open source software is by its very nature collaborative, and this tends to lead to software delivered in kit form – that is, although the open source software may offer a framework or the basis for the desired functionality, the user is expected to put effort into customizing and extending it for his or her own needs. Typically, therefore, embarking on an open source initiative involves a heavy investment of IT resources, at least at the beginning. While this will result in lower license costs, the problem is that in today’s climate companies are shying away from anything that requires more than a minimal resource investment. Users are looking for products that work out of the box, or more likely are trying to generate more value from what is already in place.

As a result, Lustratus believes that although interest will remain in OSS, new OSS projects will be put on hold until economic conditions relax.

Steve

Software AG and a dramatic example of SOA success in government

When I hear from a vendor about massive reductions in processing time or cost savings associated with the use of its products, I must confess to getting deeply suspicious.

This is because when I dig a little deeper, the trumpeted project often turns out to be little more than a proof of concept or otherwise small-scale solution.  Therefore, I was surprised, when I recently spoke with Dr Peter Kurpick, Chief Product Officer of Software AG, about the company’s SOA and business strategy, to hear just such a claim – and to find that the system in question was in fact a very significant one (the announcement of which I somehow missed earlier this year).

The project for UKvisas (the national agency responsible for issuing visitor visas) integrates multiple information sources to quickly filter out individuals who should be denied entry to the UK for various reasons.  (Using SOA to integrate multiple data sources owned by multiple agencies is a pattern I have come across in a number of projects.)
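To make the pattern concrete, here is a minimal sketch (in Python) of a composite screening service that fans out a query to several agency-owned sources and aggregates the results. All of the service names, fields and decision rules are hypothetical – the real UKvisas interfaces are not public – so treat this purely as an illustration of the aggregation pattern, not of the actual system:

    # Minimal sketch of the multi-agency data aggregation pattern.
    # All service names, fields and decision rules are hypothetical.
    from concurrent.futures import ThreadPoolExecutor

    # Stand-ins for services owned by different agencies; in a real SOA
    # deployment each would be a remote service call made via the bus.
    def police_check(applicant_id):
        return {"source": "police", "flagged": False}

    def immigration_history(applicant_id):
        return {"source": "immigration", "flagged": False}

    def watch_list_check(applicant_id):
        return {"source": "watchlist", "flagged": False}

    SOURCES = [police_check, immigration_history, watch_list_check]

    def screen_applicant(applicant_id):
        # Composite service: query every source in parallel and deny
        # entry if any single source raises a flag.
        with ThreadPoolExecutor(max_workers=len(SOURCES)) as pool:
            results = list(pool.map(lambda check: check(applicant_id), SOURCES))
        return {"applicant": applicant_id,
                "deny": any(r["flagged"] for r in results),
                "evidence": results}

    print(screen_applicant("A12345"))

The point of the pattern is that each agency keeps ownership of its own data source; the composite service merely orchestrates the calls and applies the screening rule.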

In this case, the implementation (built on Software AG’s products) has reduced processing times from over 2 days to less than 30 minutes.  It is an excellent example of how government is successfully using SOA to target specific, high-value problems: as well as hugely reducing the processing time, there is a very tangible benefit because each deportation (in effect, a failure to screen out the visitor at time of entry) costs £11,000.

What is encouraging is that government seems to be learning from its mistakes of a few years back, when hundreds of millions were spent on integration projects that fell apart.  This project suggests that the UK government has understood both how to use SOA to extract very measurable benefits and how to focus on specific business objectives instead of getting lost in never-ending programmes that can never deliver.  Doing this requires sophistication about how SOA should be adopted across an organisation, and about the central role of SOA governance (both key themes of Software AG’s SOA strategy).

It is also a good example of how SOA (as well as BPM) can deliver as much benefit from reducing the risk of error as from improving efficiency.  This is important for anybody wishing to justify an investment in SOA: unlike SOA benefits such as agility or even reuse, which are hard to measure and can have a long lead time to payback, the value of reducing errors is easy to calculate, since error rates are often already tracked in organisations and the cost of recovering from an error is often very significant.
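As a back-of-the-envelope illustration of how directly this kind of benefit can be quantified: the £11,000 cost per deportation comes from the announcement above, but the volume and error-rate figures below are invented purely to show the arithmetic:

    # Hypothetical worked example of valuing error reduction.
    # Only the 11,000 GBP cost per failure comes from the announcement;
    # the volume and error rates are invented for illustration.
    applications_per_year = 2_000_000   # assumed application volume
    old_error_rate = 0.0010             # assumed: 0.1% wrongly admitted
    new_error_rate = 0.0005             # assumed: halved by better screening
    cost_per_failure_gbp = 11_000       # cost of each deportation (from the source)

    errors_avoided = applications_per_year * (old_error_rate - new_error_rate)
    annual_saving = errors_avoided * cost_per_failure_gbp
    print(f"{errors_avoided:.0f} errors avoided -> £{annual_saving:,.0f} saved per year")
    # prints: 1000 errors avoided -> £11,000,000 saved per year

Even with deliberately modest assumptions, the saving dwarfs typical project costs – which is exactly why error reduction is such an easy benefit to put in a business case.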

Ronan

Linux v z/OS on IBM mainframes

Five or ten years ago this sort of question would have been unthinkable, but mainframe users now increasingly face a choice between Linux on System z and z/OS when hosting new mainframe workloads.

These new workloads may result from a consolidation project, or simply from taking advantage of flexible architectures like SOA to utilize spare mainframe capacity, but in neither case is the decision an obvious one.

On the one hand, long-time mainframe guys will say that z/OS has grown up with the mainframe and therefore must be the best choice. But IBM has done a lot of work on its version of Linux for the mainframe, and Linux advocates will be quick to point out that the license costs will be cheaper and that there are strong advantages in standardizing on a portable, flexible operating system enterprise-wide. Worst of all, given the polarized nature of IT in general, decision makers find it hard to get unbiased advice on such a divisive question.

In the end, the answer to the question of whether z/OS or Linux on System z is better is not surprising – “it depends”. The subject is discussed in much more detail in a free Lustratus report, “Choosing the right SOA platform on IBM System z”, available from the Lustratus web store. While that paper focuses on developing or moving SOA workloads onto System z, the analysis applies to any new mainframe workload. Summarizing its arguments, the major differences affecting the decision are these: Linux is designed to offer a common environment across many platforms, and is thus by definition less attuned to individual platform capabilities; and whereas Linux was designed for the ‘server’ model, in which each server runs one type of workload, z/OS was built from the start to handle multiple subsystems.

The common environment aspect of Linux offers flexibility, helps to drive license costs down and leverages widely available skills. The multi-system capabilities of z/OS combined with its close linkage to the System z platform offer the greatest exploitation of System z facilities. But as always the devil is in the details.

Steve

Viable open source business models emerge

To my mind, the problem with Open Source has always been that the myth of Open Source slowed down the development of mature business models.

The myth is that OSS is an almost altruistic endeavour in which end-users cooperate to produce the software they require.   In this myth, the vendor’s role is one of coordination, packaging and support, for which the end-users willingly pay maintenance and support fees.

Unfortunately, this virtuous circle has been the exception rather than the rule in enterprise OSS: most open source projects rely almost entirely on vendors for code contributions, and vendors have found it hard to collect maintenance fees from users who struggle to justify paying for something perceived as free.  This has made the path of any business following this model extremely difficult.

Therefore, I have been pleased to see the myth beginning to dissipate and what has been happening behind the scenes come into view.  The 451 Group, which has an excellent blog focused on the enterprise Open Source space, recently published a report that rings true to my experience when it says:

The majority of open source vendors utilize some form of commercial licensing to distribute, or generate revenue from, open source software.

and

Ad hoc support services are used by nearly 70% of the vendors assessed, but represent the primary revenue stream for fewer than 8% of open-source-related vendors.

and

Most vendors generating revenue from open source software are reliant on direct sales staff to bring in the largest proportion of revenue.

The cynics among us might say that this is starting to sound very like the closed source model that OSS was meant to kill.  However, that would be a little unfair – OSS is still different, just not as different from a commercial perspective as originally advertised.

Ronan

Mistakes marketeers make – and how NOT to make them

I was reminded today how easy it is to get marketing completely wrong when I saw (on UK television) an advertisement for New York Bagels.

Bagels are not as heavily embedded in the UK psyche as in the US, I admit. Brits think of them as something you see people eating on US cop shows and sitcoms, often in New York. So what did the marketing company do? It spent the entire advertisement showing the British market how delicious bagels could be … and then finished with a picture of three packs of New York Bagels.

The point here is that the advertising company missed its mark. Yes, there was a need to educate the British audience, but we know so little about bagels that the advert seems to be for ‘bagels’ in general rather than for a particular brand. When it refers to New York bagels, I – and many Brits like me – thought that was simply the name of the food, as you might say Scottish salmon. My reaction was to pop down to my local store and buy ITS OWN brand of bagels – the advert said ‘eat bagels’, not ‘buy New York Bagels’. I guess in the US the advertisement (or should I say commercial) would work fine, because the audience would be familiar with bagels and realize that New York Bagels is a brand name.

So the marketing failed at one of the first hurdles – it had not attuned the message to the target audience.

In the software world, marketing is often extremely poor. I have always thought the main reason is that software companies have often grown up around a particular technology and are run by technology people, to whom it is completely obvious why someone should buy their products – because they are technically wonderful! However, the same general marketing principles apply: you must know your target audiences, understand your key value proposition, know how you are positioned in the software firmament, and watch what your competition is up to. A short look at the marketing for virtually any software offering will quickly show this is rarely the case!

Lustratus uses its REPAMA Strategic Marketing analysis tools with clients to look at how products are being marketed, and the results can be quite eye-opening…so in order to increase awareness and understanding of this topic, Lustratus has started a new blog to discuss issues around software marketing and its effectiveness. While this will be of obvious interest to software marketeers, I recommend it to buyers of software too – it is always useful to see how potential suppliers are tuning their messages and which markets they are really interested in.

Steve

The internal market approach to SOA investment

I was reading a blog post from my good friend John Schmidt, Chairman of the Integration Consortium (and now with a day job at Informatica), about trying to get funding for integration initiatives – in his case, funding for an integration competency centre, a personal hot button – and I was very taken with John’s view of using an ‘internal free market’ approach to getting funding approved.

John points out that while 70% of IT budgets are non-discretionary – just keeping everything running – most companies have at least some budget for investment. The problem is that this investment portfolio is spread across many different parts of the business, shrinking each individual department’s budget to the point where walking in and asking someone for $1M of it makes a serious impact on that budget holder. But John advises a creative approach:

So why not look at the portfolio of internal projects in an enterprise as a “market”. Why not apply some of the concepts that have proven so successful in the free market economy to the internal operations of an organization. Since everyone needs integration, if you could simply get a good understanding of the demand in the internal market, you could build a business around it.

This made me think of the Lustratus report I wrote recently on justifying integration investment in an economic downturn by putting a laser-beam focus on ROI. Adding John’s internal market approach provides another dimension to the ROI focus I was recommending. In other words, while the ROI paper looks at how to justify operational budget investment for SOA, the same problem John describes may rear its head: it may be impossible to find anyone willing to slice their own investment budget even though the business case is strong. But by combining the ROI focus with an internal market business case, success is much more likely. Effectively, the running costs can then be covered by a small chargeback to each project, reflecting the improved productivity they will all experience, or whatever other gain each department identified as part of its internal market needs.
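As a toy illustration of that chargeback arithmetic (all figures invented), the running cost of a shared integration capability can be apportioned across the consuming projects in proportion to the benefit each expects, so every budget holder pays noticeably less than they gain:

    # Hypothetical internal-market chargeback; every figure is invented.
    annual_running_cost = 1_000_000  # shared integration capability, USD

    # Consuming projects and the annual benefit each expects from it
    projects = {"orders": 600_000, "claims": 900_000, "onboarding": 500_000}

    total_benefit = sum(projects.values())
    for name, benefit in projects.items():
        chargeback = annual_running_cost * benefit / total_benefit
        print(f"{name}: pays ${chargeback:,.0f} for an expected ${benefit:,.0f} benefit")

No single department has to fund the $1M alone, yet the shared capability is fully paid for – which is the essence of John’s internal market.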

Steve

Data – the forgotten element of SOA

Now and then on this blog I have my ritual little moan about data – how it seems that SOA people just want to talk about applications, and no-one cares about the data (apart from the USER, of course!).

So I was delighted to see that the Integration Consortium is holding a webinar one week from today (September 18th) specifically on data considerations for SOA. It should be a good session – I know John Schmidt (one of the speakers) well, and the experience he has built up at Best Buy, Wells Fargo and BofA should have given him plenty of insight into this important subject.

Steve