Refusing Proprietary Technology
The IPKat is (still) no expert when it comes to the software industry but, like many IP enthusiasts, he tries to keep up with the major issues and thought-trends as and when they crop up. One of the issues he has pondered from time to time is the sometimes hostile, sometimes constructive tension that exists between the business models that drive proprietary software and open source, as well as the continuing dialogue between the supporters of each.
Bearing this in mind, the Kat is running a series of four short pieces by his friend Keith Braithwaite, which he hopes will do two things. One is to pinpoint the most significant issues as viewed from the industry side rather than from that of the academics and lawyers in whose company this Kat is most comfortable. The other is to place these issues within some sort of temporal continuum where that is possible. Here's the second piece in the series:
Refusing Proprietary Technology
Prologue
The first post in this four-part series looked at the effect on users of removing a patent-encumbered technology from an existing product, rather than licensing it: while with enough ingenuity there is usually a way of substituting a non-encumbered implementation for each patented technology, the resulting impact on users is rarely negligible and sometimes unexpected.
We also began to look at the kinds of technology that might be missing from some solutions because of IP issues. This time we will look at the potential impact on users of active refusal to use patented technology in new products. Avoiding patented new or improved technologies, either to avoid a licence fee or on ideological grounds, can impose a hidden cost on users.
Journaling file systems
Since the earliest days of computing, “secondary storage” media such as magnetic drums, tapes and disks have been used to make data persistent. More recent developments have added memory sticks and cards and writeable optical media like CDs and DVDs to the repertoire of storage technologies. All these media require some way of organising their contents. This organising principle is called the file system. File systems have evolved to include something like a map showing where the free space is and where the files are, along with control information such as timestamps, ownership and permissions. This arrangement can be fragile: after a system crash, it can take many minutes to recover potentially corrupted data – sometimes without success, leaving the user with lost or unreadable files.
One solution to this problem is the journaling file system. There are many variants, but the basic idea is to record every change made to a file in two widely separated places on the disk or card: the actual content of the file lives in one place, and the change is first recorded somewhere else – in a “journal”. After a crash, the system need only replay or discard the incomplete journal entries rather than scan the whole disk for inconsistencies. In the past decade, journaling file systems have come into widespread use, first in the business market and now also in the consumer market, where the most common operating systems have a journaling file system.
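For the technically curious, here is a toy sketch in Python of the write-ahead idea. Every name in it, and the JSON journal format, is an invention for illustration – real file systems do this at the block level, inside the kernel – but the ordering of steps (journal first, data second, commit last) is the essence of the technique:

```python
import json
import os

JOURNAL = "fs.journal"  # hypothetical user-space stand-in for an on-disk journal

def journaled_write(path: str, data: bytes) -> None:
    # 1. Record the intended change in the journal and force it to disk.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"op": "write", "path": path, "data": data.hex()}) + "\n")
        j.flush()
        os.fsync(j.fileno())
    # 2. Only now apply the change to the file itself.
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    # 3. Mark the entry complete so recovery knows it can be skipped.
    with open(JOURNAL, "a") as j:
        j.write(json.dumps({"op": "commit", "path": path}) + "\n")
        j.flush()
        os.fsync(j.fileno())

def recover() -> None:
    # After a crash, replay only the uncommitted journal entries
    # instead of scanning the whole disk for inconsistencies.
    if not os.path.exists(JOURNAL):
        return
    pending: dict[str, bytes] = {}
    with open(JOURNAL) as j:
        for line in j:
            entry = json.loads(line)
            if entry["op"] == "write":
                pending[entry["path"]] = bytes.fromhex(entry["data"])
            else:  # "commit": the write reached the file, so forget it
                pending.pop(entry["path"], None)
    for path, data in pending.items():
        with open(path, "wb") as f:
            f.write(data)
```

Because the journal entry is safely on disk before the file is touched, recovery time depends on the handful of in-flight changes, not on the size of the disk.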
With such a file system it is also easier to back up (something too few users do). By adopting a “journal-based backup”, a system need only check the journal rather than work out for itself what has changed. This is much faster.
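A hypothetical backup step built on the same toy journal shows why this is faster: the backup simply reads the journal forward from wherever the previous run stopped, instead of walking the whole directory tree comparing timestamps:

```python
import json

def changed_since(journal_path: str, last_offset: int) -> tuple[set[str], int]:
    # Collect the files committed since the last backup run (toy format,
    # as in the sketch above; real systems track this in kernel metadata).
    changed: set[str] = set()
    with open(journal_path, "rb") as j:
        j.seek(last_offset)                 # resume where the previous backup stopped
        for line in j:
            entry = json.loads(line)
            if entry["op"] == "commit":     # only committed changes are safe to copy
                changed.add(entry["path"])
        new_offset = j.tell()               # remember this point for the next run
    return changed, new_offset
```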
If our operating system provider were to decide to remove journaling because it is based on proprietary technology, we would in effect be forced to downgrade to a ten-year-old OS in which every hardware problem, software glitch or power failure brings back the old nightmare of lost files and long boot-up times caused by the attempt to recover file system corruption.
The loss of this feature would also seriously degrade service in the server room, where downtime to recover from file system corruption would increase and daily maintenance operations, such as backup, would take longer and become more expensive.
Storage virtualization
As the amount of disk space attached to computer systems grows, it becomes increasingly hard to manage.
Various ingenious solutions have been devised over the years, many of which are proprietary and subject to patent protection. The term “storage virtualization” comes from a relatively recent attempt to identify the common features and differences of these approaches. Virtualizing storage allows much improved utilization and capacity forecasting – capacity can be held in reserve and granted to users as needed. The virtual storage pool can easily be expanded by adding more disk drives managed in a central location, and old drives can be removed without interrupting availability of the data on them. All these features make life easier for users: fewer rude emails from your IT centre asking you to archive your old messages, no data lost through disk crashes, and easier access to really large data sets.
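Again purely as an illustration, a toy model of a storage pool (all class and method names invented for this sketch) shows the shape of the idea: physical disks go into a common pool, and logical volumes draw on it and grow on demand, without users ever seeing the drives underneath:

```python
class StoragePool:
    """Toy storage-virtualization model: disks pooled, volumes carved on demand."""

    def __init__(self) -> None:
        self.capacity_gb = 0
        self.allocated_gb = 0
        self.volumes: dict[str, int] = {}

    def add_disk(self, size_gb: int) -> None:
        # Growing the pool is transparent: existing volumes are untouched.
        self.capacity_gb += size_gb

    def create_volume(self, name: str, size_gb: int) -> None:
        if self.allocated_gb + size_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted: add another disk")
        self.volumes[name] = size_gb
        self.allocated_gb += size_gb

    def grow_volume(self, name: str, extra_gb: int) -> None:
        # Grant reserved capacity to a user as needed, with no data migration.
        if self.allocated_gb + extra_gb > self.capacity_gb:
            raise RuntimeError("pool exhausted: add another disk")
        self.volumes[name] += extra_gb
        self.allocated_gb += extra_gb

pool = StoragePool()
pool.add_disk(500)
pool.create_volume("home", 200)
pool.add_disk(1000)            # expand the pool without downtime
pool.grow_volume("home", 400)  # the user's quota grows; the disks stay invisible
```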
All those benefits would be lost if OS vendors decided not to include logical volume management (LVM) technologies because they are proprietary. The main result of this choice would be that the IT departments of all the organizations using that OS would be forced to adopt more restrictive policies regarding, for example, user quotas, because configuring and provisioning new disk space would become much more difficult.
Fixation on IP issues distorts engineering judgment
In an ideal world, engineers developing software solutions would be free to choose the technically “best” option to resolve every issue. In reality, commercial and legal considerations often supervene.
Technology encumbered by intellectual property rights might be rejected because the licence fee would increase the price of the finished product too much. This is a prime consideration for free and open source software (FOSS) developed by teams of volunteers and given away to anyone who wants it – volunteer teams typically have no budget to buy in commercial technology. Not all open source developers are in this category, however: some high-profile open source products are well funded by commercial backers. Nevertheless, software patents are abhorred by most of the open source community with a quasi-religious fervour.
Although FOSS solutions are zero-cost, the choice to incorporate open source software in a solution carries the risk that the technology in question already infringes someone’s intellectual property rights, even if inadvertently. By using that technology, an organization could become party to the infringement. Even when open source technology is provided free of charge, it is still very much a case of “caveat emptor”.
Since 2005 the Open Invention Network (OIN) has been trying to reconcile these opposing forces by buying software patents on the open market and making them available royalty-free to licensees who promise in return not to bring patent lawsuits against the Linux environment. This strategy to defend Linux developers against the risk of being taken to court, although backed by big names such as IBM, Novell, Oracle, Google, Red Hat and NEC, has not yet entirely succeeded in shielding the Linux environment from patent infringement suits. A study carried out by Dan Ravicher for Open Source Risk Management in 2004 identified 283 granted patents, not yet tested in court, any of which could potentially be used to support a patent claim against the Linux kernel. Many, but not all, of these patents have since been acquired by OIN or are held by one of its licensees.
Ideally, all the intellectual activity in a development project would be expended on solving the problem at hand, not on guarding against the risk of intellectual property infringement. Huge investments of money, time and energy have been diverted from the creation of software features that benefit users towards these “displacement activities”. If users are to derive maximum benefit from the creative efforts of software architects, designers and developers, it may sometimes be more cost-effective to acquire the rights to proprietary software technology than to expend the energy to reinvent or “code around” it.