Soctech brainstorming

From PublicWiki

Possible CSE590 Topics

Vendor liability as a means of improving software quality in a world of ubiquitous computing.

Technological trends: computers are becoming cheaper and smaller; computers are becoming increasingly networked; software quality is not improving at the same rate. The first two trends mean that computers are being deployed in roles that would have been unthinkable a few decades ago. They increasingly inhabit places and roles in our homes and workplaces: witness PDAs, home entertainment systems, kitchen appliances, security systems, etc. As the cost of networking drops and the availability of IP addresses increases (IPv6), these embedded computers will increasingly be connected to the network, thereby enhancing the value of both the network and the devices connected to it. However, low software quality may act as a limiting factor on this explosive growth in network value. Specifically, computer security defects will enable malicious mobile code to wreak havoc not just on some subset of the machine/software combinations on the network, but also on the availability of the network itself.

Hypothesis: Imposing liability on vendors will force vendors to bear the cost of software containing computer security defects, thereby providing incentives to improve software quality.

Currently, software vendors are not liable for damages caused by faults in the software they build and sell. The societal cost of computer security defects has been estimated at more than $15 billion annually, and if the above predictions are correct, will continue to grow super-linearly. Software vendors currently externalize the cost of software containing security defects. That means that society pays the price for recovering from attacks that could perhaps have been avoided by building more secure systems in the first place. Arguably, this problem won't go away on its own for at least two reasons. First, we live in a state of market failure -- consumers do not have true choices between vendors. If we do live in a semi-monopoly, then the monopolist vendor has no incentive to build more secure software. Second, and more importantly, even if the market were not dominated by a single vendor, it is unlikely the problem would go away as long as there are some parties who will choose to purchase cheaper (and presumably less secure) software. The reason is that even a relatively small number of compromised hosts can cripple the network, thereby depriving other users of availability and value. As a dramatic demonstration of this effect, consider the January 2003 SQL Slammer worm. While this worm targeted and infected only the Microsoft SQL Server program - representing a relatively small subset of networked hosts - its scanning activity caused serious network outages resulting in ATM crashes, 911 disruptions, and flight cancellations, racking up total damages in excess of $1 billion.

From a social welfare perspective, we would like to provide the right incentives to encourage software vendors to produce secure software. Question: can the legal regime be altered to effectively provide such incentives without excessively negative second-order consequences? There are many options to explore, and they all have benefits and drawbacks. Legal solutions include creating a private cause of action in contract or tort; public (government) regulation (liability, certification, standards, etc.); and benefits taxes.

For some initial thoughts on the matter, see:

UI Design for Better Click-through Licenses

Click-through licenses are, for better or worse, a large part of every consumer's life. This research would focus on applying user interface design principles to implementing "better" click-through license agreements. As a straw-man definition, a "better" license would be one where the user actually has some chance of understanding the terms of the license they are agreeing to. We can imagine all sorts of improvements that might enhance a consumer's ability to make an intelligent decision about whether they really want to accept the license or not. Trivial examples might include expressing the terms in plain English; more complicated examples might include "walking" the user through the license (term by term) and requesting assent to individual terms. Obviously, there are all sorts of trade-offs to be explored.
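The term-by-term walkthrough could be sketched as follows. This is a minimal illustration, not a real licensing UI: the terms, the function name, and the respond-callback API are all hypothetical, and a real implementation would record assent with timestamps and render terms in an actual interface.

```python
# Hypothetical plain-English terms; a real license would supply its own.
PLAIN_ENGLISH_TERMS = [
    "You may install this software on one computer.",
    "The vendor may collect anonymous usage statistics.",
    "You may not resell or sublicense the software.",
]

def walk_through_license(terms, respond):
    """Present terms one at a time; respond(term) returns True/False.

    Returns (accepted_all, decisions) so the UI can show the user
    exactly which terms they saw and how they answered each one.
    """
    decisions = []
    for term in terms:
        accepted = bool(respond(term))
        decisions.append((term, accepted))
        if not accepted:
            return False, decisions  # stop at the first refusal
    return True, decisions

# Example: a user who balks at data collection never reaches later terms,
# and the decision log shows exactly where assent broke down.
ok, log = walk_through_license(
    PLAIN_ENGLISH_TERMS,
    respond=lambda term: "usage statistics" not in term,
)
```

One design choice worth noting: returning the per-term decision log, rather than a single yes/no, is what distinguishes this from an ordinary "I accept" button -- it creates a record that the user was shown, and answered, each term individually.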

Why would this ever be interesting to anyone? Well, consumer groups would certainly be happy if they could point to real research showing that companies can do a better job of presenting their license terms to consumers. Companies are, of course, unlikely to want to embrace new ways of presenting licenses -- especially if doing so increases the number of e-commerce sales that do not get completed because the buyer gets "cold feet" upon grasping the implications of a purchase. On the other hand, some companies may use "humane licensing" to their competitive advantage, attracting customers who prefer shopping with a company that is willing to actually explain its terms to them, rather than bury them in 20 pages of fine print.

Finally, if history is any teacher, courts did not disapprove of click-wrap licenses (i.e., a license that is only reachable by following a link) until another technology - click-through (i.e., a license that is splashed onto the screen with an "I accept" button at the bottom) - came onto the scene. The point is that while courts aren't in the business of saying what sort of technology is required to create an enforceable license, they may bless one technology as sufficient when presented with a qualitatively "worse" alternative. (I need to find the case on this, but there is at least one reported opinion where the court compared click-wrap to click-through and approved of click-through...) In short, this may provide a (however tiny) opportunity to change the law by building something better.

From Caroline: I see potential here -- interesting ideas. Thanks, Ben. On the first one, software vendor liability, taking off from your idea about methods for improving software quality, an interesting security/econ/policy angle is the idea that there is a market failure in the provision of security. If people could judge security for themselves, then companies would be more willing to compete on quality. In fact, consumers probably have no idea what they're buying, so companies have little incentive to invest in security. Methods to improve software quality, then, might include having some sort of Underwriters Laboratories for software that gives consumers something tangible to grasp (Software X got 1 star but Software Y got 3 stars, so I'll buy Y -- assuming we're in a non-monopoly situation). Would this work? Isn't it hard even for computer scientists to judge how secure code is? And how do you account for users' security failings -- insecure systems aren't totally the software's fault.

A couple other things I've been thinking about:

IP protection for software

One of the lessons of the open source model for the software industry is that transparency increases trust. To that end, Microsoft has made progress in sharing source code through the Shared Source Initiative, and many other major software vendors have embraced source licensing as well. There is a cost to this, though: primarily, the sacrifice of trade secret as one of the major underpinnings of commercial software vendors' IP strategy. If one holds that increased transparency is critical, then software vendors are in essence going to be pushed to rely more heavily on patent rather than trade secret. This gets interesting especially since the open source community has historically been anti-patent.

Another note, maybe worth pondering: managed code is more transparent than unmanaged code. Microsoft wants to carefully control access to its source code, whether it shares code through Shared Source, in components it ships with its development platforms, and so forth. Yet the way the .NET platform was implemented means that Microsoft code is more transparent than before. On a transparency scale of 1 to 10, with machine language (or whatever the most non-transparent form of code is called) being a 1 and source code being a 10, developers have told me Microsoft's Intermediate Language is a 7 or an 8. Is the higher transparency of its code a concern to Microsoft, given its highly controlled approach to open source?


Distribution as Property

From Joshuadf:

The advent of technology able to create virtually unlimited identical copies at low cost created the current "Copyright Crisis." While many possible methods of "Digital Rights Management" have entered the marketplace, the Open Source Software movement offers a different solution---a redefinition of intellectual property based on distribution instead of exclusion. Is this an unworkable system, or just the beginning step in constructing a new definition of what it means to create in an increasingly digital world?

Possible readings: Stallman's "Why Software Should Not Have Owners," Tiemann's "Future of Cygnus Solutions: An Entrepreneur's Account," and/or Weber's Success. (Sorry for all the citations; I was a liberal arts major. :)


Property in virtual worlds

The Terra Nova blog has been having some interesting discussions about property, both intellectual and non-, in "virtual worlds" (EverQuest and its ilk). Some relevant posts:

Definition of derivative works in software

The question of what constitutes a "derivative work" in software is a technically interesting one. The traditional interpretation has been based on "binary linking," which used to have a precise technical definition, but that definition is increasingly being blurred by the growing popularity and importance of various forms of remote invocation, which permit one piece of software to invoke procedures in another piece of software without running in the same address space, or even on the same machine.
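The blurring can be made concrete with a small sketch. Below, a "caller" program invokes a procedure in a "callee" over XML-RPC (using Python's standard library as one arbitrary example of remote invocation; the "add" procedure is hypothetical). The two sides share no address space and link against none of each other's code, which is exactly what strains a linking-based test for derivative works.

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Callee: exposes a procedure, much as a dynamically linked library
# exports a symbol -- but over the network instead of in-process.
# Port 0 lets the OS pick a free port for this self-contained demo.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
port = server.server_address[1]
server.register_function(lambda a, b: a + b, "add")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Caller: invokes the procedure without linking against any callee code;
# the two sides could just as easily be on different machines.
proxy = ServerProxy(f"http://127.0.0.1:{port}")
result = proxy.add(2, 3)  # executes in the server's process, not the caller's
server.shutdown()
```

The legal question is whether the caller here is a derivative work of the callee in the way a program that statically or dynamically links against it arguably is; mechanically, nothing of the callee ever enters the caller's address space.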