Hardin
(1968), in his article, “The Tragedy of the Commons,” examined the
impact and morality of self-interest acting upon shared resources. These
shared resources (i.e., the commons) were, for Hardin, the Earth's
natural resources, but the concept can also be applied to that
artificial construct we think of as the Internet. Because its
infrastructure is finite, the Internet, like the natural resources
provided by the Earth, is exhaustible, and at some point humans will
"consume" 100 percent of its capacity (Greenemeier, 2013). This makes
the Internet a very special – and fragile – resource.
To paraphrase Hardin (1968), in a finite world, this means that the
per capita share of the Internet’s resources must steadily decrease. The
inference to be made is that each person who uses the commons affects
not only herself but everyone else (i.e., society). In economics, this
cause-and-effect principle is expressed in terms of "externalities":
parties not privy to the decisions or actions of others are nonetheless
affected by those actions, whether as a benefit or as a cost, that is,
as positive or negative externalities (Anderson & Moore, 2006).
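To put the concept in its standard textbook form (a general
illustration, not a formula drawn from the sources cited here): for any
activity, social cost = private cost + external cost. When the external
cost is positive, as when an unsecured computer is conscripted into a
botnet that attacks third parties, the decision-maker bears only the
private cost while society absorbs the remainder; a negative
externality is precisely that uncompensated remainder.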
It is the negative externalities that concern us here; that malicious
Internet traffic is a danger is not debated, though the extent of that
danger is (Anderson & Fuloria, 2009). Likewise, though the growing
cost of malicious activity on the Internet is not debated (Cashell,
Jackson, Jickling, & Webel, 2004), an equally heated debate centers
on the question of liability for that malicious traffic and its
negative externalities.
The Internet is a complex organism that poses difficult questions,
and it is no wonder that the search for answers has over time demanded a
multidisciplinary approach. It has been argued that cybersecurity is
not only an economic but a technical issue (Mead, 2004), and Anderson
(2009) emphasized the origins of cybersecurity as purely technological
and mathematic. Hardin (1968) maintained that to some problems there are
no technical solutions. But if technology is not the answer to
protecting the Internet and its users, then what is? Moore (2010), on
the other hand, has argued that because economics are so central to
cybersecurity, that policy and legislation are the means to incentivize a
solution. Even so, Rubens & Morse (2013 p. 183) caution that
legislation addressing liability “has not always been well-received or
fully understood.”
Anderson & Fuloria (2009) also pointed to an emerging consensus
that security economics, rather than technology, better protects
infrastructure, while Hardin (1968) argued instead for an extension of
morality as the necessary solution. To take the analogy further, it is
immoral, given the declining resources of the Internet, to use it as a
"cesspool" (Hardin, 1968, p. 1245). Morality, argues Hardin, is
"system-sensitive," and we have developed administrative law to deal
with its ambiguities and specifics; but administrative law is itself
prone to corruption, producing a government not of laws but of men, and
Hardin asks, "Who shall watch the watchers themselves?" (Hardin, 1968,
pp. 1245-1246). The recent revelations of lawlessness at the NSA seem a
case in point (Witte, 2014).
Where Does Liability Lie?
We live in a world where Internet access is increasingly seen as a
human right, and a 2011 United Nations report stated just that (LaRue,
2011), repeating on a global scale an assertion first made in tiny
Estonia in 2000 (Woodard, 2003). It is easy to commit crimes on the
Internet; most malware goes undetected (Moore & Anderson, 2009),
and most cyber-criminals escape detection, let alone punishment
(Brenner, 2010). Understandably, as Anderson & Fuloria (2009, p. 8)
say, "security is hard." Harder still is determining where
responsibility for that security lies. Software developers knowingly
release vulnerable software to the public and worry about fixing those
vulnerabilities only post-release. Of course, vulnerabilities can also
arise from insecure networks, lax security policies, back doors, and
other causes, and the argument has been made that liability exists for
insecure networks as well as for insecure software (Mead, 2004).
2004). Who is liable for malicious traffic on the World Wide Web, or, in
Hardin’s terms, for turning the Internet into a cesspool?
A simple answer is that security depends on many actors, and therefore
many are liable for that security, which is, after all, only as strong
as its weakest link. Whereas in an enterprise network the weakest link
is likely to be an employee, in the overall scope of cybersecurity that
weak link might just as easily be an entity composed of many people, a
dearth of sound cybersecurity policies and procedures, or an absence of
(or struggle over) regulatory standards and laws (Anderson & Moore,
2006).
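This "weakest link" intuition has a simple formal expression in the
security economics literature surveyed by Anderson & Moore (2006):
roughly, if n defenders exert protection efforts e1, e2, ..., en, the
security of an interdependent system is s = min(e1, e2, ..., en). Each
defender then has little incentive to invest beyond the effort of the
least diligent contributor, which is one reason voluntary,
uncoordinated defense tends to underprovide security.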
Nobody in the private sector seems eager to take responsibility for
the Internet they all profit from, as each blames the other, and
Anderson & Fuloria (2009) rightly stress the relative powerlessness
of end users, especially in the mobile phone and PC markets (industries
have somewhat more clout). This is hardly a productive course when one
considers that large-scale failures of cybersecurity can shake a
nation's – or the world's – economy; an argument can thus be made for
government intervention (Anderson & Fuloria, 2009).
While in the private sector responsibility seems to roll downhill,
all the way to the end user or consumer, President Obama’s May 2009
decision to make the United States’ digital infrastructure a strategic
national asset heightens the federal role (and that of local and state
governments) in cybersecurity (White House, 2009). While promising not
to interfere in the private sector’s response to cybersecurity threats
in terms of standards, the president stressed closer cooperation between
the public and private sectors. This address was, in effect, an acceptance
of responsibility by government for the security of America’s
information and communication networks.
All of society bears the cost of infrastructure attacks, but only a few
actors can affect the security of these systems: the public and private
sectors, and the people who make the computers and the software,
including the operating systems, that runs them. A great deal of debate
has centered on the assignment of blame, whether it falls to Internet
service providers (ISPs), software developers (the people who design,
write, and test the software), operating system (OS) developers (the
people who develop the software required for applications to run), or
end users (the people who actually use the software or operating systems
in question). All of these can affect cybersecurity, but not equally.
Moore & Anderson (2009) have made the point that everyone who
connects an infected computer to the Internet creates a negative
externality, but at the same time, an end user is generally only able
to use operating systems or software programs others have designed. She
lacks the expertise to make changes to them (for good or ill) and thus
bears the external costs or benefits of decisions made by the
industries that make her Internet activity possible.
The Lament of the End User
The end user, while complicit, is at the bottom of the externality
food chain. Lichtman (2004) has pointed out that end users who
inadvertently propagate malicious software are easy to track down, and
he suggested that they could pay their fair share for the damage done.
But these people are, more often than not, unwitting victims rather
than criminals, and, as he himself admitted, they lack the requisite
sophistication to be malicious users.
To better illustrate the lackadaisical approach of ISPs, consider
the 2010 move by the Australian Internet Industry Association (IIA) to
issue a voluntary code of conduct recognizing a shared responsibility
for cybersecurity by ISPs and consumers (Industry code, n.d.). But such
actions, being completely voluntary, push ISPs to punish the consumer
without holding the ISP accountable or providing any incentive for ISPs
to clean up their networks. Should consumers be punished, then, for the
malicious activities of others because they are easier to catch? That
is what the Australian IIA's solution seems to suggest.
Lichtman (2004) suggested that the only practical reason not to hold
these end users accountable is the issue of cost-effectiveness. On the
other hand, he argued, ISPs are well placed to counter the quantity and
effectiveness of attacks, and indirect liability would force them to
act appropriately (Lichtman, 2004). The problem for the end user is
proving that they are not to blame for their own problems – Lichtman’s
lack of sophistication and Brenner’s “sloppy online behavior” (Lichtman,
2004; Brenner, 2010, p. 34). But how realistic is it to blame the end
user, who is ultimately caught between sophisticated criminals and
software developers, ISPs, and operating system developers who know
better but who, for a variety of reasons, don’t care?
Summary
That there is injustice in the system cannot be denied, and we might
ask if Hardin (1968, p. 1247) is right in his assertion that “Injustice
is preferable to total ruin.” As long as the dangers continue to be
debated, software developers, operating system developers, and ISPs will
continue to shy away from talk of total ruin. They are making money,
after all, and they have no incentive to make wholesale changes to how
they do business. As has been shown here, those responsible for
defending the Internet disclaim any responsibility for the failure of
those defenses and spread the cost to society instead
(Moore, 2010). We cannot expect them to voluntarily bear that burden:
self-regulation is an oxymoron.
There is plenty of blame to go around, and it is clear that the
Internet security situation as it stands now cannot be allowed to
continue indefinitely. Passing the buck is not a substitute for actual
solutions, as it does nothing to make the Internet safe. Disclaimers may
(for now) protect corporations, but they do not protect the commons we
all depend upon. Needless to say, as long as the Internet is unsafe, not
only are individuals – end users and consumers – at risk, but so are
corporations, vital infrastructure, and even national security. No
matter how strident the protest, somebody must be held liable for the
cesspool our information networks have become.
It is clear that the major players in this regard are the industries
best placed to secure the Internet: the operating system developers,
software developers, and ISPs, rather than the end user, who is least
able to affect the safety of the products she uses (Ryan, 2003).
Security, like blame, rolls downhill. Security at the top will mean
security at the bottom, at the level of the end user. This is not to
excuse the end user, who also bears responsibility for connecting an
infected computer to the Internet, but in aggregate, the weight of
responsibility must lie with those with the resources to combat the
problem, and that means the public and private sectors.
It has been argued that three factors drive change in the U.S.:
liability, the demands of the market, and government regulation
(Brenner, 2010). Mead (2004) stresses that a uniform approach to the
problem of liability is itself problematic, and this, as Moore (2010)
has argued, is a problem that can only be corrected through legislation.
Ryan (2003) went further, pointing to the threat to the country itself,
its infrastructure and economic well-being, as reason enough to
legislate software liability. In speaking of liability, it is a simple
fact that the government, a single corporation, or, indeed, an entire
industry can affect externalities far more than a single end user can,
and here the legal concept of downstream liability, where the source is
upstream of the recipient, must not be ignored (Hallberg, Kabay,
Robertson, & Hutt, 2009). The upstream waters are best
patrolled by those with the resources to do so.
Based on the foregoing, it would be reasonable to argue for a
mixed system of regulation and incentives. Regulation itself must
encourage incentives while not discouraging innovation, requiring a
careful balance of the two. The federal government has the most power to
effect change, based not simply on the power to regulate but on
purchasing power. One might argue that banning USB devices from federal
workplaces would hurt memory stick manufacturers financially, but such a
move would also create an incentive for the industry to improve the
security of such devices, thus driving change without the need for
regulation.
The public, as has been argued, may lack sophistication, but the
federal government is another entity altogether. The public may buy what
is there without a complete understanding of what they are getting in
terms of positive and negative externalities (and indeed, they have
little control over it), but the federal government, through its
purchasing power, can shun hardware and software that generate those
negative externalities. Simply by regulating itself, the government
would, by virtue of this buying power, serve to regulate (at least in
part) the software industry. What incentive cannot be created through
non-regulatory means must, of necessity, be created through regulation.
There is no more reason to suppose that these industries will
voluntarily regulate themselves than there is to suppose that Wall
Street will.