What really happens when data evaporates into ‘the cloud’

Ok, maybe attaching floppy disks to helium balloons to send your data up into ‘the cloud’ seems like something your grandmother would do. Obviously all you have to do is tell your MacBook to save your documents to iCloud, or drag your pictures into the Dropbox folder… and POP! Your data is now in the cloud!

But then what? Does the data float out and linger somewhere in the space around your laptop or phone until you come back and drag it onto your desktop again? The semantics of the term certainly suggest as much.

Of course, we all know that ‘the cloud’ is not actually a cloud; yet the illusion that it’s somehow essentially immaterial and ephemeral is widespread. Contrary to the distinct connotation of tangibility evoked by most IT terms – such as ‘network’, ‘node’ or ‘web’ – the metaphor of ‘the cloud’ alludes to something unknowable, borderline supernatural.

However, none of this is true. This cloud is not in the sky like all the others, but very much on the ground: what cloud computing actually amounts to is vast quantities of remote computers that store and process data so that individuals’ machines don’t have to.
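To make that materiality concrete, here is a minimal sketch of what ‘saving to the cloud’ boils down to: shipping bytes over the network to a machine that physically sits in a data centre somewhere. The URL and token below are hypothetical placeholders, not any real provider’s API; iCloud, Dropbox and the rest wrap this same mechanism in their own client libraries and authentication.

```python
# A minimal sketch of a 'cloud upload': an HTTP request that moves
# bytes from your machine to someone else's computer. The endpoint
# and token are hypothetical placeholders for illustration only.
import requests

with open("holiday_photo.jpg", "rb") as f:
    response = requests.put(
        "https://storage.example-provider.com/my-bucket/holiday_photo.jpg",
        data=f,
        headers={"Authorization": "Bearer <access-token>"},  # placeholder credential
    )

# A 2xx status means the bytes now sit on a physical disk in a data centre
print(response.status_code)
```

However it is dressed up, the destination is always a physical disk in a physical building.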

Once we take into account the physical constitution of the cloud, questions of jurisdiction over data and of the sovereignty of nation-states become increasingly nuanced and, inevitably, problematic. But before I delve into how governments ought to administer these complex topics, it’s important we understand just how material cloud computing really is.


As polluting as a cloud can get

Large companies like Google, Amazon and Facebook all own numerous massive data centres. Many of these are located in the U.S., with several others scattered around the world in places where energy is cheap, the climate is favourable and the political situation is stable. Each of these data centres consumes tremendous amounts of energy: in 2014, their combined power draw was estimated at 26 GW globally, around 1.4% of worldwide electricity consumption. Moreover, many data centres in Silicon Valley are listed in the state government’s Toxic Air Contaminant Inventory, and Amazon alone accumulated twenty-four clean-air violations in Northern Virginia over a three-year period. Simply put, the cloud is far from immaterial.
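A quick back-of-envelope check shows the scale of these figures. Treating the 26 GW as a continuous average draw, and taking roughly 20,000 TWh as an approximate figure for worldwide electricity consumption in 2014 (an assumption for illustration, not an authoritative statistic), the arithmetic lands in the same ballpark as the cited share:

```python
# Back-of-envelope check: what does a 26 GW continuous draw amount to
# over a year, and what share of world electricity is that? The world
# total below is an approximation used purely for illustration.
AVG_DRAW_GW = 26                     # data centres' assumed average power draw
HOURS_PER_YEAR = 24 * 365            # 8,760 hours

energy_twh = AVG_DRAW_GW * HOURS_PER_YEAR / 1000    # GWh -> TWh
WORLD_TWH = 20_000                                  # rough 2014 world consumption

print(f"Data centres: ~{energy_twh:.0f} TWh per year")              # ~228 TWh
print(f"Share of world electricity: ~{energy_twh / WORLD_TWH:.1%}") # ~1.1%
```

Whether the precise share is 1.1% or 1.4% depends on the baseline used; either way, the order of magnitude is roughly that of a mid-sized industrialised country’s entire electricity use.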


Materiality implies geographical location

From the recognition that the cloud is indeed tangible stems a logical follow-up: where is it tangible, and what does this imply? As I’ve mentioned, although most companies carrying out cloud computing are American, not all data centres are in the U.S.

Take Microsoft. Although much of its data is generated by American users, a large share of it is stored in data centres in Ireland. Because of this, the company was drawn into an almost three-year legal dispute with the U.S. government over jurisdiction of that data. The government claimed that, since Microsoft could access the data while physically in the U.S., it didn’t matter where it was stored. Microsoft challenged the warrant on the grounds that a search happens not at the point of accessing the data, but where the data is stored.

Beyond showing that even the U.S. government fell into the trap of regarding Microsoft’s data as disembodied, this example demonstrates just how politically consequential the materiality of the cloud really is.

In fact, this is further demonstrated by German Chancellor Angela Merkel’s proposal to create a ‘European cloud’ shielded from U.S. scrutiny. This would entail a closed cloud system, with data centres located within European borders, enabling far greater control over the jurisdiction of data. It marks the onset of what analyst Daniel Castro calls ‘data nationalism’: an emerging insistence that information be stored within a state’s physical borders.


Clouds as de facto states

“Unseen but not placeless,” the cloud has become a new geopolitical layer; one in which the line between national borders and national ownership becomes increasingly blurred. This is because, as the examples above illustrate, the cloud manifests itself in two orders simultaneously: as abstract information that effortlessly crosses political borders, and as the geographically anchored data centres that store it.

In addition, the cloud presents itself even more explicitly as a geopolitical layer when it takes on functions traditionally expected of governments: public cartography, legal identity, currency, protocol allegiance, and so on.

A clear example came in 2010, when Google Maps slightly shifted the line marking the border between Nicaragua and Costa Rica, with the result that “troops were summoned and war over the ambiguous territory seemed possible.” As Benjamin Bratton describes in The Stack: On Software and Sovereignty, this episode is one of several suggesting that the relative positions of cloud and state could be (or already are) switching. Left unchecked, this could lead to geography being assigned to states by the cloud, rather than the other way around. In the extreme scenario, as Bratton goes on to explain, “clouds become de facto states.”


Precaution in science policy

If all of the above is true, saying that governments around the world should feel threatened would be a colossal understatement, as the very survival of nation-states would be jeopardised.

But what should they do?

The question of whether science policy should turn to regulating cloud computing feeds into an abundant body of literature on the precautionary and proactionary principles. These are ways of structuring policymaking under scientific uncertainty about the harms and benefits of a new technology. Under precaution, the technology is treated as guilty until proven innocent; under proaction, it is innocent until proven guilty.

The cloud is unlike a new drug or climate change in that it is much harder to evaluate empirically what its repercussions might be. Policymaking around the cloud can therefore rely only on qualitative analyses and on the mere possibilities of harms and benefits to society, which makes it undeniably subjective and largely political. If regulation were implemented on such precautionary grounds, there would likely be considerable public backlash, with the government accused of stifling innovation and fostering an anti-technology climate.

It is also worth considering how disruptive such regulation would be. Besides rolling back trade liberalisation, it would disrupt a market currently worth $246.8 billion. Arguably, our societies and industries are too deeply locked into cloud-based infrastructure to turn back without serious chaos.

It therefore becomes increasingly clear that this problem has no straightforward solutions. As with most issues in the sphere of science policy, whether we ought to regulate cloud computing boils down to ethical perspectives on innovation. However, I’m not here to propose exactly how governments should act, but rather to unveil the nuances, in particular the geopolitical implications, that should be critically assessed and accounted for in the decision-making process.


Blockchain and the neutrality of technology

Prior to last summer, I had never even heard of blockchain, cryptocurrencies or Bitcoin. Now, in October, I find myself writing my dissertation on this very topic. Granted, I may have joined the hype a little late, but I am not alone: as the Google Trends graph below confirms, these terms have only recently started gaining momentum in popular discussion.

[Google Trends graph: search interest in ‘blockchain’, ‘cryptocurrency’ and ‘Bitcoin’ over time]

Despite this snowballing talk of blockchain, we are still at a stage where IT geeks and fintech specialists are the predominant contributors to the conversation. Where ethics and politics do appear in the general literature, questions such as ‘do cryptocurrencies enable fraud?’, ‘do miners act responsibly?’ and ‘does blockchain facilitate democracy?’ prevail. Interesting as these debates are, what the questions all have in common is a concern for how people might use blockchain. In this way, they reduce the technology to a mere passive tool, one whose users decide for themselves to what ends it will be utilised. In other words, the general consensus is an instrumentalist view of technology.

But should we regard technology – and specifically blockchain – as a passive, ethically neutral object?

If, as some claim, the disruptive potential of blockchain is comparable to that of the Internet a couple of decades ago, it’s probably worth exploring the flip side of this unquestioned, widespread viewpoint before the world actually morphs into a Black Mirror episode.

But first, let me take a step back.


A little intro to blockchain technology

If you have no idea what I’m talking about, I don’t blame you.

In essence, a blockchain is a digital record of transactions that is distributed to every computer connected to the network. Dating back to the early 1990s, blockchain on its own is not a new technology. It’s only since 2008, when someone known by the pseudonym Satoshi Nakamoto masterminded Bitcoin, that blockchain really took off and acquired the functions we associate with it today.

With Nakamoto’s developments, digital transactions between individuals can now be verified by being cryptographically recorded on a constantly updated blockchain, making the record of all transactions public and effectively immutable.

A blockchain is therefore a decentralised digital system of transactions: it relies on an automated protocol for verification, rather than placing trust in a human intermediary, or ‘clearing house.’

[Diagram: centralised traditional ledgers versus decentralised public ledgers]
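To make the mechanics concrete, here is a toy blockchain, a sketch for illustration only, and emphatically not how Bitcoin is actually implemented: each block records some transactions plus the hash of the previous block, so tampering with any past entry invalidates everything that follows.

```python
# A toy blockchain: blocks are linked by hashes, so altering any past
# record breaks the chain of hashes and the tampering becomes detectable.
import hashlib
import json

def block_hash(block: dict) -> str:
    # Hash the block's contents deterministically
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain: list, transactions: list) -> None:
    chain.append({
        "index": len(chain),
        "transactions": transactions,
        "prev_hash": block_hash(chain[-1]) if chain else "0" * 64,
    })

def is_valid(chain: list) -> bool:
    # Every block must point at the true hash of its predecessor
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain = []
add_block(chain, ["Alice pays Bob 5"])
add_block(chain, ["Bob pays Carol 2"])
print(is_valid(chain))   # True

chain[0]["transactions"] = ["Alice pays Bob 500"]  # tamper with history
print(is_valid(chain))   # False: the rewrite is immediately detectable
```

The design choice doing the work here is the chained hash: immutability is not decreed, it falls out of the fact that rewriting history forces you to recompute, and get everyone else to accept, every subsequent block.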

For more in-depth explanations of blockchain and Bitcoin, take a look at the two videos below.


The financial crisis and the neutrality of Bitcoin

In light of what I mentioned about blockchain being viewed as merely a tool, it is unsurprising that Nakamoto’s paper on Bitcoin was published in 2008: the decentralised design of blockchain materialised from the ashes of the global financial crisis. It offered an answer to the widespread loss of confidence in financial institutions, promising a way to cut away the human fallibility, corruption and politics that are otherwise an inescapable aspect of financial transactions.

In the words of Bitcoin developer Jeff Garzik, “when power is concentrated in the hands of a few powerful people there is a risk of catastrophe, corruption and chaos. Decentralizing a system hands power to immutable mathematics.”

Notice his wording: ‘immutable mathematics’. Garzik puts forward the idea that the technology underlying Bitcoin is more secure than, and therefore superior to, any potentially corruptible human intermediary. The implication is that the blockchain is politically and ethically neutral, and infallible.

As I’ve already touched upon, this view is not confined to Garzik. The overarching consensus is that, just as a knife can be used to cook, kill or cure, blockchain can serve the intent of its users and act in fulfilment of their moral stance.

Indeed, it is often the case that finance ethics is concerned with “values such as privacy, democracy, autonomy, and with the behaviour of humans such as bankers, money traders, etc. and the fairness of financial institutions.” Although technology is considered, it is seen as normatively neutral.

It’s true, the behaviour of humans is highly political and must undergo ethical evaluation… but who is to say that the same shouldn’t be done for technology?

Jackson Watts’ iPod example is fitting: an iPod is only truly an iPod when it acts as one, and not as a paperweight. Inherent to what the iPod means for society are the intentions of its creator and the possibilities and limits of its design. Ultimately, form cannot be divorced from function.

In the same way, blockchain is more than just a neutral intermediary, as it, too, is inseparable from its human creation. In fact, intrinsic to the development of blockchain is the objective of a libertarian utopia, in which transactions take place freely between individuals without the interference of a central authority.

This is reflected in the possibilities and limits of the design: the absence of a centralised financial authority means remittances are cheaper and access to money is less bureaucratic and ‘freer’, but it also makes any form of income redistribution impossible, since there is no central mechanism to collect and transfer funds; only those who already have money can access it. This marks the first normative bias of the technology.

A further limitation, which ironically cuts against this libertarian dream of decentralisation, is that the process of verifying transactions and extending the blockchain (‘mining’) requires massive computational power. This creates an incentive to carry out mining wherever energy is particularly cheap. The result, already materialising, is the centralisation of the process in places like China. This, along with the sheer energy consumption that all this computation entails, once again demonstrates how the very structure of the technology carries political and ethical implications.
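Why mining eats so much power is visible in the mechanism itself. Below is a simplified sketch of proof-of-work, the kind of puzzle Bitcoin miners solve: find a nonce whose hash, combined with the block data, starts with a required number of zeros. There is no shortcut; you simply try nonces until one works.

```python
# A sketch of proof-of-work: brute-force search for a nonce such that
# sha256(data + nonce) begins with `difficulty` zero characters.
# Finding the proof is expensive; verifying it takes a single hash.
import hashlib

def mine(data: str, difficulty: int) -> int:
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce  # proof found
        nonce += 1

# Difficulty 5 already takes noticeable CPU time on a laptop;
# Bitcoin's real difficulty is astronomically higher.
print(mine("Alice pays Bob 5", difficulty=5))
```

Each additional required zero multiplies the expected number of attempts by sixteen, which is precisely why the economics of mining reward whoever pays least per kilowatt-hour.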

We are slowly coming to see that blockchain is not just a means to an ethically charged end; the charge is intrinsic to the technology itself.

If we turn again to the scepticism bred by the financial crisis, we can now say that people withdrew their trust from biased humans, only to feel better placing it in the hands of a biased technological system.

And this is not limited to blockchain. The example sheds light on how our society, and the tech industry in particular, is deeply invested in the belief that technology is ethically neutral. Upholding this consensus allows those who create technology to avoid being held responsible, while reaffirming the personal autonomy of users. But, as we have seen, this view is highly problematic.


What this all means…

You’re probably thinking, ‘ok, blockchain is not a neutral intermediary… so what?’ Ultimately, this analysis gives us a framework for how we ought to view blockchain, and other technologies, when developing science policy.

Given that blockchain embodies a political system, and assuming it does indeed have Internet-like disruptive potential, the implication of leaving it unregulated is that it will change the way interpersonal transactions work, possibly to an extent where the changes become irreversible. Hence one science policy consideration, prompted by Van de Poel, is whether the development of blockchain technology should be regulated so that non-consenting individuals are not subjected to it without the possibility to ‘opt out.’

However, we shouldn’t leave this reflection at the realisation that technology is ethically non-neutral, because, don’t forget, ours is an ethically diverse society. We must therefore ask ourselves: which ethical vision should guide the design, development and deployment of potentially disruptive technologies? Indeed, as Michael Sacasas points out, this is perhaps exactly why we are “invested in the myth of technology’s neutrality in the first place”: it simplifies the reality of having to live with competing ethical principles.

Although it’s not within the scope of this post to argue which ethical standpoint is correct, or to construct a method for deciding, I have nonetheless challenged an important prevailing assumption: that of technological neutrality. My hope is that this will prompt popular discourse and science policy to appreciate that treating blockchain, and technology in general, as ethically neutral is an oversight and a simplification of reality.