Slide structure / pinpoint source code (sans pictures): http://files.plomlompom.de/vortraege/2012/2012-06-22_Munich/munich.pin
We are moving towards a post-privacy world: Privacy in the sense of personal secrecy disintegrates as ever greater parts of our life are absorbed by the internet, by the digital age and all its engines and devices. Privacy as a life securely sheltered against unsolicited outside curiosity becomes ever harder to maintain.
Many play an active, conscious part in this development. Those, for example, who upload their thoughts, activities and locations in detail into services such as Twitter and Facebook willingly expose their life, often in-depth, to commercial data harvesters, to growing lists of online contacts and often, actually, to the whole internet public. Today, almost a billion active accounts populate Facebook; they correspond to a sizeable part of the world's population, and a major share of the population of Western countries.
Those who do not pro-actively expose themselves still get tracked and analyzed in detail as they use services such as Google or Amazon.com. Their every move, click and pause is written down and added to growing mountains of data harvested by data-mining algorithms to answer all kinds of questions about the collective and the individual.
These algorithms continuously grow in intelligence, in their ability to conflate far-away data points in unexpected ways, to deduce and predict interests, behaviour and relationships. Thus, even data seemingly harmless and innocent more and more betrays hidden secrets. Any data set, no matter how harmless or reduced it may seem at first, becomes a potential source for unexpected revelations of the most critical sort -- such as the sexual orientation of individuals.
Even if individually we decided to boycott all kinds of data-collecting internet services, we would not get off the hook. As social beings, our shape is recorded in our environment, and that environment stays online. Even if we abstain from Facebook, our friends keep talking about us there, keep uploading their address books with our contact data, or group pictures that include our faces, subject to present and future face recognition algorithms. Facebook is knowledgeable not just about its members, but also about its non-members, as long as those live nearby.
The big data collectors, such as governments and internet companies, often promise some degree of privacy, of data secrecy, but they just as often prove untrustworthy in these claims: not necessarily out of conscious deception, but because of the sheer difficulty of keeping the lid on sensitive data, as numerous security breaches of immense proportions in recent years have proven; and sensitive data, once leaked onto the internet, is impossible to retract.
All of this adds up: masses of people willingly putting more and more data about themselves and their environment online; the growing number of devices and services tracking and analyzing humanity's behavior; the voluntary or involuntary openness of databases and the growing statistical and profiling knowledge contained therein; increasingly intelligent algorithms; and most of this powered by a trend toward exponentially cheaper, faster, smaller, more ubiquitous data storage and processing machinery known as Moore's Law, which has held up for decades now and will likely continue for years to come. All of this adds up -- to a growing pressure of analysis, exposure, revelation against what is not yet known about our lives, against unknowns, against secrets -- against privacy.
This post-privacy trend certainly is not greeted by everyone. But it is closely tied to a lot of what we consider good nowadays. For do we not live today in a world where ability is more and more defined by interconnectedness, by the reach of one's communication and by access to global data resources? For our self-empowerment, we expect all kinds of information to be available at our fingertips, and to be able to filter the world according to our own criteria; the other side of this bargain is that as much information as possible about everything and everyone must be widely available, and that of course includes information about ourselves.
People who see such benefits in the data deluge are often quite willing to contribute to it: the movements of "the quantified self", of "lifelogging" and "data love" attest to that. Here, people record all kinds of data about themselves and share them widely, as a sort of common public resource to be collectively scrutinized, connected and used for science and improvement in matters like health, productivity or artificial intelligence.
The success of Facebook and Twitter may be explained by how they augment basic human behaviors like sociality, communication and networking to new heights -- and thereby the human culture produced from these traits. This explains the masses behind these services much better than theories of insufficient awareness of the dangers of data sharing or of the manipulative power of data-mining businesses.
This is why fighting this development is so difficult: The impotence of regional data protection laws against a decentralized, global internet certainly is one factor; but effectively opposing, on top of that, the unleashed human desire to share, network and communicate must be considered impossible. Laws originally aimed at controlling individual corporations and bureaucracies increasingly confront the activities of many millions of users that are just as consequential. To counter these, massive interventions against the open nature of the net and the free association of individuals would seem necessary; that data protectionism seems too weak to effectively follow such a path may be considered a feature rather than a bug.
Nevertheless, legitimate worries lurk behind the fight for data protection. If it is to be endurable, a post-privacy world demands strong strategies against social intolerance and authoritarian control. Post-privacy may not offer complete solutions against these dangers, but maybe some tools that could be helpful.
Obviously, a post-privacy world needs to be more tolerant if it is not to become massively more repressive towards minorities with a different background or lifestyle than what is dominant in society. Even in quite liberal societies, privacy serves as a safe space for practices and aberrations discriminated against in public. Privacy offers safety from outside intervention in exchange for keeping the public clean from what is considered taboo or improper; certain things may be allowed only if secluded into the private sphere.
In a post-privacy world, such contracts would break down: Nothing could be hidden from the public anymore, every otherness would leak out. This would certainly provoke resistance; but it could also lead to a relaxation of standards and taboos. Society would break down if it did not find ways to accept a large part of what was previously hidden. To avoid civil wars, society would need to become more inclusive and tolerant. Forcing otherness into publicity and thereby moving society to confront and accept it does have powerful historical precedents -- such as the use of coming-out and outing strategies in the gay liberation movement.
A different tolerance strategy for a post-privacy world may involve the principle of "filter sovereignty" -- the improvement of methods to individually ignore what disturbs one's preferences. The consumption of internet media already offers strong examples of how filtering out what does not interest oneself is becoming one of the most important faculties of the digital age's citizen.
Naturally, the masses of data available on us in a post-privacy world would also be available to enemies and potential oppressors, such as authoritarian dictatorships. What separates a post-privacy world from an Orwellian surveillance state?
Actually, the surveillance state portrayed in George Orwell's "Nineteen Eighty-Four" hinges on many factors, of which Big Brother knowing a lot about his citizens is just one. In parallel to surveillance theories like Michel Foucault's Panopticism, Big Brother's power over his people also depends heavily on separating them and keeping them ignorant. If we cannot avoid becoming transparent to the eyes of wannabe Big Brothers, we must instead focus on fighting those other factors of authoritarian power to prevent its emergence. With privacy, we may lose one lever against Big Brother; all the more we need to strengthen the others.
One such lever promised by a post-privacy world is heightened "transparency" -- a key word for modern democracies even today. Visibility of power processes and their internals is a necessary ingredient of their democratic accountability. The power's surveillance must be countered by "sous-veillance" of the power by its subjects, by a symmetry of transparency bottom-up to transparency top-down. Just as the police scrutinizes us, we should be able to scrutinize the police.
A concept for a complete social symmetry of transparency is most elaborately expounded in David Brin's book "The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?" Assuming that we cannot stop surveillance technology from growing ever more available, cheap and ubiquitous, Brin advocates its total democratization. If one Big Brother develops, let his knowledge power be canceled out by everyone else becoming Big Brother as well. Brin develops different scenarios for countering increased surveillance by increased sous-veillance and for how power relationships may be influenced by that; in some of his ideas, he goes into quite some detail on how political and financial systems could find new balances under conditions of complete transparency.
In the end, Brin even finds new opportunities for personal privacy in his world of complete transparency: If everything can be seen, that "everything" includes each act of seeing. No spying is secret anymore, and every voyeur is identified. Thus, if society decides that certain acts of seeing are immoral, it is easy to catch and punish perpetrators; thus privacy conventions could be defended.
Brin's ideas heavily depend on a grand vision of ubiquitous and cheap (i.e. democratically available, broadly distributed) technology forcing down all counter-strategies for secrecy nowadays available to the powerful. In such a post-privacy world, the powerful cannot hide better than the powerless. This expectation may be too optimistic. Today, transparency is much easier forced on the powerless than on the powerful, who have more means to enforce their privacy interests. To correct this imbalance, political struggle may prove necessary; the powerful will certainly not give away transparency that is meaningful (i.e. lessens their power) as a present. Broad deployment of sous-veillance tools such as video-recording mobile phones, or future hacks of secret databases in the WikiLeaks tradition, may be seen as elements of such a political struggle.
Such are the long-term challenges on the way into a post-privacy world, and a few, probably unsatisfying, ideas on how the post-privacy trend itself may be harvested for tools to work on these challenges. But many questions raised by the post-privacy trend are already experienced today and therefore need to be addressed much more quickly. In this second part of my talk, I want to outline some short-term strategies available to us, and in use right now, to handle our post-privacy path.
Information privacy may be on the retreat, but it is still available in many contexts and often still provides effective counter-measures to certain threats. We must not confuse these islands of privacy with long-term solutions, but as long as they exist, they may be worthwhile for their tactical short-term benefits. Let us therefore explore some strategies to enable and protect such islands of privacy for as long as possible in the years to come.
The most important privacy-enabling factor may be education on where privacy still works and on what conditions it depends. No promise of absolute privacy can be taken seriously; but some promises of relative privacy may be more realistic and honest than others. To separate the valid promises from the invalid ones, a thorough understanding and experience of the technical and social circumstances of online communication and communities is necessary. Education campaigns by data protection agencies may help, as may public demonstrations of technological insecurities by hacker groups, as are regularly performed in Germany by the Chaos Computer Club.
More generally, knowledge about the internals of information technology and the way the internet works is fundamental and ought to be a major part of any school curriculum in the information age. Understanding how a service such as Facebook works can lead to innovative and highly specialized privacy strategies such as the "super log-off" recently observed in the behavior of Facebook-using teenagers. But since such platforms change quickly, in part to counter new privacy strategies that escape their grip, a general understanding of their technological background is much more useful than any set of hard-coded privacy tricks.
Anonymity on the internet is another privacy-enabling factor that deserves recognition. Traditionally, the internet has been seen as a paradise for anonymous participation and expression in the public discourse. Non-disclosure of one's personal identity, and the difficulty in nailing it down for outside observers, have long gone hand-in-hand with the internet's decentralized, open structure where it is easy to participate from anywhere and from any device, without having to pass through identity checks.
Cultures of anonymity have also long been part of the internet's maverick image and have provoked criticism for the incivility and criminality allegedly fueled by them. While data protection advocates often call for regulation of the internet's anarchy in the name of privacy, privacy through internet anonymity is itself often identified as a danger to be curbed by the law; see the German "Impressumspflicht" for websites as an example.
But law and state can also play a role helpful to privacy through anonymity on the internet. The development of anonymization technologies may be sponsored by the state, as was originally the case with the Tor project in the United States or the JAP (Java Anon Proxy) project in Germany. And law could force internet companies to offer their services not just to personally identified users, but to anonymous users, too; such a mechanism came into discussion when many users protested in 2011 against Google forcing users of its "Google+" network to identify themselves by their real name.
An often discussed potential third privacy-enabling strategy is the promotion and sponsorship of privacy-friendly alternatives to services deemed privacy-unfriendly, such as the Diaspora social networking project as an alternative to Facebook. There is certainly value in countering the massive power agglomeration of such a structure with alternatives, just as there is great value in free software alternatives to proprietary software. But on the privacy front, neither the privacy focus of the Facebook alternative Diaspora nor the state-certified privacy-friendliness of the German StudiVZ network has so far managed to attract a visible user influx from Facebook; quite the contrary: originally stronger than Facebook in Germany, StudiVZ lost its entire userbase to its rival just as it gained its privacy credentials.
Some find ways to use their own post-privacy as a protective device. A popular example is media artist Hasan Elahi who somehow got onto an American terrorist watch list and due to these suspicions was detained by the FBI. To counter future suspicions, he started to publicize his life in an extreme way, including details about all of his financial transactions and his whereabouts 24 hours a day, which are available to anyone on the web to scrutinize, including of course government officials.
Hasan Elahi's behavior could be labeled "preemptive obedience": does he not assist the state's surveillance apparatus just to prove that he is innocent, that he has nothing to hide? But self-publication is also used as a survival strategy by dissidents elsewhere. Visibility through publicity ensures that injustices are harder to perpetrate against those watched by the world. In the context of the Arab Spring, protesters and journalists have used social media to broadcast their detainment and its conditions. At the same time, the disappearance of someone who tweets regularly, at high frequency, to a large number of followers may quickly attract attention.
Sous-veillance has developed strongly since the early 1990s, when the brutal beating of Rodney King by police officers was videotaped by a civilian bystander and caused public outrage when the footage was shown on television. Nowadays, in the age of ubiquitous mobile phones capable of recording pictures and videos, large numbers of offences by state and company employees happen under the camera eyes of a growing number of "mere" citizens outside the field of professional journalism. Such recordings are shared and discussed widely on the internet, where they do not have to pass traditional gatekeepers first and can hardly be censored.
WikiLeaks provides examples for a different kind of sous-veillance: Privileged knowledge of the state (such as internal documentation of military actions, or more-or-less secret diplomatic information) is wrestled from its closed archives and made public, forcing formerly intransparent government policies and decisions into a new light of public scrutiny.
Controversies abound about the legitimacy of such acts of transparency: In Germany, police officers accused of improper conduct feel inappropriately targeted by wanted posters exposing their identities. Internationally, WikiLeaks was strongly criticized for exposing information dangerous to individuals, such as informants. Furthermore, public shaming by the techniques described nowadays targets civilians, too. A recent prominent case in Germany involved athlete Ariane Friedrich, who on Facebook outed an alleged sexual harasser of hers, an act that met both with public approval and with public condemnation of its ethical implications.
Considering Brin's idea of transparency counterbalancing improper intrusions into privacy, what's most interesting is the growing number of cases where leakers and exposers themselves become subject to public scrutiny, where leakers get leaked and exposers exposed. Famously, WikiLeaks momentarily collapsed under the weight of its own unflattering internals leaking out. Hackers united under the "Anonymous" label got into a sort of ping pong game with tech security company HBGary wherein both sides publicly exposed secrets of each other. And western technology companies well respected at home increasingly face exposure of less well respected activities as surveillance providers for dictatorships in other parts of the world.
The power of platforms such as Facebook lies in their monopoly over the data accumulated under their roof. People are forced to join Facebook and communicate under its control and surveillance because social data such as contacts and events are available only to logged-in users. If such data is available outside, Facebook can exert less pressure to conform to its rules and wishes. Broad sharing of data on the open web also devalues this data as a commercial product, as an ersatz currency dependent on its scarcity.
The shareability and public availability of data is the topic of "open data" and "data portability" initiatives. "Open data" initiatives focus mostly on the unrestricted public accessibility of science and government data, areas easily shaped by institutional decisions and laws.
"Data portability" initiatives focus on the users' ability to harvest their own data from social platforms so as to re-use it elsewhere. Such pressure has led platforms like Facebook and Google to offer ways for users to download large parts of data associated with them from the respective platforms -- although the visibility and quality of these offers remain contested. Google deserves a special mention for its "Data Liberation Front", an attractive platform solely dedicated to the question of how to harvest one's "own" data from Google's various services.
The post-privacy trend is powerful, and there is seemingly no effective long-term countermeasure to it in our option list -- or rather none that does not carry with it strong harmful side effects such as closing down the internet as we know it. This considered, we should probably make our peace with the coming of a post-privacy age. Post-privacy comes with a lot of problems and threats in tow, but also with many opportunities for good; some of these opportunities may help to counter the new problems and threats.
Long-term, post-privacy forces challenges onto society that can only be countered with initiatives towards more tolerance and more checks on power. Post-privacy elements such as heightened transparency may support these initiatives, but they will not suffice all by themselves.
Short-term, post-privacy already creates problems and is already harvested for solutions to them; but as of right now, certain privacy strategies also still work in many contexts. Among these strategies, some -- namely government regulation of online communication and data flow -- try to curb the internet's anarchy. But it is this very anarchy that makes the internet a powerful antidote against authoritarianism, and if we move towards a post-privacy world, we need every antidote against authoritarianism that we can get our hands on to avoid Orwellian nightmares. Therefore, other privacy strategies that harmonize with the internet's open and decentralized nature are to be preferred -- namely the use and promotion of cryptography and anonymization tools and of decentralized alternatives to centralized platforms like Facebook.
Michel Foucault, Surveiller et punir: Naissance de la prison, Paris: Gallimard, 1975.
David Brin, The Transparent Society: Will Technology Force Us to Choose Between Privacy and Freedom?, New York: Perseus Books, 1998.