When Bertolt Brecht returned to (East) Germany in 1949 he wrote a short poem in which he reflected upon his own situation. The immediate fight against fascism had been won, but the more daunting task still lay ahead: building a new society. Many believed that the road ahead would be clear and easy from here on, but Brecht doubted this. He writes:
When the difficulty
Of the mountains is once behind
That’s when you’ll see
The difficulty of the plains will start
Of course, historical situations are always unique, singular and ultimately incomparable. Nevertheless, I want to use Brecht’s metaphor of “the difficulties of the plains” to talk about the stage we currently find ourselves in regarding open access. We have crossed a mountain range full of difficulties, but also full of amazing adventures and fascinating prospects, and we have mastered many challenges. A lot has been achieved, and while many thought the mountains were the only obstacle we had to overcome, we now find ourselves in a plain devoid of perceivable obstacles or dangers, at least at first sight. The challenge has changed, because we now face the problems of drudgery and orientation: how can we keep going without the large and overpowering challenges? To put it in even more emphatic terms: from a heroic struggle, we have to adapt to the problems of the everyday; it is as if an action blockbuster had given way to a Berlin School reflection on the mundane. The questions to be asked now are: Which direction do we take? How can we keep going when an immediate aim is no longer visible? How can we motivate ourselves and others? I want to illustrate my point in the form of five and a half theses.
1. We ain’t achieved nothing yet!
By now, most large European scientific organisations and funding bodies have, in one way or another, implemented open access into their funding policies and operational routines. The EU Commission states on its homepage regarding its flagship programme Horizon 2020: “The global shift towards making research findings available free of charge for readers, so-called ‘Open access’, has been a core strategy in the European Commission to improve knowledge circulation and thus innovation. It is illustrated in particular by the general principle for open access to scientific publications in Horizon 2020 and the pilot for research data.” This shift in political thinking at the European level has trickled down to national funding agencies, to universities and to many individuals who have embraced the basic principles, including myself. Most disciplines have adopted open access as (at least) one way of publishing; many academics have gathered practical experience and have a basic knowledge of what OA stands for. A study conducted by the university library at the University of Utrecht in 2015/2016 found that 86.8% of scholars support the general aims of open access, 9.3% were undecided, and only 3.9% had a negative view of OA. In a way, OA has been thoroughly internalised into the academic system, and it seems as if the struggle for open access has been won across the board.
At this point in time, one might be tempted to ask: Can we all go back to our “real” duties (which are always urgent and pressing): doing research, writing papers and books, teaching seminars and giving lectures, counselling students, organising conferences and so on? I believe that this would be a serious misunderstanding, because we have not really achieved anything yet, at least not if we fail to follow up by developing our own tools and taking back power. This is not some extra work that we might do if we have a bit of spare time. It should, instead, be a central part of the scientific daily routine. Open access, open data, open science (whichever term you prefer), if understood in its full complexity, is set to restructure the entire scientific process: from the way we develop questions and gather data, through publishing and access, all the way to the long-term strategies of safeguarding and archiving sources, materials and publications. The whole cycle of knowledge production has to be integrated and restructured. Therefore, open access remains a constant task that has to be developed and adapted in relation to current tools and methods.
Another way to conceptualise the relationship would be to see open access and open science as a supplement in the sense proposed by Jacques Derrida: something, allegedly secondary, that serves as an aid to something “original” or “natural”. When we follow the endless game of references and links, we might want to reach a stable, denoted referent, but that is ultimately impossible. It is at this missing origin, at the imaginary point of stability, that the supplement appears. The supplement is an add-on and a substitute: something that completes another thing, and something that may replace it and therefore pose a potential threat. In this way, open access is a supplement to the existing scholarly ecosystem of editing, reviewing and publishing; it is meant to replace that system while also adding to it. It is both an accretion (Hinzufügung) and a substitution. Open access might thus also be a means to highlight the artificial and arbitrary nature of the publication system, which might have appeared natural and normal to many. And in this way, open access is also something that is never finished, that continues to elude and escape us, a marker which reminds us of the unfinished business of circulation and knowledge production in the academic world. Everything remains open to revision; final stability can never be achieved.
2. The Empire Strikes Back – …for the benefit of humanity?
One way to put the argument of the supplement into more concrete terms would be to look at the dynamic restructuring of the publication system. If we look around, there is a mixture of the old system which open access was meant to overcome and new players and tools trying to take advantage of the rapidly transforming ecosystem. As it appears at the moment, it is the old players that seem to profit most. Look, for example, at Elsevier, our very own behemoth: Elsevier’s profits swelled yet again, to €900 million in 2017, a profit margin of 37%, higher than that of Monsanto or Goldman Sachs. In a recent article, The Guardian put the business model of the publishing giants in provocative terms: “It is as if The New Yorker or The Economist demanded that journalists write and edit each other’s work for free, and asked the government to foot the bill. Outside observers tend to fall into a sort of stunned disbelief when describing this setup.” And this situation is far from over (the quote is from last year), so the old stakeholders will come back with a vengeance if we are not careful.
In many European countries, new open access funds were introduced in the last five to ten years because the transformation to open access was not meeting the expected goals. These well-intentioned initiatives have, in many cases, led to what is now known as double dipping: there are still overpriced subscriptions, but there are also OA funds available for those at wealthy institutions to pay for overpriced APCs. When I wrote a review (not even an article) for a Taylor & Francis journal some years ago, I was asked for more than €1000 to make it available in open access. Obviously, this has nothing to do with the real costs to the publisher, even if a creative bookkeeper can always come up with endless overheads that have to be covered. It is, rather, meant to kill two birds with one stone: if people agree to pay, it earns the company a handsome profit; at the same time, it gives the enterprise the opportunity to claim that it supports (or at least allows) open access. With approval rates of more than 90% among academics, publishers would be crazy to oppose open access. But they have to fit it into their business model, so most take the path of least resistance.
Apart from the generally problematic nature of APCs, there is another effect of openness that we might have underestimated in the past: openness is also open to businesses using the data for their own ends. Openness means transparency, and transparency, at least potentially, means control. This can be seen on both a political and an economic level. Let me turn again to Elsevier: the company no longer calls itself a publishing company but, instead, a “global information analytics business that helps institutions and professionals advance healthcare, open science and improve performance for the benefit of humanity.” And, to make things even more ironic (indeed, this is a punchline that you could not make up), the EU is in the process of implementing an Open Science Monitor, a “full-fledged monitoring system” of open access and transparent research across countries, which is being developed in conjunction with none other than Elsevier. So one of the companies that profits most from the current set-up is gathering data and preparing policy decisions in the very field where many criticise its dominance. Open access has, at least partly, created a system in which the monopolists have been able to reap even more profit than before. And inadvertently, we might help to shape an ecosystem in which science is monitored even more closely by monopolists, because output will be measurable in ever smaller units of data. Those who own and control the tools will be those who decide how to use them and who will profit from them.
Another example of how the old system has adapted to the new challenges and opportunities is the venture capital–funded platform academia.edu. Here, control of data and profit-seeking have given rise to a platform that masquerades as a scholar-friendly social network. There are more examples to give, but instead of preaching to the choir, let me move on.
3. Get out – of the Filter Bubble
Because “preaching to the choir” exemplifies one of the key problems: we, as OA advocates, are largely talking to ourselves. There is, by now, a sizable community of people active in the field of open access; we meet at conferences such as this one, but often in designated slots and under specific headings. We are not a small group, but we are also, let’s face it, not the majority. Most colleagues would say they embrace open access (and I gave you the numbers), and I believe they do in principle; but in reality, they do not change their modus operandi. They still follow an opportunistic strategy in publishing and are far from changing their practice. In fact, this is partly true for myself and, I believe, for many others. Habits are slow to change, and when you are invited to contribute to an established journal, it is hard to reject such an offer. We cannot expect individuals to radically change their behaviour within an existing system when such a change has immediate negative consequences for them.
We have to change the environment in such a way that open access becomes the normal way of doing things. And we have to change the environment in such a way that it is not the big players who are able to profit even more than before. If you go back and read what was written five or ten years ago, it was all about implementing open access and changing people’s minds. But mentalities are slow to change. Since the process was not fast enough for the ambitious aims of the large stakeholders (the European Commission, large funding agencies, governments), money was pumped into the system, and it went to those sitting at the crucial junctions. Something similar might happen again and again if we do not make ourselves heard.
4. The Paradoxical Self-Evidence of Openness
Openness appears to be a self-explanatory concept. In almost all discussions I hear about open science, open access, open source and so on, it is assumed that openness is inherently, indeed almost ontologically, a good thing. Now, I consider myself an OA advocate, and I am therefore standing here before you as a proponent of such a view. Yet we should, at least for the sake of argument, step back for a moment and reconsider whether openness is always already good, no matter what the social, political or cultural circumstances are. First of all, we have to ask ourselves what openness really means. It is a relational term that depends on perspective and context. Crucial questions have to be asked here, because very seldom are all aspects open in the same way: What exactly is open to whom, and under what circumstances? And also: Who can use the openly available materials, and to what ends? Who has the control and power (in both legal and economic terms) to use material published under the sign of open access, open data or open source? Openness and privacy (an ideal that most of us, I assume, would not want to give up lightly) are more often than not in conflict with each other, because privacy always balances access by others against control by the self.
I have found a welcome and sober antidote to the ideological positioning of openness as inherently good in the work of danah boyd. boyd has repeatedly and forcefully argued that data is never neutral, and that opening up data and making it available to third parties always has political implications and possible side effects that might not be apparent at first sight. It is not enough to give people access to data; we also need to make tools available to understand that data, and we need to construct an environment in which the circulation of data can be monitored and registered. One example that boyd gives is the open availability of information on schools, which usually has the directly observable effect of higher ethnic and social segregation, because specific groups might (and will) read this data in a certain way. Certain individuals and collectives have resources available (knowledge, or the capital to pay someone with that knowledge) that allow them to make use of data in specific ways. boyd even demonstrates how algorithms might inadvertently contribute to inequality, because the way certain elements correlate makes it likely that discriminatory factors from another level enter the equation. Therefore, data about your family or your place of residence might influence your credit score or the likelihood of getting parole, not through any conscious discrimination, but rather as a ripple effect of algorithms. This kind of algorithmic discrimination will be an important topic of discussion in the next couple of years.
5. Sustainability Means Discrimination
I still occasionally encounter colleagues who believe that something is open access simply because it can be found on the internet. Often, they have built a project website, some WordPress structure that a student assistant programmed and another one renovated after the first one had left. Some years later, the money for that specific project has run out and the research interest has shifted elsewhere; no one cares for the dilapidated structure anymore, and some browser generations later, the website will be unusable, inaccessible, or simply gone. In the old days of what McLuhan called the “Gutenberg galaxy”, books were delivered to libraries, where they could still be accessed hundreds of years later, even if no one in the meantime had cared to look at them. The sources and structures of the digital age have a radically reduced half-life. This puts much higher pressure on the infrastructure that we need to build and maintain. The speed of production and reproduction, the sheer amount of data being produced today, means that we have to discriminate between what we want to keep and what we risk losing.
I am consciously using the term discrimination here, because creating data and doing research always means making distinctions, drawing a line, identifying something as meaningful (and, by implication, something else as not meaningful). We discriminate, too, when we decide to archive something, to build structures that are meant to keep and safeguard material for a longer period of time. Most of us still grew up in McLuhan’s “Gutenberg galaxy”; it has a five-hundred-year history, and we did not need to think much about how it functioned. We did research and wrote, we handed our work in, we got a review, and if we were good (or lucky) enough, we got published. There were specific roles in the process: advisors and editors, publishers and reviewers, printers and librarians, a whole system constructed towards quality control (discrimination of the first order) and long-term preservation (discrimination of the second order). Currently, we are beginning to understand what the shape of a new system might look like, or at least what the stakes are: it is an accelerated system in which the temporal cycle of discovery, publication and discussion has been radically shortened; this might not be visible to the same degree in the humanities and social sciences as in the natural and life sciences, but it is beginning to be felt there as well. At the same time, large private companies are aggressively entering parts of this system with the aim of making a profit. While profit has always been part of the system, it has now taken on a very different function.
Discrimination is a key concept that is built into the nature of information. Whenever we create data that has a structure that is machine-readable, we make specific kinds of distinctions. This is again danah boyd on data analysis and discrimination: “discrimination as a concept has mathematical and economic roots that are core to data analysis. The practices of data cleaning, clustering data, running statistical correlations, etc., are practices of using information to discern between one set of information and another. They are a form of mathematical discrimination. The big question presented by data practices is: Who gets to choose what is acceptable discrimination? Who gets to choose what values and trade-offs are given priority?” And this is, again, why I believe we have to obtain a certain degree of data literacy, because it is only if we understand the tools we are using that we also understand what forms of discrimination they entail.
6. From Collecting to Curating
If we look at the natural sciences, we can see that the line between what counts as data and what counts as a publication is increasingly blurry. The difference between research data and a journal article is currently a hot topic of discussion, just as research data management as a strategic field has taken the place of OA in the minds of big funding organisations. The large grants and strategic attention that were devoted to open access ten years ago are now geared towards research data management. Of course, this dynamic movement (just like open access fifteen years ago) originates with the STEM crowd, but it will inevitably reach and transform the humanities and social sciences as well. The larger and more established disciplines in our field, such as history, art history, philosophy or literary studies, will have the reputation and the power to eventually build their own platforms or participate in larger infrastructures. If media studies wants to be more than an appendix to one of those disciplines, we have to move fast and decisively, because our only reasonable option is to be attractive as an innovative pathfinder and as an experimental field. If we do not react at all, the bandwagon will pull ahead without stopping. No one is waiting for media studies to get moving. Therefore, we need to build our own infrastructure, which is actually much more fun than some of you might believe.
Let me sidetrack a little to tell you what I actually do in a project that has only recently made its public appearance. In September, we launched a repository, MediaRep, with funding from the German Research Foundation (DFG). Because funding is still largely a national concern, we have started in the first phase with mostly German-language sources. But the larger idea for the future is an infrastructure for the sustainable archiving and publication of research within the larger field of film and media studies, regardless of language or origin. We try to be as inclusive as possible, but we also have to make distinctions as to what belongs to media studies and what does not. It is naive to assume that collecting is some natural flow of things and that collections have a systematic logic beyond individual decision-making processes. We should be aware of the fact that archival collections, be they analog or digital, are always curated. And in this sense, I consider MediaRep to be a curated collection as well; but we want to make the decisions and the process visible to users. I believe that openness in this sense means more than online collections without paywalls; it also requires a transparent way of making decisions.
I hope that I have been able to show you some of the challenges and dangers that I see as the “difficulties of the plains”. The difficulties of the mountains belong to the past ten years: we just had to rally behind the term open access and convince people of its value. This mission has been accomplished, but now we have to do many different tasks at once. We need to understand, and make understandable to others, that publishing, editing and reviewing are ethical decisions and that our actions have consequences for a larger field. We need to be aware of the ripple effects and collateral damage of specific actions in specific situations, and even of the consequences of a lack of action. Maybe this could be a topic for professional associations such as NECS: to formulate best-practice models for publishing.
We need to talk to librarians and funders, to presidents and politicians; we need to make our voice heard beyond our immediate circle of friends and supporters. We need to build alliances in order to be able to formulate and pursue long-term goals. We need to construct infrastructure in the sense that journals and libraries, repositories and social networking platforms are infrastructure. We need to run and oversee this crucial infrastructure. And we need to understand that all these practices are not inherently good just because we mean well; we therefore need to constantly monitor their effects, because in a complex and dynamic system effects can never be fully predicted.