Open Access and the Issue of Quality: Beall or not to Beall

As in every field where money can be made, open access publishing has its share of pirates. In the past 10 to 15 years, many parties have abused the open access publishing model – in which the author pays for the publication of an article – for the sole purpose of making money. These ‘publishers’ charge an Article Publication Charge (APC) for vague platforms or journals with a questionable, or often non-existent, reputation.

I’ve written about Jeffrey Beall’s list before in Dutch here, and I wasn’t planning to write about his endeavors again, since they have been discussed extensively in the last few years. It’s a blacklist: for some it works as a point of reference, for others it merely represents the biased view of one person, Jeffrey Beall himself.

But because the issue of academic quality in relation to open access almost always pops up in discussions about scholarly publishing, I thought it might still serve the media studies community to give some more context to this debate.

Since 2008, Jeffrey Beall, librarian at the Auraria Library, University of Colorado Denver, has been engaging with the quality issues of open access journals. In 2010 he began to draw up a list (Beall’s List) of so-called “predatory journals”.[1] Publishers and journals labeled ‘predatory’ are often identified by aggressive marketing strategies: spam mailings and calls to submit an article to a journal that, on critical inspection, turns out to be no more than a hollow vessel. Beall’s List has built up considerable authority in recent years, and to some extent rightly so.

But since the list attracted a lot of attention, both positive and negative, over the last eight years, and gained a certain prestige as the one and only curated list of ‘malpractice’ in the landscape of open access journals, it is newsworthy that only two days ago Beall’s List suddenly went offline and all activity related to his blog seems to have been erased from the internet (if that is ever really possible). So far it is unclear why this happened, and one can only speculate about the reasons behind it.

Beall was one of the first to systematically map the abuses in open access journal publishing. He also formulated criteria by which he assessed journals and publishers. These criteria were published online, and up to last week he regularly updated them. Beall writes in this document (latest version 2015):

“Evaluating scholarly open-access publishers is a process that includes closely, cautiously, thoroughly, and at times skeptically examining the publisher’s content, practices, and websites: contacting the publisher if necessary, reading statements from the publisher’s authors about their experiences with the publisher, and determining whether the publisher commits any of the following practices (below) that are known to be committed by predatory publishers, examining any additional credible evidence about the publisher, compiling very important “back-channel” feedback from scholarly authors, and taking into account counter-feedback from the publishers themselves.”[2]

But criticism of his work has grown in the past two or three years. For example, it was not always clear why a publisher or journal ended up on his list. Beall almost always made such decisions by himself, without a clear and visible procedure that the outside world could judge. Transparency is an obvious problem here; it is exactly what he denounces in the quote above.

It turns out that there is almost never ‘back-channel’ feedback, or if there is, it is rarely made public.

Take, for example, the addition of the publisher Frontiers to Beall’s List in October 2015.


A huge online discussion started over the rationale behind this decision. Many academics (some of whom were involved as editors at Frontiers at the time) objected to it.[3] It was said to be too one-sided, to place too much emphasis on what was going wrong in the editorial processes, and so on. Beall had issued a warning back in 2013, but it was quite brief; still, with that brief announcement his action did not come completely out of nowhere.[4] Despite the comments in favor of Frontiers, the publisher has since taken steps to improve certain editorial processes. Beall’s action has therefore indeed led to action and reaction, whether justified or not.

One thing is clear: there is a lot of controversy around the list itself. A formal complaint has been submitted by a publisher that felt it was unlawfully added to Beall’s List.[5] Recently, some have even suggested that Beall asked for money to carry out an assessment that would get a journal or publisher off the list.[6] It is difficult to determine the authenticity of such an allegation, and we should therefore be careful with it. In the last few years, some fanatics have waged a crusade against everything Jeffrey Beall does,[7] even bullying him. Perhaps this, as we have seen just this week, has led to the end of his blog? In any case, such practices contribute nothing to the search for a workable (global) system in which we can separate the bad apples from the healthy fruit.

We must not forget that even in the ‘old’ world of traditional (closed access) publishing, there were (and still are) rotten apples in the fruit basket. For several years, Beall increasingly positioned himself in the camp opposed to open access. That in itself is not wrong, but because there has always been a lack of transparency, he undermined his (hopefully) good intentions.

So he, like anyone else who claims authority, needed to be held under a magnifying glass constantly. However, it does not make sense to sideline his work on the list completely. Despite his sometimes very negative statements about open access, he ensured that we maintain a critical stance towards the quality of open access publishing, and more specifically the revenue models that come with it. The fact that the focus is currently on pay-to-publish (APC-driven) models means we need to consider carefully how to ensure quality without letting financial incentives prevail. Moreover, the financial incentive in the old subscription model has always been very decisive as well. It is not for nothing that the big publishers continue to defend ‘their’ citation indexes and impact factors tooth and nail…

Are there other ways to assess the quality of open access journals and publishers? Yes, there are. In recent years, a number of national and international initiatives have been developed to assess the quality of open access journals (and publishers).

First, the Directory of Open Access Journals (DOAJ) has greatly tightened its admission requirements over the past two years. A journal must meet a set of requirements to be admitted to the index. If a journal is indexed in the DOAJ, one can assume that it is a decent journal and that it meets the technical (infrastructure, storage, distribution) requirements and industry standards. How editorial boards are formed, and whether the output is relevant to the field, remains a matter for scholars themselves.
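For those who want to check programmatically whether a title is indexed, DOAJ also offers a public search API (documented at https://doaj.org/api). The snippet below is a minimal sketch, assuming a journal search endpoint that returns JSON with a `results` list and `bibjson` records; check the current API documentation for the exact version and response format.

```python
import json
import urllib.parse
import urllib.request


def is_in_doaj(journal_title: str) -> bool:
    """Rough check of whether a journal title appears in the DOAJ index.

    Assumes the public DOAJ search API at /api/search/journals/<query>,
    returning JSON with a 'results' list (see https://doaj.org/api).
    """
    query = urllib.parse.quote(journal_title)
    url = f"https://doaj.org/api/search/journals/{query}"
    with urllib.request.urlopen(url) as response:
        data = json.load(response)

    # Compare titles case-insensitively; in the API responses I have seen,
    # the journal title sits under bibjson -> title.
    for record in data.get("results", []):
        title = record.get("bibjson", {}).get("title", "")
        if title.lower() == journal_title.lower():
            return True
    return False


if __name__ == "__main__":
    print(is_in_doaj("Internet Policy Review"))
```

Such a check only tells you whether a journal passed DOAJ's admission criteria; it says nothing about the relevance of its output to your field.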

In the Netherlands, the Quality Open Access Market (QOAM) website launched in 2014. By inviting the academic community to assess open access journals (via so-called journal scorecards), it aims for a balanced review coming from the community itself. In addition to the reviews, the website mentions whether an APC applies and, if so, what costs are involved, so something can be said about the value for money of these open access journals.

QOAM makes use of the metadata database of the DOAJ. It depends heavily on contributions from the academic world: scholars are asked to crowdsource the database. So far, it seems, there is not enough critical mass of reviews to provide a complete overview.

The Think. Check. Submit. website was presented in September 2015 and is supported by the Open Access Scholarly Publishers Association (OASPA). The aim of this website is to create awareness among scholars about open access journals and their quality. What are the important aspects that reflect quality? What matters for proper distribution of your publication? And does the journal do what it promises? By answering a series of specific questions, a researcher can evaluate an open access journal.

DOAJ has been around for a long time now. The last two websites mentioned above are fairly new initiatives, but one can also search existing systems such as Scopus and the Web of Science. Open access journals have been indexed in those systems for a number of years now. These journals must meet high standards and do not simply appear in these indices.

So, to conclude, there are many roads leading to Rome when it comes to assessing whether a journal has quality or not. Many university pages and open access advocacy websites worldwide mention Beall’s List as the place to separate the wheat from the chaff, with the advice to avoid (most of) the listed open access journals or publishers. But we must realize that this is only advice. For example, the publisher Multidisciplinary Digital Publishing Institute (MDPI) was placed on Beall’s List in 2014, yet several Dutch universities have an open access publishing deal with MDPI anyway.[8]

As a result of this week’s closure of Beall’s blog (an archived version of the page can still be accessed), those university pages will need to be updated soon, I guess.

By Jeroen Sondervan

Update January 19, 7:32pm: I just came across this post in Inside Higher Ed, which reports that Jeffrey Beall declined to comment on the removal of his website.

Update September 12, 2:25pm: Here is an article by Jeffrey Beall in which he, among other things, explains his decision to discontinue Beall’s List: doi: 10.11613/BM.2017.029

Notes

[1] Beall, Jeffrey “Predatory” Open-Access Scholarly Publishers. The Charleston Advisor, 2010, vol. 11, n. 4, pp. 10-17. http://hdl.handle.net/10760/14576

[2] Criteria for Determining Predatory Open-Access Publishers, p. 1, Jeffrey Beall 3rd edition / January 1, 2015. https://scholarlyoa.files.wordpress.com/2015/01/criteria-2015.pdf. (accessed January 19, 2017 – still available).

[3] See for example: https://forbetterscience.wordpress.com/2015/10/28/is-frontiers-a-potential-predatory-publisher/ and https://forbetterscience.wordpress.com/2016/01/07/frontiers-christmas-carol/.

[4] Beall, ‘I get complaints about Frontiers’. https://scholarlyoa.com/2013/11/05/i-get-complaints-about-frontiers/ (checked 29 April 2016 – still looking for an archived version).

[5] http://www.npr.org/sections/thetwo-way/2013/05/15/184233141/publisher-threatens-librarian-with-1-billion-lawsuit (accessed: January 19 2017).

[6] Open Access Publishing – USD 5000 is enough to remove your publisher’s name from Beall’s list (accessed January 19,  2017) https://sfoap.wordpress.com/2016/03/21/open-access-publishing-usd-5000-is-enough-to-remove-your-publishers-name-from-bealls-list/

[7] For example: http://www.scholarlyoa.net/ (accessed January 12 2017).

[8] For a more extensive comment about the addition of MDPI: https://en.wikipedia.org/wiki/MDPI#Inclusion_in_Beall.27s_list (accessed May 2nd 2016).

Image credit: Designed by Starline – Freepik.com

 

Open Science: which tools are you using?

Writing, researching, publishing: it’s all part of the larger scholarly communication cycle. Open Access to publications is part of a larger movement: the transition towards Open Science. On the web portal of FOSTER (Facilitate Open Science Training for European Research, a two-year EU-FP7 project that aims to produce a Europe-wide training programme helping academics, librarians and other stakeholders to incorporate Open Access approaches into their existing research methodologies), which can be used as a learning tool to train stakeholders on the topics of Open Access and Open Science, the following definition of Open Science can be found:

“Open Science is the practice of science in such a way that others can collaborate and contribute, where research data, lab notes and other research processes are freely available, under terms that enable reuse, redistribution and reproduction of the research and its underlying data and methods.”[1]

More and more, the debate on Open Access and access to research data is shifting towards the larger discussion of how we can move to an open and transparent scholarly communication system. The main idea behind the Open Science movement is that it makes science more reproducible and transparent, and above all that it increases the impact of research on science and on society at large. This also implies that the software and tools used for research, writing and publishing should preferably be freely available or developed as open source, in order to ensure this reproducibility as much as possible.

In the research and writing phase, scholars use many specific tools. Colleagues at Utrecht University Library, Bianca Kramer and Jeroen Bosman, started their 101 Innovations in Scholarly Communication project in 2015 and ran a survey amongst more than 20,000 scholars worldwide. The landscape of scholarly communication is constantly changing, driven by technology, policies, and culture. But in the end it is the researchers themselves who use tools and software to produce science, and who constantly adapt to new standards. Kramer and Bosman started the survey in order to create an overview of all the tools used for research, writing and publishing. The survey ran from May 10, 2015 to February 10, 2016.

What is really interesting are the results (data, publications, scripts, etc.), which have been widely disseminated through different channels. The one I find particularly great is the dashboard that has been created from the available survey data. In this dashboard you can play around with the data and see which tools are used for specific activities in the scholarly communication cycle.

Here are just a few examples:

Reading: http://dashboard101innovations.silk.co/page/Read

Writing: http://dashboard101innovations.silk.co/page/Write

Archive/Share publications: http://dashboard101innovations.silk.co/page/Archive-share-publications

Outreach: http://dashboard101innovations.silk.co/page/Outreach

And there is much more to explore in the available datasets and visualizations. You can still look at the survey questions here: https://101innovations.files.wordpress.com/2016/02/101-innovations-survey-english.pdf
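If you prefer to explore the published survey data yourself rather than through the dashboard, a few lines of Python will do. The sketch below is only an illustration: the file name and the column names (`research_phase`, `tool`) are placeholders for whatever the released dataset actually uses, so check the dataset documentation before running it.

```python
import pandas as pd

# Load the released survey data (hypothetical file name; replace with the
# actual export from the 101 Innovations dataset).
df = pd.read_csv("101innovations_survey_responses.csv")

# Hypothetical columns: one row per tool mention, with the phase of the
# scholarly communication cycle and the tool that was named.
tool_counts = (
    df.groupby(["research_phase", "tool"])
      .size()
      .reset_index(name="mentions")
      .sort_values(["research_phase", "mentions"], ascending=[True, False])
)

# Show the five most-mentioned tools per phase (e.g. the top writing tools).
print(tool_counts.groupby("research_phase").head(5))
```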

If you want to share any information about specific tools you use in your daily media research practices, I’m curious to hear about it. You can leave a message using the box below.

Notes

[1] https://www.fosteropenscience.eu/foster-taxonomy/open-science-definition

Copyright notice: Scholarly Communication image published under CC-BY: Bianca Kramer, Jeroen Bosman.