Behind the scenes of “Data sharing in PLOS ONE: An analysis of Data Availability Statements”

Recently some colleagues and I published a paper in PLOS ONE in which we analyzed about 47,000 Data Availability Statements as a way of exploring the state of data sharing in a journal with a pretty strong data availability policy. The paper has gotten a good response, from what I’ve seen on Twitter, and I’m really happy with how it turned out, thanks in part to some great feedback from the reviewers. But I also wanted to share a few things about how this paper came about – the things that don’t make it into the final scholarly article. A behind-the-scenes look, if you will.

The idea for this paper arose out of a somewhat eye-opening experience. I needed to get hold of a good dataset – I forget why exactly, but I think it was when I was first starting to teach R and wanted some real data that I could use in the classes for the hands-on exercises. Remembering that PLOS had this data availability policy, I thought to myself, ah, no problem, I will find an article that looks relevant to the researchers I’m teaching, download the data, and use it in my demo (with proper attribution and credit, of course). So I found an article that looked good and scrolled down to the Data Availability Statement.  Data available upon request.  Huh. I thought you weren’t allowed to say that, but okay, I guess this one slipped through the policy.  Found another one – data is within the paper, it said, except the only data in the paper were summary tables, which were of no use to me (nor would they be of use to anyone hoping to verify the study or reanalyze the data).

What a weird fluke, I thought, that the first two papers I happened to look at didn’t really follow the policy. So I checked a third, and a fourth. Pretty soon I’d spent half an hour combing through recent PLOS articles and had yet to find one with a publicly available dataset that I could easily download from a repository. I ended up looking elsewhere for data (did you know that baseball fans keep surprisingly in-depth data on just about everything?), but I was left wondering what the real impact of this policy was, which is why I decided to do this study.

I’ll let you read the paper to find out exactly what we found, but there’s one other behind-the-scenes anecdote about this paper that I hope will be encouraging. Obviously, if you’re going to write critically about data availability, you’re going to look a little hypocritical if you don’t share your own data. I fully intended to share our data and planned to do so using Figshare, which is how I’d shared the dataset associated with an earlier paper I’d published in PLOS. When I shared the data from that first article, I set it to be public immediately, though I didn’t expect anyone to want to see it before the paper was out. Unbeknownst to me, someone at Figshare apparently thought this was an interesting dataset and decided to tweet it out the same day I submitted the paper to PLOS – well before it was ever accepted, much less published.

While the interest in the dataset was encouraging, I was also concerned that it was out before the paper was accepted. I figured I was flattering myself to think that someone would want to scoop me, but then I got an email from someone I didn’t know, telling me that she had found my dataset and would like to write an article describing my results – and would I mind sharing my literature review and citations with her, to save her the trouble? In other words, “hi, I would like to write basically the paper that you’re trying to get accepted, using all of the work you did.” I want to be clear that I am all for data sharing, but this situation bothered me. Was I about to get scooped?

Obviously our paper came out, no one beat us to it, and as far as I know, no one has ever written another paper using that dataset, but I was thinking about all this when I was uploading the data for this most recent paper.  This dataset was far more interesting and broadly applicable than the first one, so what if someone did get hold of it before our paper came out? What I decided to do was upload it to Figshare and have Figshare generate a DOI, but keep the dataset private rather than publicly releasing it. Our data availability statement included the DOI and was therefore, on the surface, in compliance, but I had a feeling that if you went to the DOI, it would tell you that the dataset was private or couldn’t be found. Obviously I could have checked this before I submitted, but to be totally honest, I just left it as it was because I was genuinely curious whether any of the reviewers would try to check it themselves and say something.
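(For what it’s worth, actually checking whether a DOI resolves only takes a couple of lines of R. Here’s a minimal sketch, assuming the httr package is installed; the DOI below is a made-up placeholder, not our dataset’s actual DOI:

    library(httr)

    # Ask doi.org to resolve the DOI; httr follows the redirect to the
    # repository's landing page by default.
    resp <- GET("https://doi.org/10.6084/m9.figshare.0000000")  # placeholder DOI

    # A 200 status means the record resolves publicly; a 404 (or an error
    # page from the repository) suggests it's private or doesn't exist.
    status_code(resp)

Not exactly heavy lifting on my part, had I bothered to do it.)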

To their credit, all three of the reviewers (who, by the way, were incredibly helpful and gave the most useful feedback I’ve ever gotten in peer review, which I think significantly improved the paper) did indeed point out that the DOI didn’t work. In our revisions, our Data Availability Statement included a working link to not only the data but also the code, on OSF. I invite anyone who is interested to reuse them, and I hope someone finds them useful. (Please don’t judge me on the quality of my code, though – I wrote it a long time ago when I was first learning R, and I would do it much better now.)


If data sharing is difficult, what can it tell us? An Actor-Network Theory approach

In my ongoing adventures in science and technology studies reading, this week I’ve been reading The Social Construction of Technological Systems.  It diverges a little from my interests, strictly speaking, focusing more on the development of technologies than on the laboratory and clinical science I’m interested in, but I’m still glad I read it, because it sparked some thoughts and ideas that I think could be interesting to pursue.

The portions of the collection that I read were rooted in social constructivist theory (as you might guess from the title of the book), specifically Actor-Network Theory (ANT).  The preface to the 25th anniversary edition explores some new developments in the field since the original edition, including “posthuman” approaches that consider nonhuman actants within social systems (xxv).  Scientific researchers operate within a complex system – not only because scientific research is itself often complicated, but also because science happens within a social system involving things like grant funding and scholarly articles and citations and so on.  Data play important roles in that system: as the raw product of scientific research, as evidence for scientific claims, and, now that many researchers operate in fields where data sharing is increasingly expected, as something of a commodity.  In ANT, actants can be nonhuman, so I think it would be reasonable to consider data an actant in the social network of scientific research – and potentially one of the more interesting parts of that network, even more so than the humans.

The other avenue this collection sent my mind down had to do with data repositories.  At the start of the chapter “Society in the Making: The Study of Technology as a Tool for Sociological Analysis,” Michel Callon argues that “the study of technology itself can be transformed into a sociological tool of analysis” (77).  His thesis, in essence, is that technological systems are created by what he calls “engineer-sociologists” – the designers or creators of the technology, who have had to transform themselves into sociologists and study the intended users in order to develop technologies that will meet their needs.  If this is true, then these new technologies should be able to tell us something about their intended users.

This chapter got me thinking about some of the systems that are in place for data sharing, like some of the major data repositories.  I won’t name names, but there are a couple of very well-known data repositories that people often complain to me about when it comes to submitting their data.  Researchers in some labs have mentioned that they have one person who knows how to submit the data, and everyone else has to bug that person because they can’t figure out how to do it properly.  I’ve read some of the help documentation for these repositories, and those people weren’t complaining for nothing.  Many of these systems are a real pain – opaque in their requirements and onerous to use – yet many researchers are specifically required to deposit their data there by grant or journal mandates.

So if we take Callon’s approach and view the system as a tool for sociological analysis, what does it say about the state of data sharing that some of these repositories are so difficult to use?  I can think of a few possibilities:

  • that the engineers haven’t really been in close contact with the users, so they’ve built a system that doesn’t actually meet those users’ needs;
  • that the needs of the system administrators (good quality data with a minimal amount of effort on their part) are directly at odds with the needs of the data submitters (also a minimal amount of effort on their part) and the administrators’ needs won out;
  • that the engineers are aware of the issues but just don’t have the money, time, or resources to make the system easier to use.

Another possibility is that sharing data isn’t really much of a priority for most researchers, so they go along with a hard-to-use system because it’s not worth the trouble of trying to get it changed.  It’s sort of like how I find it a huge pain to deal with the DMV, but I only have to go there once every few years, so I’m not about to start a huge campaign to reform the DMV, especially when there are bigger problems our elected officials should be dealing with.  Maybe sharing your data in some of these systems is like that – an annoyance you put up with because you have to.

This is all entirely speculation on my part, but I do think it’s an interesting approach to take.  I’d love to sit down with some of the people who built, or who currently run, some of these systems and hear the story of why things are the way they are.