My probably not-very-good R 23andme data analysis

My mom got the whole family 23andme kits for Christmas this year, and I’ve been looking forward to getting the results mostly so I could play with the raw data in R. It finally came in, so I went back to a blog post I’d read about analyzing 23andme data using a Bioconductor package called gwascat. It’s a really good blog post, but as it happens, the package has been updated since it was written, so the code, sadly, didn’t work. Since I of course have VAST knowledge of bioinformatics (by which I mean I’ve hung around a lot of bioinformaticians and talked to them and kind of know a few things but not really) and am super awesome at R (by which I mean I’m like moderately okay at it), I decided to try my hand at coming up with my own analysis, based on the original blog post.

Let me be incredibly clear – I have only a vague notion of what I’m doing here, so you should not take any of what I say to be, you know, like, necessarily fact. In fact, I would love for a real bioinformatician to read this and point out my errors so I can fix them. That said, here is what I did, and what you can do too!

To walk you through it first in plain English – what you get in your raw data from 23andme is a list of RSIDs, which are accession numbers for SNPs, or single nucleotide polymorphisms. At a given position in your genetic sequence, for example, you may have an A, which means you’ll have brown hair, as opposed to a G, which means you’ll have blonde hair. Of course, it’s a lot more complicated than that, but the basic idea is that you can link traits to SNPs.

So the task that needs to be done here is two-fold. First, I need to get a list of SNPs with their strongest risk allele – in other words, what SNP location am I looking for, and which nucleotide is the one that’s associated with higher risk. Then, I need to match this up with my own list of SNPs and find the ones where both of my nucleotides are the risk allele. Here’s how I did it!

First I’m going to load my data and the packages we’ll need.

library(gwascat)  # Bioconductor package with the GWAS catalog data
library(stringr)  # for str_sub(), used later to grab the risk allele
# 23andme raw files start with commented (#) header lines, which read.table
# skips by default, so read the data itself with header = FALSE
d <- read.table(file = "file_name.txt",
 sep="\t", header=FALSE,
 colClasses=c("character", "character", "numeric", "character"),
 col.names=c("rsid", "chrom", "position", "genotype"))

Next I need to pull in the data about the SNPs from gwascat. I can use this to match up the RSIDs in my data with the ones in their data. I’m also going to drop some other columns I’m not interested in at the moment.

data(gwdf_2014_09_08)  # the catalog data frame that ships with gwascat (skip if already loaded)
trait_list <- merge(gwdf_2014_09_08, d, by.x = "SNPs", by.y = "rsid")
trait_list <- trait_list[, c(1, 9, 10, 22, 27, 37)]  # keep just the columns I want
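
(A side note: those numeric column positions come from the version of the catalog I happen to have, and they could easily shift the next time gwascat gets updated. If you’d rather not play that game, you can pick the columns by name instead. The names below are the ones in my copy of gwdf_2014_09_08, so treat them as placeholders and check names(trait_list) on your version first.)

# select columns by name rather than position; adjust to match names(trait_list)
keep <- c("Disease/Trait", "Strongest SNP-Risk Allele", "Risk Allele Frequency",
 "p-Value", "genotype")
trait_list <- trait_list[, intersect(keep, names(trait_list))]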

Now I want to find out where I have the risk allele. This is where this analysis gets potentially stupid. The risk allele is stored in the gwascat dataset with its RSID plus the allele, such as rs1423096-G. My understanding is that if you have two Gs at that position (remembering that you get one copy from each of your parents), then you’re at higher risk. So I want to create a new column in my merged dataset that has the risk allele printed twice, so that I can just compare it to the column with my data, and only have it show me the ones where I have two copies of the risk allele (since I don’t want to dig through all 10,000+ SNPs to find the ones of interest).

# take the last character of the risk allele and double it, so rs1423096-G becomes "GG"
trait_list$badsnp <- paste(str_sub(trait_list$`Strongest SNP-Risk Allele`, -1), str_sub(trait_list$`Strongest SNP-Risk Allele`, -1), sep = "")

Okay, almost there! Now let’s remove all the stuff that’s not interesting and just keep the ones where I have two copies of the risk allele. I could also have it remove ones that don’t match my ancestry (European) or gender, but I’m not going to bother with it, since keeping the ones where I have a match gives me a reasonable amount to scroll through.

# keep only the rows where my genotype is two copies of the risk allele
my_risks <- trait_list[trait_list$genotype == trait_list$badsnp,]
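
(And if, like me, you’d rather skim the interesting columns than scroll through the whole data frame, something like this works. Again, the column names other than genotype come from my copy of the catalog, so check names(my_risks) before trusting them.)

# peek at the trait, risk allele, and my genotype for the first 20 matches
head(my_risks[, c("Disease/Trait", "Strongest SNP-Risk Allele", "genotype")], 20)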

And there you have it! I don’t know how meaningful this analysis really is, but according to this, some traits that I have are higher educational attainment (true), persistent temperament (sure, I suppose so), migraines (true), and I care about the environment (true). Also I could be at higher risk of having a psychotic episode if I’m on methamphetamine, so I’ll probably skip trying that (which was actually my plan anyway). Anyway, it’s kind of entertaining to look at, and I’m finding SNPedia is useful for learning more.

So, now, bring on the bioinformaticians telling me how incorrect all of this is. I will eagerly make corrections to anything I am wrong about!

Living my best academic life: 2018 resolutions for getting that PhD done

I didn’t feel very optimistic going into 2017. I had recently lost my father and grandfather in the same week, and I was feeling anxious and depressed about what seemed like a pretty disastrous outcome to the 2016 elections. I don’t think I made any resolutions that year because I was so disheartened by the whole situation that I figured, who cares? My focus in 2017 was basically, do what it takes to get through it, eat some good food and drink some good wine because possibly the world will end pretty soon, etc.

But I feel different going into 2018, more motivated and invigorated. Yeah, 2017 was pretty shitty in some ways, but there were also some good things about it, actually some really great things! I know it’s very silly, but it also feels like there’s something to wiping the slate clean and starting over. At this point, I’ve worked out 100% of the days in 2018! I’ve eaten healthy, and put my shoes away and all those other things I aim to do EVERY SINGLE DAY OF THIS YEAR.

More importantly to my motivation, there’s a chance that this year could be when I finish my PhD, if I can manage to do my dissertation work in three semesters (i.e. 12 months). Maybe this is a ridiculous goal, but I’m kind of a ridiculous person, and it sure would be nice to finish. To that end, I’ve decided to make my goal for this year to live my best academic life. What does that mean?

  • read (something academic, that is) every day. My former advisor, who I still keep in touch with on Twitter, very usefully recommended #365papers – i.e. read a scholarly paper every day of the year. I probably need to read around that much for my dissertation anyway, and I do also have a huge backlog of interesting articles I’ve filed away to read “one day.” So far I’m one down, 364 to go! (But again, I’ve read an academic paper 100% of the days this year)
  • write every day. It doesn’t have to be a lot. A blog post (this counts for today!), a bit of a paper, part of my dissertation, something for work, even an academic-related tweet. I know that doing a dissertation will involve way more writing than I’d been doing for the other parts of my PhD work, so I want to get into the habit now.
  • keep working on open science. I’m finally getting to the point in my coding skills that I don’t feel horrendously embarrassed for other people to see my code, but I still often think, eh, who’s going to want to see this? That’s totally the wrong idea, especially for someone whose scholarly research focuses on data sharing and reuse! I’m going to try to make a lot more commits to GitHub, even if it’s just silly stuff that I’m working on for my own entertainment, because who knows how someone else might find it useful.

So there you go! I’ll be tweeting out the papers I read on my Twitter account (@lisafederer) using #365papers, putting stuff up on my GitHub account, and I’ll probably (hopefully?) be writing more here, so watch this space!

A Method to the Madness: Choosing a Dissertation Methodology (#Quant4Life)

Somehow, shockingly, I’ve arrived at the point where I’m just a few months from finishing my coursework for my doctoral program (okay, 50 days, but who’s counting?), which means that next semester, I get down to the business of starting my dissertation. One of the interesting things about being in a highly interdisciplinary program like mine is that your dissertation research can be a lot of things.  It can be qualitative, it can be quantitative. It can be rigorously scientific and data-driven or it can be squishy and social science-y (perhaps I’m betraying some of my biases here in these descriptions).

As if it weren’t enough that I had so many endless options available to me, this semester I’m taking two classes that couldn’t be more different in terms of methodology.  One is a data collection class from the Survey Methodology department.  We complete homework assignments in which we calculate response and cooperation rates for surveys, determine dispositions for 20 different categories of response/non-response/deferral, and decide which response and cooperation rate formula is most appropriate for a given sample.  My other class is a qualitative methods class in the communications department.  On the first day of that class, I uncomfortably took down the notes “qual methods: implies multiple truths, not one TRUTH – people have different meaning.”

I count myself lucky to be in a discipline in which I have so many methodological tools in my belt, rather than having to rely on one method to answer all my questions.  But then again, how do I choose which tool to pull out of the belt when faced with a problem, like having to write a dissertation?

I came into my doctoral program with a pretty clear idea of the problem I wanted to address – assessing the value of shared data and somehow quantifying reuse. I envisioned my solution involving some sort of machine learning algorithm that would try to predict usefulness of datasets (because HOW COOL WOULD THAT BE?).  Then, halfway through the program, my awesome advisor moved to a new university, and I moved to a new advisor who was equally awesome but seemed to have much more of a qualitative approach.  I got very excited about these methods, which were really new to me, and started applying them to a new problem that was also very close to my heart – scientific hackathons, which I’ve been closely involved with for several years.  This kind of project would necessitate an almost entirely qualitative approach – I’d be doing ethnographic observation, in-depth interviews, and so on.

So now, here I find myself 50 days away from the big choice. What’s my dissertation topic?  The thing I like to keep in mind is that this doesn’t necessarily mean ALL that much in the long run.  This isn’t the sum of my life’s work.  It’s one of many large research projects I’ll undertake.  Still, I want it to be something that’s meaningful and worthwhile and personally rewarding.  And perhaps most importantly of all, I want to use a methodology that makes me feel comfortable.  Do I want to talk to people about their truth?  I’ve learned some unexpected things using those methodologies and I’m glad I’ve learned something about how to do that kind of research, but in the end, I don’t think I want to be a qual researcher.  I want numbers, data, hard facts.

I guess I really knew this was what I would end up deciding in the second or third week of my qual methods class.  The professor asked a question about how one might interpret some type of qualitative data, and I answered with a response along the lines of “well, you could verify the responses by cross-checking against existing, verified datasets of a similar population.”  She gave me a very odd look, and paused, seemingly uncertain how to respond to this strange alien in her class, and then responded, “You ARE very quantitative, aren’t you?”


A constructionist in an objectivist world

I was having a conversation with a scientist friend today about my doctoral research and explaining how part of my dissertation would involve grounding my research in a particular theoretical stance.  He gave me a very perplexed look, which is not really a surprising reaction from someone who is not a social scientist.  I’m sure it sounds very odd to some people to think that one’s findings and data are subject to interpretation based on one’s particular theoretical bent, but I do think it makes some sense when you’re dealing with social phenomena.

Anyway, I had written a blog post for one of my first semester doctoral classes that very much addresses this topic, so I’ve decided to repost it here.  This blog was a response to one of our class readings, so it makes some assumptions about shared vocabulary – in case you’re not familiar, here are some terms you’ll need to understand this:

  • epistemology: an understanding of knowledge and how it is constructed, which informs a theoretical basis, which in turn informs research methodologies and therefore specific methods.  It’s sort of the scaffold on which knowledge is built
  • objectivism: an epistemology that suggests that meaning and knowledge exist apart from human interpretations of things.  In other words, there is one objective Truth that can be discovered.
  • constructionism: an epistemology that suggests that meaning is constructed by human interpretation, in which case there may be multiple valid ways of constructing meaning from the same observations.

So with that in mind, here we go!

As I read Crotty’s description of epistemologies in his Introduction to The Foundations of Social Research, I naturally found myself wondering which epistemological stance most closely fit my natural approach to research and knowledge. On the one hand, objectivism deeply appeals to the scientist in me. Not only that, but I also spend my days surrounded by objectivist research in my work at the NIH. In biomedical research, there’s no room for constructing meaning about research. Either a drug works or it doesn’t; a bacterium is present in the culture or it isn’t; a reaction occurs or it doesn’t. Granted, there are plenty of ways to “game the system” to ensure you get your desired answer out of your data – outcome switching and p-hacking come to mind – but by and large, the professional world I currently inhabit is pretty strictly objectivist.

Nonetheless, I think that constructionism might be a better fit epistemologically speaking when it comes to my own research. The phenomena I’m interested in are highly social – how do researchers’ communities of practice, ways of constructing knowledge, and attitudes and experiences shape the ways that they interact with data, specifically in terms of sharing and reusing data? In fact, I argue that the social factors are possibly the most important barriers to the problem of data sharing and reuse. While it’s true that some technological barriers exist in this problem, at least work is already being done to address some of those problems. On the other hand, the social issues are much harder to quantify and therefore harder to address. I think when it comes to questions like the ones I’m asking, it’s really difficult to talk about objective approaches, so I find myself feeling drawn to constructionism.

So what’s a constructionist to do in an objectivist world? I guess partly I’m trying to situate myself within my appropriate research community and figure out how to interact with other research communities that take different epistemological approaches. Biomedical researchers may not be my research “tribe” insofar as our methods and epistemologies are concerned, but I’m keenly interested in how my research will eventually be perceived by this community because I expect my work will have implications for them, such as in terms of how they train their next generation of researchers to interact with data successfully, how they incentivize sharing and reward certain types of academic labor and research work, and even how they approach their work of data gathering at a very fundamental level.

I was on a conference call at work last week with a group that is charged with evaluating usability of a data catalog. It was easy to tell which of the participants were scientists by training. They were essentially trying to brainstorm how they could come up with a randomized controlled trial and determine some sort of gold standard for data discovery that they could use to compare to the new data catalog. It was obvious from the discussion that they were uncomfortable with the thought of employing what they considered “non-scientific” methods and skeptical about what kind of meaningful results could arise from qualitative approaches. They wanted hard, numeric data, and other types of evidence were not part of their approach.

Moving forward, I will be interested in seeing how I can reconcile the qualitative methods I may use and the constructionist approach I may adopt with the objectivist epistemologies prevalent in the biomedical research community.

Some real talk from a very tired PhD student

This post is going to be different from what I normally write.  It’s going to seem a little bleak for a while, but stick with me, because it’s going to have a happy ending.

You know the way that some girls dream of their wedding day for their whole lives? That’s kind of like me, except instead of getting married, it was getting my PhD (I know, I was a weird kid). Starting almost 15 years ago when I was an adjunct professor, and continuing to this day, people sometimes send me emails that begin “Dear Dr. Federer,” and I think, not yet, but one day.

Eventually that day did come, and I got into this PhD program, working on a topic I’m really fascinated by and I think is pretty timely and relevant.  It was great.  There was the one little catch that I also had a full-time job that I love and a lease on an apartment that was well beyond grad student means, but I’m a pretty motivated person and I figured I could handle working full-time and doing the PhD program part-time.

This plan went fine the first semester.  So fine that I figured, well, why not just go ahead and do a third class in the spring?  Being a full-time PhD student with a high-pressure, full-time job?  Sure!  WHY NOT.  The semester is halfway through now, and I’m not dead yet. So this weekend, when I was looking at the PhD student handbook and I realized that after this semester, I’ll need just 4 more classes to complete my coursework, a cockamamie plan popped into my head.  I had this little conversation with myself:

evil Lisa: what if you did all four classes over the summer?
regular Lisa: I don’t know, while working full-time? That sounds like a bit much.
evil Lisa: but then once you’re done you could advance to candidacy.  Maybe you could finish the whole thing in two years!  I bet no one has ever even done that!
regular Lisa: but this sounds like torture
evil Lisa: why don’t you at least check the summer schedule and see if there are any interesting courses?
regular Lisa: hmm, well, some of these do look pretty good.  And they’re online.  Maybe it wouldn’t be so bad.

And I did.

To my credit, a part of me knew this plan was not my greatest idea, so today, when I had a meeting with a potential new advisor, since my advisor is leaving for a new position, I said, “I had this idea, but I think it might be a little crazy,” and I told her and she looked at me very patiently, the way you look at a person who has lost all touch with reality and said, “yes, that’s crazy.”

After that conversation, I came back to the graduate student lounge to wait for my class to start, and I looked at the draft of a paper I’m working on, I looked at my slides for a presentation I’m giving in class this afternoon, I looked at my Outlook calendar for work, and I hated all of it.  The presentation looked like garbage and the paper seemed to be going nowhere.  I’d spent hours working on this paper, and it really had seemed like an interesting idea at the time, but now it seemed like a completely pointless waste of time.  The more I thought about data sharing and reuse, the more I hated it.

How could this be?  I love data!  I could talk about data all day!  How could it be that I suddenly hated data?  That was when I realized that I’ve been going about this all wrong and my ridiculous approach was actually ruining the entire experience.  It’s like if you love ice cream and you have a gallon and you try to just devour the entire thing in one sitting.  Of course it would be a horrible experience.  You’d be sick and you’d hate yourself, and you’d definitely hate ice cream.  On the other hand, if you had a little bit of the ice cream over several days, you’d enjoy it a lot more.

I have this instinct from my days of long-distance running: when I’m many miles in and tired, and I want to slow down, that’s when I push myself to run even faster.  The slower I run, the longer it’ll take me to finish, but if I just run as hard as I can, the run will be over sooner.  I’m not sure about the validity of this approach from a distance running perspective, but I think it’s fair to say it’s a completely stupid idea when it comes to a PhD.

People warned me when I started this program that everyone gets burned out at some point, and I thought, not me, I love my topic, there’s no way I could ever get tired of it.  That’s why it was especially confusing when I sat there looking at my paper draft yesterday and just hating the guts out of data sharing and reuse.  Fortunately, I don’t hate data.  I hate torturing myself.

So, that’s why I’m not going to!  Could I take four courses over the summer?  I suppose.  Could I finish a PhD in two years while working full-time? I guess it’s possible.  But what would be the point, if I emerged from the process angry and tired and hating data?  Time to slow down and enjoy the ride, and de-register for at least two of those summer classes.

Scientific “artifacts” – #overlyhonestmethods and indirect observation

This week I’ve been reading the first half of Bruno Latour and Steve Woolgar’s book Laboratory Life: The Construction of Scientific Facts.  Like many of the other pieces I’ve been reading lately, this book argues for a social constructivist theory of scientific knowledge, which is a perspective I’m really starting to identify with.  What I’m finding most interesting about this book is the ethnographic approach that was taken to observe the creation of scientific knowledge.  Basically, Bruno Latour spent two years observing in a biology lab at the Salk Institute.  Chapter 1 begins with a snippet of a transcript covering about 5 minutes of activity in a lab – all the little seemingly insignificant bits of conversation and activity that, taken together, would allow an outside observer to understand how scientific knowledge is socially constructed.

The authors emphasize that real sociological understanding of science can only come from an outside observer, someone who is not themselves too caught up in the science – someone who can’t see the forest for the trees, as it were.  They even suggest that it’s important to “make the activities of the laboratory seem as strange as possible in order not to take too much for granted” (30).  Why should we need someone to spend two years in a lab watching research happen when the researchers are going to be writing up their methods and results in an article anyway, you may ask?  The authors argue that “printed scientific communications systematically misrepresent the activity that gives rise to published reports” and even “systematically conceal the nature of the activity” (28).  In my experience, I would agree that this is true – a great example of it is #overlyhonestmethods, my absolute favorite Twitter hashtag of all time, in which scientists reveal the dirty secrets that don’t make it into the Nature article.

I’ve been thinking that an ethnographic approach might be an effective way to approach my research, and I’m thinking it makes even more sense after what I’ve read of this book so far.  However, this research was done in the 1970s, when research was a lot different.  Of course there are still clinical and bench researchers who are doing actual physical things that a person can observe, but a lot of research, especially the research I’m interested in, is more about digital data that’s already collected.  If I wanted to observe someone doing the kind of research I’m interested in, it would likely involve me sitting there and staring at them just doing stuff on a computer for 8 hours a day.  So I’m not sure if a traditional ethnographic approach is really workable for what I want to do.  Plus, I don’t think I’d get anyone to agree to let me observe them.  I know I certainly wouldn’t let someone just sit there and watch me work on my computer for a whole day, let alone two years (mostly because I’d be embarrassed for anyone else to know how much time I spend looking at pictures of dogs wearing top hats and videos of baby sloths).  Even if I could get someone to agree to that, I do wonder about the problem of observer effect – that the act of someone observing the phenomenon will substantively change that phenomenon (like how I probably wouldn’t take a break from writing this post to watch this video of a porcupine adorably nomming pumpkins if someone was observing me).

This thought takes me back to something I’ve been thinking about a lot lately, which is figuring out methods of indirect observation of researchers’ data reuse practices.  I’m very interested in exploring these sorts of methods because I feel like I’ll get better and more accurate results that way.  I don’t particularly like survey research for a lot of reasons: it’s hard to get people to fill out your survey, sometimes they answer in ways that don’t really give you the information you need, and you’re sort of limited in what kind of information you can get from them.  I like interviews and focus groups even less, for many of the same reasons.  Participant observation and ethnographic approaches have the problems I’ve discussed above.  So what I think I’m really interested in doing is exploring the “artifacts” of scientific research – the data, the articles, the repositories, the funny Twitter hashtags.  This idea sort of builds upon the concept I discussed in my blog last week – how systems can be studied and can tell us something about their intended users.  I think this approach could yield some really interesting insights, and I’m curious to see what kind of “artifacts” I’ll be able to locate and use.

If data sharing is difficult, what can it tell us? An Actor-Network Theory approach

In my ongoing adventures in science and technology studies readings, this week I’ve been reading The Social Construction of Technological Systems.  It diverges a little bit from my interests, strictly speaking, focusing more on the development of technologies than on the laboratory and clinical science I’m interested in, but I’m still glad I read it because it sparked some thoughts and ideas that I think could be interesting to pursue.

The portions of the collection that I read were rooted in social constructivist theory (as you might guess from the title of the book), specifically Actor-Network Theory (ANT).  The preface to the 25th anniversary edition explores some new developments in the field since the original edition, including “posthuman” approaches that consider nonhuman actants within social systems (xxv).  Scientific researchers operate within a complex system – not only because scientific research is itself often complicated, but also because science happens within a social system involving things like grant funding and scholarly articles and citations and so on.  Data play important roles in that system, as the raw product of scientific research, as evidence for scientific claims, and, now that many researchers operate in fields where data sharing is becoming more expected, as something of a commodity.  In ANT, actants can be nonhuman, so I think it would be reasonable to consider data an actant in the social network of scientific research, and potentially one of the more interesting parts of that network, even more so than the humans.

The other avenue this collection sent my mind down had to do with data repositories.  At the start of the chapter “Society in the Making: The Study of Technology as a Tool for Sociological Analysis,” Michael Callon argues that “the study of technology itself can be transformed into a sociological tool of analysis” (77).  To summarize his thesis, essentially he argues that technological systems are created by what he calls “engineer-sociologists,” the designers or creators of the technology, who have had to essentially transform themselves into sociologists to study the intended users in order to develop technologies that will meet their needs.  If this is true, then these new technologies should be able to tell us something about their intended users.

This chapter got me thinking about some of the systems that are in place for data sharing, like some of the major data repositories.  I won’t name any names, but there are a couple of very well-known data repositories that people often complain to me about when it comes to submitting their data.  In some labs, researchers have mentioned that they have one person who knows how to submit the data, and they all have to bug that person because they can’t figure out how to do it properly.  I’ve read some of the help documentation for some of these repositories, and those people weren’t complaining for nothing.  Many of these systems are a big pain – opaque in many of their requirements and onerous to use, yet many researchers are specifically required to put their data there because of grant or journal requirements.

So if we take Callon’s approach and view the system as a tool for sociological analysis, what does it say about the state of data sharing that some of these repositories are so difficult to use?  I can think of a few possibilities:

  • that the engineers haven’t really been in all that close of contact with the users, so they’ve built a system that doesn’t actually meet their users’ needs;
  • that the needs of the system administrators (good quality data with a minimal amount of effort on their part) are directly at odds with the needs of the data submitters (also a minimal amount of effort on their part) and the administrators’ needs won out;
  • that the engineers are aware of issues but there just isn’t money/time/resources to make the system easier to use.

Another possibility is that sharing data isn’t really that much of a priority for most researchers, so they go along with a hard-to-use system because it’s not worth the trouble to try to get it to change.  It’s sort of like how I feel like it’s really a huge pain to have to deal with the DMV, but I only have to go there once every few years, so I’m not about to start a huge campaign to reform the DMV, especially when there are bigger problems our elected officials should be dealing with.  Maybe sharing your data in some of these systems is like that – an annoyance you deal with because you have to.

This is all entirely speculation on my part, but I do think it’s an interesting approach to take.  It would be interesting to sit down with some of the people who built or who currently run some of these systems and get the story on why things are the way they are.

Flip flop: my failed experiment with flipped classroom R instruction

I don’t know if this terminology is common outside of library circles, but it seems like the “flipped classroom” has been all the rage in library instruction lately.  The idea is that learners do some work before coming to the session (like read something or watch a video lecture), and then the in-person time is spent on doing more activities, group exercises, etc.  As someone who is always keen to try something new and exciting, I decided to see what would happen if I tried out the flipped classroom model for my R classes.

Actually, teaching R this way makes a lot of sense.  Especially if you don’t have any experience, there’s a lot of baseline knowledge you need before you can really do anything interesting.  You’ve got to learn a lot of terminology, how the syntax of R works, boring things like what a data frame is and why it matters.  That could easily be covered before class to save the in-person time for the more hands-on aspects.  I’ve also noticed a lot of variability in terms of how much people know coming into classes.  Some people are pretty tech savvy when they arrive, maybe even have some experience with another programming language.  Other people have difficulty understanding how to open a file.  It’s hard to figure out how to pace a class when you’ve got people from all over that spectrum of expertise.  On the other hand, curriculum planning would be much easier if you could know that everyone is starting out with a certain set of knowledge and build off of it.

The other reason I wanted to try this is just the time factor.  I’m busy, really busy.  My library’s training room is also hard to book because we offer so many classes.  The people I teach are busy.  I teach my basic introduction to R course as a 3-hour session, and though I’d really rather make it 4 hours, even finding a 3-hour window when I and the room are both available and people are likely to be able to attend is difficult.  Plus, it would be nice if there was some way to deliver this instruction that wasn’t so time-intensive for me.  I love teaching R – it’s probably my favorite thing I do in my job and I’d estimate I’ve taught close to 500 researchers how to code.  I generally spend around 9 hours a month teaching R, plus another 4-6 hours doing prep, administrative stuff, and all the other things that have to get done to make a class function.  That’s a lot of time, and though I don’t at all mind doing it, I’d definitely be interested in any sort of way I could streamline that work without having a negative impact on the experience of learning R from me.

For all these reasons, I decided to experiment with trying out a flipped classroom model for my introduction to R class.  I had grand plans of making a series of short video tutorials that covered bite-sized pieces of learning R.  There would be a bunch of them, but they’d be about 5 minutes each.  I arranged for the library to get Adobe Captivate, which is very cool video tutorial software, and these tutorials are going to be so awesome when I get around to making them.  However, I had already scheduled the class for today, February 28, and I hadn’t gotten around to making them yet.  Fortunately, I had a recording of a previous Intro to R class I’d taught, so I chopped the relevant parts of that up into smaller pieces and made a YouTube playlist that served as my pre-class work for this session, probably about two and a half hours total.

I had 42 people either signed up or on the waitlist at the end of last week.  I think I made the class description pretty clear – that this session was only an hour, but you did have to do stuff before you got there.  I sent out an email with the link to the videos reminding people that they would be lost in class if they didn’t watch this stuff.  Even so, yesterday morning, the last of the videos had only 8 views, and I knew at least two of those were from me checking the video to make sure it worked.  So I sent out another email, once again imploring them to watch the videos before they came to class and to please cancel their registration and sign up for a regular R class if this video thing wasn’t for them.

By the time I taught the class this afternoon, 20 people had canceled their registration.  Of the remaining 22, 5 showed up.  Of the 5 that showed up, it quickly became apparent to me that none of them had watched the videos.  I knew no one was going to answer honestly if I asked who had watched them, so I started by telling them to read in the CSV file to a data frame.  This request is pretty fundamental, and also pretty much the first thing I covered in the videos, so when I was met with a lot of blank stares, I knew this experiment had pretty much failed.  I did my best to cover what I could in an hour, but that’s not much, so instead of this being a cool, interactive class where people ended up feeling empowered and ready to go write code, I got the feeling those people left feeling bewildered and like they wasted an hour.  One guy who had come in 10 minutes late came up to me after class and was like, “so this is a programming language?  What can you do with it?”  And I kind of looked at him like….whaaaat?  It turned out he hadn’t even registered for the class to begin with, much less done any of the pre-class work – he had been in the library and saw me teaching and apparently thought it looked interesting so he decided to wander in.
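
(In case you’re wondering, the warm-up task really was as basic as it sounds, something along these lines, with the file name just standing in for whatever sample data I hand out in class.)

# the warm-up exercise: read a CSV into a data frame and take a quick look at it
mydata <- read.csv("sample_data.csv", stringsAsFactors = FALSE)
str(mydata)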

I felt disappointed by this failed experiment, but I’m not one to give up at the first sign of failure, so I’ve been thinking about how I could make this system work.  It could just be that this model is not suited to people in the setting where I teach.  I am similar to them – a busy, working professional who knows this is useful and I should learn it, but it’s hard to find the time – and I think about what it would take for me to do the pre-class work.  If I had the time and the videos were decent enough quality, I think I’d do it, but honestly chances are 50-50 that I’d be able to find the time.  So maybe this model just isn’t made for my community.

Before I give up on this experiment entirely, though, I’d love to hear from anyone who has tried this kind of approach for adult learners.  Did it work, did it not?  What went well and what didn’t?  And of course, being the data queen that I am, I intend to collect some data.  I’m working on a modified class evaluation to get some feedback on the pre-class work model from those 5 brave souls who did come, and I’m also planning on sending a survey out to the other 38 people who didn’t come to see what I can find out from them.  Data to the rescue of the flipped class!

Delocalizing data – a data sharing conundrum

This week I’ve been reading the second half of Sergio Sismondo’s An Introduction to Science and Technology Studies and I have been finding myself interested in the question of the universality of scientific knowledge and data.  A single sentence that I think captures the scope of the problem I’m finding interesting: “scientific and engineering research is textured at the local level, that it is shaped by professional cultures and interplays of interests, and that its claims and products result from thoroughly social processes” (168).  That is to say, the output of a scientific experiment is not some sort of universal truth – rather, data are the record of a manipulation of nature at a given time in a given place by a given person, highly contextualized and far from universally applicable.

I was in my kitchen the other day, baking a mushroom pot pie, after reading Chapter 10, specifically the section on “Tinkering, Skills, and Tacit Knowledge.”  That section describes the difficulties researchers were having in recreating a certain type of laser, even when they had written documentation from the original creators, even when they had sufficient technical expertise to do so, even when they had all the proper tools – in fact, even when they themselves had already built one, they found it difficult to build a second laser.  As I was pulling my pie out of the oven, I was thinking about the tacit knowledge involved in baking – how I know what exactly is meant when the instructions say I should bake till the crust is “golden brown,” how I make the decision to use fresh thyme instead of the chipotle peppers the recipe called for because I don’t like too much heat, how I know that my oven tends to run a little cold so I should set the temperature 10 degrees higher than called for by the recipe.  Just having a recipe isn’t enough to get a really tasty mushroom pot pie out of the oven, just as having a research article or other scientific documentation isn’t enough to get success out of an experiment.

These problems raise some obvious issues around reproducibility, which is a huge focus of concern in science at the moment.  Scientific instruments are hopefully a little more standardized than my old apartment oven that runs cold, but you’d be surprised how much variation exists in scientific research.  Reproducibility is especially a problem when the researcher is herself the instrument, such as in the case of certain types of qualitative research.  Focus group or interview research is usually conducted using a script, so theoretically anyone could pick up the script and use it to do an interview, but a highly experienced researcher knows how to go off-script in appropriate ways to get the needed information, asking probing questions or guiding a participant back from a tangent.

More relevant to my own research: if we think of data not as representations of some sort of universal truth, but as the results of an experiment conducted within a potentially complex local and social context, can shared data be meaningfully reused?  How do we filter out the noise and get to some sort of ground truth when it comes to data, or can we at all?  Part of the question that I really want to address in my dissertation is what barriers exist to reusing shared data, and I think this is a huge one.  Some of the problem can be addressed by standards, or “formal objectivity” (140).  However, as Sismondo notes, standards are themselves localized and tied to social processes.  Between different scientific fields, the same data point may be measured using vastly different techniques, and within a lab, the equipment you purchase often has a huge impact on how your data are collected and stored.  Maybe we can standardize to an extent within certain communities of practice, but can we really hope to get everyone in the world on one page when it comes to standards?

If we can’t standardize, then maybe we can at least document.  If I measured in inches but your analysis needs length input in centimeters, that’s okay, as long as you know I measured in inches and you convert the data before doing your analysis.  That seems fairly obvious, but how do I necessarily know what I need to document to fully contextualize the data for someone else to use it?  Is it important that I took the measurement on a Tuesday at 4 pm, that the temperature outside was 80 degrees with 70% humidity, that I used a ruler rather than a tape measure, that the ruler was made of plastic rather than wood?  I could go on and on.  How much documentation is enough, and who decides?

The concepts of reproducibility, standardization, and documentation are nothing new, but the idea of data being inextricably caught up in local and social contexts does get me thinking about the feasibility of reusing shared data.  I don’t think data sharing is going to stop – there are enough funders and journals on board with requiring data sharing that I think researchers should expect that data sharing will be part of their scientific work going forward.  The question then is what is the utility of this shared data.  Is it just useful for transparency of the published articles, to document and prove the claims made in those publications?  Or can we figure out ways to surmount data’s limited context and make it more broadly usable in other settings?  Are there certain fields that are more likely to achieve that formal objectivity than others, and therefore certain fields where data reuse may be more appropriate or at least easier than others?  I think this requires further thought.  Good thing I have a few years to spend thinking about it!


Who owns science? Some musings on structural functionalism

This week I’ve been reading Sergio Sismondo’s An Introduction to Science and Technology Studies, which has given me a lot to think about in terms of theoretical backgrounds for understanding how science creates knowledge.  In fact, it’s almost given me too much to think about.  There are so many different theoretical bases brought into the mix here, and I can see the relative merits of each, so I find myself wondering how to make sense of it all, but also what it means to adopt a theoretical underpinning as a social scientist.  Is it like a religion, where you accept one and only one dogma, and all parts of it, to the exclusion of all others?  Or is it more like a buffet, where you pick a little bit of the things that seem appealing to you and leave behind the things that don’t catch your eye?  I’m hoping it’s the latter, and I’m going to go on that assumption until the theory police tell me I can’t do it. 🙂  So, on that assumption, here are some ideas I’ve put on my plate from Sismondo’s buffet.

Structural Functionalism and Mertonian Norms

My favorite theoretical framework I picked up here was structural functionalism, and in particular, Robert Merton’s four guiding norms.  Structural functionalism, as I understand it, argues that society is composed of institutional structures that function based on guiding norms and customs.  Merton suggests that science is one such institution, the primary goal of which is “the extension of certified knowledge” (23).  Merton also outlined four norms of behavior that guide scientific practice, suggesting that those who follow them will be rewarded and those who violate them will be punished.  The norms are universalism (that the same criteria should be used to evaluate scientific claims regardless of the race, gender, etc of the person making them), communism (that scientific knowledge belongs to everyone), disinterestedness (that scientists place the good of the scientific community ahead of their own personal gain), and organized skepticism (that the community should not believe new ideas until they have been convincingly proven).

Of those four norms, communism and disinterestedness speak the most to my interest in data sharing and reuse.  Communism seems the most obviously related.  It’s very interesting to think about what parts of science are typically thought to belong to the community and which are thought to be privately owned.  For example, the Supreme Court unanimously ruled in 2013 that human genes could not be patented, a decision that seems in line with Merton’s communism norm.  On the other hand, plenty of scientific ideas can be and are patented.  While many scientific journals are becoming open access and making their articles freely available, many more work on a subscription model, suggesting that the ideas shared within are available for common consumption – if you are willing and able to pay the fee.

Although this example comes from an entirely different realm than science, thinking about these ideas has reminded me of the case of the artist Anish Kapoor, who purchased the exclusive rights to paint with the world’s “blackest black” so that no other artist can use it.  In retaliation, another artist designed the “pinkest pink” paint and made it available for sale – to any artist except Anish Kapoor. While this episode is somewhat entertaining, it does bring up some interesting ideas about ownership in communities that are generally dedicated to the common good.  Art and science are very different, but they’re also quite alike in some ways that are very relevant to the work I’m doing.  They’re both activities carried out by individuals for their own reasons (artistic expression, scientific curiosity) for the common good (to share beauty with the world, to further scientific knowledge).  We are outraged when we hear of a rich artist laying exclusive claim to the raw materials of art so that no one else can use them.  It feels somehow petty, and it also seems like a disservice to not just the art world, but to all of us.  What could others be creating for us if they had access to that black?  I don’t know if we feel that same outrage when we hear of a scientist trying to lay exclusive claim to data.  Of course this isn’t a perfect analogy – a big part of the work of science is gathering or creating the data, which confuses the concept of ownership.  Still, I think there are some interesting ideas here to explore about how scientists think about common ownership of science – not just the ideas, but the data as well.

I started out this entry saying I was going to dip into some other theories – I have some things to say about social constructionism and actor-network theory, but now I’ve spent a long time going on and on about art and science and this is getting a bit long, so I think I’ll stop here for today. 🙂