A Method to the Madness: Choosing a Dissertation Methodology (#Quant4Life)

Somehow, shockingly, I’ve arrived at the point where I’m just a few months from finishing my coursework for my doctoral program (okay, 50 days, but who’s counting?), which means that next semester, I get down to the business of starting my dissertation. One of the interesting things about being in a highly interdisciplinary program like mine is that your dissertation research can be a lot of things.  It can be qualitative or quantitative. It can be rigorously scientific and data-driven, or it can be squishy and social science-y (perhaps I’m betraying some of my biases here in these descriptions).

As if it weren’t enough that I had so many options available to me, this semester I’m taking two classes that couldn’t be more different in terms of methodology.  One is a data collection class from the Survey Methodology department.  We complete homework assignments in which we calculate response and cooperation rates for surveys, determine dispositions across 20 different categories of response/non-response/deferral, and decide which response and cooperation rate formula is most appropriate for a given sample.  My other class is a qualitative methods class in the communications department.  On the first day of that class, I uncomfortably took down the notes “qual methods: implies multiple truths, not one TRUTH – people have different meaning.”
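For the curious, the response-rate homework I’m describing boils down to formulas along the lines of AAPOR’s Response Rate 1 and Cooperation Rate 1. Here’s a rough Python sketch; the disposition counts are entirely made up, and I’m simplifying the real AAPOR definitions considerably:

```python
# Simplified sketch of AAPOR-style survey outcome rates.
# All counts below are hypothetical.

def response_rate_1(I, P, R, NC, O, UH, UO):
    """RR1: complete interviews over all eligible plus unknown-eligibility cases.
    I=complete interviews, P=partials, R=refusals, NC=non-contacts,
    O=other non-response, UH/UO=unknown-eligibility cases."""
    return I / (I + P + R + NC + O + UH + UO)

def cooperation_rate_1(I, P, R, O):
    """COOP1: complete interviews over all cases actually contacted."""
    return I / (I + P + R + O)

# Hypothetical dispositions for a 1,000-case sample
print(round(response_rate_1(600, 50, 150, 100, 25, 40, 35), 3))  # → 0.6
print(round(cooperation_rate_1(600, 50, 150, 25), 3))            # → 0.727
```

Which formula is “most appropriate” depends on how you treat partials and unknown-eligibility cases, which is exactly what the homework makes you reason through.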

I count myself lucky to be in a discipline in which I have so many methodological tools in my belt, rather than having to rely on one method to answer all my questions.  But then again, how do I choose which tool to pull out of the belt when faced with a problem, like having to write a dissertation?

I came into my doctoral program with a pretty clear idea of the problem I wanted to address – assessing the value of shared data and somehow quantifying reuse. I envisioned my solution involving some sort of machine learning algorithm that would try to predict the usefulness of datasets (because HOW COOL WOULD THAT BE?).  Then, halfway through the program, my awesome advisor moved to a new university, and I moved to a new advisor who was equally awesome but seemed to have much more of a qualitative approach.  I got very excited about these methods, which were really new to me, and started applying them to a new problem that was also very close to my heart – scientific hackathons, which I’ve been closely involved with for several years.  This new project would necessitate an almost entirely qualitative approach – I’d be doing ethnographic observation, in-depth interviews, and so on.

So now, here I find myself 50 days away from the big choice. What’s my dissertation topic?  The thing I like to keep in mind is that this doesn’t necessarily mean ALL that much in the long run.  This isn’t the sum of my life’s work.  It’s one of many large research projects I’ll undertake.  Still, I want it to be something that’s meaningful and worthwhile and personally rewarding.  And perhaps most importantly of all, I want to use a methodology that makes me feel comfortable.  Do I want to talk to people about their truth?  I’ve learned some unexpected things using those methodologies and I’m glad I’ve learned something about how to do that kind of research, but in the end, I don’t think I want to be a qual researcher.  I want numbers, data, hard facts.

I guess I really knew this was what I would end up deciding in the second or third week of my qual methods class.  The professor asked a question about how one might interpret some type of qualitative data, and I answered with a response along the lines of “well, you could verify the responses by cross-checking against existing, verified datasets of a similar population.”  She gave me a very odd look, and paused, seemingly uncertain how to respond to this strange alien in her class, and then responded, “You ARE very quantitative, aren’t you?”

#Quant4Life

Can you hack it? On librarian-ing at hackathons

I had the great pleasure of spending the last few days working on a team at the latest NCBI hackathon.  I think this is the sixth hackathon I’ve been involved in, but it’s the first time I’ve actually been a participant, i.e. a “hacker.”  Prior to working on these events, I’d heard a little bit about hackathons, mostly in the context of competitive hackathons – a bunch of teams compete against each other to find the “best” solution to some common problem, usually with the winning team receiving some sort of cash prize.  This approach can lead to successful and innovative solutions to problems in a short time frame.  However, the so-called NCBI-style hackathons that I’ve been involved in over the last couple of years involve multiple teams each working on their own individual challenge over a period of three days.  There are no winners, but in my experience, everyone walks away having accomplished something, and some very promising software products have come out of these hackathons.  For more specifics about the how and why of this kind of hackathon, check out the article I co-authored with several participants and the mastermind behind the hackathons, Ben Busby of NCBI.

As I said, this was the first hackathon in which I’ve actually been involved as a participant on a team, but I’ve had a lot of fun doing some librarian-y type “consulting” for five other hackathons before this, and it’s an experience I can highly recommend for any information professional who is interested in seeing science happen in real time.  There’s something very exciting about watching groups of people from different backgrounds, with different expertise, most of whom have never met each other before, get together on a Monday morning with nothing but an often very vague idea, and end up on Wednesday afternoon with working software that solves a real and significant biomedical research problem.  Not only that, but most of the groups manage to get pretty far along on writing a draft of a paper by that time, and several have gone on to publish those papers, with more on their way out (see the F1000Research Hackathons channel for some good examples).

As motivated and talented as all these hackathon participants are, as you can imagine, it takes a lot of organizational effort and background work to make something like this successful.  A lot of that work needs to be done by someone with a lot of scientific and computing expertise.  However, if you are a librarian who is reading this, I’m here to tell you that there are some really exciting opportunities to be involved with a hackathon, even if you are completely clueless when it comes to writing code.  In the past five hackathons, I’ve sort of functioned as an embedded informationist/librarian, doing things like:

  • basic lit searching for paper introductions and generally locating background information.  These aren’t formal papers that require an extensive or systematic lit review, but it’s useful for a paper to provide some context for why the problem is significant.  The hackers have a ton of work to fit into three days, so it’s silly to have them spend their limited time on lit searching when a pro librarian can jump in and likely use their expertise to find things more easily anyway.
  • manuscript editing and scholarly communication advice.  Anyone who has worked with co-authors knows that it takes some work to make the paper sound cohesive, and not like five or six people’s papers smushed together.  Having someone like a librarian with editing experience can help make that happen.  Plus, many librarians have relevant expertise in scholarly publishing, which is especially useful since hackathon participants are often students and early-career researchers who haven’t had much experience with submitting manuscripts.  They can benefit from advice on things like citation management and handling the submission process.  Also, I am a strong believer in having a knowledgeable non-expert read any paper, not just hackathon papers.  Often writers (and I absolutely include myself here) are so deeply immersed in their own work that they make generous assumptions about what readers will know about the topic.  It can be helpful to have someone who hasn’t been involved with the project from the start take a look at the manuscript and point out where additional background or explanation would improve general understandability.
  • consulting on information seeking behavior and giving user feedback.  Most of the hackathons I’ve worked on have had teams made up of all different types of people – biologists, programmers, sys admins, other types of scientists.  They are all highly experienced and brilliant people, but most have a particular perspective related to their specific subject area, whereas librarians often have a broader perspective based on our interactions with lots of people from various subject areas.  I often find myself thinking of how other researchers I’ve met might use a tool in other ways, potentially ones the tool’s creators didn’t intend.  Also, at least at the hackathons I’ve been at, some of the tools have definite use cases for librarians – for example, tools that involve novel ways of searching or visualizing MeSH terms or PubMed results.  Having a librarian on hand to give feedback about how the tool will work can be useful for teams with that kind of scope.

I think librarians can bring a lot to hackathons, and I’d encourage all hackathon organizers to think about engaging librarians in the process early on.  But it’s not a one-way street – there’s a lot for librarians to gain from getting involved in a hackathon, even tangentially.  For one thing, seeing a project go from idea to reality in three days is interesting and informative.  When I first started working with hackathons, I didn’t have that much coding experience, and I certainly had no idea how software was actually developed.  Even just hanging around hackathons gave me a much better understanding, and as an informationist who supports data science, that understanding is very relevant.  Even if you’re not involved in data science per se, if you’re a biomedical librarian who wants to gain a better understanding of the science your users are engaged in, being involved in a hackathon will be a highly educational experience.  I hadn’t really realized how much I had learned by working with hackathons until a librarian friend asked me for some advice on genomic databases.  I responded by mentioning how cool it was that ClinVar would tell you about pathogenic variants, including their location and type (insertion, deletion, etc.), and my friend had no idea what I was even talking about – that was when it occurred to me that I’ve really learned a lot from hackathons!  And hey, if nothing else, there tends to be pizza at these events, and you can never go wrong with pizza.

I’ll end this post by reiterating that these hackathons aren’t about competing against each other, but there are awards given for certain “exemplary” achievements.  Never one to shy away from a little friendly competition, I hoped I might be honored for some contribution this time around, and I’m pleased to say I was indeed recognized. 🙂


There is a story behind this, but trust me when I say it’s true, I’m the absolute worst at darts.

Radical Reuse: Repurposing Yesterday’s Data for Tomorrow’s Discoveries

I’ve been invited to be a speaker at this evening’s Health 2.0 STAT meetup at Bethesda’s Barking Dog, alongside some pretty awesome scientists with whom I’ve been collaborating on some interesting research projects.  This invitation is a good step toward my ridiculously nerdy goal of one day being invited to give a TED talk.  My talk, entitled “Radical Reuse: Repurposing Yesterday’s Data for Tomorrow’s Discoveries,” will briefly outline my view of data sharing and reuse, including what I view as five key factors in enabling data reuse.  Since I have only five minutes for this talk, obviously I’ll be hitting only some highlights, so I decided to write this blog post to elaborate on the ideas in that talk.

First, let’s talk about the term “radical reuse.”  I borrow this term from the realm of design, where it refers to taking discarded objects and giving them new life in some context far removed from their original use.  For some nice examples (and some cool craft ideas), check out this Pinterest board devoted to the topic.  For example, shipping pallets are built to fulfill the specific purpose of providing a base for goods in transport.  The person assembling that shipping pallet, the person loading it onto a truck, the person unpacking it, and so on, use it for this specific purpose, but a very creative person might see that shipping pallet and realize that they can make a pretty cool wine rack out of it.

The very same principle is true of scientific research data.  Most often, a researcher collects data to test some specific hypothesis, often under the auspices of funding that was earmarked to address a particular area of science.  Maybe that researcher will go on to write an article that discusses the significance of this data in the context of that research question.  Or maybe that data will never be published anywhere because it represents negative or inconclusive findings (for a nice discussion of this publication bias, see Ben Goldacre’s 2012 TED talk).  Whatever the outcome, the usefulness of the dataset need not end when the researcher who gathered the data is done with it.  In fact, that data may help answer a question that the original researcher never even conceived, perhaps in an entirely different realm of science.  What’s more, the return on investment in that data increases when it can be reused to answer novel questions, science moves more quickly because the process of data gathering need not be repeated, and therapies potentially make their way into practice more quickly.

Unfortunately, science as it is practiced today does not particularly lend itself to this kind of radical reuse.  Datasets are difficult to find, hard to get from researchers who “own” them, and often incomprehensible to those who would seek to reuse them.  Changing how researchers gather, use, and share data is no trivial task, but to move toward an environment that is more conducive to data sharing, I suggest that we need to think about five factors:

  • Description: if you manage to find a dataset that will answer your question, it’s unlikely that the researcher who originally gathered that data is going to stand over your shoulder and explain the ins and outs of how the data were gathered, what the variables or abbreviations mean, or how the machine was calibrated when the data were gathered.  I recently helped some researchers locate data about influenza, and one of the variables was patient temperature.  Straightforward enough.  Except the researchers asked me to find out how temperature had been obtained – oral, rectal, tympanic membrane – since this affects the reading.  I emailed the contact person, and he didn’t know.  He gave me someone else to talk to, who also didn’t know.  I was never able to hunt down the answer to this fairly simple question, which is pretty problematic.  To the extent possible, data should be thoroughly described, particularly using standardized taxonomies, controlled vocabularies, and formal metadata schemas that will convey the maximum amount of information possible to potential data re-users or other people who have questions about the dataset.
  • Discoverability: when you go into a library, you don’t see a big pile of books just lying around and dig through the pile hoping you’ll find something you can use.  Obviously this would be ridiculous; chances are you’d throw up your hands in dismay and leave before you ever found what you were looking for.  Librarians catalog books, shelve them in a logical order, and put the information into a catalog that you can search and browse in a variety of ways so that you can find just the book you need with a minimal amount of effort.  And why shouldn’t the same be true of data?  One of the services I provide as a research data informationist is assisting researchers in locating datasets that can answer their questions.  I find it to be a very interesting part of my job, but frankly, I don’t think you should have to ask a specialist in order to find a dataset, any more than I think you should have to ask a librarian to go find a book on the shelf for you.  Instead, we need to create “catalogs” that empower users to search existing datasets for themselves.  Databib, which I describe as a repository of repositories, is a good first step in this direction – you can use it to at least hopefully find a data repository that might have the kind of data you’re looking for, but we need to go even further and do a better job of cataloging well-described datasets so researchers can easily find them.
  • Dissemination: sometimes when I ask researchers about data sharing, the look of horror they give me is such that you’d think I’d asked them whether they’d consider giving up their firstborn child.  And to be fair, I can understand why researchers feel a sense of ownership about their data, which they have probably worked very hard to gather.  To be clear, when I talk about dissemination and sharing, I’m not suggesting that everyone upload their data to the internet for all the world to access.  Some datasets have confidential patient information, some have commercial value, some even have biosecurity implications, like H5N1 flu data that a federal advisory committee advised be withheld out of fear of potential bioterrorism.  Making all data available to anyone, anywhere is neither feasible nor advisable.  However, the scientific and academic communities should consider how to increase the incentives and remove the barriers to data sharing where appropriate, such as by creating the kind of data catalogs I described above, raising awareness about appropriate methods for data citation, and rewarding data sharing in the promotion and tenure process.
  • Digital Infrastructure: okay, this is normally called cyberinfrastructure, but I had this whole “words starting with the letter D” thing going and I didn’t want to ruin it. 🙂  If we want to do data sharing properly, we need to build the tools to manage, curate, and search it.  This might seem trivial – I mean, if Google can return 168 million web pages about dogs for me in 0.36 seconds, what’s the big deal with searching for data?  I’m not an IT person, so I’m really not the right person to explain the details of this, but as a case in point, consider the famed Library of Congress Twitter collection.  The Library of Congress announced that they would start collecting everything ever tweeted since Twitter started in 2006.  Cool, huh?  Only problem is, at least as of January 2013, LC couldn’t provide access to the tweets because they lacked the technology to allow such a huge dataset to be searched.  I can confirm that this was true when I contacted them in March or April of 2013 to ask about getting tweets with a specific hashtag that I wanted to use to conduct some research on the sociology of scientific data sharing, and they turned me down for this reason.  Imagine the logistical problems that would arise with even bigger, more complex datasets, like those associated with genome wide association studies.
  • Data Literacy: Back in my library school days, my first ever library job was at the reference desk at UCLA’s Louise M. Darling Biomedical Library.  My boss, Rikke Ogawa, who trained me to be an awesome medical librarian, emphasized that when people came and asked questions at the reference desk, this was a teachable moment.  Yes, you could just quickly print out the article the person needed because you knew PubMed inside and out, but the better thing to do was turn that swiveling monitor around and show the person how to find the information.  You know, the whole “give a man a fish and he’ll eat for a day, teach a man to fish and he’ll eat for a lifetime” thing.  The same is true of finding, using, and sharing data.  I’m in the process of conducting a survey about data practices at NIH, and almost 80% of the respondents have never had any training in data management.  Think about that for a second.  In one of the world’s most prestigious biomedical research institutions, 80% of people have never been taught how to manage data.  Eighty percent.  If you’re not as appalled by that as I am, well, you should be.  Data cannot be used to its fullest if the next generation of scientists continues with the kind of makeshift, slapdash data practices I often encounter in labs today.  I see the potential for more librarians to take positions like mine, focusing on making data better, but that doesn’t mean that scientists shouldn’t be trained in at least the basics of data management.
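To make the Description point a bit more concrete, here’s a tiny, entirely hypothetical sketch of what a structured dataset record could look like.  The field names loosely follow Dublin Core, the subject terms are MeSH headings, and the temperature-method field is exactly the detail I could never hunt down in my influenza example:

```python
# Hypothetical dataset description record. Field names loosely follow
# Dublin Core; subject terms come from a controlled vocabulary (MeSH).
dataset_record = {
    "title": "Influenza surveillance data, 2010-2011 season",
    "creator": "Example Research Group",  # hypothetical
    "subject": ["Influenza, Human", "Body Temperature"],  # MeSH headings
    "description": "De-identified patient-level influenza records.",
    "variables": {
        "patient_temp": {
            "units": "degrees Fahrenheit",
            # the detail that was missing in my real-life example:
            "measurement_method": "tympanic membrane",
        },
    },
}

# A re-user (or a data catalog) can answer questions without emailing anyone
print(dataset_record["variables"]["patient_temp"]["measurement_method"])
```

The point isn’t this particular structure – it’s that a formal, machine-readable record means the answer to “how was temperature obtained?” travels with the data instead of living only in someone’s memory.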

So that’s my data sharing manifesto.  What I propose is not the kind of thing that can be accomplished with a few quick changes.  It’s a significant paradigm shift in the way that data are collected and science is practiced.  Change is never easy and rarely embraced right away, but in the end, we’re often better for having challenged ourselves to do better than we’ve been doing.  Personally, I’m thrilled to be an informationist and librarian at this point in history, and I look forward to fondly reminiscing about these days in our data-driven future. 🙂

Why Data Management is Cool (Sort Of)

“She told me the topic was really boring, but that you made it kind of interesting,” the woman said when I asked her to be honest about what our mutual acquaintance had said after attending a class I’d taught on writing a data management plan.  This is not the first time I’d heard something like this.  The fact is, I’m pretty damn passionate and excited about a topic that most people find slightly less boring than watching paint dry: data.  Now, I’m not going to try to convince you that data is not nerdy.  It is.  Very nerdy.   I have never claimed to be cool, and this is probably one of my least cool interests.  However, I think I have some very good reasons for finding data rather interesting.

I remember pretty much the exact moment when I realized the very interesting potential that lives in data.  I was in library school and taking a class in the biomedical engineering department about medical knowledge representation, and we spent the whole quarter talking about the very complicated issue of representing the clinical data around a very specific disease (glioblastoma multiforme or GBM, a type of brain cancer).  It’s very difficult with this disease, as with many others, to arrange and organize the data just about a single patient in such a way that a clinician can make sense of it.  There’s genetic data, vital signs data, drug dosing data, imaging data, lab report data, doctor’s subjective notes, patient’s subjective reports of their symptoms, and tons of other stuff, and it all shifts and changes over time as the disease progresses or recedes.  Is there any way to build a system that could present this data in any sort of a manageable way to allow a clinician to view meaningful trends that might provide insight into the course of disease that could help improve treatment?  Disappointingly, at least for now, the answer seems to be no, not really.

But the moment that I really knew that I wanted to work with this stuff was when we were talking about personalized medicine and genetic data.  In the case of GBM, as with many other diseases, certain medicines work very well on some patients, but fail almost completely in others.  Many factors could play into this, but there’s likely a large genetic component for why this should be.  Given enough data about the patients in whom these drugs worked and in whom they didn’t, then, could we potentially figure out in advance which drug could help someone?  Extrapolating from that, if we have enough health data about enough different patients, aren’t there endless puzzles we could solve just by examining the patterns that would emerge by getting enough information into a system that could make it comprehensible?

Perhaps that’s oversimplifying it, but I do think it’s fair to conceive of data as pure, unrefined knowledge.  When I look at a dataset, I don’t see a bunch of numbers or some random collection of information.  I imagine what potential lives within that data just waiting to be uncovered by the careful observation of some astute individual or a program that can pick out the patterns that no human could ever catch.  To me, raw data represents the final frontier of wild, untamed knowledge just waiting to be understood and explained, and to someone like me who is really in love with knowledge above all, that’s a pretty damn cool thing.

Yes, I know that writing a data management plan or figuring out what kind of metadata to use for a dataset is pretty boring.  I’m not denying that.  But sometimes you have to do some boring stuff to make cool things happen.  You have to get your oil changed if you want your Bugatti Veyron to do 0 to 60 in 2.5 seconds (I mean, I’m assuming those things have to get oil changes?).  You have to do the math to make sure your flight pattern is right if you want to shoot a rocket into space.  And you can’t find out all the cool secrets that live in your dataset if it’s a messy pile of papers sitting on your desk.  So the way I see it, my job is to make data management as easy and as interesting as possible so that the people who have the data will be able to unlock the secrets that are waiting for them.  So spread the word, my fellow data nerds.  Let’s make data management as cool as regular oral hygiene.  😉

A Week in the Life: Tuesday

Tonight, your friendly research informationist almost didn’t get around to posting a blog because I just now finished getting caught up on some work (but to be fair, there were a lot of interruptions from the resident pup, who never gets tired of playing Squirrelly or chasing the ball, even when mom is working).  However, I promised a full week of updates, and I’m not about to stop after only one day.  So, for those inquiring minds who want to know, here’s what I got up to today.

  1. Attended the weekly meeting for my department, which is called Research, Instruction, and Collection Services.  Basically we catch each other up on the various goings-on in our department.  Though there are only 6 of us, we are all crazy busy fiends, so it’s nice to have an hour a week in which we find out what everyone is up to.
  2. Gave an orientation and overview of library services to first year students in the psychology graduate program.  It was a small group, but they were very interested in what I had to say, which is always nice, and had lots of questions.
  3. Went to a meeting about the UCLA Library’s Affordable Courseware Initiative, a program in which we’re offering grants to professors who update their course syllabi to offer free/open access/low cost alternatives to textbooks and other paid course materials.  Rather shockingly and disconcertingly, the price of college textbooks has risen 812% since 1978.  By comparison, the consumer price index has risen around 250%.  With tuition also increasing significantly in the last few years, particularly in California, students are being hit pretty hard financially.  This initiative is designed to help mitigate some of those costs.  A similar program at UMass Amherst resulted in $750,000 in savings for students from a $20,000 initial investment, which is a pretty good ROI if you ask me.  So it will be interesting to see how this all goes at UCLA.
  4. I’m the chair of the committee for speakers for the Medical Library Group of Southern California and Arizona/Northern California and Nevada Medical Library Group Joint Meeting that is coming up in July, so today I worked on getting together some information and sending some emails for that.
  5. Continued more work on NIH Public Access Policy as described yesterday.  Every time I send an email to the NIH Manuscript Submission System help desk, I feel like starting it “hello, it’s ME AGAIN!!!”  But the nice thing about doing this work is that people are genuinely happy to have the help and the results are pretty immediate.
  6. Continued the work on the NCBI course as described yesterday.
  7. Answered a gazillion more emails.
  8. Finished some ordering for my public health funds (yay!), but I still have a lot to do on my other stuff.
  9. The whole department cornered one of our coworkers who was celebrating a birthday today and sang Happy Birthday to him.  🙂
  10. Filled out paperwork for upcoming travel, of which there is quite a bit.  I never knew librarians traveled so much, but I have been on the road pretty often this year.  I think between September 2012 and August 2013, I will have taken about 12 business trips.  And there is a LOT of paperwork that goes along with all of it.  But I’m super lucky to be able to go to some very interesting meetings and take some very cool courses.
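For what it’s worth, the textbook numbers from that Affordable Courseware meeting pencil out pretty starkly:

```python
# Back-of-the-envelope check on the figures cited above.
textbook_increase = 812   # % rise in college textbook prices since 1978
cpi_increase = 250        # % rise in the consumer price index, roughly

# Textbook prices have outpaced general inflation by more than 3x
print(round(textbook_increase / cpi_increase, 1))  # → 3.2

# UMass Amherst: $750,000 in student savings on a $20,000 investment
savings, investment = 750_000, 20_000
print(f"{savings / investment:.1f}x return")       # → 37.5x return
```

Even a quant person doesn’t need fancy methods here; the arithmetic alone makes the case for the initiative.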

A Week in the Life of a Research Informationist: Monday

So recently my job title changed from Health and Life Sciences Librarian to Research Informationist, which is pretty cool, except that now instead of people assuming I spend my day shelving books and thinking about the Dewey Decimal System, they basically have no idea what it is I do.  I’m pretty sure my friends and family have absolutely no idea what I do for a living.  In fact, I’m not sure my co-workers even really know for sure.  One of my colleagues suggested I ought to write about what a research informationist does, and since I haven’t blogged here in ages, I thought this would be a good time to spread the word about what a research informationist is/does.  Right around the time I thought I should write this blog series, another research informationist, the lovely and talented Sally Gore, beat me to it by writing about it on her blog.  But hey, you can never have too many research informationists talking about their awesome jobs, right?

With that, I give you the activities of my Monday.

  1. I spent a lot of time helping several people trying to figure out the NIH Public Access Policy.  To vastly simplify, I would summarize the policy by saying if you get NIH grant money, you have to make your articles that come out of that funding available in PubMed Central (PMC), the open access repository of the National Library of Medicine.  In truth, the policy and the myriad different things you have to do to comply with it are quite complex.  NIH has recently announced that they would start enforcing the policy by delaying grant renewals to researchers who aren’t in compliance, so this means that I’m getting a lot of calls from people who are having to catch up on five years’ worth of article submissions.  In theory, I like this policy and I think it’s really important in getting medical literature to clinicians and researchers who wouldn’t be able to afford it otherwise, but in practice, it’s really confusing for people because there are so many different ways you can comply and also lots of ways things can go wrong.  I would like for it to be a lot easier for researchers to get their work into PMC so they and their staff don’t have to spend a lot of time freaking out about this.  However, in the meantime, I help a lot of people who need to figure this stuff out and in so doing have become more of an expert on the policy than I ever wanted to be.
  2. I’m working on a couple of search strategies for researchers who are writing systematic reviews.  These are articles that essentially summarize the body of literature on a particular question.  This is nice because a busy clinician can then just read one article instead of having to go find the hundreds or thousands that are relevant to the question.  Plus, when you gather a lot of data and consider it all together, you can get a better sense of what’s really going on than if you just had a small sample.  However, identifying all of the relevant literature is pretty challenging, so it’s useful to have a librarian/research informationist help out as an “expert searcher” or, as I like to think of it, a “PubMed whisperer.”  Putting these searches together is pretty time-consuming, plus I help the researchers manage the workflow of analyzing the articles that my searches turn up.  So today I helped out some of the researchers I’m working with on those articles, including getting them set up with Mendeley, a very cool citation management program.
  3. I’m a member of the Medical Library Group of Southern California and Arizona and the chair of their blog committee, so today I had to do some work with getting some entries up on the blog.
  4. Another one of my responsibilities is collection development, or buying stuff for the departments to which I am the liaison librarian, which include public health, psychology, and some others.  I’ve been so busy that I’ve kind of been putting off my ordering, so I have to find a lot of stuff to buy in the next couple weeks.  You’d think getting to spend lots of money on books would be great, but it is less so when it’s in the context of work.  Plus, I can never find exactly what I want.  For example, my public health students ask a lot of questions about two fairly obscure and relatively specific topics: water consumption and usage in the context of health care, and food deserts (urban areas where it’s hard to find healthy food, so people end up eating junk food and whatever they can get at convenience stores).  So I wanted to buy some books that would help them out with this, but it’s harder than you’d think!  This project will be carried over to tomorrow.
  5. I’m taking a very cool online/in-person course called Librarian’s Guide to NCBI.  The course covers some bioinformatics tools that are particularly relevant to people doing work in genetics and molecular biology.  As a research informationist, I think it’s important to be able to provide a high level of specialized assistance to researchers, so learning more about these tools is essentially adding some more stuff to my toolbox. I did the first week’s module today (although it’s the second week, so I’m already behind).  Most of the material in this first lecture was stuff I pretty much already knew, but I played around a little bit with some of the tools and searched around a bit in NLM’s Gene database.
  6. I manage our four library school graduate students who work on our reference desk, and today we had our monthly training session.  There’s really a lot you need to know to work at the reference desk of a busy biomedical library, and these students do a fantastic job, but the learning is never really over.
  7. Email.  I answered a gazillion emails.  The email never ends.

I did some other random stuff, but that’s the main stuff I did today.  Phew.  🙂