risk-taking in grad school

Yesterday was a lunch bunch Friday, and as usual when I do not have other commitments, I attended. Dr. Edward Cokely gave a talk on decision-making geared toward grad students, particularly the first-years, but certainly applicable to all.

As a bit of background, Dr. Cokely came to OU a few years ago, and has set up a very nice lab (Decision Analytics Lab). His job talk was all about risk literacy and numeracy (I incorporated some of the measures he mentioned in subsequent research). I haven’t had a class with him, partly because by the time he showed up, I was pretty much done with classes; by all reports though, he’s a good teacher. He’s certainly a brilliant researcher.

His talk yesterday was enjoyable. Numeracy skills are a better predictor of good decision-making than intelligence (not sure how well you can separate the two, but okay). Decision-making is an extremely important skill… and you can learn it. In fact, according to Dr. Cokely (and plenty of other folks), if you dedicate 10,000 hours of challenging, productive practice to just about anything, you can become an expert at it. (Here are those 10,000 hours explained by one of the researchers who published the original study.)

It’s always 10,000 hours

We’ve all heard this before (ever since Outliers). NPR goes on about it on a regular basis. My youngest son heard the same 10,000 hours talk a few weeks back at school (he’s a sophomore in high school). He and Dr. Cokely would agree on this fail-safe road to expertise. Put in enough time, and you’ve got it. (Never mind that Malcolm Gladwell says that he chose that number just to give people an idea of how much went into expertise–including luck and the people around you.)

What people always forget is that intelligence and talent do play a role; yeah, you can become an expert. But the person sitting a few chairs down, who started out with more natural talent (and maybe just genetic luck in various ways: her parents played jazz all day long from the day she was conceived, his parents put him on skis when he was two years old), and ALSO put in 10,000 hours, is going to have 10,000 hours plus what he or she brought to the table at hour zero. (Gladwell explains why this argument applies more to some activities than to others.)

An expert you may well be, but that doesn’t mean you’ll be Yo-Yo Ma.
When it comes to academia, Dr. Cokely said, it takes 10,000 hours after you publish your first good paper; not necessarily the first paper, but the first one you are proud of.

I spent a minute or two deciding which one this would be for me, and another few minutes trying to calculate how many hours had passed since a midpoint between papers of dubious prideworthiness. And then Dr. Cokely said you couldn't really count more than four hours a day, so I figured I might as well give up. At best, I was about one-third of the way there. Although I am fairly proud of my very first paper, and it's been nearly five years… can we say halfway to 10,000?
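That four-hours-a-day cap puts a hard floor on the calendar time, which is what made me give up the mental math. A quick back-of-envelope sketch (my own numbers, not Dr. Cokely's; the 2-hour "realistic" pace is pure assumption):

```python
# Minimum calendar time to expertise, given the cap Dr. Cokely mentioned:
# you can't really count more than ~4 deliberate-practice hours per day.
HOURS_TO_EXPERTISE = 10_000
MAX_HOURS_PER_DAY = 4

min_days = HOURS_TO_EXPERTISE / MAX_HOURS_PER_DAY   # 2,500 days
min_years = min_days / 365                          # ~6.8 years, practicing daily

# A more honest pace (say, averaging 2 hours a day once teaching,
# service, and life are subtracted -- an assumption, not a finding)
# roughly doubles that.
realistic_years = HOURS_TO_EXPERTISE / (2 * 365)    # ~13.7 years

print(f"At the 4 h/day cap: {min_years:.1f} years minimum")
print(f"At a more realistic 2 h/day: {realistic_years:.1f} years")
```

So even the absolute best case is nearly seven years of daily practice, which is why "halfway" after five years is optimistic.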

But here’s the thing. I might not have revealed the theory of everything, but I’ve got papers published, and all of the ones for which I am first author are in Q1 journals. (About the academic me.) And yeah, I’m proud of ALL of them, because even if they do not solve the problems of the world, they are all sound contributions to the literature. Most of them went through more than one rejection; all of them were revised many times.

None of them are the Final Word.

And that’s why I am writing this blog. At one point, Dr. Cokely put up a slide relevant to grad students: it was important, he said, to be able to decide which research project to pursue. Which one, we might ask, is important? Which one really contributes to the literature? How can we tell which of our ideas is good?

(He had said previously that it was a good idea to do what your advisor told you to do…)

But there it was, hanging in the air: as a grad student, you need to be able to recognize a worthy project.

(Otherwise you may fail. You might be wrong. Your idea might be trivial. You might end up throwing away a lot of data.)

And this is a problem. Because a lot of grad students get stuck trying to find a worthy project, or waiting for a good idea.  Or they have a great idea, and it’s huge, far too big for a dissertation (and certainly too big for their first or second or tenth paper).  Of course, they can still do it (if they can figure out how to test it in its vastness). It will probably add several years to the doctoral process, but they can do it.

But it’s not a practical idea.

Actually, as a grad student, you don’t need to propose the theory of everything. In fact, you probably shouldn’t.

(Exception: if you are a brilliant scholar, you are well-advised to propose that theory. Most of us are not brilliant though.)

All you need to do is propose testable hypotheses. And you need to test them. Then pre-register, if you haven’t already, and test again. And then write up the results and submit the paper to a journal. Do it again when it gets rejected. Collect more data. Revise and resubmit.

The single most important thing you need to be good at to succeed as a grad student is… being wrong. Or at least dealing with the fact that other people are going to think you are wrong, and tell you.

This is why it’s not a good idea to wait for “a good idea” if by that you mean an exceptional idea. Ten to one what you think is a brilliant idea is something a journal editor doesn’t think worthy of being sent out for review. On the other hand, sometimes you will submit a paper believing it’s entirely trivial, and it will get published, and get a lot of attention in the popular press (ok, so maybe it was rather trivial).

I suppose it really is about risk-taking, in the end. You have to be willing to risk being wrong. (Or worse, being neither right nor wrong…) Dr. Cokely estimated that his ratio of rejected to accepted papers was about 3:1. But that's just journal submissions, and it's a good record (I'm not about to tally up my own record, but it's a lot worse than 3:1). I should have asked him about data collections and hypotheses tested, because I've got far more data that showed null effects than data that supported (or even disproved) my hypotheses. Granted, some of those data may well be salvageable. Sometimes I simply test the primary hypotheses and move on, without bothering to test the secondary ones (or the ones that require coding). But most of it is simply something that will have to be reported as null results in the context of a bigger paper.

And that’s okay. It’s perfectly acceptable to be wrong. That’s part of the game. A far more difficult part of it occurs when you believe you are right, and lots of other people (e.g., your advisor, collaborators, reviewers, editors) are just as certain you are wrong.  Or maybe they don’t think you are wrong, per se, but that you need to change a lot of things before they will concede you are right.

It boils down to being willing to take a risk. As a grad student, you may need to take lots of risks. Even if you are really lucky, and work with an advisor who incorporates you in an established research program that has already sorted out what will work, sooner or later you’ll be on your own, and having to take risks. You will be wrong sometimes. Probably a lot of the time, but that should just give you more ideas for future research.*

Maybe Dr. Cokely meant “knowing how to decide which research paths, amongst many, are worthy” rather than knowing which one was worthy. In that case, I would agree. But the implication that there is a best idea, or that the others are unworthy, is counterproductive. Yes, all ideas need to be subject to scrutiny; hypotheses need to be testable, research designs need to be feasible.** But they don’t need to be earth-shattering.

Of course, there is an element of risk to every study; I don't know much about the science of decision-making and risk-taking, but I think of my program of research much the way I think about a stock portfolio. When it comes to investing,*** you want to balance high risk/high return with low risk/low return (low risk/high return really doesn't exist). You will also be thinking about time; if you can wait 20 years, you've got a lot more options.

How does this translate to research? As my advisor loves to explain, studies vary along two broad dimensions: fast vs. slow and success vs. failure. "Fast success" is the best option, and slow failure is the worst-case scenario; slow success and fast failure are about equal in my book (always acknowledging that some designs will of necessity be slow), because more often than not, what I want is an answer: yes, it works, or no, it doesn't (especially for pilot studies).

But of course there are myriad other dimensions: large vs. small effects (as Dr. Cokely pointed out yesterday, large effect sizes are always preferable); tiny brick in a large literature vs. first brick in a new literature (which comes with lots of risk but the potential of becoming the expert); safe and mundane vs. uncertain and exciting; likely to be published in Science vs. boring but likely to replicate; part of a years-long multi-study paper that might or might not be published in JPSP vs. a single-study paper likely to be published in a niche journal or as a brief report; your favorite theory vs. something that's going to be published before you are tenured.

And when it comes to journals, think of them in terms of dividends. The ones with the largest dividends probably require the most work (e.g., the five study monster destined for JPSP)… and you can probably get, let’s see, six published in lesser journals for every one in JPSP (ignoring for the moment the element of luck).  The six lesser journal articles will probably have a total of at least 10 studies in them, so perhaps those well-planned 5 studies are a good idea… Except! They may take two years.
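The trade-off above can be roughed out as a toy expected-value comparison. All the probabilities here are made up for illustration; the post only supplies the rough ratios (about six lesser-journal papers per JPSP paper, and roughly two years for the big one):

```python
# A toy expected-value sketch of the "dividend" trade-off above.
# The acceptance probabilities are hypothetical, not from the talk.

def expected_papers(n_attempts, p_accept):
    """Expected number of accepted papers from n submissions,
    each with independent acceptance probability p_accept."""
    return n_attempts * p_accept

# Big bet: one five-study JPSP monster, low odds, ~2 years of work.
big_bet = expected_papers(n_attempts=1, p_accept=0.15)

# Small bets: six lesser-journal papers over the same stretch, better odds.
small_bets = expected_papers(n_attempts=6, p_accept=0.50)

print(f"Big bet:    {big_bet:.2f} expected papers")
print(f"Small bets: {small_bets:.2f} expected papers")
```

The point of the sketch is not the specific numbers (which are guesses) but that the comparison depends entirely on acceptance odds and time, which is exactly the portfolio logic.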

So what I do to balance my research-time investment portfolio is keep multiple projects going. The one I hate but that is my dissertation. The multi-study JPSP project. The specialty-journal cross-sectional study. The fun undergraduate RA collaboration. The project I don't really believe in but that needs doing. The project I love but everyone else suspects is bogus. The several I'm trying to get other people to do. The one I set aside a year ago that needs to be sorted out. The five that need writing up.

What’s missing is the project that is going to prove my Great Theory of Narrative Moral Agency. I’m building that monster from the ground up. That way I’ll have learned from all the times I’ll be wrong, because that’s going to happen a lot. By that time, I’ll probably have passed the 10,000 hour threshold, and maybe someone will believe me when I claim to be the expert.

In the meantime, I’m going to keep testing hypotheses and balancing research risks.

Caveat: What Dr. Cokely meant to say and what I understood him to say (when it comes to grad students deciding upon research paths) might be two different animals. He is the expert on decision-making, and I got distracted by the 10,000 hours reference. But it’s always worth repeating that you don’t need to wait for a good idea to start making hypotheses and collecting data… that you’re going to be wrong no matter how great an idea you come up with. In some ways, it’s best to start with little, mediocre ideas. That way you won’t be as upset when they don’t pan out, or everyone tells you how terrible they are.

(Does choosing mediocrity count as risk-reduction? intelligent decision-making? Ever? As part of a balanced research portfolio?)

*Here’s where the Grand Theory of Everything comes in. All those little, achievable goals that you achieved (and failed to achieve) should work towards your Grand Theory. And the really nice thing about little goals not panning out is that they are informative: it’s all about information. What doesn’t work can tell you as much about your Theory as what does.

**I must admit that some of my ideas needed to be vetted for ease-of-testing. My first paper published with my advisor at OU featured a crossover design tested with a mixed model. There is something beautiful about t-tests (okay, so even a mixed model boils down to t-tests… I mean simple t-tests).

***More investing advice from a line-up of men and one woman (sigh).