Research Funding, Risks and Metrics

In an earlier article, I briefly mentioned the two basic points of view from which funding for scientific research can be justified: funding for specific scientists on the grounds that one wants a vibrant research culture; or funding for specific research projects on the grounds that one wants answers to scientific questions. I was therefore pleased to discover this draft paper: Incentives and Creativity: Evidence from the Academic Life Sciences by Pierre Azoulay, Joshua Graff Zivin and Gustavo Manso, in which the authors compare the effects of these funding policies on scientists’ behaviour. The study compared scientists funded by the Howard Hughes Medical Institute (HHMI) with a comparable group of “Early Career Prize Winners” (ECPWs) whose research was supported by National Institutes of Health (NIH) funding. HHMI funds “Investigators” as individuals. In the words of the HHMI website: “HHMI urges its researchers to take risks, to explore unproven avenues, and to embrace the unknown—even if it means uncertainty or the chance of failure.” According to the paper, HHMI is tolerant of failure in the early stages and provides its recipients with access to peer group feedback on their research throughout. The ECPW control group was chosen as having similar overall research accomplishments to those in the HHMI group prior to their receiving HHMI support, but who subsequently went on to perform research with NIH funding. In contrast to HHMI, NIH awards grant funding in support of specific research project proposals, and researchers who fail to achieve project goals are unlikely to have their grants renewed.

The authors found that HHMI-funded scientists were more strongly represented than those in the ECPW control group as authors both of the most highly cited publications and of the most rarely cited. In other words, HHMI-funded scientists had both more successes and more failures than the control group. Moreover, there was a greater proliferation of keywords associated with the publications of the HHMI-funded researchers after their appointment than in controls. Altogether, these observations are taken as an indication that, because the consequences of short-term failure were ameliorated for them, HHMI-funded scientists were more willing to take the risks associated with a more exploratory and serendipitous approach than their project-funded counterparts. To put it another way, HHMI scientists were able to be more “creative”. That is, they were freed from the constraints of having to answer questions contractually agreed at the outset and allowed to answer questions of their own choosing; preferably, one presumes, questions that no one else had thought of asking before.
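To make the “more represented in both tails” idea concrete, here is a minimal sketch of the kind of comparison involved. The data and percentile cut-offs are entirely hypothetical, and this is a simplification rather than the paper’s actual analysis (which involves careful matching and adjustment of citation counts); it merely illustrates how one group can match another on average while appearing more often among both the biggest hits and the biggest flops.

```python
# Hypothetical illustration only: compare how often each group's papers fall
# into the top and bottom tails of the pooled citation distribution.
import numpy as np

def tail_shares(citations, lower_cut, upper_cut):
    """Fraction of a group's papers at or below / at or above the pooled cut-offs."""
    citations = np.asarray(citations)
    return {
        "bottom_tail": float(np.mean(citations <= lower_cut)),
        "top_tail": float(np.mean(citations >= upper_cut)),
    }

rng = np.random.default_rng(0)
# Two groups with similar average citation counts but different spread:
# the "exploratory" group produces more extreme outcomes in both directions.
hhmi_cites = rng.negative_binomial(1, 0.02, size=500)   # wide spread: more hits and misses
ecpw_cites = rng.negative_binomial(5, 0.10, size=500)   # similar mean, narrower spread

pooled = np.concatenate([hhmi_cites, ecpw_cites])
low, high = np.percentile(pooled, [5, 95])  # pooled 5th and 95th percentile cut-offs

print("HHMI-like group:", tail_shares(hhmi_cites, low, high))
print("ECPW-like group:", tail_shares(ecpw_cites, low, high))
# If the first group occupies both tails more often, both of its shares exceed the control's.
```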

I suspect that a lot of scientists will enjoy hearing this, which might be taken as some kind of justification of the “Haldane Principle”. However, the authors stress that their study should not be taken as a criticism of the NIH or of project-oriented funding. They point out the difficulty of making investigator-oriented funding work on a larger scale and the need for ready political accountability in decisions involving the distribution of public funding. Analogous constraints apply in corporate R&D, where accountability to shareholders has to be maintained.

There are interesting implications here for the criteria (“metrics”) used to evaluate individual scientists’ performance in academia and in industry. In academia, there are grumblings about the limitations of using publications and “impact factors” as measures of a researcher’s worth. In industry, there are alternating waves of enthusiasm for either “blue skies” or nose-to-the-grindstone project-managed research. However, consideration of successful and unsuccessful attempts at innovation (see for instance Why Innovation Fails by Carl Franklin or How Breakthroughs Happen by Andrew Hargadon) suggests that, no matter how good they are as scientists, researchers who give birth to innovations that actually change how people live keep their thinking realistically attuned to the needs of subsequent commercial development even as they think up those previously unthought-of questions. The best researchers give their best when they are embedded in a culture that gives them incentives to be exploratory that are also tied to the business aims of the institution in which they work.

 
