Friday, June 24, 2016

B - Scientists' participation in public debates

Woolston C. Scientists are cautious about public outreach. Nature February 2015

Scientists think that they should actively participate in public debates about science and technology - but many have misgivings about doing so, according to a survey of nearly 4,000 US researchers. Of the respondents, 87% said that scientists should “take an active role in public policy debates about science and technology”, and just over half said that they had talked about their research with reporters. However, 52% said that oversimplification of science in news reports was a major problem. Respondents also reported mixed feelings about news and social media.
http://www.nature.com/news/scientists-are-cautious-about-public-outreach-1.16965

B - Writing for lay audiences

Salita JT. Writing for lay audiences: a challenge for scientists. Medical Writing 2015;24(4):183-189
(doi: 10.1179/2047480615Z.000000000320)

Writing for lay audiences, especially lay summaries, is needed to increase health and science literacy, but this kind of writing can be difficult for scientists. The article describes why it can be so difficult and gives some advice on how scientists can cope with the challenge and how institutions and organisations can help.
http://journal.emwa.org/writing-for-lay-audiences/writing-for-lay-audiences-a-challenge-for-scientists/

B - Medical journalism

Whelan J. Medical journalism: another way to write about science. Medical Writing 2015;24(4):219-221
(doi: 10.1179/2047480615Z.000000000327)

True journalism differs from public relations and uncritically reproducing press releases. It involves doing background research into the context surrounding the finding being reported, seeking comments from independent experts, and highlighting the negative as well as positive aspects. In this article, the author pulls together information for medical writers interested in journalism or science writing.
http://journal.emwa.org/writing-for-lay-audiences/medical-journalism-another-way-to-write-about-science/

Wednesday, June 22, 2016

B - Replicating psychology studies

Bohannon J. Many psychology papers fail replication test. Science 2015;349(6251):910-911
(doi: 10.1126/science.349.6251.910)

In the Open Science Collaboration, 270 psychologists from around the world signed up to replicate studies; they did not receive any funding. The group selected the studies to be replicated based on the feasibility of the experiment, choosing from those published in 2008 in three journals. Of the 100 prominent papers analyzed, only 39% could be replicated unambiguously. The results lend support to the idea that scientists and journal editors are biased—consciously or not—in what they publish.
http://science.sciencemag.org/content/349/6251/910.full

B - Sex and gender equity in research: SAGER guidelines

Heidari S, Babor TF, De Castro P, et al. Sex and gender equity in research: rationale for the SAGER guidelines and recommended use. Research Integrity and Peer Review 2016;1:2
(doi: 10.1186/s41073-016-0007-6)

This article describes the rationale for an international set of guidelines to encourage a more systematic approach to the reporting of sex and gender in research across disciplines. The Sex and Gender Equity in Research (SAGER) guidelines are designed primarily to guide authors in preparing their manuscripts, but they are also useful for editors, as gatekeepers of science, to integrate assessment of sex and gender into all manuscripts as an integral part of the editorial process.
https://researchintegrityjournal.biomedcentral.com/articles/10.1186/s41073-016-0007-6

B - Rewarding reviewers

Warne V. Rewarding reviewers - sense or sensibility? A Wiley study explained. Learned Publishing 2016;29(1):41-50

In July 2015, Wiley surveyed over 170,000 researchers in order to explore peer reviewing experience; attitudes towards recognition and reward for reviewers; and training requirements. Results show that while reviewers choose to review in order to give back to the community, there is more perceived benefit in interacting with the community of a top-ranking journal than a low-ranking one. Seventy-seven per cent showed an interest in receiving reviewer training. Reviewers strongly believe that reviewing is inadequately acknowledged at present and should carry more weight in their institutions' evaluation processes.
http://onlinelibrary.wiley.com/doi/10.1002/leap.1002/full

B - What makes a good policy paper

Whitty CJM. What makes an academic paper useful for health policy? BMC Medicine 2015;13:301
(doi: 10.1186/s12916-015-0544-8)

Getting relevant science and research into policy is essential. There are several barriers, but the easiest to address is to make papers more relevant and accessible to policymakers. Opinion pieces backed up by footnotes are generally unusable for policy. Objective, rigorous, simply written original papers from multiple disciplines, supported by data, can be very helpful.
https://bmcmedicine.biomedcentral.com/articles/10.1186/s12916-015-0544-8



B - Post-publication peer review

Teixeira da Silva JA, Dobránszki J. Problems with traditional science publishing and finding a wider niche for post-publication peer review. Accountability in Research 2015;22(1):22-40
(doi: 10.1080/08989621.2014.899909)

Errors in the literature, incorrect findings, fraudulent data, poorly written scientific reports, and studies that cannot be reproduced not only waste taxpayers' money, but also diminish public trust in science and its findings. There is therefore every need to strengthen the validity of the data in the scientific literature. One way to address the problem is through post-publication peer review, an efficient complement to traditional peer review that allows for the continuous improvement and strengthening of the quality of science publishing.
http://www.ncbi.nlm.nih.gov/pubmed/25275622
 

B - OA and knowledge translation

Adisesh A, Whiting A. Power to the people - open access publishing and knowledge translation. Occupational Medicine 2016;66:264-265
(doi: 10.1093/occmed/kqv191)

This Editorial attempts to demystify the rights and wrongs of self-archiving and explains some of the issues around open access (OA) publishing. There are essentially three major publication options for authors: no cost for publication in a subscription-based journal; OA journal publication where there may be an article processing charge (APC) paid by or on behalf of the authors; and publication in a hybrid journal where a subscription journal provides the option for OA publication upon payment of an APC. Occupational Medicine recognized the need for open access as early as 2007, when it became a ‘hybrid’ journal.
http://occmed.oxfordjournals.org/content/66/4/264.long

B - Rule violations

Gächter S, Schulz JF. Intrinsic honesty and the prevalence of rule violations across societies. Nature 2016;531:496-499
(doi: 10.1038/nature17160)

The authors present cross-societal experiments from 23 countries around the world that demonstrate a robust link between the prevalence of rule violations and intrinsic honesty. They developed an index of the ‘prevalence of rule violations’ (PRV). Their results suggest that institutions and cultural values influence PRV, which in turn affects people's intrinsic honesty and rule following.
http://www.nature.com/nature/journal/v531/n7595/full/nature17160.html

Friday, June 17, 2016

B - Sharing clinical trial data

Taichman DB, Backus J, Baethge C, et al. Sharing clinical trial data: a proposal from the International Committee of Medical Journal Editors. JAMA February 2, 2016;315(5):467-468

The International Committee of Medical Journal Editors (ICMJE) believes that there is an ethical obligation to responsibly share data generated by interventional clinical trials because participants have put themselves at risk. As a condition of consideration for publication of a clinical trial report in its member journals, the ICMJE proposes to require authors to share with others the deidentified individual-patient data (IPD) underlying the results presented in the article (including tables, figures, and appendices or supplementary material) no later than 6 months after publication. The ICMJE also proposes to require that authors include a plan for data sharing as a component of clinical trial registration.
http://jama.jamanetwork.com/article.aspx?articleid=2483008

B - Statistical reporting errors in psychology

Nuijten MB, Hartgerink CHJ, van Assen MALM, et al. The prevalence of statistical reporting errors in psychology (1985-2013). Behavior Research Methods 2015:1-22
(doi: 10.3758/s13428-015-0664-2)

This study documents reporting errors in a sample of over 250,000 p-values reported in eight major psychology journals from 1985 to 2013, all obtained through null-hypothesis significance testing (NHST). Results showed that half of all papers contained at least one p-value that was inconsistent with its test statistic and degrees of freedom. One in eight papers contained a grossly inconsistent p-value that may have affected the statistical conclusion. This could indicate a systematic bias in favor of significant results. A minimal sketch of this kind of consistency check follows below.
http://link.springer.com/article/10.3758/s13428-015-0664-2
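
As a rough illustration (a minimal sketch, not the authors' own tooling, and assuming a reported two-tailed t test), the p-value implied by a test statistic and its degrees of freedom can be recomputed and compared with the value stated in the paper:

from scipy import stats

def two_tailed_p(t_value, df):
    # Two-tailed p-value implied by a reported t statistic and its degrees of freedom
    return 2 * stats.t.sf(abs(t_value), df)

# Hypothetical reported result: t(28) = 2.50, p < .01
recomputed = two_tailed_p(2.50, 28)   # roughly 0.019
inconsistent = recomputed > 0.01      # True: the reported p-value is smaller than the statistic allows

Checks of this kind can be automated over large numbers of extracted statistics, which is what makes a study of 250,000 p-values feasible.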

B - Public registry of competing interests


Dunn AG. Set up a public registry of competing interests. Nature 2016 May 5;533(7601):9.
(doi: 10.1038/533009a)

According to the author, the publishing system for disclosing competing interests is still fragmented, inconsistent and inaccessible. About half of the studies that involve researchers who hold relevant competing interests fail to declare them, and the common causes are inconsistent requirements across journals and negligence. To solve this problem, the research community should establish a public registry of competing interests, i.e. an online database of interests declared by researchers, to precisely determine the association between competing interests and the potential for bias.
http://www.nature.com/news/set-up-a-public-registry-of-competing-interests-1.19851

B - Reviewer fatigue?

Breuning M, Backstrom J, Brannon J, et al. Reviewer fatigue? Why scholars decline to review their peers' work. PS: Political Science & Politics 2015;48(4):595-600.
(doi: 10.1017/S1049096515000827)

The double-blind peer review process is central to publishing in academic journals, but it also relies heavily on the voluntary efforts of anonymous reviewers. Journal editors have increasingly become concerned that scholars feel overburdened with requests to review manuscripts and experience “reviewer fatigue”. The authors of this article empirically investigated the rate at which scholars accept or decline to review for the American Political Science Review, as well as the reasons they gave for declining: almost three-quarters of those who responded to requests agreed to review, and reviewer fatigue was only one of many reasons for declining, alongside busy professional and personal lives.
http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=9995005



B - OA publishing trend analysis

Poltronieri E, Bravo E, Curti M, et al. Open access publishing trend analysis: statistics beyond the perception. Information Research 2016;21(2), paper 712.

This analysis aimed to track the number of OA journals acquiring an impact factor and to investigate the distribution of subject categories pertaining to these journals in the period 2010-2012. Results showed a growth of OA scholarly publishing, with a prevalence of journals in medicine and the biological sciences.
http://www.informationr.net/ir/21-2/paper712.html#.V2PI9P8cQuQ

Thursday, June 16, 2016

B - Gender analysis in health system research

Morgan R, George A, Ssali S, et al. How to do (or not to do)...gender analysis in health system research. Health Policy and Planning 2016;1-10
(doi: 10.1093/heapol/czw037)

The article outlines what gender analysis is and how it can be incorporated into health system research (HSR) content, process and outcomes. It recommends exploring whether and how gender power relations affect females and males in health systems through the use of sex-disaggregated data, gender frameworks and questions. It also examines gender in the HSR process by reflecting on how the research process itself is imbued with power relations, and in HSR outcomes by considering how power relations can be progressively transformed, or at least not exacerbated.
http://heapol.oxfordjournals.org/content/early/2016/04/26/heapol.czw037.abstract


B - The Flesch Reading Ease measure

Hartley J. Is time up for the Flesch measure of reading ease? Scientometrics 2016;107(3):1523-1526
(doi: 10.1007/s11192-016-1920-7)

The Flesch Reading Ease measure is widely used to gauge the difficulty of text in various disciplines, including Scientometrics. This paper argues that the measure is now outdated, used inappropriately, and unreliable. According to the author, it is time to abandon the notion that one measure and one computer programme are suitable for all purposes. Different computer-based programmes would have greater validity than the Flesch, but they would probably still fail to take into account the other variables that affect readability. (The standard formula is given below for reference.)
http://rd.springer.com/article/10.1007%2Fs11192-016-1920-7
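
For reference (not part of Hartley's paper), the widely used form of the Flesch Reading Ease formula combines average sentence length and average word length in syllables:

Reading Ease = 206.835 - 1.015 × (total words / total sentences) - 84.6 × (total syllables / total words)

Higher scores indicate easier text; long sentences and polysyllabic words drive the score down.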

B - Open access impact

Tennant JP, Waldner F, Jacques DC, et al. The academic, economic and societal impacts of Open Access: an evidence-based review. F1000Research 2016;5:632
(doi: 10.12688/f1000research.8460.1)

This review aims to be a resource for current knowledge on the impacts of Open Access by synthesizing important research in three major areas of impact: academic, economic and societal. The evidence points to a favorable impact of OA on the scholarly literature through increased dissemination and reuse. Access to the research literature is key for innovative enterprises and for a range of governmental and non-governmental services, and it has the potential to save publishers and research funders considerable amounts of money. Furthermore, OA contributes to advancing citizen science initiatives and supports researchers in developing countries.
http://f1000research.com/articles/5-632/v1

Tuesday, May 24, 2016

A - ESE Author Q&A: Omar Sabaj Meruane




In the latest edition of our ESE Author Q&A series, we speak to Omar Sabaj Meruane of Universidad de La Serena, who published the article ‘Relationship between the duration of peer-review, publication decision, and agreement among reviewers in three Chilean journals’ in the November 2015 issue of European Science Editing 41(4).

This article is of particular interest to peer review administrators and editors looking to increase the efficiency of their peer review processes, providing a novel insight into the relationships between peer review time and reviewers' recommendations.

The article is now free to access from the EASE website. Download it here.


EASE:  Before we ask some questions about yourself, please introduce our readers to the article you published in ESE.

Omar Sabaj Meruane: We explored the relation between time and agreement. We established different stages in the peer review process (reviewer selection, notification, publication, total review time, response to author, and total time) for three international journals belonging to the fields of humanities, engineering, and higher education. In total, the peer review processes of 369 papers were analysed. We then separated processes according to the level of agreement between reviewers (low, partial and total agreement) and decision type. Total peer review time was greater for articles that were accepted. For all three of the journals examined, the publication period was the longest stage, and the time taken to select referees was longest for the humanities journal. Partial agreement between reviewers was related to longer publication times in the university teaching journal, while there was no relationship between reviewer agreement and publication time in the engineering journal. The duration of the peer review process was related to decision type, and the relationship between the level of agreement between reviewers and the duration of the various stages of the publication process was found to vary between disciplines.

EASE: What is your main area of research?

OSM: My main area of research is linguistics, specifically the Analysis of Scientific Discourse. I am interested in exploring how the sociological attributes of scientists correlate with their discursive behaviour when participating in the construction of scientific knowledge.

EASE:  How long have you been involved in this area?

OSM: I have been working in this area for 7 years. I have won two grants: the first to study disciplinary variation in the rhetorical structure of research articles; the second, of which my article in ESE is a product, is devoted to analysing the peer review process.

EASE: Do you work in a group, or on your own?

OSM: We work with a large group of graduate and postgraduate students. I also have a colleague, Carlos González (co-author of the paper), who is co-researcher on our project and works at the Pontificia Universidad Católica de Chile. He is also the editor of Onomázein, a very prestigious journal devoted to linguistics (www.onomazein.net).

EASE:  What are some of the innovative aspects you could tell us about your research?

OSM:  The most innovative aspect of our research programme (which is not well represented in the article we wrote for ESE) is the combination of Discourse Analysis and Social Network Analysis. For years, these two disciplines have not been connected. For example, there are many works analysing review reports, which give us very detailed characteristics of this occluded genre, but they do not tell us what the sociological attributes of the reviewers are. This lack of interaction between the two disciplines limits our understanding of how scientists behave discursively when participating in the collective system of generating scientific knowledge (peer review).
The specific innovation of the ESE article is the relation of time and agreement in the peer review process. One could hypothesize that if two people have to assess an object, time could be critical in arriving at the same assessment (agreement). But this is not the case, at least not in all disciplines, so time is only partially related to the probability that two reviewers agree on the publication recommendation of a paper. The other innovation of our paper lies in the conceptualization of time. Most studies of the peer review process make only general distinctions, for example from submission to decision and from decision to publication (in the case of accepted articles). As we made finer-grained distinctions between several stages (reviewer selection time, revision time, notification time, among others), we gained a better understanding of the duration of the process and can derive some tips for editors.

For example, from our data we could see two patterns that characterize two typical bottlenecks in managing peer review time. The first problematic stage is the selection of referees, which can take a very long time in the humanities; revision time (i.e. the average time taken by the two reviewers) is very similar across disciplines. The second pattern, typical of engineering, is that every stage is fairly fast except publication time (i.e. from decision to publication). In the first case, to shorten the total peer review time an editor should make efforts to enlarge his or her reviewer database. In the second case, the editor should consider increasing the number of issues per year.

EASE:  What do you consider to be your best paper or work, and why?

OSM: A recent paper that appeared in the Journal of Scholarly Publishing:
Sabaj, O.; González, C. & Pina-Stranger, A. (2016). What we still don’t know about Peer Review. Journal of Scholarly Publishing 47 (2), 180-212.
We like this paper mainly because it identifies various gaps in peer review research, so it is useful for delineating a future research programme with genuinely innovative opportunities. Our main claim is that we must be more interdisciplinary in approaching the very heart of the scientific endeavour, namely peer review. Specifically, we think that exploring the nature of the discourse of reviewer reports will be more enlightening if we relate the characteristics of those texts to the attributes (sociological, scientometric) of the referees who produce them. But, as much of the information in the peer review process is confidential, it is impossible to conduct research on the process without the help and collaboration of editors.

EASE:  Do you have any interesting work or papers that may be completed in the next year or so, that you are able to speak about?

OSM:  We are finishing two works that continue the same line of research, which is to establish a relation between discursive and sociological attributes. Some questions we are trying to resolve in these works are: Do senior researchers give better feedback in the peer review process? Does the evaluation report vary according to the sociological attributes of the reviewers?

What we are trying to configure is what we call a theory of scientific behaviour that uses methods and categories from both Discourse Analysis and Social Network Analysis.

EASE:  What motivated you to write for European Science Editing?

OSM:  The prestige of the journal and its peer review process, which is fast and detailed. The editors also keep in close contact with authors.

EASE:  What impact do you hope this paper could have, what changes could it make?

OSM:  It could help editors to better manage time in the peer review process.


EASE:  If people want to read more about this subject, can you name one or two specific articles they should read?

OSM: Björk B, Solomon D. The publishing delay in scholarly peer-reviewed journals. Journal of Informetrics 2013;7:914-923. DOI: 10.1016/j.joi.2013.09.001

Azar O. Rejections and the importance of first response times. International Journal of Social Economics. 2004;31(3):259-274. DOI: 10.1108/03068290410518247


EASE:  Are there any websites or other resources related to your paper they should seek out?

OSM: 
www.omarsabaj.cl

 ------------------------------------------------------------------------------------------------------------

Read Omar’s article in the full November issue of the ESE Journal archive on the EASE website here.

Omar can be found on Twitter at @omi_sabaj

Interview conducted by Duncan Nicholas of the EASE Council.

Thursday, April 28, 2016

A - ESE Author Q&A: Lisa Colledge



In August, European Science Editing featured an article by Lisa Colledge and Chris James from the research metrics team at Elsevier. The paper, titled A “basket of metrics”—the best support for understanding journal merit, deals with one of the most interesting and pressing elements of scholarly publishing and academia – the use of statistics to assess the value of published research.

We spoke to Lisa to understand more about the importance of the paper published in ESE, her work at Elsevier and projects she is involved in that impact academic communities.

The Colledge & James paper appeared in European Science Editing 41(3) in August 2015, and can be openly downloaded from the EASE site here.
 

EASE:  Please introduce our readers to your ESE article.

Lisa Colledge: The paper is about using a “basket of metrics” to understand merit. Using several metrics gives more varied and nuanced insights into merit than is possible by using any one metric. The basket applies to every entity, not only journals, but also researchers, articles, and institutions, and we describe the various ways in which an entity can be excellent.

For a journal, of course, this is about the papers it publishes and how these are viewed and cited, but also about the characteristics of its editorial and author communities, and its impact on the general public. We shared survey results that tested opinions about usage metrics, and these confirm that one size of metric does not fit all and that there is a real appetite to use a basket of metrics.


EASE: What is your main job role?

LC: Director of Research Metrics at Elsevier.

EASE: How long have you been involved in this area?


LC: I have been in this position since October 2014, but have been working with research metrics in various roles in Elsevier since 2006.

EASE: What are some of the innovative aspects you could tell us about your work?

LC: The most innovative aspect is about making research metrics practical, so that they can be used for the benefit of research by everyone, beyond the very talented and specialized scientometricians.

EASE: What is the difference between scientometrics and bibliometrics? Or are these two terms interchangeable?

LC: I believe that these terms are often used interchangeably, but there are differences:

-  Bibliometrics refers to the metrics that you can derive from written information, typically that held in a library. It includes metrics about journals and other serials, counts of items published, counts of citations, and metrics that depend on affiliation information (like the amount of international collaboration).

-  Scientometrics refers to metrics that reflect the performance of science and technology. This encompasses bibliometrics, but goes further. Scientometrics includes an analysis of funder sources / funding, insights into the commercialization / links of research with enterprise, metrics about online views or discussions in F1000 or Twitter, influence of research on national or international policy and medical guidelines, for example.

When I talk about “metrics” I am using it as shorthand for the broadest picture of research – scientometrics, but definitely including bibliometrics which continue to be hugely important.


We have developed, through our community engagements, the “2 Golden Rules” for making research metrics usable: Golden Rule 1 is to always use quantitative, metrics-based input alongside qualitative, opinion-based input, and Golden Rule 2 is to ensure that the quantitative, metrics part of your input always relies on at least 2 metrics to prevent bias and the encouragement of undesirable behaviour.  Championing this approach with the community, by embedding it throughout our tools, gets me out of bed in the morning.

EASE:  When you talk of your Golden Rules, it might be helpful for our readers if you could give an example of two metrics you would use to substantiate a quantitative measure or assessment?

LC: A metric is a numerical value that represents some aspect of journal performance. There are all kinds of aspects of journal performance that you can represent as a number, for instance:

-  Number of submissions or items published are 2 examples of metrics. You could use each of these to calculate a third metric – growth.


-  Number of citations or online views per item published are 2 further examples.


-  Number of mentions in mass media, number of shares in scholarly tools like CiteULike and Mendeley, and number of times a journal’s content is discussed in F1000, are 3 further examples.


There are more examples given in the paper, in Figure 1, which is probably an easier way of communicating the information. The point is that there is not only one way for a journal to be excellent, and you wouldn’t want a lot of journals that were all excellent in the same way. There are different ways of being good, and so you need different types of metrics (numbers) to reflect a more complete version of the picture – that’s Golden Rule number 2.

Golden Rule number 1 is that metrics can never give you the complete picture, no matter how many metrics you have, and that for this you need to combine them with opinion, expertise and judgement such as peer review (but neither can opinion, expertise and judgement give you the complete picture – you need to combine those with metrics).


Figure 1. A “basket of metrics” for understanding journal performance. From “A ‘basket of metrics’—the best support for understanding journal merit”, by L. Colledge, 2015, European Science Editing 41(3), 61. Copyright 2015 by the European Association of Science Editors.



EASE:  When you talk of your Golden Rule 2, could you give an example of two metrics you would use to substantiate a quantitative measure or assessment?

LC:  Yes! The example is about Field-Weighted Citation Impact (FWCI) and Citations Per Publication (CPP):
-  FWCI is a very popular metric. It takes into account the different volume and speed of citations received by articles in different fields, of different types (e.g. an article as compared to a review), and of different ages; these are variables that can hide real differences in performance if they’re not taken into account, so this is a common go-to metric. An FWCI of 1 means a journal or institution, or whatever, is cited exactly as you would expect; above 1 is above-average citations, below 1 below average – if your FWCI is 2.63 it means you’re at 263% of the expected citation rate.

-  That’s useful information, but like all metrics FWCI has weaknesses. The normalization by field, type and age makes the method quite complex and it is not easy for someone to validate the calculation themselves. Another weakness is that 2.63 doesn’t tell you anything about the number of citations you’re talking about – it could be 3 citations, or 33, or 333.


-  Simply pairing FWCI with CPP addresses these weaknesses of FWCI. CPP is a simpler metric that can be checked in the database it’s based on, and it tells you whether you are talking about 3 citations, or 33, or 333 per publication.


-  Equally, using CPP on its own wouldn’t compensate for the differences in field, type and age, and wouldn’t give you any indication of whether 33.7 citations per publication was “good” (above average) or not – but if you combine it with FWCI, you solve this easily. (A toy calculation illustrating this pairing follows below.)
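
To make the pairing concrete with invented numbers (these figures are illustrative, not from the interview, and Elsevier's actual FWCI calculation normalizes article by article): if a journal's 300 papers have attracted 900 citations, its CPP is 900 / 300 = 3.0; if comparable papers of the same field, type and age would be expected to collect 2.0 citations each on average, the field-weighted value is 3.0 / 2.0 = 1.5, i.e. citation performance at 150% of the expected rate. CPP gives the absolute scale, while the field-weighted ratio says whether that scale is above or below expectation.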


EASE: What do you feel are your most significant work-related achievements?

LC: When I talk about the “2 Golden Rules” with members of the research community, they are seen as common sense – practical and sensible – and I find that that is a huge achievement.

I’ll also highlight the pride I feel in an input to and an output of the 2 Golden Rules:

Snowball Metrics is one of the inputs that has led to the development of the 2 Golden Rules, and I am privileged to have been involved in that project: Eight of the world’s most prestigious universities have reached consensus on how they want their performance to be measured so they can benchmark themselves against each other, apples to apples. The most-used metrics in SciVal, a flexible benchmarking tool, are Snowball Metrics, proving that they really resonate.

SciVal is an output of the 2 Golden Rules, and offers unparalleled flexibility to users to support their diverse questions in an intuitive way.

EASE: Do you have any interesting projects in the next year or so, that you are able to speak about?

LC: The “basket of metrics” is the logical outcome of Golden Rule 2. It describes, firstly, a wide range of research metrics to showcase many different ways of being excellent; and, secondly, it says that this range of metrics should be available for all of the entities that people want to measure, such as researchers, journals, institutions, and funders. The basket can provide useful intelligence for every question. We are currently focusing on extending the range of metrics that we offer to include novel metrics such as media mentions and Mendeley readership, and also on improving the presentation of the metrics available for serials such as journals and conference proceedings.

EASE: Are you a member of EASE?

LC: Elsevier has individual memberships of EASE, and we are exploring options for further engagement. We are looking forward to participating in future EASE conferences.

EASE: What motivated you to write for European Science Editing?

LC: Chris James, my co-author, and I were invited to extend a short article that we had prepared for Elsevier Connect. The paper was about attitudes to metrics based on usage data, created when a user makes a request to an online service to view scholarly information. We jumped at the chance to write this “basket of metrics” article for ESE because we were able to put more context around the short article, and to reach the very important audience of journal editors who are so influential in building opinion in research.

EASE: In what way is the topic of your paper important to you?

LC: Until the end of 2014, our tools were largely based on the well-known publication, citation and collaboration metrics. The addition of usage metrics to our offerings early in 2015 felt like the first practical test of the concepts of the 2 Golden Rules, and the basket of metrics. It was extremely exciting for me and Chris to be able to talk about usage metrics, and see the feedback on the questions we asked during the webinar coming in from attendees all over the world. The feedback was extremely positive, and validated and extended the concepts I’ve mentioned.

EASE: What impact do you hope this paper could have, and what changes could it make?

LC: I hope this paper helps to drive 3 changes:


-  Acceptance that metrics do have a place in research alongside peer review - definitely not instead of it.


-  Belief that using research metrics is common sense. They can help to answer questions, and build confidence in an answer.


-  Recognition that there is no such thing as the “best metric”. That’s nonsense, and it’s a waste of time to think and talk about it – every metric has weaknesses, and no metric is suitable for every question. It’s much more useful to think about the best metrics, plural, that can help to answer specific questions.

EASE: If people want to read more about this subject, can you name one or two specific articles they should read?

LC: You can find out more about how usage metrics can be helpful to you in this article “5 ways usage metrics can help you see the bigger picture”.

Elsevier's position on the use of research metrics in research assessment is described in 12 guiding principles, and our response to the final report to which these principles contributed is available here in Elsevier’s approach to metrics.

EASE: Are there any websites or other resources related to your paper they should seek out?

LC: The Snowball Metrics recipe book, available at www.snowballmetrics.com/metrics.
-------------------------------------------------------------------------------------------------------------



Lisa can be found on Twitter at @lisacolledge1 and @researchtrendy.

You can find Lisa’s article in the full August issue of the ESE Journal archive on the EASE website here.



Previous interviews in the ESE Author Q&A series can be found here.


Interview conducted by Duncan Nicholas of the EASE Council.