President’s Blog

Posted on December 22, 2014
John Challis

The ISB maintains a website, some areas of which are accessible to anyone, while other parts are accessible to members only.  Our most recent congress proceedings are available via that website.  Originally we made the proceedings accessible to members only; it was a benefit of membership.  Recently we changed this policy, so the proceedings are now accessible to everyone.  Our rationale was that it is in our members’ best interests to have their research available to as many people as possible, and by making this portion of the website open access we allow various indexing services to add the work presented at our congresses to their databases, raising the profile of both the research and the Society.

Submission deadlines seem to come around very quickly.  I always imagine that the 24 hours before the deadline for submission of abstracts to an ISB Congress are the most productive period for biomechanists around the world.  Of course the pressure of a deadline can be very motivating, but it can also lead to mistakes.  A recent publishing error may well have been due to rushing to meet a deadline.  In the main body of a paper titled “Variation in Melanism and Female Preference in Proximate but Ecologically Distinct Environments,” recently published in the journal Ethology, the following statement was made:

“Although the association preferences documented in our study theoretically could be a consequence of either mating or shoaling preferences in the different female groups investigated (should we cite the crappy Gabor paper here?), shoaling preferences are unlikely drivers of the documented patterns…”

Clearly there was a breakdown in the checking of the manuscript prior to submission, and presumably at one or more of the following stages of the publication process:  peer review, editorial review, copyediting, and/or the checking of proofs.  That version of the paper is no longer available, but the current version is accompanied by the note “This article has been updated since first published on 12 July 2014 and subsequently replaced due to inclusion of an author's note not intended for publication”.  As biomechanists we are poorly placed to assess Caitlin Gabor’s work in sociobiology, but the paper referred to parenthetically has been cited over 60 times.

Many evaluate journals based on their impact factor.  The impact factor for a given year is calculated as the number of citations received in that year by papers published in the journal in the preceding two years, divided by the number of citable items published in the journal over those same two years.  An impact factor of four means that papers published in that journal are cited, on average, four times within that window.  The impact factor for the Journal of Biomechanics is around 2.5, while Nature and Science have impact factors over 30.  There are other, more subtle measures of journals, including the Eigenfactor, although one I like is the Retraction Index.  This is a measure of the frequency with which papers are retracted from a journal after publication.  Interestingly, this index has a strong positive correlation with the impact factor!
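As a sketch of the arithmetic, here is the calculation with made-up numbers (the counts below are purely illustrative, not actual figures for any journal):

```python
def impact_factor(citations_this_year, citable_items_prior_two_years):
    """Impact factor for year Y: citations received in Y to papers the
    journal published in years Y-1 and Y-2, divided by the number of
    citable items the journal published in Y-1 and Y-2."""
    return citations_this_year / citable_items_prior_two_years

# Hypothetical journal: 500 citations received in 2014 to its
# 2012-2013 papers, of which 200 were citable items.
print(impact_factor(500, 200))  # 2.5
```

Note that the average hides a highly skewed distribution: a few heavily cited papers can dominate the numerator, which is part of why the impact factor says little about any individual article.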

The value of a paper is not accurately reflected by the impact factor of the journal in which it is published.  The citation rate varies between research areas, and so does the number of papers cited per paper.  For example, in cell biology a paper published in a leading journal might receive 10 to 30 citations within two years of publication, while in a leading mathematics journal a paper would be doing very well to receive 2 citations.  To address the growing reliance on impact factor as a metric for the quality of a paper, a group of journal editors and publishers met at the 2012 Annual Meeting of the American Society for Cell Biology.  They discussed how the quality of research is assessed and how research is cited.  The result of this meeting was the San Francisco Declaration on Research Assessment, which was published in 2013 and is available on the web.  There were 18 recommendations; here I will highlight just five.  The first is the report’s general recommendation:

  • Do not use journal-based metrics, such as journal impact factors, as a surrogate measure of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions.

The other four relate to individuals:

  • When involved in committees making decisions about funding, hiring, tenure, or promotion, make assessments based on scientific content rather than publication metrics.
  • Wherever appropriate, cite primary literature in which observations are first reported rather than reviews in order to give credit where credit is due.
  • Use a range of article metrics and indicators in personal/supporting statements, as evidence of the impact of individual published articles and other research outputs.
  • Challenge research assessment practices that rely inappropriately on journal impact factors and promote and teach best practice that focuses on the value and influence of specific research outputs.


These recommendations will hopefully be helpful as you sit on committees assessing research, or as you select a journal for submission of your work, or as you cite papers in your manuscripts.




John Challis

Penn State University

