“We analysed 200 consecutive articles published in these journals between October 2010 and April 2011; 100 articles from the nSA [no Structured Abstract] journal, 50 articles from each of the two SA [with Structured Abstract] journals.”
“First, we analyzed the most popular journals because, in consideration of the high rejection rate of articles submitted every year, we expected that these journals would be the most controlled in the editorial and peer-review processes. According to the limited number of journals and articles considered, the results of our study could have a limited external validity. Second, we considered as errors only those that would not be subjective, thus we did not evaluate more methodologically relevant but less objective errors, as our aim was to detect system frailty not the consequences of errors.”
“Most of the articles were supported by a sponsor (115, 92 %), which was a private institution in 50 cases (43 %). In 25 articles (22 %) the sponsor analyzed the data itself.”
“Medical practice should be evidence based. The reliability of scientific evidence is sometimes threatened by factors extrinsic to the peer-review system. These factors may stem from unintentional bias and mistakes not detected by the peer reviewers, to the extreme instances of elaborate fraud. Publication bias is a recognized confounder in finding evidence, and in many areas, conflicts of interest have driven scientific recommendations”.
“Among the 125 articles included in the study, 102 (82 %, 95 % CI 74–88 %) contained some kind of error, even multiple.”
Source: Giorgio Costantino, Giovanni Casazza, Giulia Cernuschi, Monica Solbiati, Simone Birocchi, Elisa Ceriani, Piergiorgio Duca, Nicola Montano. Errors in medical literature: not a question of impact. Internal and Emergency Medicine, March 2013, Volume 8, Issue 2, pp. 157–160. DOI: 10.1007/s11739-012-0880-z
Abstract/ The editorial and peer-review processes should assure readers of the reliability of published data. The first step of these processes is to check for errors. The aim of our study was to look for the presence of objective errors in consecutive articles published in three of the most authoritative clinical journals. Two reviewers evaluated the presence of any error in 200 consecutive original articles containing at least two tables, allowing a reanalysis of the data, published between October 2010 and April 2011. An error was defined as any action different from what was planned. Errors were classified as methodological errors, numerical errors, and slips. They were considered severe if numbers in the abstract were completely different from numbers reported in the full text. Among the 125 articles included in the study, 102 (82 %, 95 % CI 74–88 %) contained some kind of error, sometimes more than one. Nine articles (7 %, 95 % CI 3–13 %) contained one slip, 92 articles (74 %, 95 % CI 65–81 %) contained at least one numerical error, and 22 articles (18 %, 95 % CI 11–25 %) contained one methodological error. Five articles (4 %, 95 % CI 1–9 %) contained one severe error. None of the errors retrieved (0 %, 95 % CI 0–2 %) would have changed the results of the studies. Most of the articles published in the most important medical journals contain mistakes. Our results point to weaknesses in the editorial and peer-review systems. A debate within the scientific medical community about these systems, and possible adjustments to them, is needed.
Keywords/ Errors, Peer review, Medical journals, Publications, Editorial system
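The abstract's headline figure, 102 of 125 articles with at least one error, is reported with a 95 % confidence interval of 74–88 %. The paper does not state which interval method was used; the sketch below uses the Wilson score interval (an assumption on my part), which reproduces the lower bound exactly and comes within a point of the reported upper bound (exact Clopper-Pearson intervals are slightly wider, which likely explains the 88 %):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score confidence interval for a binomial proportion.

    z = 1.96 corresponds to a 95 % confidence level.
    """
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# 102 of 125 articles contained at least one error (Costantino et al. 2013)
lo, hi = wilson_ci(102, 125)
print(f"82% (95% CI {lo:.0%}-{hi:.0%})")  # prints: 82% (95% CI 74%-87%)
```

The same function checks the abstract's other proportions, e.g. `wilson_ci(92, 125)` for the numerical-error rate of 74 % (95 % CI 65–81 %).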