I did some searches in the medical literature, and was a bit surprised to find that this sort of thing is a known, and fairly big, problem. The literature trail starts back at the beginning of PubMed, where the articles are scanned in from paper copies (so I would have to post images rather than text), and keeps right on going up to the present, despite all the attempts to improve things in the interim.
Here are some highly cited papers, which indicates that the problem is well known and a serious source of concern among mainstream academics. (Note that these citation counts are just for PMC articles, a minority subset of all PubMed articles.) It is not just a fringe group of idealistic reformers who are concerned, though I suspect it is a rather small minority among physicians at large.

CMAJ. 1986 March 15; 134(6): 587–594.
A framework for clinical evaluation of diagnostic technologies.
G H Guyatt, P X Tugwell, D H Feeny, R B Haynes, and M Drummond
Most new diagnostic technologies have not been adequately assessed to determine whether their application improves health. Comprehensive evaluation of diagnostic technologies includes establishing technologic capability and determining the range of possible uses, diagnostic accuracy, impact on the health care provider, therapeutic impact and impact on patient outcome. Guidelines to determine whether each of these criteria has been met adequately are presented. Diagnostic technologies should be disseminated only if they are less expensive, produce fewer untoward effects and are at least as accurate as existing methods, if they eliminate the need for other investigations without loss of accuracy, or if they lead to institution of effective therapy. Establishing patient benefit often requires a randomized controlled trial in which patients receive the new test or an alternative diagnostic strategy. Other study designs are logistically less difficult but may not provide accurate assessment of benefit. Rigorous assessment of diagnostic technologies is needed for efficient use of health care resources.
Jaeschke R, Guyatt G, Sackett DL. Users' guides to the medical literature. III. How to use an article about a diagnostic test. B. What are the results and will they help me in caring for my patients? JAMA. 1994;271:703–707. doi: 10.1001/jama.1994.03510330081039.
Cited by over 100 PubMed Central articles
Jaeschke R, Guyatt G, Sackett DL. Users' guides to the medical literature. III. How to use an article about a diagnostic test. A. Are the results of the study valid? JAMA. 1994;271:389–391. doi: 10.1001/jama.1994.03510290071040.
Cited by 86 PubMed Central articles
Lijmer JG, Mol BW, Heisterkamp S, Bonsel GJ, Prins MH, van der Meulen JH, Bossuyt PM. Empirical evidence of design-related bias in studies of diagnostic tests. JAMA. 1999;282:1061–1066. doi: 10.1001/jama.282.11.1061.
Cited by over 100 PubMed Central articles
The STARD Statement for Reporting Studies of Diagnostic Accuracy: Explanation and Elaboration
Patrick M. Bossuyt, Johannes B. Reitsma, David E. Bruns, Constantine A. Gatsonis, Paul P. Glasziou, Les M. Irwig, David Moher, Drummond Rennie, Henrica C.W. de Vet and Jeroen G. Lijmer
The quality of reporting of studies of diagnostic accuracy is less than optimal. Complete and accurate reporting is necessary to enable readers to assess the potential for bias in the study and to evaluate the generalisability of the results. A group of scientists and editors has developed the STARD (Standards for Reporting of Diagnostic Accuracy) statement to improve the quality of reporting of studies of diagnostic accuracy. The statement consists of a checklist of 25 items and a flow diagram that authors can use to ensure that all relevant information is present. This explanatory document aims to facilitate the use, understanding and dissemination of the checklist. The document contains a clarification of the meaning, rationale and optimal use of each item on the checklist, as well as a short summary of the available evidence on bias and applicability. The STARD statement, checklist, flowchart and this explanation and elaboration document should be useful resources to improve reporting of diagnostic accuracy studies. Complete and informative reporting can only lead to better decisions in healthcare.
In studies of diagnostic accuracy, results from one or more tests are compared with the results obtained with the reference standard on the same subjects. Such accuracy studies are a vital step in the evaluation of new and existing diagnostic technologies (1)(2).
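To make that concrete, here is a quick sketch with made-up counts (my own illustration, not taken from any of the papers above) of the basic 2x2 arithmetic behind "diagnostic accuracy", the kind of numbers the Users' Guides articles teach readers to interpret:

```python
# Toy 2x2 table (made-up counts): "diagnostic accuracy" boils down to
# comparing the index test against the reference standard on the same subjects.
tp, fp = 90, 30    # test positive: disease present / disease absent
fn, tn = 10, 170   # test negative: disease present / disease absent

sensitivity = tp / (tp + fn)               # P(test+ | disease present)
specificity = tn / (tn + fp)               # P(test- | disease absent)
lr_pos = sensitivity / (1 - specificity)   # positive likelihood ratio
lr_neg = (1 - sensitivity) / specificity   # negative likelihood ratio

print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")
print(f"LR+ {lr_pos:.1f}, LR- {lr_neg:.2f}")
```

The arithmetic itself is trivial; the whole concern in these papers is that the four counts feeding into it are easily distorted by how patients are selected, tested and verified.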
Several factors threaten the internal and external validity of a study of diagnostic accuracy (3)(4)(5)(6)(7)(8). Some of these factors have to do with the design of such studies, others with the selection of patients, the execution of the tests or the analysis of the data. In a study involving several meta-analyses, a number of design deficiencies were shown to be related to overly optimistic estimates of diagnostic accuracy (9).
Exaggerated results from poorly designed studies can trigger premature adoption of diagnostic tests and can mislead physicians to incorrect decisions about the care for individual patients. Reviewers and other readers of diagnostic studies must therefore be aware of the potential for bias and a possible lack of applicability.
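As a rough illustration of the mechanism (a toy simulation of my own, not an analysis from Lijmer et al.), here is what the textbook flaw of partial verification does to the apparent numbers when only a fraction of test-negative patients ever receive the reference standard:

```python
import random

# Toy simulation of partial verification bias (all parameters are assumptions):
# only some test-negative patients get the reference standard, so the apparent
# 2x2 table is built from a distorted sample.
random.seed(1)
N, prevalence = 100_000, 0.20          # assumed cohort size and disease prevalence
true_sens, true_spec = 0.80, 0.90      # assumed true test characteristics
verify_neg_fraction = 0.10             # assumed: only 10% of test-negatives verified

tp = fp = fn = tn = 0
for _ in range(N):
    diseased = random.random() < prevalence
    test_pos = random.random() < (true_sens if diseased else 1 - true_spec)
    # every test-positive gets the reference standard; test-negatives only sometimes
    verified = test_pos or random.random() < verify_neg_fraction
    if not verified:
        continue                        # unverified patients never enter the 2x2 table
    if test_pos and diseased:
        tp += 1
    elif test_pos:
        fp += 1
    elif diseased:
        fn += 1
    else:
        tn += 1

print(f"apparent sensitivity {tp / (tp + fn):.2f}  (true {true_sens})")
print(f"apparent specificity {tn / (tn + fp):.2f}  (true {true_spec})")
```

With these made-up parameters the apparent sensitivity comes out near 0.98 against a true 0.80, while the apparent specificity collapses to roughly 0.47; a reader can only catch this sort of distortion if the study reports who actually got the reference standard.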
A survey of studies of diagnostic accuracy published in four major medical journals between 1978 and 1993 revealed that the methodological quality was mediocre at best (8). Furthermore, this review showed that information on key elements of design, conduct and analysis of diagnostic studies was often not reported.