
Tuesday, September 13, 2005

Cost in Radiology

Evidence-Based Radiology -- A Primer for Referring Clinicians and Radiologists to Improve the Appropriateness of Medical Imaging CME/CE
Author: Bruce J. Hillman, MD

Medscape
Fostering Evidence-Based Radiology -- Introduction and Rationale

Over the past 20 to 35 years, medicine has witnessed a remarkable progression of imaging innovations, such as x-ray computed tomography (CT), magnetic resonance imaging (MRI), positron emission tomography (PET), digital radiography, teleradiology, and innumerable image-guided interventional procedures. These technologies have made healthcare less invasive and hence safer, have increased access to expert diagnosis for people living in remote areas, and have promoted earlier detection of life-threatening disease, allowing for improved health outcomes. Medical imaging has revolutionized patient care and has saved countless lives.

However, the very rapid progression of imaging technology has come at a significant cost. Clinical researchers have been unable to keep pace, and as a result, only a modicum of scientific evidence supports many of today's imaging applications. Most radiologic practice is based on anecdote, habit, and a literature that is heavily weighted toward uncontrolled observations and nongeneralizable single-institution research studies. There is genuine reason for concern that a sizable fraction of the medical imaging performed is of marginal value or frankly inappropriate. Inappropriate use means potential harm to patients and wasted expense.

Policymakers and payers, and lately radiologists themselves, have become acutely aware of the issue of expense. A recent report of the Centers for Medicare and Medicaid Services Medical Payment Advisory Commission (MedPAC) noted that the cost of medical imaging was rising faster than that of any other physician service.[1] According to MedPAC, from 1999 through 2002, imaging rates grew at 10.1% per year versus 5.2% for all physician fee-schedule services. During this same period, use of high technology imaging -- CT, MRI, and imaging in nuclear medicine -- increased by 15% to 20%, up to 4 times the rate of other physician services. The report indicates that these trends continue unabated. Private insurers are witnessing the same phenomenon[2]: Medical imaging now consumes 10% to 15% of their payments to physicians, compared with less than 5% only a decade ago.

There are several reasons why use of medical imaging could be increasing so rapidly, some of which could be addressed by altered health or reimbursement policy and others that would be less open to policy modifications. Among these are:

1. The aging of the US population. People are living longer, and with aging comes an extended opportunity to develop chronic conditions that are increasingly responsive to medical and surgical treatment. Virtually all of these conditions promote the use of imaging for diagnosis, determination of severity, and follow-up to evaluate the effects of therapy or to guide percutaneous treatment.

2. The structure of the US healthcare system. In the United States, employers (or the government, in the case of the elderly and the poor) provide health insurance benefits, principally by fee-for-service mechanisms. Both patients and physicians are offered incentives for high-level use of medical services. Americans have become conditioned to erroneously believe that more healthcare is better than less, so they demand services that may be inappropriate; physicians, in turn, earn more money the more services they provide. Studies have shown that patients have a profound influence on the ordering patterns of physicians.[3]

3. Insufficient evidence. Much of the marginal and inappropriate use of imaging is due to the lack of available evidence on which to base decisions about what tests to order in a given clinical situation. Moreover, even when such information is available, the referring physician is often unaware of it. In addition, if research shows that a particular imaging application is useful for a specific clinical condition, physicians may incorrectly assume that the same test must be effective for another, similar condition, even if no valid research supports this conclusion. This is referred to as indication creep.

4. The nature of radiology. Radiology is a referral specialty. As such, radiologists may be reluctant to challenge referring clinicians about inappropriate referrals because they fear that doing so too frequently may cost them future referrals.

5. Defensive medicine. Defensive medicine is the practice of requesting tests to rule out a disease state, even though the clinician feels there is little chance that the patient actually has the disease. Defensive medicine is practiced to minimize the risk that the patient will sue the physician if a condition is not diagnosed. No one really knows the magnitude of this problem, but it is almost certainly significant.

6. The trend toward "certainty." Being a physician means acting under conditions of uncertainty. Despite this, in US medical culture, certainty as close to 100% as possible has become desirable to support treatment decisions. Imaging has become a preferred approach for increasing certainty, even though ordering a diagnostic test when there is too little or too much certainty has limited value to the patient.

7. Self-referral. Self-referral occurs when nonradiologist physicians refer their own patients to receive imaging technologies in which they have a financial interest. A significant body of research has shown, almost uniformly, that self-referral results in greater use of imaging than occurs when clinicians refer patients to radiologists.[4] In a 2004 report, MedPAC noted that 52% of medical imaging is performed by nonradiologists, and this has been the case for some time.[5] The great concern of policymakers is that clinicians are increasingly using a loophole in the anti-self-referral regulations to place expensive, high technology equipment, such as CT and MRI scanners, in their offices. There is less control over imaging quality in outpatient offices than in hospital settings, and self-referral has helped to drive the increased use of these technologies.[1]

Beyond concerns over the costs associated with the rapidly increasing rates of medical imaging use, there are legitimate concerns about how this phenomenon is affecting patients' health. All medical imaging tests perform imperfectly, and the application of even the most sophisticated imaging technology results in both false-positive and false-negative diagnoses. This error rate is only poorly understood for most technologies, since there often has been little rigorous testing in a generalizable population of physicians and patients.

The problem inherent in false-negative diagnoses is self-evident, while the potential for harm with false-positive diagnoses is a bit more complicated. In the case of false-negatives, patients will be incorrectly told that they do not have a serious disease and will receive false reassurance about their health. Because of this, they may fail to heed subsequent symptoms, thinking that the problem is "in their head." By the time they realize that they do need medical attention, the situation may be serious enough that treatment is less effective or will cause greater morbidity than if the condition had been discovered on the initial imaging examination.

Perhaps counterintuitively, false-positive diagnoses may have an even greater potential for harm. Because of the medicolegal environment, and also because of the quest for certainty (2 phenomena detailed earlier), it has become increasingly rare for further work-up to be avoided if an abnormality is discovered on an imaging examination. Some such findings will eventually be shown to represent real and treatable disease. However, many others will eventually prove to simply be normal variants. Others still will be what are called incidentalomas -- findings that are real but are meaningless to the health of patients.[6,7] Finally, some imaging findings will represent pseudodisease: real abnormalities -- some even serious conditions -- for which treatment will not affect patient outcome.[7] Regardless of whether the positive findings on an imaging test are meaningful, they almost inevitably lead to further imaging examinations (some of which will be more invasive) and even, in some cases, unnecessary treatment. As an extreme example of this phenomenon, one decision analytic model of whole-body CT screening has shown that the downstream diagnostic and therapeutic interventions that result from false-positive diagnoses are the principal drivers of costs associated with population-based imaging screening.[8]

As imaging innovations continue to improve in their spatial and contrast resolution, the problem of false-positive diagnoses may grow more severe.[7] "Seeing more" does not always translate to improved care.


Unveiling the Technology Assessment Hierarchy

When new imaging technologies begin to be used in clinical settings, little is known about their potential to improve care. The radiologists who first work with these technologies are most interested in exploring their technical and medical capabilities. The earliest reports generally describe how well structures can be seen on the images; case reports and small series may detail the detection of abnormalities. Later publications may present larger series of patient observations. One concern is that this phase of evaluation seems to last longer than it should.[9,10] Case series are prone to serious selection and validation biases that skew the results in favor of the technology, presenting a more optimistic picture of performance than is reasonable. Ideally, for the sake of improving the appropriateness of medical imaging, one would hope for more rapid progression to what can be called scientific clinical research or technology assessment.

Drs. Dennis Fryback and John Thornbury were among the pioneers of diagnostic imaging technology assessment. They proposed a hierarchy for the scientific assessment of technology that has stood the test of time with only minimal rethinking. This hierarchy can serve as a framework for understanding how the assessment of a technology relates to its diffusion and clinical implementation.[11]

The first level of evaluation can be called diagnostic efficacy. The appropriate research question to ask in this phase, while the technology is just beginning to diffuse into clinical practice or when a substantive advance in the technology occurs, is "How well does the new technology detect specific disease conditions?" The measures of effectiveness are sensitivity (true-positive rate), specificity (true-negative rate), positive and negative predictive value (given a certain prevalence of disease in the study population, the likelihood that a positive or negative test result means that the patient does or does not have the disease), and receiver-operating characteristic curve analysis.[12,13]
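To make these definitions concrete, here is a minimal sketch (in Python, with hypothetical counts) of how the first 4 measures are computed from a 2 x 2 table of test results against true disease status:

# Minimal sketch: the basic measures of diagnostic efficacy computed
# from a 2x2 confusion matrix. All counts are hypothetical.

tp, fn = 45, 5    # diseased patients: test positive / test negative
fp, tn = 10, 90   # healthy patients: test positive / test negative

sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
print(f"PPV={ppv:.2f}, NPV={npv:.2f}")

Note that sensitivity and specificity are properties of the test itself, whereas the predictive values depend on the prevalence of disease in the study population: the same test applied to a lower-prevalence population would yield a lower positive predictive value.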

While diagnostic efficacy is principally of interest to radiologists, referring clinicians may be more interested in how the information derived from an imaging test affects how they care for patients, represented by the concepts of diagnostic thinking and therapeutic thinking efficacy. In the former case, researchers ask, "How does requesting this test affect clinicians' diagnostic considerations?" For therapeutic thinking, the corresponding question relates to the effect of imaging on considerations of treatment. Physicians are fairly good at quantifying their level of certainty in terms of percentage probability. Eliciting these estimations before and after imaging can help determine what imaging adds to diagnostic and therapeutic decision making.[14,15]
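One way to formalize what a test "adds" to diagnostic thinking is the likelihood-ratio form of Bayes' theorem, which converts a clinician's pre-test probability estimate into a post-test probability. The sketch below uses hypothetical sensitivity and specificity values:

# Illustrative sketch: how a test result shifts a clinician's probability
# estimate, using the likelihood-ratio form of Bayes' theorem.
# The sensitivity/specificity values are hypothetical.

def post_test_probability(pretest_p, sensitivity, specificity, positive=True):
    """Update a pre-test probability given a positive or negative result."""
    lr = (sensitivity / (1 - specificity) if positive
          else (1 - sensitivity) / specificity)
    pre_odds = pretest_p / (1 - pretest_p)
    post_odds = pre_odds * lr
    return post_odds / (1 + post_odds)

# A 30% pre-test suspicion, with a test of sensitivity 0.90 / specificity 0.85:
print(post_test_probability(0.30, 0.90, 0.85, positive=True))   # ~0.72
print(post_test_probability(0.30, 0.90, 0.85, positive=False))  # ~0.05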

Ultimately, when society is determining whether an imaging test is appropriate, the most important variable is how the test will affect patients' health in a particular clinical circumstance. Health outcomes research in imaging should be undertaken only for technologies that are relatively well diffused into clinical practice and for which neither the technology nor the mode of practice is likely to change substantively in the near term. This is because such studies tend to be time-consuming and expensive, and the outcomes of the study may lack pertinence if the technology or mode of use has changed too drastically by the time the results emerge.

Evaluating health outcomes is a tricky business. Imaging usually represents only one or a few steps in a chain of diagnostic and therapeutic interventions, so how can we ascribe an outcome to any one of these? The performance of an imaging test may be excellent, but patients might still have adverse outcomes because the treatment was inappropriate or no adequate treatment exists. As a result of all of these factors, outcomes evaluations of imaging technologies are rare.

We would also like to know how much a successful technology will cost. This allows us to see, regardless of the health benefit, whether society can afford to implement it on a broad scale. Since cost is relative, researchers generally relate it to how much benefit is obtained for how much money. They have developed the ratio of cost per years of life saved and refer to this ratio as a technology's cost-effectiveness.[16-18] Researchers further modify the benefit by considering quality of life, recognizing that each additional year of life with a debilitating condition may not have the same value to a patient as a year of perfect health. In support of this, methodologists have developed approaches to measure that decrement in quality[19], and thus the ratio becomes cost per quality-adjusted life-years saved, or cost per QALY.
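Written out, the ratio itself is simple; the difficulty lies in obtaining credible inputs. A hypothetical sketch of the incremental cost-per-QALY calculation for a new imaging strategy versus standard care:

# Hypothetical sketch of the cost-per-QALY ratio described above,
# comparing a new imaging strategy against standard care.

cost_new, cost_std = 12_000.0, 8_000.0   # expected cost per patient (USD)
qaly_new, qaly_std = 6.4, 6.1            # expected quality-adjusted life-years

# Incremental cost-effectiveness ratio: extra dollars per extra QALY gained.
icer = (cost_new - cost_std) / (qaly_new - qaly_std)
print(f"${icer:,.0f} per QALY gained")   # ~$13,333 per QALY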

Cost studies are technically challenging. There is a tendency for studies of cost in the radiology literature to be naive, if not misleading. For the same reasons as noted for studies of health outcomes, cost studies of imaging methods should focus on relatively mature, well-diffused, and stable technologies.

Given the difficulties of conducting health outcomes and cost studies, how can we learn the potential of an emerging technology early enough to begin to understand its appropriate use? One approach that has often been applied is decision analysis.[20] Decision analysis subsumes several specific methodologic constructs that are directed at modeling real-life clinical situations. As a result, the first step in developing a decision analysis is usually to develop a clinical model that is necessarily a simplification of real life but is rationalizable to experts. Such a model will usually allow for both "decision points" (eg, whether to conduct a certain diagnostic test or choose among therapeutic options) and "chance points," which are dictated by the nature of the disease condition and patient responses to diagnostic or treatment decisions. Probability estimates, based on a review of the literature, expert opinion, or both, are then entered into the branches of the model. As I have noted throughout, such estimations may be faulty, particularly for new technologies, introducing error into the analysis. As such, decision analysis provides only a rough estimate of the possible cost-effectiveness of the technology (a toy example of folding back such a model appears after the list below). Even so, decision analysis provides us with the following important information:[21]

* A rough estimate of the upper and lower limits of a technology's cost-effectiveness;

* A model that may be employed iteratively over time as new and better information about the performance of a technology becomes available;

* Insight as to whether information from clinical trials will, if positive, produce a satisfactory level of cost-effectiveness; and

* Information on what specific aspects of the performance of the technology are most ambiguous, so that clinical trials can be simplified by narrowing their focus.
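As a toy illustration of the fold-back calculation referred to above, the following sketch evaluates a single decision point (image vs. do not image) whose branches are chance points. Every probability, cost, and QALY value is an invented placeholder, not an estimate from the literature:

# Minimal decision-analysis sketch: a chance node is a list of
# (probability, outcome) branches; the model "folds back" by expectation.
# All probabilities, costs, and QALYs are hypothetical placeholders.

def expected(branches):
    """Fold back a chance node: probability-weighted average of (cost, QALYs)."""
    return tuple(sum(p * o[i] for p, o in branches) for i in range(2))

# Outcomes are (cost, QALYs). Chance node for the "image" strategy:
image = expected([
    (0.10, (15_000, 5.0)),   # true positive -> early treatment
    (0.02, (9_000, 4.0)),    # false positive -> needless work-up
    (0.01, (4_000, 3.5)),    # false negative -> delayed diagnosis
    (0.87, (1_000, 6.0)),    # true negative -> reassurance
])

# Chance node for the "do not image" strategy:
no_image = expected([
    (0.11, (8_000, 4.2)),    # disease present, found late
    (0.89, (200, 6.0)),      # disease absent
])

# Decision node: compare the strategies' expected costs and QALYs.
print("image:    cost=%.0f QALYs=%.2f" % image)
print("no image: cost=%.0f QALYs=%.2f" % no_image)
icer = (image[0] - no_image[0]) / (image[1] - no_image[1])
print(f"ICER: ${icer:,.0f} per QALY")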



The Power of Critical Thinking

In the last section, I explained how and why new imaging technology is assessed. In this section, I make the case that even though convincing scientific evidence is often lacking for a specific clinical application of an imaging technology, physicians can use imaging more appropriately by familiarizing themselves with some simple concepts to enhance their critical thinking.[9,22]

When confronting a clinical dilemma for which an imaging test may be useful, the referring clinician has 2 options. If this is likely to be a rare situation -- in the physician's experience, one unlikely to recur in the near term -- the best thing the physician can do is call his or her local radiologist for advice on the most appropriate examination. Radiologists have had at least 1 year of clinical training and 4 years of training in imaging, and many radiologists have had additional fellowship training in a subspecialty area. In my opinion, referring clinicians use the consulting expertise of radiologists far too rarely. As a result, they may begin with the wrong test. As noted earlier, choosing an inappropriate initial test can lead to both false-negative and false-positive findings with associated patient morbidity and unnecessary expense. Consultations should consist of a thorough discussion of the patient's clinical signs and symptoms that extends far beyond the usually insufficient written history accompanying most imaging requests.

For more commonly encountered patient presentations, physicians will wish to familiarize themselves with the imaging literature. Articles in the radiology literature are organized in the same way as those in other specialties: Introduction, Methods, Results, and Discussion. For the purposes of thinking critically about whether an article is useful, all of these sections are important and should be read in detail. I recognize that many physicians will usually skim the Methods section or skip over it entirely, but it is necessary to understand and consider methodologic information if we are to establish more appropriate patterns of imaging referral.

The Introduction allows the reader to begin to think about whether the contents of the article relate to the clinical situation in which he or she is interested (this is called pertinence -- see the following discussion). The Methods section is critical to further determining pertinence, as well as the validity, reliability, and generalizability (see the following discussion) of the study, or, in other words, whether the reader should believe the results and use the technology for his or her patient.

The Results section should fulfill what is promised in the Introduction and Methods sections -- no more, no less. Finally, the Discussion section should place the principal findings of the study in the context of previous work, discuss how biases and other influences might be expected to affect the results, and indicate how the authors believe their findings should affect clinical practice or future research. This section should further help the reader to determine generalizability.

In detailing what readers of the radiology literature should be looking for, I have used the terms pertinence, validity, reliability, and generalizability, which I consider to be the 4 pillars of critical evaluation of medical articles.[22] Determining whether articles fulfill these criteria will go a long way in helping readers answer the 3 questions that Blackmore[9] proposed with respect to whether an imaging test should be used in a given situation: (1) Is it relevant? (2) Is it true? (3) Is the evidence sufficient?

Pertinence refers to whether the article sufficiently represents the same type of clinical situation facing the physician. Considerations should include aspects of the patient's clinical presentation, the population of patients being studied, characteristics of the radiologists who interpreted the imaging examinations, and aspects of the technology and how it was employed. As a hypothetical example, a physician who practices in a general community hospital in a midsized city wonders whether she should be using CT to evaluate a patient with newly diagnosed colon cancer for liver metastases. She finds an article in the literature that compares the use of CT with MRI for this purpose. The study was conducted at a major academic medical center that is a referral facility for unusual cases from a large surrounding region. The sensitivity, specificity, and areas under the receiver-operating characteristic curves are based on the performance of highly subspecialized radiologists whose only clinical practice is focused on gastrointestinal cancer. They use novel CT protocols that are not in general use. Clearly, the study conditions do not fully match the physician's setting and practice conditions. However, this alone may not discourage her, given that she can make some allowances for differences in performance. She decides to read on.

The Methods section is where the reader will learn the most about validity, reliability, and, to some extent, generalizability. Determining validity requires the physician to ask, "Was the study performed in such a way that I can believe the results?" Nearly all studies make concessions to the practical world that allow the insinuation of biases. The question the reader must ask himself or herself is, "Are the biases severe enough that I cannot believe the results?" Imaging assessments are prone to several biases (for a fuller description, see the paper by Obuchowski[23]), but 2 critical biases, selection bias and validation bias, are common.

Selection bias refers to the tendency of imaging researchers to select patients not because of their clinical presentations but because they have received certain imaging examinations. Following the example I used earlier regarding whether CT should be used to detect liver metastases, let us consider that the researchers reviewed all cases of colon cancer at their institution over a 5-year period to see which patients received both CT and MRI examinations and then included only those who had received both tests. Almost certainly, this is a biased sample, probably including a combination of patients who presented with symptoms directly referable to the liver and patients who had an ambiguous result on the first of the 2 tests (CT or MRI) that was performed. Many excluded patients (who will far outnumber the included patients) will have had one test or the other, and some might have had no imaging at all. The sample will not be representative of the larger population of individuals who have new diagnoses of colon cancer. In most studies where selection bias is present, the results will be more optimistic than in general practice.
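The mechanism is easy to demonstrate. The following toy simulation assumes that "overt" metastases (those producing symptoms referable to the liver) are both easier to detect and far more likely to prompt both examinations; restricting the analysis to patients who received both tests then inflates the apparent sensitivity well above its true value. All numbers are invented for illustration:

import random

random.seed(0)

# Toy simulation of selection bias: detection is easier for "overt"
# metastases, and overt cases are much more likely to receive both
# CT and MRI. A study enrolling only patients who had both tests
# therefore over-samples the easy cases.

N = 100_000
detected_all, detected_sel = [], []
for _ in range(N):
    overt = random.random() < 0.3        # 30% of metastases are overt
    sens = 0.95 if overt else 0.50       # per-case detection probability
    found = random.random() < sens
    got_both_tests = random.random() < (0.8 if overt else 0.1)
    detected_all.append(found)
    if got_both_tests:
        detected_sel.append(found)

print("sensitivity, all patients with metastases: %.2f"
      % (sum(detected_all) / len(detected_all)))      # ~0.64 (true)
print("sensitivity, only those given both tests:  %.2f"
      % (sum(detected_sel) / len(detected_sel)))      # ~0.85 (inflated)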

Validation bias also almost invariably produces false elevations in both sensitivity and specificity. Validation bias occurs because ethical and practical considerations often preclude using the same reference standard (sometimes called the "gold standard") to determine the "truth" about imaging diagnoses in all patients. To continue with the same example, patients with positive liver findings will usually undergo needle biopsies, or sometimes segmental resection. The reference standard for patients with positive results is the pathologic findings. For negative cases, however, it would be unethical to pursue an invasive procedure -- and where would you biopsy anyway? Usually, in such circumstances, "truth" is determined by a composite standard that combines reviewers' subjective correlation of prolonged clinical follow-up, subsequent imaging studies, and intervening outcomes. Clearly, however, the reference standard and the composite standard have different reliability. In the Discussion section, the authors need to address the ways in which validation bias might have affected their results.

It should go without saying that the astute reader will assure himself or herself that the radiologists interpreting the images are blinded to information that might bias their interpretations (such as previous or later imaging studies or the pathology results) and that the reviewers who determine "truth," by whatever standard, are blinded to the imaging interpretations (eg, the researchers have done all they could to avoid "review bias").

Information concerning reliability can be garnered from the Methods and Results sections. Reliability asks, "If this study were repeated, using similar technology, in a cohort of similarly selected patients in a similar setting, and if the images were interpreted by a similar group of radiologists, what are the chances that the result would come out about the same?" To assess reliability, it is useful to have a modicum of knowledge about basic statistics, particularly the concepts of P values and confidence intervals. Good researchers estimate in advance how many patients they will need in their study to determine with a particular level of certainty whether one technology is more effective than another (termed statistical significance) at a level that is clinically meaningful to patients (termed clinical significance). This is termed a power analysis.[24] However, this is not the case for all or even most studies reported in the imaging literature. As a result, a positive finding in our example could mean that CT really is superior to MRI for detecting liver metastases from colon cancer, or the result might just have occurred by chance; the likelihood of either scenario is indicated by the P value. Conversely, a negative result could mean that the technologies really are equivalent or simply that too few patients were included in the study (for a fuller explanation, see the report by Halpern and Gazelle[25]).
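For readers who want to see what a power analysis involves, the sketch below applies the standard normal-approximation sample-size formula for comparing two independent proportions (here, two sensitivities); the target values are hypothetical:

from statistics import NormalDist

# Sketch of a power analysis for comparing two sensitivities
# (eg, CT vs. MRI for liver metastases), using the standard
# normal-approximation formula for two independent proportions.

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Patients needed per arm to detect p1 vs. p2 (two-sided test)."""
    z = NormalDist().inv_cdf
    z_a, z_b = z(1 - alpha / 2), z(power)
    pbar = (p1 + p2) / 2
    num = (z_a * (2 * pbar * (1 - pbar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return num / (p1 - p2) ** 2

# Detecting sensitivity 0.90 vs. 0.80 takes roughly 200 patients per arm:
print(round(n_per_group(0.80, 0.90)))   # ~199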

Finally, if the reader is satisfied that the article is relevant and true, he or she should make sure that the results are generalizable ("Is the evidence sufficient to employ what is being recommended in clinical practice?"). For this judgment, readers must consider the study results, assess the impact of the biases evident in the description of the methods, and digest the information in the Discussion section concerning other research that has addressed the imaging application in question. There are formalized methods, such as meta-analysis,[26] for evaluating the strength of the literature and combining the results of multiple studies to determine whether an imaging technology is appropriate for a particular clinical question. However, these methods tend to be tedious and beyond the level of interest of most physicians. More frequently, it is the reader who must decide subjectively whether to use the imaging test by relying on the critical thinking skills he or she has developed for exactly this purpose.


Guidelines, Appropriateness Criteria, and Computer-Aided Ordering Systems

Wennberg and colleagues[27,28] were pioneers in demonstrating that there are large, unexplained variations in medical practice and resource consumption in the United States. By implication, large variation means that there is uncertainty about the best approach to a clinical problem, that physicians are not as up-to-date as they should be in changing their practice patterns according to currently accepted best practices, or that physicians are responding to external incentives (usually financial) that are motivating their behavior. Medical organizations issue clinical guidelines to make medical practice more uniform and appropriate. Guidelines codify what is believed to be the best approach for a clinical situation. The rationale is that even though the research evidence is imperfect, guidance on clinical practice based on a combination of existing evidence and expert opinion will improve patient care while research to improve that guidance continues. As such, to be valuable, practice guidelines should be regularly reviewed and updated to reflect best current knowledge.

With respect to medical imaging, the most respected guidelines aimed at improving the appropriateness of use are the Appropriateness Criteria of the American College of Radiology (ACR).[29] The Appropriateness Criteria process involves gathering a multidisciplinary panel of experts who are experienced with a particular clinical situation. Staff members research and distribute the pertinent literature before the panel convenes, and panel members proceed with iterative rounds of voting, distributing the results and discussing them after each round. This usually leads to a convergence of opinion, resulting in a recommendation that use of a technology for a given condition is "appropriate," "marginal," or "inappropriate." The ACR Appropriateness Criteria are available in several formats, including hard copy, CD, and Web-based versions and downloadable versions for use on a personal digital assistant.

Despite the availability of the Appropriateness Criteria, they seem to be underused. The reason for this is uncertain, but it may be related to the busyness of clinicians or the fact that they are attached to habitual, possibly outmoded ordering patterns. Also, the Appropriateness Criteria may still be too inconvenient to use despite the variety of formats available. The last of these possibilities is being addressed by automated imaging ordering systems that provide an interface among the ordering physician, the performance of the examination, and the radiologist's interpretation of the images. To use these systems, a requesting physician will have to provide sufficient information about the patient for the system to determine whether the most appropriate examination has been requested. When the clinical indication and the requested examination don't correlate, the request is denied and the physician will receive educational instruction on possible allowable options, including more appropriate imaging tests. He or she can then use this new knowledge to order imaging more appropriately, both for the case at hand and in the future.
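The matching logic at the heart of such a system can be sketched very simply. The table entries below are invented placeholders for illustration, not actual ACR Appropriateness Criteria ratings:

# Deliberately simplified sketch of the matching step in an automated
# ordering system: look up the requested exam against an appropriateness
# table keyed by clinical indication. All entries are invented placeholders.

APPROPRIATENESS = {
    ("uncomplicated headache", "head CT"):    ("marginal", ["clinical follow-up"]),
    ("acute head trauma",      "head CT"):    ("appropriate", []),
    ("chronic low back pain",  "lumbar MRI"): ("marginal", ["radiographs first"]),
}

def review_order(indication, exam):
    rating, alternatives = APPROPRIATENESS.get(
        (indication, exam), ("unknown -- consult radiologist", []))
    if rating != "appropriate":
        # Flag the request and educate the ordering physician.
        return (f"flagged: {exam} for '{indication}' rated {rating}; "
                f"consider: {alternatives or 'radiologist consultation'}")
    return f"approved: {exam} for '{indication}'"

print(review_order("uncomplicated headache", "head CT"))
print(review_order("acute head trauma", "head CT"))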

The concern of payers about accelerating imaging costs may also play a role in promoting the more universal use of imaging guidelines. Already, many payers require preauthorization for high technology imaging, reserving the right to preapprove any requests for such services as CT, MRI, and PET on the basis of the clinical indication. If a payer doesn't grant the preauthorization, there is no payment for the service. Some payers have hired third-party imaging management firms that have developed their own, often more stringent sets of appropriateness guidelines to manage imaging use on the payers' behalf. As the cost to payers for imaging continues to climb, even more rigorous approaches will doubtless be developed to ensure that all imaging is verifiably appropriate.

The Future: Evidence-Based Radiology

Climbing economic costs and concern over the quality and safety of care have necessitated reforms to improve the appropriateness of imaging. Solid research evidence to underpin more appropriate imaging is lacking for many conditions and technologies, but patient care can be improved by physicians becoming more critical readers of the imaging literature, by regular consultation between radiologists and referring clinicians about problem cases, and by adherence to available appropriateness guidelines.

References

1. Report to the Congress: Medicare Payment Policy. MedPAC. March 2005. Available at: http://www.medpac.gov/publications/congressional_reports/Mar05_EntireReport.pdf.
2. Korn A, Rothenberg BM. The opportunities and challenges posed by the rapid growth of diagnostic imaging. J Am Coll Radiol. 2005;2:407-410.
3. Wilson IB, Dukes K, Greenfield S, Kaplan S, Hillman BJ. Patients' role in the use of radiology testing for common office practice complaints. Arch Intern Med. 2001;161:256-263. Abstract
4. Kouri BE, Parsons RG, Alpert HR. Physician self-referral for diagnostic imaging: review of the empiric literature. AJR Am J Roentgenol. 2002;179:843-850. Abstract
5. Sunshine JH, Bansal S, Evens RG. Radiology performed by non-radiologists in the United States: who does what? AJR Am J Roentgenol. 1993;161:419-429.
6. Black WC, Welch HG. Advances in diagnostic imaging and overestimations of disease prevalence and the benefits of therapy. N Engl J Med. 1993;328:1237-1243. Abstract
7. Handrich SJ, Hough DM, Fletcher JG, Sarr MG. The natural history of the incidentally discovered small simple pancreatic cyst: long-term follow-up and clinical implications. AJR Am J Roentgenol. 2005;184:20-23. Abstract
8. Beinfeld MT, Wittenberg E, Gazelle SG. Cost-effectiveness of whole-body CT screening. Radiology. 2005;234:415-422. Abstract
9. Blackmore CC. Critically assessing the radiology literature. Acad Radiol. 2004;11:134-140. Abstract
10. Hillman BJ. Outcomes research and cost-effectiveness analysis for diagnostic imaging. Radiology. 1994;193:307-310. Abstract
11. Fryback DG, Thornbury JR. The efficacy of diagnostic imaging. Med Decis Making. 1991;11:88-94. Abstract
12. Hanley JA, McNeil BJ. The meaning and use of the area under a receiver operating characteristic (ROC) curve. Radiology. 1982;143:29-36. Abstract
13. Weinstein S, Obuchowski NA, Lieber ML. Clinical evaluation of diagnostic tests. AJR Am J Roentgenol. 2005;184:14-19. Abstract
14. Thornbury JR. Intermediate outcomes: diagnostic and therapeutic impact. Acad Radiol. 1999;6(suppl 1):S58-S65. Abstract
15. Tsushima Y, Aoki J, Endo K. Contribution of the diagnostic test to the physician's diagnostic thinking: new method to evaluate the effect. Acad Radiol. 2003;10:751-755. Abstract
16. Eisenberg JM. A guide to the economic analysis of clinical practices. JAMA. 1989;262:2879-2886. Abstract
17. Doubilet PD, Weinstein MC, McNeil BJ. Use and misuse of the term "cost effective" in medicine. N Engl J Med. 1986;314:253-256. Abstract
18. Carlos R. Introduction to cost-effectiveness analysis in radiology: principles and practical application. Acad Radiol. 2004;11:141-148. Abstract
19. Guyatt GH, Feeny DH, Patrick DL. Measuring health-related quality of life. Ann Intern Med. 1993;118:622-629. Abstract
20. Fineberg HV. Decision trees: construction, uses, and limits. Bull Cancer. 1980;67:394-404.
21. Weinstein MC, Siegel JE, Gold MR, Kamlet MS, Russell LB. Recommendations of the Panel on Cost-Effectiveness in Health and Medicine. JAMA. 1996;276:1253-1258. Abstract
22. Hillman BJ. Critical thinking: deciding whether to incorporate the recommendations of radiology publications and presentations into practice. AJR Am J Roentgenol. 2000;174:943-946. Abstract
23. Obuchowski NA. Special Topics III: bias. Radiology. 2003;229:617-621. Abstract
24. Eng J. Sample size estimation: how many individuals should be studied? Radiology. 2003;227:309-313.
25. Halpern EF, Gazelle SG. Probability in radiology. Radiology. 2003;226:12-15. Abstract
26. Irwig L, Tosteson AN, Gatsonis C, et al. Guidelines for meta-analyses evaluating diagnostic tests. Ann Intern Med. 1994;120:667-676. Abstract
27. Welch WP, Miller ME, Welch HG, Fisher ES, Wennberg JE. Geographic variation in expenditures for physicians' services in the United States. N Engl J Med. 1993;328:621-627. Abstract
28. Wennberg JE. Understanding geographic variations in health care. N Engl J Med. 1999;340:52-53. Abstract
29. ACR Appropriateness Criteria. Reston, VA: American College of Radiology; 2005.



Authors and Disclosures

As an organization accredited by the ACCME, Medscape requires everyone who is in a position to control the content of an education activity to disclose all relevant financial relationships with any commercial interest. The ACCME defines "relevant financial relationships" as "financial relationships in any amount, occurring within the past 12 months, that create a conflict of interest."

Medscape encourages Authors to identify investigational products or off-label uses of products regulated by the U.S. Food and Drug Administration, at first mention and where appropriate in the content.

Author

Bruce J Hillman, MD
Bruce J. Hillman, MD, Theodore E. Keats Professor of Radiology, University of Virginia School of Medicine

Disclosure: Bruce J. Hillman, MD, has disclosed no relevant financial relationships.

Editor

Robert Chevrier
Program Director/Site Editor, Medscape, Inc.

Disclosure: Robert Chevrier has disclosed no relevant financial relationships.



A consideration of why, in the United States, payments for imaging grew 10.1% per year while physician fees overall grew 5.2%, and the volume of high-tech studies grew 15% to 20%:

1. The aging of society (more elderly people means more patients seeking care)
2. The insurance payment system (fee-for-service)
3. Insufficient scientific evidence (firm evidence is scarce in diagnostics, and false negatives are hard to control)
4. The nature of radiology (as a central diagnostic service, refusing even apparently inappropriate imaging orders could mean losing future work)
5. Defensive medicine (even when the impression is normal, studies are ordered to avoid being sued for a miss on the grounds that no imaging was done)
6. The craving for certainty (in medicine, a practical discipline built on uncertainty, imaging is ordered as a means of gaining even a little more certainty)
7. In-office self-referred imaging (easy access and the need to recoup equipment depreciation increase unnecessary studies)
