
The lab made a mistake!

Everyone working in the laboratory cringes when these words are heard.  No one likes or wants to make a mistake, especially when the consequences are potentially life-threatening. Unfortunately, mistakes do occasionally occur.  However, the mistake may occur at many levels and may not necessarily be the fault of the laboratory. 

We are often asked, "How accurate are the laboratory tests?" Translated: "Can I trust these results?"

Think about the events that occur from the time your blood or urine sample is taken until the time the laboratory report is generated. 

1. Proper Laboratory Test Selected

The physician ordering the test must be careful to order precisely what is wanted.  A request for a Lupus test may be misinterpreted as a request for a Lupus panel or a Lupus Inhibitor panel.  If the precise terminology is not used, the risk of an incorrectly ordered test increases.

2. Proper Identification

As simple and as basic as this sounds, patient identification labels can be wrong.  The laboratory phlebotomist is trained to confirm the name of the patient, but sometimes the patient is unable to answer.  In other circumstances, two patients with the same name may be confused.  There is always a cross-check against a unique hospital identification number, but it is easy to see how a mistake can be made if one is not vigilant.

3. Proper Technique To Obtain the Specimen

The laboratory phlebotomist does not always draw the specimen.  Nurses, medical students, physician assistants, and any number of other medical professionals may draw a blood sample.  There is a proper technique for obtaining a blood or urine sample.  Some patients have an intravenous (IV) line in one of the arm veins.  If a blood sample is taken from one of these lines, care must be taken not to contaminate the blood sample with the IV solution.  Some IV solutions contain glucose, which can spuriously elevate the blood glucose measurement if the sample is contaminated.


Error or Variation?

If the body's temperature can vary from minute to minute, it should come as no surprise that the chemicals and enzymes that are measured can also vary with similar rapidity.  But did you know that even a change in posture may change some laboratory values?  Some values, such as blood glucose, are directly influenced by a recent meal.  Other values, such as magnesium, are less likely to be affected.  All conditions that may alter a laboratory value before the actual blood sample or specimen is taken are known as pre-analytical variables.  Careful attention to potential pre-analytical variables can often resolve puzzling laboratory values.

Errors can occur, however.  The prudent course of action is always to repeat the laboratory test to ensure that the value first reported is accurate.  If the two values differ significantly, an additional sample can sometimes be requested.  In addition, a careful check of the quality control samples (see the Quality Control section below) is made to ensure the equipment is functioning properly.

Statistical terms can be very confusing. The Doctor's Doctor has provided a short primer on some of these terms.

QUALITY IMPROVEMENT

PEER REVIEW
Measuring the Value of Review of Pathology Material by a Second Pathologist

Andrew A. Renshaw, MD, and Edwin W. Gould, MD
Am J Clin Pathol 2006;125:737-739
Abstract quote


In many departments, some cases are reviewed routinely by a second pathologist within the same department before sign-out. The value of this practice is not known. We reviewed and compared the disagreement and amendment rates for cases reviewed by 1 or more pathologists based on the results of blinded review.

A total of 8,363 cases underwent blinded review, and of these, 1,087 (13.0%) were reviewed by more than 1 pathologist before sign-out. The disagreement rate for cases reviewed by more than 1 pathologist (4.8%) was significantly lower than for cases reviewed by only 1 pathologist (6.9%; P = .004). The amendment rate decreased to 0.0% from 0.5%, but this decrease was not statistically significant (P = .12).

Review of material by a second pathologist before sign-out is associated with a lower disagreement rate. These results suggest second review of surgical pathology is of value, but the best selection of cases to be reviewed remains to be defined.
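
As an aside for statistically minded readers, a comparison like the one above (4.8% vs. 6.9% disagreement) is typically tested as a difference of two proportions. Below is a minimal Python sketch using statsmodels; the counts are reconstructed from the rounded percentages in the abstract, so the resulting P value will differ somewhat from the published .004, and the authors' exact test is not stated.

```python
# Two-proportion z-test on counts reconstructed from the abstract:
# 1,087 cases seen by >1 pathologist (4.8% disagreement) versus the
# remaining 7,276 cases seen by 1 pathologist (6.9% disagreement).
from statsmodels.stats.proportion import proportions_ztest

disagreements = [round(0.048 * 1087), round(0.069 * 7276)]  # ~[52, 502]
totals = [1087, 7276]

z, p = proportions_ztest(count=disagreements, nobs=totals)
print(f"z = {z:.2f}, two-sided p = {p:.4f}")
```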
A preliminary diagnosis service provides prospective blinded dual-review of all general surgical pathology cases in an academic practice.

Weydert JA, De Young BR, Cohen MB.

From the Department of Pathology, University of Iowa, Iowa City, IA.

Am J Surg Pathol. 2005 Jun;29(6):801-5. Abstract quote  

Quality assurance of diagnostic accuracy in surgical pathology is an important part of a pathologist's total quality management program. At our academic institution, the quality of diagnostic accuracy is monitored via dual-review of every general surgical pathology case, which accounts for nearly 20,000 cases per year.

This comprehensive dual-review is achieved by operating a preliminary diagnosis service, staffed by a senior or board-eligible resident. Analysis of a portion of our dual-review data (6300 cases) demonstrates an overall diagnostic concordance rate of 95.4% and a clinical major discrepancy rate of 0.29% between the preliminary diagnosis and staff pathologist diagnosis, comparable to other published rates.

The incorporation of a preliminary diagnosis service into our academic surgical pathology practice has proven to be beneficial with regard to quality assurance and resident education. Other academic institutions may similarly benefit from the addition of such a service.

Institutional Pathology Consultation

Jeffrey S. H. Tsung, MD

From the Department of Pathology and Laboratory Medicine, Koo Foundation, Sun Yat-Sen Cancer Center, Taipei, Taiwan.

 

Am J Surg Pathol 2004;28(3):399-402 Abstract quote

Sun Yat-Sen Cancer Center is the only cancer center in Taiwan. The hospital maintains a policy, and the division of oncology makes a concerted effort, to obtain and review pertinent pathologic specimens for all patients whose pathologic diagnosis was made at another institution before rendering therapy.

A 1-year retrospective study was undertaken to assess the frequency of discordant diagnoses on our second-opinion pathology slide review and to determine its impact on patient care. Discrepancies were classified into basic categories: A) no diagnostic disagreement; B) no diagnostic disagreement but pertinent information not included, such as tumor size, lymphovascular invasion, perineural invasion, histologic grading, margin status, or extracapsular spread in metastatic lymph nodes; and C) major diagnostic disagreement, defined as follows: 1) change from benign to malignant, 2) change from malignant to benign, 3) a different type of neoplasm, and 4) change in N and M classification in the TNM staging framework. Of 715 cases, a total of 673 (94%) showed no discrepancy. However, 35 of 673 (5.2%) cases failed to offer pertinent information (category B). Major disagreement was found in 42 (6%) cases (category C). This study illustrates that a second pathology slide review prior to therapy can identify a small group of cases that result in a major change in the therapeutic plan.

Admittedly, the review of pathology slides involves additional time and effort for both consulting and referring institutions, but it can ensure quality medical care and limit medicolegal liability. As the Association of Directors of Anatomic and Surgical Pathology has recommended, second pathology review should be standard practice. Our major pathology associations and societies must adopt a strong position on this matter to influence government and insurance companies to pay for this service rendered by pathologists.

Interinstitutional Pathology Consultations: A Reassessment

Michele M. Weir, MD, FRCP, E. Jan, and Terence J. Colgan, MD, FRCP(C)
Am J Clin Pathol 2003;120:405-412 Abstract quote

We retrospectively determined the clinical impact of 1,000 randomly selected interinstitutional pathology consultations (IPCs). An IPC included all specimens from the patient. IPCs were classified as concordant or discordant with the original diagnosis. Discordant IPCs were classified as having a clinical impact or no impact. Discordant IPCs owing to interpretation differences were subclassified further.

The IPCs included 1,522 specimens (1,204 histology, 318 cytology); 923 (92.3%) were concordant, 9 (0.9%) indeterminate, and 68 (6.8%) discordant (clinical impact, 37; no impact, 31). Reasons for discordant IPCs were interpretation differences, 45; additional sectioning, 7; ancillary testing, 1; clerical error, 5; or a combination, 10. Reasons for 26 discordant IPCs with clinical impact owing to interpretation differences were overdiagnosis, 11; tumor subtype change, 4; stage change, 4; underdiagnosis, 3; resection margin status change, 2; undergrading, 1; and understaging with resection margin status change, 1. IPC may identify diagnostic discrepancies that impact management for some patients. The prevalence of a clinical impact of IPC on management varies according to body site.

Mandatory IPC does ensure identification of clinically significant diagnostic discrepancies; targeted IPC by body site or specimen type may represent an alternative strategy after further data accumulation. Discordant IPCs may be due to factors other than interpretation difference.

Agreement and Error Rates Using Blinded Review to Evaluate Surgical Pathology of Biopsy Material


Andrew A. Renshaw, MD, Norberto Cartagena, MD, Scott R. Granter, MD, and Edwin W. Gould, MD

Am J Clin Pathol 2003;119:797-800 Abstract quote

Blinded review has been shown to be an excellent method to detect disagreements and errors and improve performance in gynecologic cytology. Preliminary studies suggest it may be valuable in surgical pathology.

We reviewed 5,000 sequential outpatient surgical pathology biopsy cases without knowledge of the original diagnosis or history and compared the results with those of the original diagnosis.

Complete agreement was obtained in 91.12% of cases. The technique of blinded review of surgical pathology biopsy material had a sensitivity of more than 99%, failing to identify an abnormality in 19 cases. Although there was a significant level of diagnostic disagreement (444 cases), primarily due to differences in diagnostic thresholds (292 cases), diagnoses that resulted in a change in the original report (true errors) were present in only 5 cases, and only 4 were clinically significant. This clinically significant error rate of 0.08% is significantly lower than previously published error rates.

Blinded review is a sensitive (99%) and effective method to identify areas of disagreement and errors in surgical pathology biopsy material. The relatively high rate of disagreement found with blinded review coupled with the very low rate of error highlights the substantial potential for bias in nonblinded reviews.
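
The quoted rates follow directly from the case counts in the abstract; for concreteness, a minimal sketch of the arithmetic:

```python
# Rates from the abstract's counts: 5,000 cases reviewed, 444
# disagreements, 5 true errors, of which 4 were clinically significant.
total_cases = 5000
disagreements = 444
true_errors = 5
clinically_significant = 4

print(f"disagreement rate:      {disagreements / total_cases:.2%}")           # 8.88%
print(f"true error rate:        {true_errors / total_cases:.2%}")             # 0.10%
print(f"clinically significant: {clinically_significant / total_cases:.2%}")  # 0.08%
```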


International variation in histologic grading is large, and persistent feedback does not improve reproducibility.

Furness PN, Taub N, Assmann KJ, Banfi G, Cosyns JP, Dorman AM, Hill CM, Kapper SK, Waldherr R, Laurinavicius A, Marcussen N, Martins AP, Nogueira M, Regele H, Seron D, Carrera M, Sund S, Taskinen EI, Paavonen T, Tihomirova T, Rosenthal R.

 

Am J Surg Pathol. 2003 Jun;27(6):805-10. Abstract quote

Histologic grading systems are used to guide diagnosis, therapy, and audit on an international basis. The reproducibility of grading systems is usually tested within small groups of pathologists who have previously worked or trained together. This may underestimate the international variation of scoring systems.

We therefore evaluated the reproducibility of an established system, the Banff classification of renal allograft pathology, throughout Europe. We also sought to improve reproducibility by providing individual feedback after each of 14 small groups of cases. Kappa values for all features studied were lower than any previously published, confirming that international variation is greater than interobserver variation as previously assessed. A prolonged attempt to improve reproducibility, using numeric or graphical feedback, failed to produce any detectable improvement. We then asked participants to grade selected photographs, to eliminate variation induced by pathologists viewing different areas of the slide. This produced improved kappa values only for some features. Improvement was influenced by the nature of the grade definitions.

Definitions based on "area affected" by a process were not improved. The results indicate the danger of basing decisions on grading systems that may be applied very differently in different institutions.
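
The kappa statistic used in this study measures agreement between observers beyond what chance alone would produce (1.0 is perfect agreement, 0 is chance-level). As a minimal sketch, the two-observer version can be computed with scikit-learn; the gradings below are invented, and a multi-center study like this one would use multi-rater extensions of kappa.

```python
# Cohen's kappa for two hypothetical pathologists grading the same ten
# biopsies on a 0-3 scale (invented data, for illustration only).
from sklearn.metrics import cohen_kappa_score

pathologist_a = [0, 1, 2, 2, 3, 1, 0, 2, 3, 1]
pathologist_b = [0, 1, 1, 2, 3, 2, 0, 2, 2, 1]

kappa = cohen_kappa_score(pathologist_a, pathologist_b)
print(f"kappa = {kappa:.2f}")
```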

How Many Cases Need to Be Reviewed to Compare Performance in Surgical Pathology?


Andrew A. Renshaw, MD, Mary L. Young, MS, and Michael R. Jiroutek, MS

Am J Clin Pathol 2003;119:388-391 Abstract quote

Recent studies have shown increased interest in measuring error rates in surgical pathology. We sought to determine how many surgical pathology cases need to be reviewed to show a significant difference from published error rates for review of routine or biopsy cases. Results of 4 series with this type of diagnostic material involving a total of 11,683 cases were reviewed to determine the range of published false-negative, false-positive, typing error, threshold error, and clinically significant error rates.

Error rates ranged from 0.00% to 2.36%; clinically significant error rates ranged from 0.34% to 1.19%. Assuming a power of 0.80 and a 1-sided alpha of 0.05, the number of cases needed to be reviewed to show that a laboratory with either twice or one half the published error rate was significantly different from the range of published error rates varied from 330 to 50,158. For clinically significant errors, the number of cases varied from 665 to 5,886.

Because the published error rates are low, a relatively large number of cases need to be reviewed and a relatively great difference in error rate needs to exist to show a significant difference in performance in surgical pathology.
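
Figures like these come from standard power calculations for comparing two proportions. The sketch below implements the usual normal-approximation formula; the input rates are illustrative (a 0.34% baseline clinically significant error rate versus double that), and the authors' exact method may differ in detail.

```python
# Cases needed per group to detect the difference between error rates
# p1 and p2 with the stated power and one-sided alpha, using the
# normal-approximation sample-size formula for two proportions.
from scipy.stats import norm

def cases_needed(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha)  # one-sided alpha
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return numerator / (p1 - p2) ** 2

print(round(cases_needed(0.0034, 0.0068)))  # roughly 5,400 cases
```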


Blinded review as a method for quality improvement in surgical pathology.

Renshaw AA, Pinnar NE, Jiroutek MR, Young ML.

Department of Pathology, Baptist Hospital of Miami, Miami, Fla (Drs Renshaw and Pinnar); Department of Biostatistics, University of North Carolina, Chapel Hill, NC (Mr Jiroutek and Ms Young).

 

Arch Pathol Lab Med 2002 Aug;126(8):961-3 Abstract quote

Context.-Several studies have shown that blinded review, because it is less biased and may improve vigilance, is an excellent method for detecting errors and improving performance in gynecologic cytology. The value of blinded review in surgical pathology is not known.

Objective.-To determine the value of blinded review in surgical pathology.

Methods.-Five hundred ninety-two biopsy cases were reviewed without knowledge of the original diagnosis or history, and the results were compared with those of the original diagnosis.

Results.-Complete agreement was obtained in 567 (96%) of 592 cases. The technique of blinded review of biopsy material had a sensitivity of 98%, failing to identify a lesion in 7 cases; no cases of malignancy were missed. The specificity was 100%. Differences in diagnostic threshold were the most common source of disagreement. False-negative cases were identified by the technique and were clinically significant. Power studies show that the number of cases requiring review to identify significant errors is large, but potentially achievable by blinded review.

Conclusion.-Blinded review is a sensitive and effective method for identifying areas of disagreement, including false-negative cases, and for decreasing errors in surgical pathology biopsy material.

 

Amended reports in surgical pathology and implications for diagnostic error detection and avoidance: a College of American Pathologists Q-probes study of 1,667,547 accessioned cases in 359 laboratories.

Nakhleh RE, Zarbo RJ.

Department of Pathology, Henry Ford Hospital, Detroit, Mich 48202, USA.

 

Arch Pathol Lab Med 1998 Apr;122(4):303-9 Abstract quote

OBJECTIVES: To evaluate amended report rates relative to surveillance methods and to identify surveillance methods or other practice parameters that lower amended report rates.

DESIGN: Participants in the 1996 Q-Probes quality improvement program of the College of American Pathologists were asked to prospectively document amended surgical pathology reports for a period of 5 months or until 50 amended reports were recorded. The methods of error detection were also recorded and laboratory and institutional policies surveyed. Four types of amended reports were investigated: those issued to correct patient identification errors, to revise originally issued final diagnoses, to revise preliminary written diagnoses, and to revise other reported diagnostic information that was significant with respect to patient management or prognosis.

PARTICIPANTS: Three hundred fifty-nine laboratories, 96% from the United States.

RESULTS: A total of 3147 amended reports in all four categories from a survey of 1,667,547 surgical pathology specimens accessioned during the study period were issued by the participants. The aggregate mean rate of amended reports was 1.9 per 1000 cases (median, 1.5 per 1000 cases). Of these, 19.2% were issued to correct patient identification errors, 38.7% to change the originally issued final diagnosis, 15.6% to change a preliminary written diagnosis, and 26.5% to change clinically significant information other than the diagnosis. Most frequently, a request from a clinician to review a case (20.5%) precipitated the error detection. Although not statistically significant, a higher amended report rate (1.6 per 1000) for all error types was associated with routine diagnostic slide review that was performed after completion of the surgical pathology report. This is compared to rates for institutions that had routine diagnostic slide review of cases prior to finalization of pathology reports (1.2 per 1000) and institutions that had no routine diagnostic slide review (1.4 per 1000). Slide review of cases prior to completion of reports lowered the rate of amended reports issued for two types of amended reports: those in which the originally issued final diagnosis was changed and those in which information other than the diagnosis was changed for patient management or prognostic significance. Other laboratory practice variables examined were not found to be associated with the amended report rate.

CONCLUSIONS: There is an association between lower amended report rates and diagnostic slide review of cases prior to completion of the pathology report. The level of case review and type of case mix that is necessary for optimal quality assurance needs further investigation.

SECOND OPINION AND OUTSIDE SLIDE REVIEW  


Clinical significance of performing immunohistochemistry on cases with a previous diagnosis of cancer coming to a national comprehensive cancer center for treatment or second opinion.

Wetherington RW, Cooper HS, Al-Saleem T, Ackerman DS, Adams-McDonnell R, Davis W, Ehya H, Patchefsky AS, Suder J, Young NA.

Am J Surg Pathol 2002 Sep;26(9):1222-30 Abstract quote

Immunohistochemistry (IHC) is an important adjunctive test in diagnostic surgical pathology. We studied the clinical significance and outcomes of performing IHC on cases with a previous diagnosis of cancer coming to the Fox Chase Cancer Center (FCCC), a National Cancer Institute-designated National Comprehensive Cancer Center (NCCC), for treatment and/or second opinion.

We reviewed all the outside surgical pathology slide review cases seen at the FCCC for 1998 and 1999 in which IHC was performed. Cases were divided into the following: confirmation of outside diagnoses without and with prior IHC performed by the outside institution (groups A and B, respectively) and cases with a significant change in diagnosis without and with prior IHC performed by the outside institution (groups C and D, respectively). During 1998 and 1999, 6678 slide review cases were reviewed at the FCCC with an overall significant change in diagnosis in 213 cases (3.2%). IHC was performed on 186 of 6678 (2.7%) slide review cases with confirmation of the outside diagnosis in 152 (81.7%) cases and a significant change in diagnosis in 34 (18.3%) cases. Patient follow-up was obtained in 32 of 34 (94.1%) cases with a significant change in diagnosis (groups C and D), which confirmed the correctness of our diagnosis in 26 of 27 cases (96%; in five cases follow-up was inconclusive). We repeated the identical antibodies performed by the outside institutions in group D (37 antibodies) and group B (133 antibodies) with different results in 48.6% and 13.5%, respectively (overall nonconcordance 21.2%). In group D, additional antibody tests beyond those performed by the outside institution were needed in 88.8% of cases to make a change of diagnosis.

In the setting of an NCCC, repeating and/or performing IHC on cases with a previous diagnosis of cancer is not a duplication of effort or a misuse of resources. Repeating and/or performing IHC in this setting is important in the care and management of patients with cancer.

CRITICAL VALUES


Laboratory Critical Values Policies and Procedures.

Howanitz PJ, Steindel SJ, Heard NV.

Department of Pathology, State University of New York, Downstate Medical Center, Brooklyn, NY (Dr Howanitz); the Public Health Practice Program Office, Division of Laboratory Systems, Centers for Disease Control and Prevention, Atlanta, Ga (Dr Steindel); and the Department of Laboratory Medicine, West Los Angeles Veterans Administration Medical Center, West Los Angeles, Calif (Dr Heard).

 

Arch Pathol Lab Med 2002 Jun;126(6):663-669 Abstract quote

Context.-Critical values lists have been used for many years to decide when to notify physicians and other caregivers of potentially life-threatening situations; however, these lists have not been studied widely.

Objectives.-To investigate critical values lists in institutions participating in the College of American Pathologists Q-Probes program and to provide suggestions for improvement.

Setting.-A total of 623 institutions voluntarily participating in the Q-Probes program.

Design.-A multipart study in which participants responded to information from preprinted lists, collected information about current practices, completed a questionnaire, monitored critical values calls, reviewed patients' medical records, and surveyed nursing supervisors and physicians about critical values.

Main Outcome Measures.-Defining critical values systems, including lists, personnel, costs, processes, usefulness, and related medical outcomes.

Results.-Critical values lists were determined for routine chemistry and hematology analytes and were found to vary widely among participants. In contrast, more than 95% of participants reported positive blood cultures, cerebrospinal fluid cultures, and toxic therapeutic drug levels as critical values. Based on more than 13,000 critical values, participants' data showed that most critical values reports (92.8%) were made by the person who performed the test, and that 65% of reports for inpatients were received by nurses. For outpatients, physicians' office staff received the largest percentage (40%) of reports. The majority of participants (71.4%) had no policy on how repeat critical calls should be handled. On average, completion of notification required about 6 minutes for inpatients and 14 minutes for outpatients. Slightly greater than 5% of critical value telephone calls were abandoned, with the largest percentage abandoned for outpatients. More than 45% of critical values were unexpected, and 65% resulted in a change in therapy. Although only 20.8% of 2301 nursing supervisors thought critical values lists were helpful, 94.9% of 514 physicians found critical values lists valuable.

Conclusions.-Critical values systems were medically important but highly variable and costly practices for participants. We propose a number of recommendations for improvement, including that the critical values list should be approved by the medical staff, each laboratory should develop a written policy for handling initial and repeat critical values reports, a foolproof policy should be established for reporting results from abandoned calls, and efforts at automating the process should become widespread.
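
In software terms, a critical values list is simply a table of analytes with low and high limits that trigger immediate notification. A minimal sketch follows; the analytes and thresholds are illustrative placeholders, not a recommended list, since each laboratory's medical staff approves its own.

```python
# Hypothetical critical limits table and a check against it.
CRITICAL_LIMITS = {
    # analyte: (low, high, units) -- placeholder values
    "potassium": (2.8, 6.2, "mmol/L"),
    "glucose": (40, 450, "mg/dL"),
    "hemoglobin": (6.0, 20.0, "g/dL"),
}

def is_critical(analyte, value):
    """Return True if the result falls outside the critical limits."""
    low, high, _units = CRITICAL_LIMITS[analyte]
    return value < low or value > high

if is_critical("potassium", 6.8):
    print("CRITICAL: call the caregiver, document read-back, log the call")
```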

REFERENCE RANGES  
Probability-Based Construction of Reference Ranges for Ratios of Log-Gaussian Analytes
An Example From Automated Leukocyte Counts


Donald C. Trost, MD, PhD, Mingxiu Hu, PhD, Allison G. Brailey, and Joel M. Hoffman, PhD

Am J Clin Pathol 2002;117:851-856 Abstract quote

Reference ranges (RRs) are frequently used for interpreting laboratory values in clinical trials, assessing abnormality of laboratory results, and combining results from different laboratories. When a clinical laboratory measure must be derived from other tests, eg, the WBC differential percentage from the WBC count and WBC differential absolute count, a derivation of the RR may also be required.

A naive method for determining RRs calculates the upper and lower limits of the derived test from the upper and lower limits of the measured values using the same algebraic formula used for the derived measure. This naive method and any others that do not use probability-based transformations do not maintain the distributional characteristics of the RRs. RRs derived in such a manner are deemed uninterpretable because they do not contain a specific proportion of the distribution.

We propose a probability-based approach for the interconversion of RRs for ratios of 2 log-gaussian analytes. The proposed method gives a simple algebraic formula for calculating the RRs of the derived measures while preserving the probability relationships. The nonparametric method and a parametric method that takes the log transformation, estimates an RR, and then exponentiates are provided as comparators. An example that compares the commonly used naive method and the proposed method is provided on automated leukocyte count data. This provides evidence that the proposed method maintains the distributional characteristics of the transformed RR measures while the naive method does not.
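
The key observation is that if two analytes are log-gaussian, the logarithm of their ratio is gaussian, so a reference range for the ratio follows directly from the log-scale means, standard deviations, and correlation. A minimal sketch under that assumption, with invented parameters (this is the general idea, not the paper's exact formula):

```python
# 95% reference range for the ratio X/Y when ln X and ln Y are jointly
# gaussian: ln(X/Y) has mean mu_x - mu_y and variance
# sd_x^2 + sd_y^2 - 2*rho*sd_x*sd_y.
import math

def ratio_reference_range(mu_x, sd_x, mu_y, sd_y, rho, z=1.96):
    mu_d = mu_x - mu_y
    sd_d = math.sqrt(sd_x**2 + sd_y**2 - 2 * rho * sd_x * sd_y)
    return math.exp(mu_d - z * sd_d), math.exp(mu_d + z * sd_d)

# Invented log-scale parameters for an absolute count and a total count
low, high = ratio_reference_range(0.3, 0.35, 2.0, 0.25, rho=0.6)
print(f"ratio reference range: {low:.3f} to {high:.3f}")
```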

 

Quality Control, Quality Assurance, and Quality Improvement

These three terms describe related, overlapping concepts.

QUALITY CONTROL: Activities that evaluate the uniformity of specific processes and basic functions to ensure they are operating within acceptable parameters. It compares the actual performance of a process with the process set forth in the departmental procedure manual.

QUALITY ASSURANCE: A system designed with internal quality checks. It relies upon collecting outcome data and usually encompasses several processes or procedures.

QUALITY IMPROVEMENT: Activities that aim to improve outcomes. These activities use quality assurance monitors to determine the effectiveness of an intervention.

Quality control (QC) involves the monitoring of intraday and intratest variation. It includes activities such as monitoring temperature logs, determining the quality of histology sections and stains, and routinely checking instrumentation. Quality assurance (QA) is the ongoing assurance that the laboratory tests are measuring what they are intended to measure.  Examples of these activities include monitoring of frozen section accuracy and turnaround time. Monitoring of diagnostic accuracy, errors, and completeness of information also falls under this category. Quality improvement (QI) involves a step-wise process that first identifies an indicator or process to improve, measures the current level of performance, determines the target or desirable level of performance, designs an intervention, re-evaluates the level of performance, and repeats the steps as necessary to achieve the desired level of performance.

Every laboratory test is performed with several control samples, each with a known, premeasured value.  In addition, any value outside the usual range can be analyzed again to ensure that it is a true value.  Any aberrant value that is confirmed is immediately reported to the physician ordering the test.

Adapted from Quality Improvement Manual in Anatomic Pathology. Second Edition. Nakhleh RE and Fitzgibbons PL.
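
The control-sample check described in the paragraph above can be sketched in a few lines. The ±2 SD warning and ±3 SD rejection limits follow common Levey-Jennings practice; the target values are invented, and every laboratory defines its own acceptance rules.

```python
# Classify a control result against the control material's established
# mean and standard deviation before any patient results are reported.
def check_control(measured, target_mean, target_sd):
    deviation = abs(measured - target_mean) / target_sd
    if deviation > 3:
        return "reject run: repeat controls and patient samples"
    if deviation > 2:
        return "warning: review the instrument before reporting"
    return "in control"

print(check_control(measured=108, target_mean=100, target_sd=3))
```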

TYPES OF INCIDENTS
Classifying Laboratory Incident Reports to Identify Problems That Jeopardize Patient Safety


Michael L. Astion, MD, PhD, Kaveh G. Shojania, MD, Tim R. Hamill, MD, Sara Kim, PhD, and Valerie L. Ng, MD

Am J Clin Pathol 2003;120:18-26 Abstract quote


We developed a laboratory incident report classification system that can guide reduction of actual and potential adverse events. The system was applied retrospectively to 129 incident reports occurring during a 16-month period. Incidents were classified by type of adverse event (actual or potential), specific and potential patient impact, nature of laboratory involvement, testing phase, and preventability. Of 129 incidents, 95% were potential adverse events.

The most common specific impact was delay in receiving test results (85%). The average potential impact was 2.9 (SD, 1.0; median, 3; scale, 1-5). The laboratory alone was responsible for 60% of the incidents; 21% were due solely to problems outside the laboratory's authority. The laboratory function most frequently implicated in incidents was specimen processing (31%).

The preanalytic testing phase was involved in 71% of incidents, the analytic in 18%, and the postanalytic in 11%. The most common preanalytic problem was specimen transportation (16%). The average preventability score was 4.0 (range, 1-5; median, 4; scale, 1-5), and 94 incidents (73%) were preventable (score, 3 or more).

Of the 94 preventable incidents, 30% involved cognitive errors, defined as incorrect choices caused by insufficient knowledge, and 73% involved noncognitive errors, defined as inadvertent or unconscious lapses in expected automatic behavior.

Accreditations

The laboratory is one of the most highly regulated areas in medicine.  There are several levels of accreditation.  Every laboratory must have a CLIA certificate (under the Clinical Laboratory Improvement Amendments).  In addition, a hospital-based laboratory must be accredited by the JCAHO (Joint Commission on Accreditation of Healthcare Organizations), the same agency that inspects hospitals and certifies them for operation.  The highest level of accreditation for all laboratories in the United States is CAP (College of American Pathologists) accreditation.  The CAP maintains a rigorous set of standards that usually exceeds the other certifying agencies' requirements. In fact, the JCAHO will often waive its inspection of the laboratory if it is CAP accredited.

The blood bank is subject to even more rigorous regulations.  In addition to the above-mentioned agencies, the blood bank must answer to two additional agencies: the FDA (Food and Drug Administration) and the Department of Health.  Blood and blood products are regarded as drugs and thus must be regulated as such.  These products are also perishable, with a finite shelf life, and are subject to different regulations than the reagents used in chemistry or hematology.  Finally, there is an even higher level of accreditation, unique to blood banks: accreditation by the AABB (American Association of Blood Banks).  All blood banks aspire to this level.  The AABB's standards are the most rigorous in the entire laboratory field.

Proficiency Testing

I took my last test in school! 

You did... but we have tests several times a year. After accreditation is achieved, a laboratory must demonstrate continued excellence in every area where testing is offered.  Testing varies for the different areas of the laboratory.  For hematology, it may involve review of Kodachrome slides demonstrating different blood cells.  In chemistry, serum samples are provided and the appropriate tests are evaluated.  In microbiology, unknown microbial cultures are provided and identification of the microbe is required.

The answers are submitted to the agency, graded, and then returned. Any wrong answer requires a written explanation. Failure of the testing may lead to a site visit by the inspecting agency.  Your laboratory's results are also compared with those of all laboratories participating in the testing, both to ensure that the testing is fair and to provide a measure of accuracy.  When the laboratory is reinspected (usually every 2 years), all of the proficiency testing is reviewed, and past performance is a critical factor in recommending re-accreditation.
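
The peer comparison is often summarized as a standard deviation index (SDI): how many peer-group standard deviations the laboratory's result lies from the peer-group mean, with values beyond about 2 usually triggering investigation. A minimal sketch with invented figures:

```python
# Standard deviation index against the peer group.
def sdi(lab_result, peer_mean, peer_sd):
    return (lab_result - peer_mean) / peer_sd

score = sdi(lab_result=5.4, peer_mean=5.0, peer_sd=0.25)
flag = "  -> investigate" if abs(score) > 2 else ""
print(f"SDI = {score:+.1f}{flag}")
```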

Summary

It may come as a surprise to many of you how highly regulated the laboratory is.  The modern laboratory requires state-of-the-art equipment directed by state-of-the-art people.  You cannot trust your laboratory results to anything less.



Last Updated June 6, 2005


Copyright © The Doctor's Doctor