Outcomes Research—Preventing Infections to Improve Wound Care Outcomes: An Epidemiological Approach
Epidemiology and Epidemiologic Research Methods
Epidemiology (from the Greek: epi = upon, demos = people) is an applied science that seeks to describe both the frequency and the determinants of disease in a population, and in doing so leads to evidence-based prevention strategies that are implemented to reduce the incidence of disease. Much of the epidemiological evidence that drives improvements in healthcare is the product of the 3 major forms of investigation described below. An understanding of the strengths and limitations of these methods is essential when formulating new research proposals or using them to manage patients. The most fundamental of these, descriptive or observational studies, simply measure the prevalence and/or incidence of diseases or other events of interest in a population and describe the demographic characteristics of that population. Studies of this nature provide data to show what the frequency of a disease is, who is getting the disease, and when the disease is occurring. These studies are otherwise incapable of determining why some groups are more likely than others to develop a particular disease.
Analytical studies look for why diseases occur by building upon the findings of prior descriptive studies, laboratory results, clinical trials, etc., not only to characterize the frequency of disease, but also to test causal hypotheses. To do this, analytical studies collect, aggregate, and analyze data from multiple subjects with respect to their disease status and their exposure to various treatment or exposure factors that may either increase or decrease the risk of disease. Careful analysis of these data may lead to the identification of statistically significant risk or treatment factors, and to the development of interventional strategies that can be applied individually or as a part of a “bundle” to reduce the risk of disease. Finally, analysis may include methods to “risk adjust” these data in a manner that supports benchmarking and makes possible “apples to apples” comparisons between individual facilities and nationally recognized norms.
Experimental epidemiology or experimental studies actually apply identified or hypothesized interventions in a systematic way in order to measure their impact on subsequent cohorts of subjects. A clinical trial, which compares standard and new treatment modalities in 2 or more arms of a study, is a form of experimental epidemiology. Properly designed clinical trials typically yield the strongest evidence of the effects of alternative therapies, as potential confounding variables are controlled for in the subject allocation process, making it much more likely that differences in outcomes are attributable to the single variable of interest. Clinical trials are also much more difficult to initiate, because an Institutional Review Board (IRB) that provides safeguards for the study subjects must review and approve the study.
Descriptive Statistics: Calculating Prevalence, Incidence, and Measuring Change
The prevalence of a condition or attribute in a population of interest is defined as the presence of that condition in the individuals subjected to review at a given point in time (eg, a photo snapshot). The prevalence rate (or more accurately, the proportion) is calculated by dividing the number of individuals with the condition by the total number of individuals evaluated for the condition. The result is then multiplied by some constant, usually 100. Thus, if 75 subjects are evaluated for a condition, and the condition is present in 12 of those subjects, the prevalence rate is 12/75 x 100, or 16%.
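The calculation above can be sketched in a few lines of Python; the function name and the 12-of-75 example simply restate the text and are not part of any standard library:

```python
def prevalence_rate(cases: int, evaluated: int, k: float = 100) -> float:
    """Point prevalence: individuals with the condition divided by the
    individuals evaluated, multiplied by some constant (usually 100)."""
    return cases / evaluated * k

# Example from the text: 12 of 75 subjects have the condition, ie, 16%.
rate = prevalence_rate(12, 75)
```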
A single prevalence study or survey does not discriminate between new and old cases of a disease, nor is it capable of characterizing trends or changes in the frequency of the condition. A series of consecutive prevalence surveys, performed using the same rule sets and case definitions and validated for inter- and intra-rater reliability and overall accuracy, can be more useful in recognizing trends. The mathematics of this type of analysis is beyond the scope of this discourse. A complete description of the standard methods used in this instance is available in various reference books.1
Incidence describes newly occurring cases of a disease or condition over a period of time (eg, a videotape of events). The images change with the progression of time, and the dynamics of disease occurrence become evident. Classically, the incidence rate (or more accurately, the incidence proportion) is a simple calculation where the numerator is the count of new events and the denominator is the total of all individuals at risk for developing the condition, with the resultant value multiplied by some constant, usually 100. Thus, if 50 patients were admitted last month, and 4 developed a pressure ulcer, it could be said that the incidence rate for that month was 4/50 x 100, or 8%.
More recently, incidence rates have been developed to take into account the aggregate duration (usually in days) of exposure to a given risk factor, such as central lines or Foley catheters, within a group of interest. This is formally referred to as the incidence density rate, but is also commonly referred to as a device-days or patient-days rate. In this instance, the numerator in the rate calculation is still the count of newly occurring events (eg, Foley catheter urinary tract infections) in the study group, but the denominator is the sum of Foley catheter days experienced by all members of the group, multiplied by some constant, usually 1000. Thus, if 7 Foley-related urinary tract infections developed during an observation period, and if as a group there were 212 catheter days during that same observation period, the resultant incidence rate would be 7/212 x 1000, or 33.0 per 1000 device days. For events where there is no device of interest, such as with pressure ulcers, researchers can use the number of patient care days as the denominator. The numerator would be the number of pressure ulcers developing during the observation time period. There is no direct equivalency between percentage rates and device or patient care day rates. Care must be taken to avoid confusion when discussing these rates.
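The two kinds of incidence calculation described above can be expressed side by side as a minimal sketch; the function names are chosen here for clarity and are not standard terminology:

```python
def incidence_rate(new_cases: int, at_risk: int, k: float = 100) -> float:
    """Classical incidence proportion: new cases divided by all individuals
    at risk, multiplied by some constant (usually 100)."""
    return new_cases / at_risk * k

def incidence_density(new_events: int, exposure_days: int, k: float = 1000) -> float:
    """Incidence density: new events divided by aggregate device (or patient)
    days, multiplied by some constant (usually 1000)."""
    return new_events / exposure_days * k

# Examples from the text:
monthly = incidence_rate(4, 50)      # 8 per 100 admissions
cauti = incidence_density(7, 212)    # about 33.0 per 1000 device days
```

Note that the two results are on different scales (per 100 persons versus per 1000 exposure days), which is why the text cautions against comparing them directly.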
Trends and changes in incidence rates over time can be analyzed in many ways, and these methods are specific to the types of rates being analyzed. For instance, when analyzing classical incidence rates, a simple chi-squared test of population proportions may be sufficient to discriminate between a current rate and a previously established historical norm. The norm can be either from internally gathered data or, as in the case of surgical site wound infections, can be derived from stratified, procedure-specific, risk-index-adjusted data published by the US Centers for Disease Control and Prevention (CDC).2 If continuous monitoring is desired, as with a series of monthly or quarterly rates, data can be collected to construct P or NP Shewhart Process Control Charts. These charts show when incidence rates are “out of control” and interventions should be implemented. Conversely, device-days rates can be readily compared to historical norms either from internally generated data or from risk-stratified device-days rates published by the CDC. In either case, a large-sample Z-test is used.3 Again, it is also useful to construct Shewhart Process Control Charts to represent temporal trends and identify outliers; in this instance, use of a U chart is more appropriate. The addition of a least-squares regression line to control charts allows easy visualization of overall trends in the rates. Benneyan4,5 and Gustafson6 provide a full description of these methods.
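As one illustration of the comparison step, a large-sample Z-test of a device-days rate against a historical norm can be sketched as below, treating the event count as approximately Poisson. This is only a sketch of one common formulation, and the norm value in the example is hypothetical; see reference 3 for the authoritative method:

```python
from math import sqrt

def z_rate_vs_norm(events: int, device_days: int, norm_per_1000: float) -> float:
    """Large-sample Z statistic comparing an observed device-days rate with a
    historical norm, treating the observed count as approximately Poisson."""
    # Events expected over these device days if the norm held exactly.
    expected = device_days * norm_per_1000 / 1000.0
    return (events - expected) / sqrt(expected)

# Hypothetical example: 7 infections in 212 catheter days against a norm of
# 5.0 per 1000 device days. A Z value well above about 1.96 suggests the
# observed rate exceeds the norm at the 5% significance level.
z = z_rate_vs_norm(7, 212, 5.0)
```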
Conducting prevalence and incidence (P&I) studies on a regular basis (2 to 4 times annually) is the best way to measure changes in pressure ulcers. With this method a true incidence is obtained, as patients/residents are assessed for pressure ulcers on a given day and again after a designated period. An alternative is a prevalence study, repeated 2 to 4 times annually. This is a snapshot in time, and its validity is based on the accuracy of documentation of the skin assessment on admission. A patient/resident admitted with a pressure ulcer that was not documented is counted as having a nosocomial pressure ulcer. As a result, the facility’s preventive measures appear to be ineffective. The first and most effective step to preventing a pressure ulcer from becoming infected is to prevent the ulcer.
Ideally, the P&I studies should provide a basis for comparison not only for inter-facility effectiveness of interventions, but also for intra-facility areas of excellence or needed improvement. Benchmarks can be established. A goal of reducing pressure ulcer incidence or infection should result in a corresponding reduction of pressure ulcers or infections in the institutions or wards where the protocols are implemented to achieve these goals.
All forms of epidemiologic investigation deal with groups of subjects, and while epidemiologically supported interventions are ultimately applied to individuals, any improvement in health status is measured not by a change in any individual’s outcome, but rather by how well or how poorly the group fares. Change is measured over time within a group using standard epidemiological techniques; interventions are made at the patient level. Individual patient outcomes become part of the epidemiologic database to identify trends within the population managed using the standardized, individual patient interventions. These data provide valuable feedback to clinicians and administrators about outcomes achieved and expected in the population managed with that protocol of care.
Understanding How Interventions Affect the Patient: The Conceptual Model
One fundamental way of gaining insight into the factors that contribute to the development or prevention of a disease (eg, surgical site infection), is through an understanding of the conceptual formula:
Disease = (Dose x Time x Virulence) / Host Resistance
Disease means the probability or likelihood of development of an infection. Dose describes the quantity of infectious agents (bacterial, viral, or fungal) either inoculated onto or allowed to develop on a growth-supportive substrate. It should be intuitive that increasing or decreasing the dose of microbes alters the likelihood of infection, but a few examples may further illustrate the point.
Hand hygiene is often cited as the most fundamental and most effective means of preventing disease transmission. Clinicians may use soap and water, with or without an antimicrobial agent, or an alcohol-based hand rub. This is done before and after each patient contact, whether or not gloves are used. No one would contend that hand hygiene somehow sterilizes the hands, because it does not. Hand washing does reduce the overall bacterial load (dose) on the hands, especially for transiently acquired microbes, to a level that is less likely to be infectious, that is, to a numerically smaller inoculum. High-level disinfection of semicritical medical devices eliminates most forms of bacterial life. Certain spores may be left behind, but in such low concentrations that they are clinically inconsequential. The classic work of Elek and Conen7 shows how the presence of a foreign object in a wound lowers the required infective dose of microorganisms. They were able to demonstrate that the infective dose of an intradermal inoculum of Staphylococcus pyogenes (now known as Staphylococcus aureus) is reduced by a factor of more than 10,000 (from 5,000,000 colony forming units [CFU] to as few as 300 CFU) in the presence of suture material. Thus, even without altering any of the other elements in the formula, what is done clinically to reduce the overall dose of microbes presented to the patient helps lower the probability of infection. Conversely, even small doses of infectious agents can cause infection if other factors are altered.
Time describes the duration of exposure to a given risk factor. Imagine walking barefoot when a wooden splinter becomes lodged in your foot. Most likely, you will respond immediately and extract the offending item. What happens if you break off the exposed end of the splinter and leave most of it under the skin? Within a day or two, periwound erythema develops and the wound is noted to have purulent drainage; it is obviously infected, and the splinter needs to be removed before the infection will resolve. The interval between when the foreign object enters the body and when it is removed is one of the determinants of whether an infection will develop. So it is for various medical devices: Foley catheters, vascular access devices, endotracheal tubes, and other drainage devices. All of them are “splinters,” which even though they may have been inserted under optimal aseptic conditions, are likely portals of microbial entry and easily can become the nidus of an infectious process. Just as with a splinter, the best way to avoid a device-related infection is not to insert the device in the first place. If it is medically necessary to use a device, it should be inserted using optimal aseptic techniques. Once in place, it should be protected and otherwise cared for judiciously, and just as importantly, it should be removed as soon as it is no longer medically necessary.
Virulence describes the characteristics of a potential pathogen in terms of its ability to produce toxins, invade tissues, and avoid the body’s defense mechanisms. Microbial virulence factors are not normally subject to manipulation by caregivers, since they are basically intrinsic to the infectious agents themselves. Still, much of what we do in healthcare allows certain organisms with only modest virulence to become clinically significant. The development of Clostridium difficile associated diarrhea is an example. On its own, C difficile is good at producing toxins, but not very good at competing with other microflora of the gut. Broad-spectrum antibiotics change the balance. Normal gut flora is reduced or eliminated, and C difficile proliferates in a vegetative state, producing toxins that cause diarrhea, pseudomembranous colitis, toxic megacolon, and sometimes death. In the conceptual formula, greater microbial virulence can be seen as a major contributor to the development of disease.
Host resistance describes the spectrum of specific and nonspecific defense mechanisms, which when intact and functioning properly, provide protection from extrinsic and intrinsic infectious agents:
Nonspecific mechanical barriers:
• skin
• sphincters
• upper airway particulate filtration
Nonspecific chemical barriers:
• lysozymes in various body fluids (tears and respiratory secretions)
• gastric acidity
• fatty acids on the skin
Nonspecific phagocytic defenses
Microbial barriers—competitive inhibition of pathogen growth by normal flora:
• skin
• gut
• genito-urinary tract
Pathogen-specific responses:
• humoral immunity
• cell-mediated immunity.
In the normal delivery of healthcare, clinicians often transgress or compromise these systems by inserting medical devices, administering antibiotics, altering gastric pH, and myriad other care-related activities. Maintaining the integrity of the mechanical barriers, enhancing nutritional support of the patient, offering vaccinations when indicated, and using antibiotics judiciously can help promote more effective resistance. As host resistance is maintained or improved, the probability of disease goes down. Unfortunately, the opposite is also true.
Predictors of Postoperative Surgical Site Infection (SSI): The CDC Risk Stratification System
In order to make meaningful comparisons between facilities or between any given facility and a nationally aggregated database, such as that developed and maintained by the CDC in its National Nosocomial Infections Surveillance (NNIS) system, now known as the National Healthcare Safety Network (NHSN), certain fundamental conditions need to be met. First and foremost, case definitions and surveillance processes must be standardized and applied evenly across all facilities supplying data to the CDC data pool. In turn, any facility wishing to compare its data to the CDC data should also be using the same rule set for case identification as the reporting facilities. In order to allow valid comparisons to be made between any hospital and the CDC data, the CDC stratifies their data first by surgical procedure(s), and secondarily uses a risk adjustment method that takes into account the following 3 major risk factors: 1) the duration of the surgical procedure, 2) the wound classification, and 3) the American Society of Anesthesiologists (ASA) score. The risk adjustment process and point system is referred to as the CDC SSI risk index. The duration of the surgical procedure relates directly back to the conceptual formula described previously. Cruse and Foord8 found that the risk of surgical site infection increased roughly proportionally to the duration of the procedure: the rate of infection was 1.3% for procedures lasting 1 hour or less, and 4.0% for procedures lasting 3 or more hours. Other researchers have found that the second most predictive independent variable for risk of surgical wound infection is procedure duration of greater than 2 hours.9 Thus, procedures (such as a total hip replacement) that take longer carry a greater risk of infection than procedures that are completed more quickly. CDC researchers analyzed thousands of procedures and determined that, other things being equal, procedures whose duration exceeds the 75th percentile for that procedure type are at a higher risk for infection. According to the CDC risk stratification system, procedures in this upper quartile are assigned 1 point.
Similarly, the degree of microbial contamination of the operative site influences the risk of wound infection. Wounds are classified as Clean, Clean Contaminated, Contaminated, or Dirty (alternatively, Class I, II, III, and IV) according to the amount of contamination present or introduced into a wound during surgery.10
Clean or Class I: An incision and operative procedure in which no inflammation is encountered, that enters only sterile body sites, and is not inadvertently contaminated during surgery.
Clean Contaminated or Class II: Surgical procedures that involve the controlled entry into nonsterile body sites, such as the respiratory, alimentary, or genital tracts. Clean procedures that involved a minor break in technique, such as a glove tear, would be downgraded to Clean Contaminated.
Contaminated or Class III: Sites or wounds that involve open, fresh, accidental wounds, or operations with major breaks in sterile technique or gross spillage from the gastrointestinal tract.
Dirty or Class IV: Wounds that have retained devitalized tissue or foreign bodies, or in which an abscess or frank pus is encountered during the procedure.
According to the CDC risk stratification system, procedures classified as either Contaminated or Dirty are assigned 1 point.
The American Society of Anesthesiologists (ASA) developed a scoring system that rates the patient’s physiological status and his/her ability to tolerate anesthesia and the impending surgical procedure.11 This rating, referred to as the ASA score, scores patients on a scale from 1 to 5, where 1 is “very good condition,” and 5 is “moribund and likely not to tolerate the procedure well.” It is known that patients with ASA scores of 3 or more also have a higher probability of infection, that is, if they survive the procedure. Patients with an ASA score of 3 or higher are assigned 1 point according to the CDC risk stratification system.
Thus the range of possible scores under the CDC system is from zero to 3 (4 possible scores), and for any given procedure or procedure group, such as a knee prosthesis, the risk of infection increases as the risk index increases. This does not necessarily mean that a single patient with a score of 3 will become infected, but it does mean that a group of patients all with scores of 3 will have a higher incidence of infection than a group with scores of 2. An important point to remember is that without this risk adjustment, comparisons to nationally accepted norms would be invalid.
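The point assignments described above can be summarized in a short sketch; the function and argument names are illustrative, not CDC nomenclature:

```python
def cdc_ssi_risk_index(duration_above_75th_percentile: bool,
                       wound_class: int,
                       asa_score: int) -> int:
    """CDC SSI risk index: 1 point each for a procedure duration above the
    75th percentile for that procedure type, a Contaminated or Dirty wound
    (Class III or IV), and an ASA score of 3 or more. Range: 0 to 3."""
    points = 0
    if duration_above_75th_percentile:
        points += 1
    if wound_class >= 3:  # Contaminated (III) or Dirty (IV)
        points += 1
    if asa_score >= 3:
        points += 1
    return points

# A lengthy procedure (above the 75th percentile) on an ASA 3 patient with a
# Clean (Class I) wound accumulates 2 points.
score = cdc_ssi_risk_index(True, 1, 3)
```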
Braden Scale
The optimal path to prevent infection in a pressure ulcer is to prevent the pressure ulcer itself. Although there are unavoidable pressure ulcers,12 the majority should be preventable. To help prevent an ulcer, all patients as well as residents of long-term care facilities need to be assessed for risk of developing a pressure ulcer (PU). Among clinically validated tools such as the Braden, Norton, and Gosnell scales, the Braden Scale probably is the most widely validated.13
Using the Braden Scale for Predicting Pressure Sore Risk© is similar to using a risk assessment for infection in surgical wounds. In this case, points are given for Sensory Perception, Moisture, Activity, Mobility, Nutrition, and Friction & Shear; the lower the total score, the higher the risk for subsequent pressure ulcer development. The standard is to use the Braden Scale on admission to a facility, on transfer to another unit or floor, with a change in status, and at discharge. Acute care facilities should assess patients every 48 hours, with ICU patients assessed on a daily or per-shift basis, but many acute care facilities now assess all patients on a daily basis. Long-term care facility patients should be assessed every 48 hours for the first week of admission, weekly for the first month, then monthly or quarterly. Home health patients should be assessed at every visit.14 Scores of 15 to 18 indicate mild risk of a PU, 13 to 14 indicate moderate risk, 10 to 12 predict high risk, and ≤ 9 very high risk of PU.14 If the score (maximum of 23) is ≤ 18 and no wound is present, interventions should be initiated for prevention of pressure ulcers. If a wound of any etiology is present, treatment protocol orders that include pressure ulcer prevention interventions should be initiated.
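The risk bands quoted above map directly to score ranges, which can be sketched as follows; the category labels echo the text, and the function name is illustrative:

```python
def braden_risk_category(total_score: int) -> str:
    """Map a total Braden Scale score (range 6 to 23) to the risk bands in
    the text: <= 9 very high, 10-12 high, 13-14 moderate, 15-18 mild."""
    if total_score <= 9:
        return "very high risk"
    if total_score <= 12:
        return "high risk"
    if total_score <= 14:
        return "moderate risk"
    if total_score <= 18:
        return "mild risk"
    return "not at elevated risk"

category = braden_risk_category(13)  # "moderate risk"
```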
In light of the recent US Centers for Medicare and Medicaid Services (CMS) decision not to pay for nosocomial pressure ulcers in acute care facilities beginning October 1, 2008, it is more important than ever to accurately document patients with a pressure ulcer upon admission to the facility (reporting of nosocomial PUs begins October 1, 2007). This documentation must be in the physician’s History and Physical (H&P). If the admitting nurse identifies a PU that is not noted in the H&P, he/she should contact the physician immediately to make the correction. After admission, nurses must complete accurate and timely risk assessments to avoid a PU, initiate appropriate interventions, and document thoroughly. If a PU is “unavoidable,” documentation should support that declaration.
Only pressure ulcers are staged.15 Due to tissue degradation, it intuitively makes sense that the higher the PU stage (I, II, III, IV, unstageable, DTI), the greater the chance for infection. Necrotic tissue is typically associated with increased PU stage and feeds pathogens. Undermining and tracts, which trap these pathogens; vascular compromise, which prevents systemic antibiotics from reaching the involved tissue; and multiple comorbidities with resultant immunosuppression all increase the risk for infection.
The Wound as an Ecosystem
A wound, especially a chronic, open wound, is a complex, dynamic, biological system that responds to internal and external variables in predictable ways and thus can be compared to any other environmental ecosystem. In an ecosystem, nutrient availability, temperature, moisture, light, the presence or absence of toxins, oxygen, and inter-species competition all influence which species proliferate in any given ecological niche, and which species ultimately die out. When the variables are manipulated, either through natural or man-made means, the balance of nature is altered. New species move in, ultimately out-compete and displace the old species, and the system evolves; open chronic wounds seem much the same.
The presence of the skin breakdown means that there has been a loss of the mechanical, chemical and biological barriers intrinsic to the skin. New organisms, most of which are not known for their ability to penetrate the skin, now find the ingress pathway unobstructed. Upon arrival, they find an environment well suited for proliferation: nutrient rich due to serous drainage and tissue necrosis, moist, dark, warm, and competition-free. Since the outermost surfaces of an open wound are not a part of an intact tissue with effective blood and lymphatic circulation, nonspecific defense mechanisms such as macrophage inactivation of microbes is not likely. The newly arrived microbes will survive in this hospitable environment for as long as conditions permit.
We take steps in our everyday lives to modify ecological niches to restrict the growth of undesirable invaders. We heat and/or cool our food; we wash and dry dishes and clothing, and use pesticides and herbicides to change the type and balance of plants and insects in our yards. How can clinicians alter the ecology of the chronic wound to reduce the undesirable bioburden and to promote wound healing?
To alter the ecosystem of an open wound, topical antimicrobials are frequently used. These include silver16 and cadexomer iodine.17 Cytotoxic agents such as hydrogen peroxide, acetic acid, and Dakin’s solution (bleach and boric acid) are used less frequently, and their use is somewhat controversial.18
For patients and residents who develop an open wound and have a history of wound infections, especially with multidrug-resistant organisms (MDRO), or who have a wound at high risk of infection, clinicians often will use a silver dressing or silver sulfadiazine to prevent infection. For patients who are not allergic to iodine and do not have thyroid problems, and who have a heavily draining wound or a wound, such as a heel ulcer with eschar, that should be kept dry, cadexomer iodine (not povidone iodine, which is cytotoxic to viable tissue)19 is a good choice.
Removing necrotic tissue and biofilm is imperative to alter the ecosystem, so that bacteria are deprived of the nutrient supply and protected niche that harbor them and promote their growth. Sharp debridement and pulsatile lavage with suction (PLWS) debride effectively and quickly. Low-frequency, low-intensity ultrasound is effective for removing biofilm. All 3 interventions accomplish this in an efficacious and efficient manner. Enzymatic debriders and autolytic debridement are slow yet effective methods, but they are not ideal for preventing infection; they can contribute to the infectious process. As they slowly break down the necrotic tissue and sit in the wound, they become an ideal medium for proliferation of bacteria.
Hydrogen peroxide, acetic acid, and Dakin’s solution are all cytotoxic to viable tissue, and should be used sparingly, if at all. Their use should be short-term, and after each irrigation the wound should be rinsed with normal saline (NS). In addition to its cytotoxicity, hydrogen peroxide has been known to cause gas emboli.20
The Use of Topical Antimicrobial Agents
There are numerous published reports describing an apparent benefit of using topically applied silver-containing compounds to prevent infection and promote wound healing.21 While these reports may seem encouraging, some doubts regarding the use of silver compounds are beginning to emerge. Of greatest concern is the observation that the genes coding for silver resistance can be found on the same plasmids that also carry multiple genes coding for antibiotic resistance.22 Thus, even in the absence of antibiotic selection pressure, the widespread and long-term use of silver-containing compounds may have the collateral effect of selecting antibiotic-resistant bacteria. This would occur by first selecting organisms with silver resistance plasmids that simultaneously carry the antibiotic resistance genes. This phenomenon has been reported23,24 and seems most likely to occur in burn centers where heavy use of antibiotics and silver-containing compounds is common.
Similarly, there are numerous reports of pathogens developing resistance to other topical agents,25–29 and this should come as no surprise. For an antimicrobial agent to be effective, it needs to reach the intended target microbes at or above a certain minimum concentration. For antibiotics, the clinical microbiology laboratory can determine this effective concentration using routine culture and sensitivity testing. The laboratory may use either the Kirby-Bauer disc-diffusion method of sensitivity testing on an agar plate or a tube dilution method that employs the inoculation of microbes into broth solutions containing various known concentrations of the antibiotics being tested. In both cases, the results can be reported as “Sensitive,” “Intermediate,” or “Resistant,” and in the latter case, the laboratory may report a numerical value referred to as the minimum inhibitory concentration (MIC). Both processes test the effectiveness of the antibiotic across a concentration gradient, and in each case reveal a threshold drug concentration below which the antibiotic will be ineffective. Once this threshold is known, the physician can then choose which antibiotic to prescribe by also taking into consideration the site of the infection, the pharmacokinetics of the drug itself, and the likely bioavailability of the drug at the site of infection. This works well for orally or parenterally administered systemic antibiotics, but not so well for topical agents.
In many ways, the application of a topical antimicrobial agent is analogous to performing sensitivity testing. Application of a topical agent will, without question, result in a concentration gradient. At the point where the drug is applied, it is surely the most concentrated. With increasing distance from the application point, the concentration of the drug fades, eventually becoming so dilute that an effective inhibitory/cidal threshold is not met. This theoretical threshold will be different for each drug-microbe combination, but it will exist. It is at this point, just below the threshold, where resistance can develop. The concentration is neither inhibitory nor cidal, and the organisms may simply “switch on” certain phenotypic characteristics already coded for in their genome (adaptive resistance). They may also acquire a resistance factor from an external source (plasmid) or, more rarely, they may serendipitously mutate the needed resistance gene. In any case, the concentration gradient of the topical antimicrobial agent provides a suitable environment for this to occur. Accordingly, the clinician may want to approach the use of topical agents with caution, especially if there are other acceptable approaches to bioburden reduction, such as sharp debridement, debridement with pulsatile lavage with suction, or biofilm reduction using low-intensity, low-frequency ultrasound.
Other Measures Known to Lower the Risk of Surgical Site Infection
Timing and appropriateness of prophylactic antibiotics. In 2006, the Hospital Quality Alliance listed as standard quality measures the administration of certain prophylactic antibiotics within 1 hour before surgical incision and their discontinuation within 24 hours after the end of surgery. Reported surgical infection outcomes are optimal when prophylactic antibiotics are given within 30 minutes before the actual surgical incision, allowing for tissue perfusion before the incision is made. It may be necessary to administer a second antibiotic dose if the procedure is exceedingly long, as the serum levels of the drug may fall below the known effective level due to normal elimination processes. Since this is prophylaxis and not treatment, the antibiotic should be discontinued within 24–48 hours after the first dose, depending on the specific procedure and protocol.30–32
Preoperative bioburden removal. Preoperative bioburden reduction, targeting methicillin-sensitive and/or methicillin-resistant Staphylococcus aureus, has been shown to reduce infections in selected types of surgical procedures. In particular, many cardiovascular surgeons now commonly prescribe intranasal mupirocin in tandem with preoperative showers using an antimicrobial product such as chlorhexidine gluconate (Hibiclens®, Molnlyke Health Care, Norcross, Ga) to reduce staphylococcal colonization levels in the nose and on the skin. At present, the evidence for the beneficial effects of intranasal mupirocin is more convincing than that for preoperative showering with an antimicrobial substance.33–37 Preoperative showering with an antimicrobial substance significantly reduces bacterial CFU, but this intervention by itself has not been definitively shown to reduce the incidence of postoperative wound infection. In addition, resistance to mupirocin has been reported,27 and some clinicians are cautioning against its widespread use for this reason. Still, the combined removal or control of intranasal and skin colonizers appears to reduce the risk of postoperative wound infection in selected surgical patients.
Hemostasis. Recognizing hematoma formation is an important skill for wound management clinicians. Hematomas are masses of coagulated blood in which microbes flourish. Sometimes seen after falls or other traumatic injuries, the bruising evident on the surface of the skin gradually fades as the hematoma resolves. Likewise, after surgery, if hemostasis is not achieved, a hematoma will form. Usually the discoloration will fade and the hematoma will resolve, but if the clinician notices indurated areas, continued discoloration, edema, warmth, and/or tenderness and suspects a hematoma that is not resolving, he or she should contact the physician to surgically open the wound so that it can be debrided. If the wound is open, tracts and undermining should be gently explored for hematoma formation, then irrigated and debrided. Pulsatile lavage with suction, using a flexible tip for tracts and tunnels, is an excellent choice for this procedure. Removing the hematoma can prevent infection or help an existing infection resolve.
Shaving. Shaving is a traumatic event that can lead to infection, whether in preparation for surgery or in conjunction with wound management. Photomicrographs of the skin following preoperative shaving reveal numerous small surface cuts and abrasions.38 These result in serous drainage and the formation of tiny scabs that may not be removed during the surgical preparation protocol. If they are not scrubbed off, the skin prep will not reach the microbes beneath them. The surgical incision is then made not only through the open micro-cuts but also through the scabs, scattering bacteria into the surgical wound. Studies have clearly shown that not shaving lowers surgical infection rates.7,39 If hair removal is necessary, the surgeon or clinician performing wound management should use clippers.
Respect for viable tissue. During surgery, temperature plays a part in the risk of infection and in subsequent wound healing.40 Operating rooms (ORs) are kept well below body temperature, which not only impedes healing but also promotes evaporation of moisture from the wound, accelerating drying of the exposed tissue. Surgeons frequently moisten sponges/pads with warmed normal saline (NS) and cover the periwound/peri-incision area up to the margin of the incision in an attempt to keep the area warm.
Glucose control. Glucose control is important not only for optimal wound healing; hyperglycemia is now recognized as a significant risk factor for infection of surgical wounds, especially in persons with diabetes. Reports in the literature describe successful reduction of the incidence of surgical site infection in cardiovascular surgery through careful regulation of blood glucose levels using intensive insulin therapy.41,42
Discussion
The delivery of healthcare is evolving rapidly. Where once we may have been content to follow the example of a few influential thought leaders, we now derive standards of care from carefully designed, executed, and analyzed epidemiological studies. This can be seen in the implementation of the Institute for Healthcare Improvement bundles, which are packets of epidemiologically derived interventions to lower the incidence of ventilator-associated pneumonia or central line-associated bloodstream infections in critically ill patients.
Evidence-based practice is replacing opinion-based practice, and an increasingly well-informed and assertive patient base expects to receive the optimal care available. Third-party payers are already denying claims for expenses associated with adverse outcomes, such as nosocomial infections or pressure ulcers. They refuse to pay for expenses that they consider to be the result of medical errors, and the presence of the adverse outcome is, in their eyes, prima facie evidence that an error has occurred. Better documentation and patient assessment on admission will help in this regard, yet in the end, continued study to identify and control risk factors for adverse outcomes will be the key to preventing these events and avoiding the attendant organizational consequences.
The development and proliferation of multidrug-resistant organisms is of major national and international concern. The introduction of new classes of antibiotics has not kept pace with the emergence of new MDROs, and in the absence of effective treatment modalities, experts are increasingly convinced that antibiotic stewardship, along with active search-and-contain strategies, is the best available weapon to reverse that trend. “Think globally and act locally” has become a catchphrase in MDRO control. Locally, every instance of antibiotic use should be viewed both in the context of how it might benefit the patient and in the larger context of the potential deleterious effects of antibiotic overuse in the healthcare ecosystem.
Conclusion
The need for continued investigation into the causes and prevention of infection is great, and there is room for collaboration between various healthcare disciplines and healthcare epidemiologists. Healthcare professionals are skilled at delivering care in their respective fields of expertise, while healthcare epidemiologists are more expert at assessing the aggregate impact of caregivers’ endeavors. By combining their expertise, patient outcomes and patient care practices can be carefully analyzed, modified, and tested with the ultimate goal of reducing the incidence of nosocomial disease.


