
An Evaluation of Medicare's Hospital Compare as a Decision-Making Tool for Patients and Hospitals / U.S. Hospital Performance Methodologies: A Scoping Review to Identify Opportunities for Crossing the Quality Chasm

 Read the following attachments:

*An Evaluation of Medicare’s Hospital Compare as a Decision-Making Tool for Patients and Hospitals.

*U.S. Hospital Performance Methodologies: A Scoping Review to Identify Opportunities for Crossing the Quality Chasm

*Impact of Teaching Intensity and Sociodemographic Characteristics on CMS Hospital Compare Quality Rating

Your role in this scenario is that of a staff member in the department of quality improvement (QI) at the University of Iowa Hospital & Clinics. Your CEO calls upon the QI department to address the hospital's low-performing quality and patient safety issues. The quality improvement measure of focus is "Patients who 'Strongly Agree' they understood their care when they left the hospital," under the "Survey of Patients' Experiences" category.

-For this first part of the quality improvement (QI) initiative, you will explore the University of Iowa Hospital & Clinics (https://www.medicare.gov/care-compare/details/hospital/160058?city=Iowa%20City&state=IA&zipcode=). Explore the categories listed on the hospital's webpage to get an understanding of how it scores. Be sure to scroll down and expand each category to see the measures nested within it. For example, for one particular hospital, under the category "Timely and Effective Care for Sepsis Care," the hospital scored 53% for this measure, the percentage of patients who received appropriate care for severe sepsis and septic shock. The national average was 60%, and the state average was 55%. This is a low-performing quality and patient safety issue that could be explored. Here is another example: under the category "Timely and Effective Care for Emergency Department Care," this hospital had 146 minutes for the measure "average (median) time patients spent in the emergency department before leaving from the visit." The national average was 172 minutes, and the state average was 158 minutes, but another hospital in the same area had 125 minutes.
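The benchmark comparisons described above are simple percentage-point arithmetic. As an illustrative sketch (the function name is invented for demonstration, using the sepsis numbers from the example):

```python
def benchmark_gap(hospital_pct, state_pct, national_pct):
    """Percentage-point gaps between a hospital's measure and its benchmarks.
    Negative values mean the hospital performs below the benchmark."""
    return {"vs_state": hospital_pct - state_pct,
            "vs_national": hospital_pct - national_pct}

# Sepsis care example from above: hospital 53%, state 55%, national 60%.
print(benchmark_gap(hospital_pct=53, state_pct=55, national_pct=60))  # vs_state: -2, vs_national: -7
```

A measure sitting below both state and national averages, as here, is a candidate low-performing issue to explore.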

In three to five pages, address the following:

  • Describe the nature of the business, services or products, and customers served by the University of Iowa Hospital & Clinics.
  • Evaluate the quality improvement measure "Patients who 'Strongly Agree' they understood their care when they left the hospital" under the "Survey of Patients' Experiences" category.
  • Discuss the importance of patients being able to understand their care when they leave the hospital (e.g., for the accreditation status, patient safety, and/or financial status of the hospital).
  • Define two to three SMART goals for improving the measure "Patients who 'Strongly Agree' they understood their care when they left the hospital."
    • Be sure to include all five elements (specific, measurable, attainable, relevant, and time-bound) in each goal.
    • For example: to implement an up-and-running emergency department tracking system by 12/31/2022; to hire and train an additional five nurses to meet the staff-to-patient ratio during peak times by 12/31/2022; to reduce the turnaround time for lactic acid/lactate laboratory testing by 10% by 12/31/2022.
  • Analyze specific local, state, or national policies (e.g., The Joint Commission standards) that have been developed, based on evidence-based practice research, to improve patients' ability to understand their care when they leave the hospital.
  • Use at least three scholarly or peer-reviewed sources published in the past 5 years, formatted in APA style.

-See the attached Quality Improvement Initiative template for guidance on completing this.

RESEARCH ARTICLE Open Access

U.S. hospital performance methodologies: a scoping review to identify opportunities for crossing the quality chasm. Kelly J. Thomas Craig*†, Mollie M. McKillop†, Hu T. Huang, Judy George, Ekta S. Punwani and Kyu B. Rhee

Abstract

Background: Hospital performance quality assessments inform patients, providers, payers, and purchasers in making healthcare decisions. These assessments have been developed by government, private and non-profit organizations, and academic institutions. Given the number and variability in available assessments, a knowledge gap exists regarding what assessments are available and how each assessment measures quality to identify top performing hospitals. This study aims to: (a) comprehensively identify current hospital performance assessments, (b) compare quality measures from each methodology in the context of the Institute of Medicine’s (IOM) six domains of STEEEP (safety, timeliness, effectiveness, efficiency, equitable, and patient-centeredness), and (c) formulate policy recommendations that improve value-based, patient-centered care to address identified gaps.

Methods: A scoping review was conducted using a systematic search of MEDLINE and the grey literature along with handsearching to identify studies that provide assessments of US-based hospital performance whereby the study cohort examined a minimum of 250 hospitals in the last two years (2017–2019).

Results: From 3058 unique records screened, 19 hospital performance assessments met inclusion criteria. Methodologies were analyzed across each assessment and measures were mapped to STEEEP. While safety and effectiveness were commonly identified measures across assessments, efficiency and patient-centeredness were less frequently represented. Equity measures were also limited to risk- and severity-adjustment methods to balance patient characteristics across populations, rather than stand-alone indicators to evaluate health disparities that may contribute to community-level inequities.

Conclusions: To further improve health and healthcare value-based decision-making, there remains a need for methodological transparency across assessments and the standardization of consensus-based measures that reflect the IOM’s quality framework. Additionally, a large opportunity exists to improve the assessment of health equity in the communities that hospitals serve.

Keywords: Hospital quality, Measures, Safety, Methods, Healthcare reporting, Ratings


* Correspondence: [email protected]. †Kelly J. Thomas Craig and Mollie M. McKillop denote co-first authorship. IBM® Watson Health® Center for AI, Research, and Evaluation, 75 Binney Street, Cambridge, MA 02142, USA

Thomas Craig et al. BMC Health Services Research (2020) 20:640 https://doi.org/10.1186/s12913-020-05503-z

Background

Today, hospital performance is increasingly important given growing demands to control healthcare costs [1, 2]. Hospitals are being reimbursed based on their ability to deliver high quality care and deliver value to patients [3], and patients are taking a more active role in their healthcare decisions [4]. Performance measurements are progressively being linked to reimbursement in pay-for-performance models [5]. Yet, quality metrics used in the measurement of value-based care may not optimally reflect the quality of care provided. Therefore, a need exists to balance quality initiatives with financial feasibility (i.e., value-based care).

Commonly used domains for understanding quality are the Institute of Medicine's (IOM) framework (safety, timeliness, effectiveness, efficiency, equitable, and patient-centeredness; the acronym referred to as "STEEEP"). Using these domains may help balance quality with value for particular measures. Moreover, employing the domains of STEEEP may reduce variation in how care is delivered and practiced, revealing differences that exist across geographic, cost, and personal (e.g., racial) characteristics [6, 7].

IOM STEEEP proposes domains for quality care, but does not articulate specific quality indicators nor how to combine these measures to assess quality performance as a whole. Measures of performance are dependent upon the availability of data. Existing means of measuring hospital performance may include regulatory inspection or reporting, surveys, and statistical indicators, which are often combined into composite scores. Although many measures exist, no clear consensus has been reached on which measures should be used for measuring hospital performance. For example, few common scores or standardized measures exist across the various national hospital ratings systems [8].

Yet, it is clear that better and worse methods of measuring hospital performance exist [9], such as consensus-driven and evidence-based indicators endorsed by the National Quality Forum (NQF) [10] and the Agency for Healthcare Research and Quality (AHRQ) [11]. Moreover, the Donabedian framework can help guide how comprehensively quality is assessed across assessments using different performance measures. However, there are not clear guidelines for assessments to incorporate specific methodologies and appropriate measures to fit within the IOM's STEEEP framework. Examining these aspects of existing hospital performance assessments is a first step toward developing more transparent and robust methods for determining how accurately and comprehensively hospitals provide quality care.

The purpose of the scoping review is to provide a comprehensive analysis of United States (US) methodologies used to assess hospital performance and their measures as they correspond to the IOM's STEEEP quality framework. Using the STEEEP framework, quality domains and respective gaps were identified across currently available assessments using a systematic approach. Robustness (e.g., number of data sources and measures) and transparency of methodologies, to understand how measures were combined to assess hospitals, were evaluated. Additionally, in the context of informing policy to support value-based, patient-centered care, opportunities were identified for hospital assessments to "cross the quality chasm" [12].

Methods

Study design

A scoping review [13] was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) to identify studies that provide assessments of US-based hospital performance whereby the study cohort examined a minimum of 250 hospitals [14]. The review was designed to curate a comprehensive snapshot of recent and active methodologies regarding hospital performance in order to evaluate the current landscape. Therefore, inclusion criteria were limited to identifying published studies from 2017 to 2019 that included methodologies examining performance of 250 or more hospitals, which allowed for generalizable synthesis. Details of the methodology are provided in Additional File 1.

Search strategy

A systematic search query of MEDLINE via PubMed and the grey literature was conducted to identify references published or available online between September 1, 2017 and September 1, 2019. This timeframe supports the identification of recently published hospital performance assessments.

Screening process

Relevant references related to hospital performance assessment were screened and abstracted into standardized forms by independent dual review, and conflict adjudication was provided by a third reviewer. Interrater reliability was determined by the kappa statistic [15].
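The kappa statistic corrects raw agreement for agreement expected by chance. As a minimal illustrative sketch of Cohen's kappa for two reviewers' include/exclude screening decisions (the decisions below are invented for demonstration, not the study's data):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under independence, from each rater's label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented include (1) / exclude (0) decisions for ten screened records.
reviewer_1 = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
reviewer_2 = [1, 0, 1, 0, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(reviewer_1, reviewer_2), 2))
```

On commonly used benchmarks, a kappa near the 0.69 reported later in this article indicates substantial agreement between reviewers.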

Data extraction

The following criteria were abstracted into standardized forms for synthesis and evaluation: data source, including origin of data, data linkage, availability, type, sample size, and observation period; cohort development, including inclusion/exclusion criteria and data pre-processing; measure (see below); and score, including composite calculation.


Measure characteristics

Beyond the data extracted from selected assessments as described above, specific measure or indicator characteristics were abstracted, including name of measure, measure calculation, normalization, and explanation of why the measure was included, if any. Because measure characteristics were the focus, direct evaluation of the sensitivity of the measures was not conducted; however, data abstraction included how measures were chosen. Each measure was mapped, if possible, to categories within the Donabedian conceptual model of quality improvement, which includes structural, outcome, and process categories, and to the STEEEP framework for the domains of quality.

To determine if STEEEP-mapped measures were supported by federal and non-profit organizations that lead consensus- and evidence-based measure reporting for healthcare quality, each measure was cross-referenced to AHRQ quality recommendations (prevention, inpatient, and patient safety quality categories) and NQF endorsement.

Results

Summary of included assessments

From 3058 unique records screened, 19 hospital performance assessments described in the literature met inclusion criteria (Fig. 1). Of those studies, five were de novo assessments [16–20], six were evaluations of organizations' ratings [21–25], and eight were organizations providing assessments (with shorthand designation noted in brackets) [26–33]: (1) Consumer Reports® Hospital Ratings [Consumer Reports], (2) Healthgrades™ America's Best Hospitals [Healthgrades], (3) The Centers for Medicare & Medicaid Services (CMS) Hospital Compare [Hospital Compare], (4) IBM® Watson Health® 100 Top Hospitals® [IBM], (5) Island Peer Review Organization (IPRO) Why Not The Best? [IPRO], (6) The Joint Commission America's Hospitals [Joint Commission], (7) Leapfrog Top Hospitals [Leapfrog], and (8) U.S. News and World Report Best Hospitals Procedures and Conditions [US News].

Assessment methodologies overview

Four types of hospital assessments were identified: ranking, rating, listing, and evaluation-based studies. Ranking assessments (IPRO; Hamadi et al. (2019) [16], Odisho et al. (2018) [19], Walkey et al. (2018) [34], Yokoe et al. (2019) [18]) denoted a system by which all hospitals are arranged in order of ascending performance. Rating assessments (Consumer Reports, Healthgrades, Leapfrog, US News; Al-Amin et al. (2018) [17]) placed hospitals into relative quality groups. Listing assessments (Hospital Compare, IBM, Joint Commission) indicated hospital quality without comparison to other hospitals.

Fig. 1 Results of the literature search, Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) flow diagram


[Fig. 1 PRISMA flow diagram contents: records identified through database searching (N = 3,043) and handsearching (N = 16); records after duplicates removed and screened (N = 3,059); records excluded at screening (N = 3,009); full-text articles assessed for eligibility (N = 50); full-text articles excluded (N = 39: number of hospitals < 250, 26; hospitals outside of the US, 11; outside publication range of interest, 2); included studies (N = 19: database, 11; web, 8).]

Lastly, evaluation-based studies [20–25] provided critical examinations of hospital performance assessment methodologies from Hospital Compare [21, 23, 24, 35] and US News [21, 25].

Most of the assessments explained why specific measures were chosen for their particular methodology, including Consumer Reports, Hospital Compare, IBM, US News, and the de novo and evaluation-based studies. Reasons for including specific measures were wide-ranging, but centered on existing evidence that an indicator is associated with an endorsed quality outcome, such as mortality. Clear descriptions of why specific measures or indicators were chosen were not identified for Healthgrades, IPRO, the Joint Commission, and Leapfrog.

Rather than addressing overall hospital performance, some studies assessed specific quality domains such as patient safety (e.g., surgical site infections [18], surgical procedures [20]), effectiveness (e.g., 30-day readmission [20], 30-day mortality [34]), and patient-centeredness (e.g., Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) measures [17]).

Summary information about the data sources, cohort development, scoring, and model performance across assessments can be found in Additional File 2.

Performance measures

The kappa statistic for interrater reliability of data extraction was 0.69, including both Donabedian categorizations (e.g., structure, process, outcome) and STEEEP framework mapping. For simplicity of comparisons and to provide a subgroup analysis, this section will focus on the following eight organizations that provided overall hospital performance (i.e., reported and assessed information in more than one quality domain): Consumer Reports, Healthgrades, Hospital Compare, IBM, IPRO, Joint Commission, Leapfrog, and US News.

Most performance assessments used primarily outcome (n = 187) and secondarily process-driven indicators (n = 80), while three (IPRO, Leapfrog, US News) also included structural-based measures (n = 16) to assess quality according to the Donabedian conceptualization (Fig. 2a). Three assessments did not use multiple concepts in their methodologies; Healthgrades and IBM exclusively reported outcome measures, while the Joint Commission methodology was limited to process measures.

Within the STEEEP quality framework, all assessments contained safety, five used timeliness, seven discussed efficiency, six used effectiveness, and six included patient-centeredness indicators. None explicitly reported equity, but five conducted risk- or disease severity-adjustments in models of other quality domains to address an equity-related issue (e.g., effectiveness and safety: race-adjusted mortality rate).

Across the assessments, measures were mapped (some to more than one domain); safety indicators (n = 168) were most commonly identified, followed by effectiveness (n = 88), timeliness (n = 49), efficiency (n = 42), patient-centeredness (n = 33), and equity (n = 10, using adjustments for equity-related variables) measures (Fig. 2b). Figure 2 summarizes the Donabedian conceptualization and STEEEP framework mapping of identified quality measures across assessments. Notably, some structural measures were unable to be mapped to STEEEP (e.g., adjusted operating profit margin, hospital-specific designations, percent of Medicare beneficiaries of all ages with diabetes or heart disease, and programs data).

Common themes among process and outcome measures mapped to the STEEEP framework were identified, along with their respective weights used to determine hospital scoring (Figs. 3 and 4). Large overlap or similarity of identified measures occurred in the following themes: the safety and effectiveness domains included mortality, readmission, complications, and hospital-acquired infections (HAIs); timely and efficient care regarded emergency department (ED) throughput and length of stay (LOS); and patient-centeredness was limited to patient experiences summarized by HCAHPS survey data (Fig. 3a). The weighting of these frequent STEEEP quality indicators varied widely across assessments or was not provided (Fig. 3b). Mortality weighting ranged from 2 to 50%; readmissions indicators contributed roughly 20% of the score when weighted; complications weighting ranged from 10 to 50%; ED throughput weighting was lowest, at 4–10%; LOS was weighted (at 10%) by only one assessment; and HCAHPS survey data contributed 10–22% of the scoring. Figure 3b details the weights provided for other measures that were not commonly identified across assessments to demonstrate transparency of scoring, where possible.

With safety and effectiveness as overt priorities in hospital performance outcomes, 30-day mortality and 30-day readmission rates were commonly identified, with the exception of the Joint Commission and Leapfrog assessments; notably, Leapfrog used the death rate of surgical inpatients with serious treatable conditions as a measure of mortality. These 30-day effectiveness of care measures varied in their risk- and severity-adjustments, as did the patient conditions (acute myocardial infarction, chronic obstructive pulmonary disease, heart failure, pneumonia, and/or stroke) included as components of these composite outcomes. Harm outcomes were also frequently represented across assessments (except the Joint Commission), including medical and surgical complications and HAIs. Medical complications were occasionally grouped with HAIs when the AHRQ patient safety indicator (PSI) 90 was used; other medical complication measures examined pressure ulcer rates, iatrogenic pneumothorax rates, in-hospital falls and trauma, and venous thromboembolism (VTE) incidence. Surgical complications varied greatly, but the most frequently identified measures related to hip fracture treatment, hip and knee replacements, and postoperative respiratory failure and wound dehiscence rates. HAI measures commonly included catheter-associated urinary tract infections (CAUTIs), Clostridium difficile (C. diff) infections, central line-associated bloodstream infections (CLABSIs), methicillin-resistant Staphylococcus aureus (MRSA) infections, severe sepsis and shock, and surgical site infections (SSIs).

Timely care outcomes that reduce wait times or harmful delays, and efficient care outcomes that reduce cost and unnecessary resource utilization, as adapted from AHRQ and CMS definitions, were identified as common STEEEP domains. Four assessments (Hospital Compare, IBM, IPRO, Joint Commission) focused primarily on ED throughput measures, and LOS was examined by two assessments (IBM provided severity-adjusted LOS, compared with the unadjusted LOS used by US News). ED throughput measures considered median times from ED arrival to ED departure for both admitted and discharged ED patients, as well as admit decision time, time to pain management, time to fibrinolytic therapy, and patients who left without being seen.

Patient experience (patient-centeredness) outcomes were identified in most assessments except Healthgrades and the Joint Commission. The results were derived from survey questions using HCAHPS data; most were a composite of multiple categories related to communication from providers, patient-provider relationships, receiving help when needed, controlling pain, cleanliness of room, quietness of room, likelihood to recommend the hospital, and overall patient experience.

Equity-based measures were not stand-alone metrics to demonstrate the remediation of differences in the quality of health and healthcare across different populations in the communities that hospitals serve. Identified equity measures included risk- and disease severity-adjustments for covariates such as gender, geography, and socioeconomic status (e.g., Medicare/Medicaid dual eligibility as a proxy) and were used by five assessments (Consumer Reports, Healthgrades, IBM, IPRO, US News) in LOS, mortality, complications, and/or post-surgical infection measures (Fig. 4a).

Fig. 2 Frequency of (a) Donabedian categorizations and (b) percentage of STEEEP measures per assessment

[Fig. 3, panels (a) and (b): tables showing, for each of the eight assessments, which frequent measures (mortality, readmission, complications, hospital-acquired infections, ED throughput, length of stay, HCAHPS surveys) were included and the weight each contributed to scoring, where provided; see the figure legend below.]

Measure developers, both government and non-profit,

provide endorsements using consensus- and evidence-based review. These recommended measures allow comparisons of performance to recognized standards for the improvement of care and outcomes. Identified measures were mapped to AHRQ and NQF endorsements (Table 1). Using standardized quality indicators from AHRQ as benchmarks, patient safety indicators were included by all assessments except the Joint Commission. AHRQ inpatient indicators were used by all assessments except IPRO, the Joint Commission, and Leapfrog. AHRQ prevention measures were only used by IPRO. Upon examining NQF endorsements of AHRQ measures, all assessments used at least one measure endorsed by NQF in each AHRQ category.
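The weighted composite scoring that several of these rating systems apply can be sketched in a few lines. This is an illustrative sketch only: the measure names, normalized values, and weights below are invented and do not correspond to any single assessment reviewed:

```python
def composite_score(measures, weights):
    """Weighted composite of measures already normalized to a 0-100 scale
    (lower-is-better indicators are assumed flipped before normalization)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(measures[name] * w for name, w in weights.items())

# Invented normalized scores and weights for one hypothetical hospital.
hospital = {"mortality": 78.0, "readmission": 65.0, "complications": 81.0, "hcahps": 70.0}
weights = {"mortality": 0.30, "readmission": 0.20, "complications": 0.30, "hcahps": 0.20}
print(composite_score(hospital, weights))
```

Because published weights vary so widely across assessments (e.g., mortality anywhere from 2 to 50%), the same hospital can receive very different composite scores, and hence ranks, under different systems.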

Discussion

Hospital performance is often assessed beyond the examination of quality measures, including the financial health and employee health of the organizations being reviewed. This study intended to examine quality domains (i.e., STEEEP) and their use as part of hospital performance assessment, and to identify relationships, if any, between the two. Coverage and weighting of measures mapped to the STEEEP framework varied across assessments, which indicates that there is limited consensus on how best to measure hospital quality. Moreover, disparate measures and methodological disagreement may foster cynicism and confusion [9] among stakeholders, including patients, providers, payers, purchasers, and policy makers. This does not mean quality assessments should be disregarded, but that they should be considered in the larger context of hospital performance.

Our identification of evaluation-based studies that critically examined assessment methodologies determined

(See figure on previous page.) Fig. 3 Frequent quality domain (a) measure overlaps and (b) comparison of their weights among assessments a) Abbreviations: ED, emergency department; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems. b) Descriptions of “Other” across assessments. Consumer Reports: Other, efficient use of imaging process measures; Hospital Compare: Other, 4% efficient use of imaging and 4% effectiveness of care process measures (e.g., patients assessed and given influenza vaccination; percentage of patients who left the ED before being seen; percentage of patients who came to the ED with stroke symptoms who received brain scan results within 45 minutes of arrival; percentage of patients receiving appropriate recommendation for follow-up screening colonoscopy; percentage of patients with history of polyps receiving follow-up colonoscopy in the appropriate timeframe; percent of mothers whose deliveries were scheduled too early (1-2 weeks early), when a scheduled delivery was not medically necessary; percentage of patients who received appropriate care for severe sepsis and septic shock; patients who developed a blood clot while in the hospital who did not get treatment that could have prevented it; percentage of patients receiving appropriate radiation therapy for cancer that has spread to the bone). IBM: Other, 10% operating profit margin (no mapping) and 10% adjusted inpatient expense per discharge for efficiency. 
IPRO: Other, weight not provided for timely and effective 1) stroke care (thrombolytic therapy, antithrombolytic therapy by end of hospital day 2, VTE prophylaxis, discharged on antithrombolytic therapy, anticoagulation therapy for atrial fibrillation/flutter, discharged on statin medication, stroke education), and 2) blood clot prevention and treatment (VTE prophylaxis, intensive care unit VTE prophylaxis, incidence of potentially preventable VTE, anticoagulation overlap therapy, unfractionated heparin with dosages/platelet count monitoring, warfarin therapy discharge instructions; safety, early elective delivery rates; efficiency, spending per Medicare beneficiary and health care costs; structural HIT measures and imaging for efficiency and safety; efficiency, population health and utilization costs; structural measures from county health rankings data on health factors and health outcomes related to preventive care for safety. Joint Commission: Other, weight NP for process measures. Timely and effective 1) stroke care (thrombolytic therapy, antithrombolytic therapy by end of hospital day 2, VTE prophylaxis, discharged on antithrombolytic therapy, anticoagulation therapy for atrial fibrillation/flutter, discharged on statin medication, stroke education; assessed for rehabilitation, VTE discharge instructions, and 2) blood clot prevention and treatment (VTE prophylaxis, intensive care unit VTE prophylaxis, incidence of potentially preventable VTE, anticoagulation overlap therapy, unfractionated heparin with dosages/platelet count monitoring, warfarin therapy discharge instructions; safety, early elective delivery rates; safety and effectiveness of antenatal steroids; safety and effectiveness for inpatient psychiatric services (admission screening, physical restraint, seclusion, and justification for multiple antipsychotic medications); safety and effectiveness of preventive care for influenza immunization, tobacco use (screening, treatment provided or offered, 
treatment provided or offered at discharge), hearing screening, alcohol use (screening, brief intervention provided or offered, or other drug use treatment provided or offered at discharge); effectiveness of exclusive breast milk feeding; surgical care effectiveness and safety of urinary catheter removal and antibiotics within one hour before first surgical cut; safety and effectiveness, children's asthma care, home management plan of care; and timely acute myocardial infarction measures (fibrinolytic therapy within 30 minutes and primary percutaneous coronary intervention received within 90 minutes). Leapfrog: Other, 23.1% safety practice process measures (leadership structures and systems; culture measurement, feedback, and intervention; identification and mitigation of risks and hazards; nursing workforce; hand hygiene) and 11.5% HIT (computerized physician order entry and bar code administration) for safety, timeliness, and efficiency. Notably, the weights provided by Leapfrog only sum to 97.3% rather than 100%. US News: Other, weight NP for process measures on effectiveness (patient flu immunization and worker flu immunization) and safety (noninvasive ventilation and transfusion); outcome measures on patient-centeredness and safety (discharge to location other than patient's home); structural safety measures related to information on board certifications and specialties, number of patients (volume), nurse staffing, number of intensivists, and transparency (reporting of performance). Abbreviations: ED, emergency department; HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; HIT, health information technology; NP, not provided; VTE, venous thromboembolism; -, not an included measure
