Tuesday, December 27, 2016

My top 7 blog posts of 2016

I’ve written over 700 posts since I started blogging in July 2010. Here are my seven most viewed posts of 2016.

My perspective on the notorious “study” claiming medical errors are the third leading cause of death in the United States. Are there really 250,000 preventable deaths per year in US hospitals?

I followed up by commenting on the negative impact of naive reporting about that preventable death study in When bad research is not critically reported by journalists.

Radiologist Saurabh Jha and I discussed the risks of radiation and the rationale for ordering a CT scan to diagnose appendicitis in this post Irrational fear of CT scans in appendicitis.

Another post about appendicitis was my critique of a meta-analysis claiming that antibiotics were safe and efficacious for treating simple appendicitis. Needless to say, I disagreed. Antibiotics vs. surgery for appendicitis.

The issue of surgeon headgear doesn’t seem to go away. The traditional surgeon cap is being banned by some states and nursing organizations. This post, It's time to discuss surgeon headgear again, was popular. Bonus eighth post: The subject came up again when the Association of periOperative Registered Nurses and the American College of Surgeons had a dustup about it later in the year. OR head covering controversy: ACS versus AORN.

I reported on a controversial paper about the relationship between surgeons and anesthesiologists: How frequently do surgeons and anesthesiologists lie to each other?

One of my favorite topics is the lack of consistency among the multitude of hospital rating systems. I gave some examples in this post Why hospital rankings are bogus.

Thanks for following my blog and reading my posts. Happy New Year.

Friday, December 23, 2016

Good patient safety news you didn’t hear about

In the last five years, there’s been a 21% reduction in hospital-acquired conditions (HACs), according to a report by the Agency for Healthcare Research and Quality. This means that patients suffered 3.1 million fewer HACs than if the HAC rate had stayed at the 2010 level.

Since 2011, the decrease in HACs has reduced healthcare costs by an estimated $28.2 billion and has saved almost 125,000 lives.

This graphic summarizes the AHRQ findings.
Central line-associated bloodstream infections have fallen by 91%, and postoperative venous thromboembolism by 76%. Here’s a chart that shows the percent decreases in HACs.

The report said the reasons for these improvements “are not fully understood,” but might be due to the following:
  • Financial incentives created by the Centers for Medicare & Medicaid Services (CMS) and other payers’ payment policies,
  • Public reporting of hospital-level results,
  • Technical assistance offered to hospitals by the Quality Improvement Organization (QIO) program, and
  • Technical assistance and catalytic efforts of the HHS Partnership for Patients (PfP) initiative led by CMS.
A thorough Google search found only a few articles about this important and positive AHRQ report. Were they in the New York Times, Washington Post, Newsweek, US News, or The Daily Mail?

No. The news could only be found on HealthcareIT Analytics, Fierce Healthcare, the website of the Healthcare Association of New York State, Pharmacy Practice News, and HealthcareIT News, where I obtained the multicolored graphic above.

Why do you suppose no major media outlet reported the story?

Good news doesn’t get clicks.

Wednesday, December 21, 2016

No improvement in complication rates after instituting an operating room checklist

A before and after study at the University of Vermont Medical Center found that a 24-item operating room checklist did not significantly reduce the incidence of any of nine postoperative adverse outcomes.

More than 12,000 cases were studied, and outcomes included mortality, death among surgical inpatients with serious treatable complications, sepsis, respiratory failure, wound dehiscence, postoperative venous thromboembolic events (VTE), postoperative hemorrhage or hematoma, transfusion reaction, and retained foreign body (FB).

After the checklist was established, respiratory failure rates decreased significantly on the initial analysis, but the difference disappeared when the Bonferroni correction* was applied to the data set.
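The Bonferroni correction simply divides the significance threshold by the number of comparisons, so testing nine outcomes at an overall alpha of 0.05 means each individual result must clear 0.05/9. A minimal sketch in Python (the p-value here is hypothetical, not taken from the study) shows how a result can look significant before correction but not after:

```python
# Bonferroni correction: with nine outcomes tested at an overall alpha of 0.05,
# each individual comparison must clear 0.05 / 9 to count as significant.
alpha = 0.05
n_outcomes = 9  # mortality, sepsis, respiratory failure, etc.
adjusted_alpha = alpha / n_outcomes  # about 0.0056

p_respiratory_failure = 0.02  # hypothetical p-value, for illustration only

print(p_respiratory_failure < alpha)           # True: "significant" unadjusted
print(p_respiratory_failure < adjusted_alpha)  # False: not after correction
```

The correction guards against the multiple-comparisons problem: test enough outcomes and something will cross p < 0.05 by chance alone.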

Why didn’t the checklist work? I have discussed this in previous blog posts here and here. As was true in previous papers of this nature, many of the complications studied—respiratory failure, wound dehiscence, transfusion reaction, postoperative hemorrhage or hematoma—could not have been prevented by a checklist.

Tuesday, December 13, 2016

Breaking residency placement fever

 By Francis Deng* and Skeptical Scalpel

A recent opinion piece entitled “Residency Placement Fever” in the journal Academic Medicine by Gruppuso and Adashi noted a recent intensification in the volume of residency applications submitted and interviews offered/attended per applicant.

For keen observers of the Match process, this trend is neither a secret nor a surprise. The Electronic Residency Application Service (ERAS) has seen an increase in applications filed per US medical school graduate from an average of 30.3 in 2005 to 45.7 in 2015.

Sunday, December 11, 2016

Who really did the case?

According to the Residency Review Committee for Surgery, "A resident may be considered the surgeon only when he or she can document a significant role in the following aspects of management: determination or confirmation of the diagnosis, provision of preoperative care, selection and accomplishment of the appropriate operative procedure, and direction of the postoperative care."

In nearly all instances, resident "determination or confirmation of the diagnosis, provision of preoperative care, selection of the operative procedure, and direction of the postoperative care" happen only in emergencies. For the majority of elective patients and same day operations, the residents do not play significant roles in most components of perioperative management.

What about "accomplishment of the appropriate operative procedure"? Are the residents really doing the cases they scrub on?

A recent paper from the University of Texas Medical Branch in Galveston, called "Who did the case? Perceptions on resident operative participation," looked at this question in a surprisingly candid way. The authors asked residents and faculty to independently assess what percentage of the operation the resident performed.

For the 87 cases with responses from both resident and attending surgeon, agreement on the percent of the case performed by the resident (< 25%, 25 to 50%, 50 to 75%, > 75%) occurred in 61% of cases; agreement on the role the resident played (first assistant, surgeon junior year, surgeon chief resident, teaching assistant) occurred 63% of the time; and agreement on both percent and role occurred only 47% of the time.
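The figures above are simple percent agreement between paired ratings. A minimal sketch of that calculation, using made-up ratings rather than the Galveston data:

```python
# Percent agreement between paired resident/attending ratings.
# These five paired ratings are hypothetical, for illustration only.
resident_pct = ["25-50", ">75", "50-75", ">75", "<25"]
attending_pct = ["25-50", "50-75", "50-75", ">75", "<25"]

matches = sum(r == a for r, a in zip(resident_pct, attending_pct))
agreement_rate = matches / len(resident_pct)
print(f"{agreement_rate:.0%}")  # 80%
```

Note that percent agreement is the crudest measure; it doesn't account for agreement expected by chance the way a statistic like Cohen's kappa does.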

This reminds me of a story from when I was a resident. In the surgeons' locker room one day, someone asked a senior attending if the resident who scrubbed with him had done the case. The attending replied, "He thinks he did."

That's what the authors from Texas found too. In about two-thirds of the cases with disagreement about the percent of a case the residents did, the residents felt they performed larger portions of the case than did the faculty.

What constitutes "a significant role" is open to interpretation.

A resident once came to me and said, "I'm not really sure I should claim I was the surgeon for a case I scrubbed on today. Should I log myself as 'surgeon' anyway?"

I said, "If you have to ask, you probably shouldn't claim it."

Surgical residents are supposed to enter the cases they do in an online database, and the RRC uses these data in its accreditation process. The American Board of Surgery mandates that residents perform specific numbers of various types of cases in order to be eligible to take their boards.

A 2016 study in the Journal of Surgical Education surveyed 82 residents from various surgical specialties at UC Irvine and found that only about half of the responding residents were told how to assess their role, and they were often delinquent (at times by more than a year) in logging their procedures, leading to inaccuracies in the logs.

The authors concluded that the way cases were being logged raised "concerns about the use of the system for assessing surgical preparedness or crediting training programs."

The two papers cited above are small studies from single institutions, but in my opinion they probably reflect the reality in most residency training programs.

Submitted case log numbers may be misleading. This may be a previously unidentified factor in the crisis in confidence afflicting some graduating chief surgical residents.

Would competency-based training be better? The buzz about competency-based training has died down, and there are skeptics including the authors of this thoughtful editorial from the Journal of Graduate Medical Education.

Starting in the 2017-2018 academic year, the American Board of Surgery will require a minimum of 850 operative procedures for the five-year training period and 200 operations in the chief resident year—increases from 750 and 150, respectively.

Will competency-based training or increasing the number of operations required help?

Not if the residents aren't really doing the cases.