Mining for Fool’s Gold

My wife found an interesting news item the other day about robots getting smarter than humans. I got a kick out of it because it struck me as just another below-the-fold or even back-page filler item, the kind we see occasionally about machines taking over the world.

Then I had one of those inexplicable memories of my childhood. When I was tall enough to reach the top of the sideboard in our home, it became my chore to dust the furniture.

I had to do a thorough job of dusting each and every piece of bric-a-brac on the sideboard, which included a fist-sized hunk of iron pyrite, otherwise known as Fool’s Gold. The rock glittered seductively whether I dusted it or not.

Mom taught me what Fool’s Gold is, that it’s not the real McCoy, not real gold, pretty but valueless. So why did she make me dust that rock? I’ll never know.

You could say that a lot of what we encounter in life is Fool’s Gold, including a lot of notions people take for granted. The smart robots article says robots will probably never have the common sense that humans have.

That got me to wondering what the definition of common sense is. I like the Urban Dictionary definition: “What I think you should know.”

Anyway, I got a little nervous when I read that doctors could be replaced by robots that could “screen” for depression simply by interpreting the look on your face.

On the other hand, it’s probably just Fool’s Gold. I still think those suffering from depression really want another person to share their burden.

When I was a boy, I used to deliver newspapers. Anybody remember that now? Yes, the news was made of paper and printed on pages. And newspaper boys had to collect money from the customers to whom they delivered the expertly rolled or folded papers that you could toss onto the porch.

One old boy I tried to collect from shouted at me from inside, saying the paper “…ain’t worth a shit!” He’d been drinking. Later, I learned that his wife had recently died.

The news is mostly digital now. It’s still mostly not worth a shit. Fool’s Gold.

Later on, a slick salesman sold me something he kept calling “this product” for a good long while before I finally found out the product was Amway. By then it was too late. When it was my turn to try to sell “this product,” I got a big “No thank you!” from nice people. Fool’s Gold.

Most of you know how I feel about Maintenance of Certification (MOC). It’s American Psychiatric Association (APA) election time again, and members running for office are trying to get votes by promising to make Focus: The Journal of Lifelong Learning in Psychiatry (a major source of CME, self-assessment, and other credits toward MOC recertification) free. The candidates and some members call it a pretty good deal.

But they forget that MOC doesn’t measure up to what the American Board of Medical Specialties (ABMS) and specialty boards like the American Board of Psychiatry and Neurology (ABPN) promise.

I paid over $300 for my Focus Lifelong Learning Psychiatry subscription, as I have for many years. I suppose that’s a testament to its value as a tool for staying current. However, I wonder if the APA would simply add the $300 to the already high membership fee of $981.

I’m a clinician-teacher at The University of Iowa and I’ll not be renewing either my APA or AMA membership because of the high cost of both. That’s a personal decision I’ve made after a great deal of reflection. It’s mainly because I have to pay so much already for all of the educational materials required just to keep up with MOC. I’ll have to renew my general psychiatry certification one more time before I retire in just a few years.

I know that many trainees, faculty, and practitioners in private practice value their membership, and I’m not criticizing them. And I’m fully aware that MOC has spread throughout our health care system in almost every direction, including but not limited to reimbursement in the form of the CMS PQRS MOC program, for which penalties for non-participation begin in 2015.

I’m also aware that the recently published JAMA studies (link: https://thepracticalpsychosomaticist.com/2014/12/16/jama-studies-support-moc-you-decide/) attempting to show whether MOC actually changes outcomes in internal medicine found pretty weak effects. The authors acknowledge that the evidence base showing MOC itself is effective is practically nonexistent, despite their obvious conflicts of interest.

I notice that Focus: The Journal of Lifelong Learning in Psychiatry has been calling for papers on MOC. Does anyone know of studies showing that MOC changes outcomes in psychiatric patients?

I hope there’s still room for criticism of MOC, which in the opinion of thousands of doctors is not fulfilling the claims of the ABMS or any of the specialty certification boards. Suspicion remains high that MOC makes too much money for board executives.

There are bad doctors everywhere. Many of them are board-certified and can game the MOC system with ease.

At last count, about 14 state medical societies have adopted resolutions opposing MOC and Maintenance of Licensure (MOL). Over the past year, I’ve sponsored the Iowa resolutions, both of which have passed.

But I have always treasured and practiced the principle of lifelong learning. I still believe that what I do in my position as a teacher and psychiatric consultant in an academic medical center is more relevant to my work than MOC will ever be. I think the pursuit of lifelong learning must come from the inside out and I’m skeptical of any effort by regulators to compel doctors to engage in reflective self-improvement.

The foregoing (starting with “I paid over $300…”) was my contribution to an APA LinkedIn discussion.

MOC is Fool’s Gold.

And then I recently found out about a new tool for the assessment of medical students and residents called the Script Concordance Test (SCT). I didn’t know it at the time, but one of the rock star medical students here had already introduced the concept of the SCT to me. He served on the psychiatry consultation service a while ago and did an outstanding job on my idea of a practical lifelong learning process relevant to what I do as a doctor: the Clinical Problems in Consultation Psychiatry (CPCP) presentation. He learned about the SCT (or at least what it purports to measure) from a doctor for whom I have a great deal of respect, and he summarized it on one of the slides:

In my first year of medical school, Dr. LeBlond gave a lecture to our FCP class on diagnostic reasoning. He recounted a story where his father had told him, “…it’s the train you don’t see that’ll hit you.” That’s stuck with me since that day, and it reminds me that, in order to arrive at the diagnosis that will allow you to help the patient, you have to think of that diagnosis first. Sometimes this is obvious, but other times you have to dig deeper.

Generally, we are taught that when we hear hoof beats, we should think horses (not zebras). However, we must also be astute enough to realize when we are metaphorically in Africa. A good clinician thinks first about likely causes, as “common things occur commonly,” but when common things have been ruled out, the master clinician can use hypothetico-deductive reasoning to arrive at the correct diagnosis, even if it is more obscure. This, however, relies on a full knowledge base.

Why discuss HE today? It’s a rare disorder, after all. However, we do note that it has been known to present with symptoms that may cause the primary team to consider calling for a psychiatry consult. Indeed, much of the consulting psychiatrist’s job is reassuring the primary team that the problem they face is not, in fact, a primary psychiatric problem. In addition, as healthcare providers, we are all concerned for the best outcome for the patient, and in the case of HE, that requires early diagnosis and treatment. For that reason, it behooves us to have HE in the back of our mind as a possible explanation for acute or sub-acute encephalopathy.

It’s the “hypothetico-deductive reasoning” that I guess the SCT is supposed to assess rather than memorized medical knowledge in the form of facts. As the abstract for a paper cited below says, “The script concordance test (SCT) assesses clinical reasoning under conditions of uncertainty.”

Well, it sounds like a pretty good test: something that avoids the rote memorization that is so much a part of medical education and that educators are trying to change.

And then I dug through PubMed and found a recently published paper describing a particular flaw in the SCT: trainees can exploit a weakness in the test simply by giving middle-of-the-road answers and avoiding the extreme choices [1].
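For the curious, here is a minimal sketch of how that exploit could work. It assumes the aggregate scoring scheme usually described in the SCT literature: each item is answered on a 5-point scale, a panel of experts answers the same item, the modal panel answer earns full credit, and less popular answers earn partial credit in proportion to how many panelists chose them. The scale, the simulated panels, and all the numbers below are my own illustrative assumptions, not data or code from the See et al. paper.

```python
import random
from collections import Counter

SCALE = [-2, -1, 0, 1, 2]  # hypothetical 5-point SCT response scale

def item_credit(panel_answers):
    """Partial credit per response: panel votes divided by the modal vote count."""
    counts = Counter(panel_answers)
    modal = max(counts.values())
    return {ans: counts.get(ans, 0) / modal for ans in SCALE}

def total_score(responses, panels):
    """Sum each item's credit for the candidate's chosen response."""
    return sum(item_credit(panel)[r] for r, panel in zip(responses, panels))

random.seed(42)
n_items, n_experts = 30, 16  # 16 experts echoes the paper's panel size

# Simulated panels: under uncertainty, modal answers rarely sit at the
# extremes of the scale (an assumption, but it matches the paper's concern).
panels = []
for _ in range(n_items):
    center = random.choice([-1, 0, 0, 0, 1])  # mid-scale bias (assumption)
    votes = [max(-2, min(2, center + random.choice([-1, 0, 0, 1])))
             for _ in range(n_experts)]
    panels.append(votes)

guesser = [random.choice(SCALE) for _ in range(n_items)]  # uniform random answers
fence_sitter = [0] * n_items  # the exploit: always pick the middle of the scale

print("random guessing:", round(total_score(guesser, panels), 1))
print("always answer 0:", round(total_score(fence_sitter, panels), 1))
```

Because mid-scale answers sit near the panel's mode on most items, the fence-sitter collects partial credit almost everywhere and handily beats the random guesser. The fix the authors propose maps onto the same sketch: put extreme modal answers on a larger share of items, and the fence-sitter's advantage shrinks.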

I think it’s funny that the SCT is still called a “new” test although references go back 14 years. Change is sometimes glacial. One of the older references at the end of the See et al. paper calls the SCT “…a tool to assess the reflective clinician.” Sounds ambitious.

Maybe it’s Fool’s Gold.

I don’t know whatever happened to that hunk of Fool’s Gold I used to dust every day. It’s gone, along with the sideboard. Mom’s gone too. She taught me what Fool’s Gold is.

Reference:

1. See, K. C., et al. (2014). “The script concordance test for clinical reasoning: re-examining its utility and potential weakness.” Medical Education 48(11): 1069-1077.
Context: The script concordance test (SCT) assesses clinical reasoning under conditions of uncertainty. Relatively little information exists on Z-score (standard deviation [SD]) cut-offs for distinguishing more experienced from less experienced trainees, and whether scores depend on factual knowledge. Additionally, a recent review highlighted the finding that the SCT is potentially weakened by the fact that the mere avoidance of extreme responses may greatly increase test scores.

Objectives: This study was conducted in order to elucidate the best cut-off Z-scores, to correlate SCT scores with scores on a separate medical knowledge examination (MKE), and to investigate potential solutions to the weakness of the SCT.

Methods: An analysis of scores on pulmonary and critical care medicine tests undertaken during July and August 2013 was performed. Clinical reasoning was tested using 1-hour SCTs (Question Sets 1 or 2). Medical knowledge was tested using a 3-hour, computer-adapted, multiple-choice question examination.

Results: The expert panel was composed of 16 attending physicians. The SCTs were completed by 16 fellows and 10 residents. Fourteen fellows completed the MKE. Test reliability was acceptable for both Question Sets 1 and 2 (Cronbach’s alphas of 0.79 and 0.89, respectively). Z-scores of −2.91 and −1.76 best separated the scores of residents from those of fellows, and the scores of fellows from those of attending physicians, respectively. Scores on the SCT and MKE were poorly correlated. Simply avoiding extreme answers boosted the Z-scores of the lowest 10 scorers on both Question Sets 1 and 2 by ≥1 SD. Increasing the proportion of questions with extreme modal answers to 50%, and using hypothetical question sets created from Question Set 1, overcame this problem, but consensus scoring did not.

Conclusions: The SCT was able to differentiate between test subjects of varying levels of competence, and results were not associated with medical knowledge. However, the test was vulnerable to responses that intentionally avoided extreme values. Increasing the proportion of questions with extreme modal answers may attenuate the effect of candidates exploiting the test weakness related to extreme responses.

