I often wonder whether Artificial Intelligence (AI) mavens could come up with a better psychiatric consultant for the general hospital. I have renewed interest in the idea now that the National Geographic Channel is halfway through its six-episode series, “Year Million.” It’s a fun, speculative, futurist view of how computers are becoming ever more integrated into our daily lives, leading to the idea that maybe we should just upload ourselves to the big computer in the sky.
The show is narrated by Laurence Fishburne, a good choice for a program that reminds me of the movie The Matrix, which gets mentioned a couple of times, naturally. I googled the term “Year Million” and found a book titled “Year Million” by Damien Broderick, published in 2008. It doesn’t look like the makers of the TV series were aware of the book, but I thought one of the book’s reviews on Amazon could easily apply to the show. An excerpt goes like this:
“This book is almost all pure science fiction. These essays range from wildly optimistic to awesomely wildly optimistic. I wonder if the contributors were told explicitly to “keep it positive”. They certainly were not told to “keep it realistic”. Anyone who has studied physics and a couple of other sciences can see that there is very little realism in these essays. Many of them assume that human civilisation is going to spread around the galaxy in the next million years. You just have to look at the cost of sending humans to the Moon to get some clues as to why that is nonsense. Also look at how reliable our computer systems are. A high-tech device that works for 100 years is extremely exceptional, even if not exposed to the conditions in space. A probe going at 0.1c through solar and interstellar dust at 4 degrees Kelvin for 100 years is just not going to make it. High-tech devices need doctors and nurses to fix them when they are sick.”
Speaking of “doctors and nurses,” I also took note of how progress on integrating mind and body in our health care system is going. Setting aside for the moment the integration of electronic circuits and the human brain, there was an article in the May 26, 2017 issue of Psychiatric Times titled “Medically and Psychiatrically Complicated Patients” by Kenneth Certa, MD. Dr. Certa does an excellent job of describing the problems of the dis-integrated care found in most hospitals, where medical and psychiatric disorders are treated sequentially instead of simultaneously, as they can be on a Medical-Psychiatry Unit (MPU), which he never mentions in the text of the article.
However, his short reference list includes a paper about the European model of the MPU, which mainly targets persons suffering from somatic symptom disorders. That’s a far cry from the American model in place at some medical centers, notably the MPU at The University of Iowa Hospitals and Clinics. I left a comment at Psychiatric Times about this, which is copied below:
“I think Dr. Certa’s review is excellent. However, it triggers the question of why medical-psychiatry units (MPUs) weren’t mentioned. MPUs address many of the challenges Dr. Certa identifies as being associated with the split-care model.
Dr. Certa lists one reference about the European MPU model, which is mainly for patients with somatic symptom disorders.
That is not the model used in the U.S. The American model is the Type IV MPU, one example being the 15-bed unit at The University of Iowa Hospitals and Clinics (UIHC). The Type IV MPU can manage both high-acuity medical and high-acuity psychiatric problems.
There are pros and cons to MPUs, and it’s tough to find any published literature about them past the mid-1990s (see below).
I still like to call it a “medical-psychiatry unit” (MPU) even though my old teacher, Dr. Roger Kathol, prefers Complexity Intervention Unit (CIU).
Dr. Kathol created the MPU at UIHC in the 1980s. It is highly respected and the model we all point to as the best example of how to provide integrated care in a hospital setting. Hospital representatives from around the country, and from overseas as well, travel here to learn how to implement the MPU in their own systems. And it’s a great training setting for learners in the Medicine-Psychiatry Residency Program.
I worked as a co-attending on the MPU here at UIHC for about 17 years. I can tell you, in my opinion, it’s the best way to provide excellent clinical care to patients who have complex, comorbid psychiatric and medical problems.
What might make the MPU more widely adaptable and readily adopted? The current payer system by private insurance carriers is one thing. Other thoughts about it are in my blog post at https://thepracticalpsychosomaticist.com/2013/05/12/the-medical-psychiat…
Hall RCW, Kathol RG: Developing a level III/IV medical/psychiatry unit: establishing a basis, design of the unit, and physician responsibility. Psychosomatics 33:368-375, 1992
Kathol RG, Harsch HH, Hall RCW, et al: Categorization of types of medical/psychiatry units based on level of acuity. Psychosomatics 33:376-386, 1992″
If we’re having this much trouble integrating mind and body just in terms of improving health care service delivery, I’m not sure how successful we’ll be integrating AI and our brains. By the way, if you want real neuroscience about the human brain, you can check out the recent blog post of Dr. George Dawson. And Dr. Bill Yates has an interesting post about executive function as well. I see executive dysfunction every day, frequently associated with delirium.
Let’s see, I started this meandering essay with a question about whether a general hospital psychiatric consultant could be replaced by a robot. I’m not sure. Some workers are already being replaced by bots or AI. You can probably find a lot of articles online authored by experts who believe that bots can take over the jobs humans find “dirty, dangerous, and dull.” Well, a hospital can be a pretty dirty place, which is probably one reason why about once a year you can find supporters of the idea that shaking hands in hospitals is bad and should be replaced by the fist bump. And hospitals can be dangerous in some ways. You can pick up an infection (unless you fist bump), and delirious patients can sometimes become violent. I’m having a little trouble with the idea that a hospital is dull, at least for a psychiatric consultant.
But AI might not be bothered by dullness or excitement. That assumes we can’t build emotions into them, although one article, which reads suspiciously as though it might have been written by a bot, suggests that’s debatable.
It’s pretty complicated even to study how humans learn to be empathic, much less to build empathic bots, and reading the kinds of papers researchers write about this topic can be a lot more demanding than sitting in an easy chair and being dazzled by Laurence Fishburne’s laid-back narration and the hypnotic light show of the Year Million TV show:
Lockwood, P. L., et al. (2016). “Neurocomputational mechanisms of prosocial learning and links to empathy.” Proceedings of the National Academy of Sciences 113(35): 9763-9768. Reinforcement learning theory powerfully characterizes how we learn to benefit ourselves. In this theory, prediction errors—the difference between a predicted and actual outcome of a choice—drive learning. However, we do not operate in a social vacuum. To behave prosocially we must learn the consequences of our actions for other people. Empathy, the ability to vicariously experience and understand the affect of others, is hypothesized to be a critical facilitator of prosocial behaviors, but the link between empathy and prosocial behavior is still unclear. During functional magnetic resonance imaging (fMRI) participants chose between different stimuli that were probabilistically associated with rewards for themselves (self), another person (prosocial), or no one (control). Using computational modeling, we show that people can learn to obtain rewards for others but do so more slowly than when learning to obtain rewards for themselves. fMRI revealed that activity in a posterior portion of the subgenual anterior cingulate cortex/basal forebrain (sgACC) drives learning only when we are acting in a prosocial context and signals a prosocial prediction error conforming to classical principles of reinforcement learning theory. However, there is also substantial variability in the neural and behavioral efficiency of prosocial learning, which is predicted by trait empathy. More empathic people learn more quickly when benefitting others, and their sgACC response is the most selective for prosocial learning. We thus reveal a computational mechanism driving prosocial learning in humans.
This framework could provide insights into atypical prosocial behavior in those with disorders of social cognition.
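For readers who want a feel for what “prediction errors drive learning” means in that abstract, here is a minimal sketch in Python. It is not the authors’ actual computational model, just a textbook Rescorla-Wagner-style update; the outcome sequence and the two learning rates (a higher one for learning for oneself, a lower one for prosocial learning) are hypothetical values chosen only to mimic the paper’s behavioral finding that people learn more slowly when the reward goes to someone else:

```python
def update_value(value, outcome, alpha):
    """One Rescorla-Wagner step: nudge the value estimate toward the
    observed outcome by a prediction error scaled by learning rate alpha."""
    prediction_error = outcome - value  # actual minus predicted reward
    return value + alpha * prediction_error

def learn(outcomes, alpha, v0=0.0):
    """Run the update over a sequence of reward outcomes (1 = reward, 0 = none)."""
    value = v0
    for outcome in outcomes:
        value = update_value(value, outcome, alpha)
    return value

# A mostly-rewarded stimulus; both "learners" get identical experience.
outcomes = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]

v_self = learn(outcomes, alpha=0.3)       # faster learning for oneself
v_prosocial = learn(outcomes, alpha=0.1)  # slower prosocial learning
```

After the same ten trials, the slower learner’s value estimate lags well behind the faster one’s, which is the computational-model version of “people can learn to obtain rewards for others but do so more slowly.”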
Like it or not, science is hard, and some of us had better be doing it or we’ll be in pretty big trouble. But I don’t think I want to upload myself into the Psychiatric Consultant Supercomputer. I would rather look at trees, flowers, and birds with my own flawed eyes, feel the breeze and the warmth of the sun on my wrinkled skin, and hear music playing with my own ears.