Studies suggest that about 50% of mental health challenges can be prevented if they are caught and addressed early. Doctors at Cincinnati Children’s Hospital are using the world’s second-most powerful supercomputer and AI to help address mental health challenges right at the start: when we’re kids.
This is a big deal.
About 13% of us suffer from some form of mental health disorder … that’s 971 million people globally. And it’s only gotten worse since Covid. If we can head off half of these challenges before they even really get started — or deeply rooted — that’s a massive amount of human suffering and damage we can prevent. And, undoubtedly, a huge economic and financial savings as well.
In this TechFirst episode, we meet and chat with Dr. John Pestian, who is leading the effort.
Check out my post on Forbes, or keep scrolling for full video, podcast, and transcript …
(Subscribe to my YouTube channel)
TechFirst podcast: using the world’s second-fastest supercomputer to predict mental health challenges in kids
Transcript: fixing 50% of mental health issues with AI, a supercomputer, and early intervention?
(This transcript has been lightly edited for length and clarity.)
John Koetsier: Can supercomputers and AI improve mental health? According to the Institute for Health Metrics [and] Evaluation, about 13% of us suffer from some form of mental health disorder. That’s about 971 million people, globally, and it’s only gotten worse since COVID. Doctors from Cincinnati Children’s Hospital are using the world’s second most powerful supercomputer to help solve the problem, perhaps, at the source … when we’re children.
Welcome, Dr. John Pestian.
Dr. John Pestian: Thank you. Thank you for having us today.
John Koetsier: Super happy to have you. What is your project?
Dr. John Pestian: So we’re working on the whole idea of pulling together and computing mental health trajectories. And what does that mean?
Well, when you’re young, that’s often when mental illness first begins to appear, and as you get older its trajectory tends to worsen and you need a lot more treatment. So the whole idea is early identification at the start of these trajectories for pediatric and adolescent patients.
And we focus on depression, anxiety, and suicide prevention; those are the three areas we’re starting with. We want to use our clinical expertise and the expertise of Oak Ridge National Laboratory — so, along with Cincinnati Children’s, we want to combine those two sets of expertise to develop a trajectory of how your mental illness is progressing.
So, whether or not you have children, you may have gone into a pediatrician’s office and seen those growth charts, where we measure the circumference of your head, and your weight, and your height, and you get a trajectory of where you’re going to fall along in growth.
But what we’re doing here is developing a mental health trajectory, or growth chart, to show how people are developing, and with early intervention you can avoid a great deal of mental illness.
In fact, there are a number of studies showing that if we can identify this early, we can stop, or treat for and alleviate, almost 50% of the mental illness that carries into adulthood. So catching it young, catching it early, and giving care is a very important part.
So that’s what our project is … to say it, quickly.
John Koetsier: That is amazing. 50% is potentially what you can head off with early intervention. How does it work? I mean, we know how growth charts work, right, we see, okay, they’ve measured people at various heights, and ages, and stages, and grade levels, and all that stuff.
How do you project out a mental health trajectory? What kind of data are you using for that?
Dr. John Pestian: So it’s not much different from a growth chart, but the data we’re using are related to mental illness.
And so we take the data that we collect at Cincinnati Children’s and use those to predict a high, a medium, or a low likelihood of mental illness. And then you take me or you, or whoever comes in, and we take your current state.
So the first thing we use the AI for is to create these feature spaces of mental illness. We compute the space, and then we plot you or me or whomever against that space, and it shows whether you’re in the high, the medium, or the low. The graph that shows the trajectories would show a high/medium/low space and where you are; or maybe I came in six times and all of a sudden I’m plotting close to the anticipated depression line, and it would say, ‘Oh, you’re plotting closely [inaudible] to the depression line.’
And then the clinician would then intervene at that point in time.
So we’re just developing a very high-tech decision support system. We’re not going to make decisions. We’re going to present it, and the clinician will say, ‘Wow, it looks like you’re following this trend, let’s go ahead and do this care, or let’s do that care. Let’s do the early identification.’
And that’s how it works.
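To make the “growth chart for mental health” idea a bit more concrete, here is a minimal sketch of what high/medium/low risk banding and a per-visit trajectory might look like in code. It assumes a model that already outputs a risk score for each visit; the thresholds, field names, and flagging rule are illustrative placeholders, not the actual Cincinnati Children’s system.

```python
# Minimal sketch of the "mental health growth chart" idea described above.
# Assumptions (not the actual Cincinnati Children's / Oak Ridge system):
# a trained model already produces a 0-1 risk score per visit, and the
# high/medium/low bands are fixed thresholds chosen for illustration.

from dataclasses import dataclass
from typing import List

@dataclass
class Visit:
    visit_date: str      # e.g. "2024-03-01"
    risk_score: float    # hypothetical model output in [0, 1]

def band(score: float) -> str:
    """Map a risk score to an illustrative high/medium/low band."""
    if score >= 0.7:
        return "high"
    if score >= 0.4:
        return "medium"
    return "low"

def trajectory(visits: List[Visit]) -> List[tuple]:
    """Return (date, score, band) per visit so a clinician-facing chart
    can plot the patient's path against the banded space."""
    return [(v.visit_date, v.risk_score, band(v.risk_score)) for v in visits]

def flag_for_review(visits: List[Visit], threshold: float = 0.6) -> bool:
    """Decision *support* only: surface the trend, never decide care."""
    recent = [v.risk_score for v in visits[-3:]]
    return bool(recent) and sum(recent) / len(recent) >= threshold

if __name__ == "__main__":
    history = [Visit("2024-01-10", 0.22), Visit("2024-04-02", 0.45),
               Visit("2024-07-15", 0.63), Visit("2024-10-01", 0.71)]
    for row in trajectory(history):
        print(row)
    print("Flag for clinician review:", flag_for_review(history))
```

Note that the last function only raises a flag; as Dr. Pestian stresses, any decision about care stays with the clinician.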
John Koetsier: Where’s the supercomputer come in? So far, seems fairly simple, seems fairly doable … growth chart, mental health chart, where you’re trending, what’s happening … why do you need a supercomputer?
Dr. John Pestian: So, mental illness, as you can imagine, is complex. It’s very complex. And why is it complex? It has a biological component. It has a thought component. It has environmental characteristics, like whether you’ve been bullied. You have those social determinants, so you have all these features, all these characteristics, and that makes it a very big space to try to compute.
I’m on the U.S. Department of Veterans Affairs’ Million Veteran Program, where we use natural language processing to compute the likelihood of veteran suicide and help prevent it. And the model I built early on — and I hate to use these numbers, but it’s very big, it’s 10 to the power of 18 — is one where, John, if you try to run that on your local computer, it’s going to take you about a decade … to compute that.
John Koetsier: Yeah.
Dr. John Pestian: You might take it to your local university and it might take their cluster five years.
Well, the supercomputers allow us to bring it down to five hours, or four minutes, and we need that to train the model.
So the complexity of mental health — when you think about it, what other illness perturbs the biology space, perturbs you biologically to get you to think differently, to take your own life?
John Koetsier: Yes.
Dr. John Pestian: I mean, really think about that. And so it’s very complex, and there are so many externals. In some cases it could be the temperature; we talked about bullying; there are so many variables and characteristics that set that biology off, and then the biology keeps going and going, and all of a sudden you’re thinking, ‘I can’t take this anymore.’
Ed Shneidman, one of my mentors and the founder of suicide research, who passed away years ago, would call it “psychache”: pain in the brain that eventually you want to get rid of. What other disease, what other problem, what other illness, what other symptom leads us to want to take our own lives?
John Koetsier: Mm-hmm, horrific.
Dr. John Pestian: So we need the supercomputers in order to deal with that complexity. And we couldn’t do this 10 years ago; the machines didn’t exist. They just didn’t exist. I mean, we thought about these things, but now that we have this high-performance computing, we have more and more opportunity to test it.
John Koetsier: You actually go beyond the statistics as well, and the correlations, because you’re using artificial intelligence also. How are you using that and what are you getting out of that?
Dr. John Pestian: So, the AI or machine learning … first of all, let’s acknowledge that everybody in the world now claims to use AI for everything [laughter], so let’s say that we’re using it beyond, you know, figuring out when we need our next gallon of gas. We’re really, seriously using it.
And so, we use the AI first to build that space of what to anticipate, whether you’re high, medium, or low … because the data are so big and so powerful that traditional statistics won’t allow you to do that.
Then, now that we have this model working and there’s constant literature coming out, how do we update the model to keep pace with the scientific literature?
So here we use autonomous curation, where we use natural language processing, and we’re just testing it now, to read through Medline and PubMed articles and say, well, you know, this may be important to the model, so let’s go ahead and update it and test it.
So, feature selection, testing and validation, and autonomous curation are the three things that we’re working on now.
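For readers curious what “autonomous curation” could look like at its simplest, here is a toy sketch that scans already-downloaded abstract text for candidate risk-factor terms and proposes the frequent ones for human review. The term list, thresholds, and scoring are assumptions for illustration only, not the team’s actual NLP pipeline.

```python
# Toy sketch of "autonomous curation": scan already-downloaded abstract text
# for candidate risk-factor terms that might warrant updating the model.
# The term list, scoring, and cutoff are illustrative assumptions,
# not the actual Cincinnati Children's / Oak Ridge pipeline.

import re
from collections import Counter
from typing import Dict, List

CANDIDATE_TERMS = ["sleep disturbance", "bullying", "social isolation",
                   "family conflict", "screen time"]

def score_abstract(text: str) -> Counter:
    """Count mentions of candidate terms in one abstract."""
    counts = Counter()
    lowered = text.lower()
    for term in CANDIDATE_TERMS:
        counts[term] += len(re.findall(re.escape(term), lowered))
    return counts

def curate(abstracts: List[str], min_mentions: int = 3) -> Dict[str, int]:
    """Aggregate term counts across a batch of abstracts and return terms
    frequent enough to be proposed (to a human reviewer) as new features."""
    total = Counter()
    for text in abstracts:
        total.update(score_abstract(text))
    return {t: n for t, n in total.items() if n >= min_mentions}

if __name__ == "__main__":
    batch = ["Sleep disturbance and bullying were associated with ...",
             "We examined social isolation, sleep disturbance, and ...",
             "Bullying, family conflict and sleep disturbance predicted ..."]
    print(curate(batch, min_mentions=2))
```

The key design point matches what Dr. Pestian describes: the machine only surfaces candidate updates; a person still decides whether to retrain and validate the model.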
John Koetsier: Amazing. So you can update, I won’t say real-time, but near real-time as research comes out and you can change your models, and you can probably change your predictions, and you can probably update those predictions to clinicians and people in the field, and they can take real-time action almost, correct?
Dr. John Pestian: Yes, but we’ll always have a human — well, for the near term, we’ll have a human intervention added to decision support. It’s not a decision tool. There’s a big difference between letting the machine make a decision and the machine saying, ‘Oh, it looks like you’re going to be heading into depression.’ And so we have to make sure that we support decisions and still keep that human intervention.
John Koetsier: What does that look like? What does that look like when a human gets involved?
Is that in a clinical setting? Is that in a school setting? Is that in the hospital setting? What’s that look like? How’s that work?
Dr. John Pestian: So, the answer to that is yes, it’s in all of those settings. But let’s take the hospital setting, say the emergency room: we built a tool that listens to a discussion with an adolescent, and when they come in we ask them questions, what we call ubiquitous questions.
And they’re all built off of this massive collection of a couple thousand suicide notes that I collected, and then we built natural language models off of those. These are notes that people wrote just before they died by suicide.
So we took those, we built this corpus, and we asked, ‘Well, what’s important? What kinds of questions do you see?’ And we found that questions like ‘Do you have secrets?’ and ‘Are you angry?’ were very good at pulling out information that would help us identify how close someone came to that original corpus I talked about, that low, medium, and high, but in this case for suicide rather than anxiety.
So, in the ED, we had a little handheld device — it was an early version about 10 years ago and we probably wouldn’t use the same thing now; it was like an iPhone — and you’d put it down, they would talk through the questions, the patient would answer, and then it gave you a rank of high, medium, or low for suicidal behavior. Not suicide, but suicidal behavior: no one can really predict whether you’re going to die by suicide; they can just say, ‘You have high risk. Are you doing it?’
So, that’s how it looked.
And then the physicians and clinicians could use that in their decision-making process, you know, do we admit this person? In the schools, the same thing works; we’ve spun it out to a start-up company in that case, and they use it to listen to the voice.
And we were able to show, and we published articles showing, that when you’re talking, if you’re suicidal, your pauses are longer. Things along that line, [unclear], are different when you’re talking, and so we could show that and you could use it for decision support in school clinics.
We also published work showing that facial expressions are different as well. So that’s how it works clinically: we decompose those characteristics of language and how you communicate, and then go from there.
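As a rough illustration of the pause-length signal Dr. Pestian mentions, here is a small sketch that measures the silent gaps between word timestamps produced by any speech-to-text step. The input format and the 0.5-second “long pause” cutoff are assumptions made for the example; the published research used its own acoustic features and thresholds.

```python
# Illustrative sketch of the pause-length signal mentioned above: given word
# timestamps from a speech-to-text step, measure the silent gaps between
# words. The 0.5 s threshold and input format are assumptions for this
# example, not the features or cutoffs from the published studies.

from typing import List, Tuple

Word = Tuple[str, float, float]  # (token, start_seconds, end_seconds)

def pause_lengths(words: List[Word]) -> List[float]:
    """Silent gap (seconds) between the end of each word and the next."""
    return [max(0.0, nxt[1] - cur[2]) for cur, nxt in zip(words, words[1:])]

def pause_summary(words: List[Word], long_pause: float = 0.5) -> dict:
    """Summarize pauses so they can feed a decision-support dashboard."""
    gaps = pause_lengths(words)
    if not gaps:
        return {"mean_pause": 0.0, "long_pauses": 0}
    return {"mean_pause": sum(gaps) / len(gaps),
            "long_pauses": sum(1 for g in gaps if g >= long_pause)}

if __name__ == "__main__":
    sample = [("i", 0.0, 0.2), ("just", 0.9, 1.1), ("feel", 1.2, 1.5),
              ("tired", 2.4, 2.8)]
    print(pause_summary(sample))
```

A summary like this would only ever be one input among many for a clinician, in line with the decision-support framing above.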
John Koetsier: So, [sighing] this is a pretty deep topic, and usually I focus mostly on technology, but I want to also ask — and if it’s too personal, it’s too personal — how did you come to this field of study?
That has to be … not the first pick for a lot of people, to read through a huge collection of notes that people wrote in the depths of despair just before they made a very life-altering decision. How did you come to be in this field?
Dr. John Pestian: So, you’re right, it’s not the first pick for a lot of people, but at this point there are enough of us working on it. When I first started this 15 years ago, there was just a handful of people interested, the computation was just breaking open, and we were talking about whether we could do this, so it was early on.
There’s nothing I’m … it’s like what you do: you disseminate knowledge, and you’re taking what I’m saying and figuring out a way to tell people about it so they can understand it. It’s your vocation. It’s what you’ve done, it’s what you’ve chosen. This is my vocation. My vocation is to find technology for people with mental illness, to reduce their misery, or maybe save their lives on occasion.
And so, I don’t have anything particular. I’m glad I don’t have a particular story about something that happened to a family member or, you know … it just happened. It’s just what I’m cut out to do.
John Koetsier: Mm-hmm. Wonderful. Where is the project right now and what are the next steps? When do you think this will be fully rolled out?
Dr. John Pestian: So, the data are being analyzed. We’ve combined all those pieces of data — the clinical data, the external data, the social determinants, the census data, the environmental data, all those things — and we’re working now on how we best plug into the AI algorithmically. What’s the best way? There are a lot of questions in that, like, how do we find out that some clinician, some physician, spoke about suicide in the clinical notes?
You know, so you have to write your natural language processing, but there are a lot of ways to say suicide. So your natural language processing has to be trained in order to [inaudible].
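To illustrate why “a lot of ways to say suicide” is a real engineering problem, here is a deliberately simple phrase-matching sketch for screening clinical notes. The phrase list is hypothetical, and a production system would use a trained NLP model with negation and context handling rather than keywords.

```python
# Sketch of the note-screening problem described above: "suicide" appears in
# clinical notes under many phrasings. This keyword/phrase matcher is a
# deliberately simple stand-in; the phrase list is illustrative, and a real
# system would use a trained model with negation and context handling.

import re
from typing import List

SUICIDE_PHRASES = [
    r"suicid\w*",              # suicide, suicidal, suicidality
    r"self[- ]harm",
    r"ending (his|her|their) life",
    r"no longer wants? to live",
]
PATTERN = re.compile("|".join(SUICIDE_PHRASES), re.IGNORECASE)

def mentions_suicide(note: str) -> bool:
    """True if the clinical note contains any phrasing in the lexicon."""
    return PATTERN.search(note) is not None

def screen_notes(notes: List[str]) -> List[int]:
    """Return indices of notes a clinician should review first."""
    return [i for i, note in enumerate(notes) if mentions_suicide(note)]

if __name__ == "__main__":
    notes = ["Patient reports improved sleep and appetite.",
             "Mother states patient said she no longer wants to live.",
             "History of self-harm; denies suicidal ideation today."]
    print(screen_notes(notes))  # -> [1, 2]
```

Even this toy version shows the trade-off: the third note is flagged despite the denial, which is exactly the kind of ambiguity a trained model, and ultimately a clinician, has to resolve.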
So we have a lot of those things that we’re doing, and we hope within the next six months to plot the first group of patients, a group of early adopters, and start plotting where they fall on the graph. Then over the next year or so, we’ll work on putting it on the early-adopter clinicians’ desktops, so that when a patient comes into the office the clinician will see, you know, it looks like you’ve been depressed. Let’s talk about that. What can we do to help?
So, I’d like to say in a year we’ll have the first version, and then it will probably take us three years to learn all our mistakes and go back and kind of redo it again, because that’s just the nature of science and technology, as you know. You build, you fail, you build, you fail — what is it, ‘fail fast, fail often,’ or something like that.
So we just kind of go through, and that’s where I think we’ll need most of the technology.
One of the things that we’re working on now is how you portray this new language you’re developing graphically, the whole idea of symbolic language. How do I present this new symbolic language to people, to clinicians, to parents, and to patients, all in a way that everyone can understand what we’re saying, and not just a big spreadsheet of numbers or something like that?
John Koetsier: [Laughing] Yes.
Dr. John Pestian: So that’s going to take a little bit of effort. We have some folks, Dr. Zinder and others, who are helping us with that. So those are the types of things we need to get done. You asked where we are: that’s where we are, and we’ll hopefully have some good samples on the desk in a year or so. It’s moving along fast and I’m very excited about it.
John Koetsier: Let’s talk about taking this kind of technology to the next billion people, maybe globally, at some point. If you can look into your crystal ball a little bit … mental health is such a widespread thing: challenges with it, poor mental health, problems, those sorts of things.
I mentioned off the top that 13% was a global estimate; that’s almost a billion people. And guess what? The rest of us aren’t always happy and wonderful and, you know, on the treetops or the mountaintops all the time either, right?
What can we build into the technology that all of us have — whether that’s our smartphones, or a digital assistant like Siri or Alexa or Hey Google — that can help? Do you see that in the future?
Dr. John Pestian: You know, I can probably think with you about that now. I haven’t spent a whole lot of time thinking about it — our emphasis is on clinical care delivery and those people that are treating the patients, how do we help them — but if I wanted to scale that, I think I would scale it to caregivers.
I’m not one who has bought into the whole idea of allowing these tremendously important decisions to be made by a machine. I like the idea of using machines to bring the best information to the clinician and the patient, and say, ‘How do you think we should go from here? What will treat you best?’
So, my idea of scaling is getting it into the hands of those who treat people, and that clinician could be a social worker, a physician, a school counselor. It doesn’t really matter who it is. But I’m not ready to say that the machine is going to do it all for us … and I don’t know if I ever will be.
John Koetsier: Mm-hmm. Okay.
Dr. John Pestian: It’s because it’s so complicated. Like we talked about: biology, environment, thought. All of that is so complicated, and I don’t know how the machines would do it entirely for us, but they can help us a great deal.
John Koetsier: Yeah. Excellent. Well, a very important project. I wish you all the success that you can have there, and thank you so much for your time.
Dr. John Pestian: And thanks for your time. Thanks for the invite.
TechFirst is about smart matter … drones, AI, robots, and other cutting-edge tech
Made it all the way down here? Wow!
The TechFirst with John Koetsier podcast is about tech that is changing the world, including wearable tech, and innovators who are shaping the future. Guests include former Apple CEO John Sculley, the head of Facebook gaming, Amazon’s head of robotics, GitHub’s CTO, Twitter’s chief information security officer, scientists inventing smart contact lenses, startup entrepreneurs, Google executives, former Microsoft CTO Nathan Myhrvold, and much, much more.
Consider supporting TechFirst by becoming a $SMRT stakeholder, and subscribe on your podcast platform of choice: