
AI and Prostate Cancer

Transcript

  • 00:00 --> 00:04Support for Connecticut Public Radio comes from AstraZeneca,
  • 00:04 --> 00:10a biopharmaceutical business that is pushing the boundaries of science to deliver new cancer
  • 00:10 --> 00:14medicine. More information at astrazeneca-us.com.
  • 00:14 --> 00:20Welcome to Yale Cancer Answers with doctor Anees Chagpar.
  • 00:20 --> 00:30Yale Cancer Answers features the latest information on cancer care by welcoming oncologists and specialists who are on the forefront of the battle to fight cancer. This week,
  • 00:30 --> 00:34it's a conversation about machine learning and prostate cancer treatment
  • 00:34 --> 00:43with doctor John Onofrey. Doctor Onofrey is an Assistant Professor of Radiology and Biomedical Imaging and of Urology at Yale School of Medicine.
  • 00:47 --> 00:48John, let's start
  • 00:48 --> 00:54off by having you tell us a little bit about yourself and what exactly you do.
  • 00:54 --> 00:57I have a background in computer science,
  • 00:57 --> 01:05so I actually spent four years working as a software engineer in the defense industry before coming back to get my PhD,
  • 01:05 --> 01:07which I actually did here at Yale.
  • 01:07 --> 01:18In that time I became interested in medical image processing, and part of what became a driving factor was the use of machine learning and artificial intelligence to
  • 01:18 --> 01:20create solutions for image analysis problems,
  • 01:20 --> 01:23and particularly those applied to radiology.
  • 01:23 --> 01:25All of that sounded really cool,
  • 01:25 --> 01:31but you're kind of losing me in terms of what exactly you are talking about.
  • 01:31 --> 01:42People go and they get X-rays and CT scans and ultrasounds and those kinds of things as diagnostic tests and some of us may have heard
  • 01:42 --> 01:46a little bit about artificial intelligence and machine learning,
  • 01:46 --> 01:49but it seems to be this amorphous concept, like
  • 01:49 --> 01:54are machines actually going to learn how to do the job of humans?
  • 01:54 --> 01:58Are they going to take over what we do?
  • 01:58 --> 02:05Put that whole concept together for me and explain a little bit about what exactly is the marriage between those two things.
  • 02:05 --> 02:10Artificial intelligence and machine learning really is a very broad concept,
  • 02:10 --> 02:18and it's especially a very broad range in terms of medical diagnosis or any kind of medical decision making.
  • 02:18 --> 02:20A lot of these problems, though, involve the question:
  • 02:20 --> 02:23what is something that the computer can help a clinician do?
  • 02:23 --> 02:32Is there a task that the computer can aid them in some way so that they can do their job either better or more efficiently?
  • 02:32 --> 02:34Especially in imaging,
  • 02:34 --> 02:39the most basic task is, well, can I identify some part of an image that is of interest?
  • 02:39 --> 02:42So for example in prostate cancer care,
  • 02:42 --> 02:47one of the preliminary steps in any analysis is just to identify the prostate gland itself,
  • 02:47 --> 02:49and it turns out a machine
  • 02:49 --> 02:50is able to do that
  • 02:50 --> 02:59if you have someone to teach it. That data is very important, and it comes from the radiologists that are available at our institution,
  • 02:59 --> 03:01so it's really about what data goes in.
  • 03:01 --> 03:05The machine learns what these radiologists do,
  • 03:05 --> 03:14hopefully does it as well, and spits out an answer in an automated fashion, and that way you can hopefully aid the clinician
  • 03:14 --> 03:20with their job.
  • 03:20 --> 03:25So you have an image like a CT scan, and the prostate is a part that we can find on the CT scan.
  • 03:25 --> 03:34And so if the radiologists, who are used to looking at CT scans, can teach the computer what a prostate gland looks like,
  • 03:34 --> 03:36then the computer can identify it.
  • 03:36 --> 03:42But then the question becomes, the radiologist does more than look at where the prostate gland is,
  • 03:42 --> 03:47they are the ones who say is there something wrong with the prostate?
  • 03:47 --> 03:50Is there a nodule in the prostate,
  • 03:50 --> 03:52is there a cancer lurking in that prostate?
  • 03:52 --> 03:55Can the computers help us with that too?
  • 03:55 --> 03:55Absolutely.
  • 03:55 --> 03:59Just to clarify though: in the prostate radiology world,
  • 03:59 --> 04:02most of the imaging is actually done with magnetic resonance imaging,
  • 04:02 --> 04:08which gives a richer sense of the tissue within the prostate compared to something like CT.
  • 04:08 --> 04:10But yes, to answer your question,
  • 04:10 --> 04:13so whenever a radiologist looks at this image,
  • 04:13 --> 04:18they have years and years of training that goes into what to look for.
  • 04:18 --> 04:22So not only are they looking at just the shape of the prostate,
  • 04:22 --> 04:30but they make a diagnosis on what they think is suspected cancer and those manifest in different ways in this image.
  • 04:30 --> 04:32So they look for different patterns,
  • 04:32 --> 04:36different textures, and it all comes with years and years of training.
  • 04:36 --> 04:43So essentially what we do is we have that radiologist with their pre-annotated results.
  • 04:43 --> 04:47So they mark up this image somehow with their tool.
  • 04:47 --> 04:53They'll say, well, I think this has some level of prostate cancer risk or some assessment.
  • 04:53 --> 04:55And then we can take that data,
  • 04:55 --> 04:58both the original image and what their labeling is,
  • 04:58 --> 04:59put it into an algorithm,
  • 04:59 --> 05:03and then hopefully that algorithm can learn to do a similar thing.
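To make the idea concrete, here is a toy sketch of that train-on-annotations loop in Python. Everything is synthetic and purely illustrative: a single learned intensity threshold stands in for a real segmentation model, and none of the data or names come from the actual clinical pipeline.

```python
# Toy sketch of "learning from radiologist labels": pick the intensity
# threshold that best reproduces the annotations, then apply it to a
# new image. All data is synthetic and purely illustrative.

def train_threshold(pixels, labels):
    """Choose the threshold that best separates labeled pixel intensities."""
    best_t, best_acc = None, -1.0
    for t in sorted(set(pixels)):
        preds = [p >= t for p in pixels]
        acc = sum(p == l for p, l in zip(preds, labels)) / len(labels)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

# "Annotated" training pixels: intensity, and whether it was marked prostate
train_pixels = [10, 20, 30, 120, 140, 160]
train_labels = [False, False, False, True, True, True]

t = train_threshold(train_pixels, train_labels)       # learned rule
segmentation = [p >= t for p in [15, 130, 25, 150]]   # apply to unseen pixels
print(t, segmentation)
```

The same original-image-plus-labels-in, rule-out pattern is what the real models follow, just with millions of parameters instead of one threshold.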
  • 05:03 --> 05:05Now the goal is,
  • 05:05 --> 05:11can you actually achieve some kind of performance that applies to all the datasets that you haven't seen?
  • 05:11 --> 05:13That's a real challenge in artificial intelligence.
  • 05:13 --> 05:19Can you handle something that you've never seen before? That's one of the big questions that we have.
  • 05:19 --> 05:23So what we're really trying to distill is all the knowledge within this model.
  • 05:23 --> 05:26Just think of it as a black box.
  • 05:26 --> 05:30Can we capture what these radiologists have taught within this black box? And so
  • 05:30 --> 05:35essentially the question is, can one day the computer take over the job of the radiologist?
  • 05:35 --> 05:37I don't think so. That seems to
  • 05:37 --> 05:46be everyone's fear. I look at it more as a helpful assistant and aid, like a clinical diagnostic tool that they can leverage to improve their own
  • 05:46 --> 05:55level of care. And I also see a very big point of this: at Yale we're very fortunate to have lots of experts doing this kind of imaging,
  • 05:55 --> 05:57one of the main challenges is,
  • 05:57 --> 06:00what if you have someone who is not an expert trained in this?
  • 06:00 --> 06:03Will they perform as well as the expert?
  • 06:03 --> 06:07Most likely no. But if you're able to give them this tool,
  • 06:07 --> 06:11can we bring that more novice reader up to the level of the expert?
  • 06:11 --> 06:19And can you disseminate this technology down into lower centers of care so that it could be really impactful to patient health across the population?
  • 06:21 --> 06:27For example, if you're in the community and you don't have one of these experienced radiologists,
  • 06:27 --> 06:29maybe you have a general radiologist.
  • 06:29 --> 06:34The computer might be able to show them a spot that maybe they should be more worried
  • 06:34 --> 06:37about.
  • 06:37 --> 06:44This machine learning could highlight an area of interest and you never want to say that that area of interest is definitely cancer,
  • 06:44 --> 06:47but what we want to do is point it out to the radiologist.
  • 06:47 --> 06:50Make them aware, maybe it was something that they would have missed,
  • 06:50 --> 06:52that they would have not seen otherwise.
  • 06:52 --> 06:55But if they take a second look because of this algorithm,
  • 06:55 --> 06:57then that means we've done our job,
  • 06:57 --> 07:02especially if it turns out that was actually something that they should have been looking at,
  • 07:02 --> 07:03and they just happened to
  • 07:03 --> 07:05overlook it. And I think that's
  • 07:05 --> 07:11possible because humans are human and suffer from fatigue or whatever else. Absolutely,
  • 07:11 --> 07:13so that's usually the next step after diagnosis.
  • 07:13 --> 07:18Once you have the image and you see something that looks a little funny,
  • 07:18 --> 07:21the next step is a biopsy.
  • 07:21 --> 07:25Will artificial intelligence and machine learning help us in that?
  • 07:25 --> 07:26So that's actually
  • 07:26 --> 07:29one area of research that I've been involved in,
  • 07:29 --> 07:33how to improve the targeting of that biopsy.
  • 07:33 --> 07:36So when a patient goes for a biopsy,
  • 07:36 --> 07:37they do so under ultrasound guidance,
  • 07:37 --> 07:41so a urologist has the ability to see what they're targeting,
  • 07:41 --> 07:46but they aren't able to discern what is a cancerous lesion or not of the prostate.
  • 07:46 --> 07:50However, that lesion is able to be discerned on the MRI.
  • 07:50 --> 07:55The problem then becomes how do you map your target in your MRI image to your ultrasound,
  • 07:55 --> 08:03and that's where we came in to develop a model that could actually predict the way that the prostate would change during the two procedures,
  • 08:03 --> 08:06so it provided a way to hopefully more
  • 08:06 --> 08:11accurately target these. Imagine it like having a bullseye:
  • 08:11 --> 08:17we want to show where exactly that urologist should aim their biopsy needle.
  • 08:17 --> 08:18So how do you do that exactly?
  • 08:20 --> 08:30Because we've had urologists on the show before and they've talked about how they can see things on the MRI and when they go to ultrasound they really
  • 08:30 --> 08:33can't. And so sometimes these biopsies are almost,
  • 08:33 --> 08:35I don't want to say random,
  • 08:35 --> 08:39but almost because you can't necessarily correlate it,
  • 08:39 --> 08:43especially if there's no palpable lesion that you can feel,
  • 08:43 --> 08:51so how does the computer take an image in one modality, which is completely different,
  • 08:51 --> 08:55and translate it into another modality? They look nothing alike.
  • 08:55 --> 08:59I mean,
  • 08:59 --> 09:01an ultrasound is completely different.
  • 09:01 --> 09:02How do you do that?
  • 09:02 --> 09:03We actually are
  • 09:03 --> 09:06able to leverage human intelligence in this case,
  • 09:06 --> 09:11so both the radiologist and the urologist provide an initial guess about where the prostate gland is itself.
  • 09:11 --> 09:13So first on the radiology side,
  • 09:13 --> 09:15a radiologist will actually contour,
  • 09:15 --> 09:17we call it segmentation of the prostate gland,
  • 09:17 --> 09:21and that takes a few minutes to do, and again,
  • 09:21 --> 09:24this gets back to something that I was talking about earlier.
  • 09:24 --> 09:27Can you have a computer program do that automatically?
  • 09:27 --> 09:31So there's one way that we can improve the efficiency of the workflow.
  • 09:31 --> 09:34But right now we manually have to do it,
  • 09:34 --> 09:36because that's what we rely upon,
  • 09:36 --> 09:39and the urologist will actually do the same thing in the ultrasound.
  • 09:39 --> 09:43While they're doing the procedure, before the biopsy starts,
  • 09:43 --> 09:47they will contour this ultrasound and they will find out where the prostate gland is.
  • 09:47 --> 09:51So now we have two shapes of what the prostate looks like,
  • 09:51 --> 09:53one in the MR imaging and one in the ultrasound.
  • 09:53 --> 09:58So now that we have these surfaces, these shapes, we're able to co-register them,
  • 09:58 --> 09:59we call this image fusion,
  • 09:59 --> 10:02we actually bring the two into alignment.
  • 10:02 --> 10:05And by using these models,
  • 10:05 --> 10:07these surfaces, instead of the images themselves,
  • 10:07 --> 10:13that's how we kind of get around the very different appearances of these images in the two different imaging
  • 10:13 --> 10:20modalities.
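As a rough illustration of the surface-based fusion idea, the sketch below aligns two contours (reduced here to 2-D point sets) with a translation-only, centroid-matching registration, then maps an MRI-marked lesion into ultrasound coordinates. Real MRI-ultrasound fusion uses far richer deformable models; every number and coordinate here is made up.

```python
# Sketch of surface-based fusion: align two prostate contours (2-D point
# sets here) by matching centroids, i.e. a translation-only registration,
# then carry an MRI-marked lesion into ultrasound coordinates.

def centroid(points):
    n = len(points)
    return (sum(x for x, _ in points) / n, sum(y for _, y in points) / n)

def register(mri_contour, us_contour):
    """Translation that maps MRI space onto ultrasound space."""
    cm, cu = centroid(mri_contour), centroid(us_contour)
    return (cu[0] - cm[0], cu[1] - cm[1])

# Same gland shape, shifted between the two scans (made-up coordinates)
mri = [(0, 0), (4, 0), (4, 3), (0, 3)]
us = [(10, 5), (14, 5), (14, 8), (10, 8)]

dx, dy = register(mri, us)
lesion_mri = (3, 2)                                   # target marked on MRI
lesion_us = (lesion_mri[0] + dx, lesion_mri[1] + dy)  # aim point on ultrasound
print(lesion_us)
```

Because only the two surfaces enter the registration, the wildly different pixel appearances of MRI and ultrasound never have to be compared directly.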
  • 10:20 --> 10:27I get the fact that you contour it out and you say here is the prostate in this ball.
  • 10:27 --> 10:31And here is the prostate in this other ball on the ultrasound.
  • 10:31 --> 10:38But to put them together because then ultimately you have to feed that information to the urologist,
  • 10:38 --> 10:44not only to say, you know that ball that you were thinking was the prostate on the ultrasound,
  • 10:44 --> 10:49well here it is, how it looks on the MR, and
  • 10:49 --> 10:54oh, by the way, the lesion that we're going after is here,
  • 10:54 --> 10:57which you can't really see on the ultrasound,
  • 10:57 --> 11:04but you're going to have to trust us that it's kinda here in this fused image that you can't really see.
  • 11:04 --> 11:07Correct. What we do is basically that fusion,
  • 11:07 --> 11:09like I said before,
  • 11:09 --> 11:15it provides a target so that target is displayed in real time on the ultrasound image.
  • 11:15 --> 11:17So when the urologist is performing
  • 11:17 --> 11:24the procedure they look at the ultrasound image and the beauty of ultrasound is that it is in real time.
  • 11:24 --> 11:28So what you see is what you are looking at currently in real time,
  • 11:28 --> 11:35and so the software is actually able to transform and fuse that lesion on to that image in real time.
  • 11:35 --> 11:38So then the urologist is able to target it.
  • 11:38 --> 11:40That's where they aim the biopsy needle,
  • 11:40 --> 11:46and so the particular device that's here at Yale actually has a mechanical arm that stabilizes the biopsy procedure.
  • 11:46 --> 11:47And so it's a
  • 11:47 --> 11:51known trajectory on where that biopsy needle is going to go,
  • 11:51 --> 11:53and so it's able to not only target the lesion,
  • 11:53 --> 11:56but also record where that biopsy sample was taken,
  • 11:56 --> 12:00and so that actually gets into the downstream effects of when that goes to pathology.
  • 12:00 --> 12:01Did you actually hit
  • 12:01 --> 12:04that lesion? Which was going to be my next question.
  • 12:04 --> 12:08Because you can tell me that the target is at Point X on the ultrasound,
  • 12:08 --> 12:11but if I can't see Point X on the ultrasound,
  • 12:11 --> 12:13I'm kind of taking your word
  • 12:13 --> 12:17for it. You are putting your trust entirely in the fusion algorithm itself,
  • 12:17 --> 12:25right? Which is particularly interesting because the segmentation or the outlining of that gland on the ultrasound is extremely challenging.
  • 12:25 --> 12:29Urologists have a very difficult time, and it's nothing against them.
  • 12:29 --> 12:35I mean they have years of training, and if you ask the same urologist to contour the same patient again,
  • 12:35 --> 12:44you'll get a different answer and that's actually where the innovation and the research that we've been doing here at Yale comes in.
  • 12:44 --> 12:46Can we handle these kinds of mistakes?
  • 12:46 --> 12:50These errors that are going to happen no matter what.
  • 12:50 --> 12:52Can we make a more robust fusion
  • 12:52 --> 13:00that is less sensitive to these kinds of problems. And so you have the variability in the urologist outlining the prostate
  • 13:00 --> 13:10and then you have the fact that they can't see the lesion and you give them a target and you tell them aim here and the biopsy is taken there.
  • 13:10 --> 13:13Have you looked at how often you're right?
  • 13:14 --> 13:15We're actually quantifying that right now,
  • 13:18 --> 13:22not only with pathology, but what if on the MR we were wrong,
  • 13:22 --> 13:27right? So to go back and look at the MR and say I did the biopsy here,
  • 13:27 --> 13:30was it actually the place where we meant to target?
  • 13:30 --> 13:32Because we can see it on the MR.
  • 13:32 --> 13:37We actually do that in tumor board when we get everybody together in a room.
  • 13:37 --> 13:39We get the radiologist. We get the pathologist,
  • 13:39 --> 13:44altogether and what we do is we look at what cases we possibly missed.
  • 13:44 --> 13:47And that's a very useful thing.
  • 13:47 --> 13:48So
  • 13:48 --> 13:50we're actually going backwards from results.
  • 13:50 --> 13:56There's a lot more to talk about in AI and prostate cancer,
  • 13:56 --> 13:57right after we
  • 13:57 --> 14:10take a short break for a medical minute. Support for Connecticut Public Radio comes from AstraZeneca working side by side with leading scientists to better understand how complex data
  • 14:10 --> 14:13can be converted into innovative treatments. More information at astrazeneca-us.com.
  • 14:13 --> 14:17This is a medical minute about pancreatic cancer,
  • 14:17 --> 14:21which represents about 3% of all cancers in the US,
  • 14:21 --> 14:23and about 7% of cancer deaths.
  • 14:23 --> 14:33Clinical trials are currently being offered at federally designated comprehensive Cancer Centers for the treatment of advanced stage and metastatic pancreatic cancer using chemotherapy and
  • 14:33 --> 14:36other novel therapies. FOLFIRINOX,
  • 14:36 --> 14:43a combination of five different chemotherapies, is one of the latest advances in the treatment of metastatic pancreatic cancer,
  • 14:43 --> 14:48and research continues, centering around work looking into targeted therapies
  • 14:48 --> 14:58and a recently discovered marker, hENT-1. This has been a medical minute brought to you as a public service by Yale Cancer Center.
  • 14:58 --> 15:03More information is available at yalecancercenter.org. You're listening to Connecticut Public Radio.
  • 15:03 --> 15:04Now John,
  • 15:04 --> 15:13right before the break we were saying that the urologist really puts their trust in this targeting device,
  • 15:13 --> 15:15because they can't see the lesion.
  • 15:15 --> 15:18The lesion shows up on the MR
  • 15:18 --> 15:21but they're doing the biopsy under ultrasound,
  • 15:21 --> 15:23which can't see the lesion,
  • 15:23 --> 15:27and so they're trusting your algorithm
  • 15:27 --> 15:34to tell them exactly where to biopsy and you're also knowing that urologists are human and radiologists are human,
  • 15:34 --> 15:38and the outlines that they provide are not necessarily always completely accurate,
  • 15:38 --> 15:42and so you're dealing with a little bit of variability.
  • 15:42 --> 15:44But at the end of the day,
  • 15:44 --> 15:52the urologist puts that needle into the part of the prostate that you told them to
  • 15:52 --> 15:59and then you go back and you look at the MRI to see whether or not they biopsied the right spot.
  • 15:59 --> 16:00Correct,
  • 16:00 --> 16:05you can do that. It's very interesting these cases that do have discordant results,
  • 16:05 --> 16:12which is expected. We do go back and look at them and see what was missed in either case,
  • 16:12 --> 16:14but it's fascinating actually.
  • 16:14 --> 16:21If you look at the size of the gland compared to the size of the biopsy,
  • 16:21 --> 16:23it's something like 0.05% of the gland
  • 16:23 --> 16:30that is all you're sampling. And many studies have actually shown that this targeting of biopsies
  • 16:30 --> 16:36is really the way to go because you get a much higher rate of detection of cancer that way.
  • 16:36 --> 16:45There's still a lot of variability in that, and what's very interesting about the research that we've done here is we proposed this novel fusion algorithm to hopefully
  • 16:45 --> 16:52map these lesions better, and what we were able to do here at Yale is see them in real time:
  • 16:54 --> 16:58currently how they do it, and then our method, and we were able to
  • 16:58 --> 17:00see the variability in the targets themselves.
  • 17:00 --> 17:04That variability, with just the urologist looking at it,
  • 17:04 --> 17:09gave some indication of how bad or incorrect that biopsy might be. So while
  • 17:09 --> 17:13we weren't able to change a biopsy trajectory for the study,
  • 17:13 --> 17:16it gave an idea down the line
  • 17:16 --> 17:19of maybe why we missed this thing:
  • 17:19 --> 17:26because the wrong location was given, so the wrong location
  • 17:26 --> 17:29was sampled.
  • 17:29 --> 17:33I'm sure that there are people who are listening to this going,
  • 17:33 --> 17:38I can't imagine that the wrong part of my prostate might be biopsied.
  • 17:38 --> 17:48How often is it inaccurate and how often is it inaccurate with the fusion technology versus how often is it inaccurate when you know the urologist goes in
  • 17:48 --> 17:53blind to do a biopsy under ultrasound of the thing that they can't see?
  • 17:53 --> 17:56I can't give you specific numbers on that.
  • 17:56 --> 18:01Studies do show that if you have a target presented by one of these devices
  • 18:01 --> 18:07you are much more likely to find that cancer that you were looking for,
  • 18:07 --> 18:12but again, that's something that's only available to a small number of institutions.
  • 18:12 --> 18:20Institutions that are larger are able to have these devices. Traditionally, a biopsy was taken in just a regular systematic fashion.
  • 18:20 --> 18:23A urologist would only take 12 of them,
  • 18:23 --> 18:28and that's more of a game of chance.
  • 18:28 --> 18:32Like I said before, you're taking less than 0.05%
  • 18:32 --> 18:37of that prostate.
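That sampling figure is easy to sanity-check with back-of-the-envelope arithmetic. The core dimensions and gland volume below are typical assumed values, not numbers from the interview.

```python
# Back-of-the-envelope check on the sampling fraction quoted above,
# using assumed typical values: an 18-gauge core roughly 1.2 mm in
# diameter and 17 mm long, and a 40 cc prostate gland.
import math

core_diameter_cm = 0.12
core_length_cm = 1.7
gland_volume_cc = 40.0

core_volume_cc = math.pi * (core_diameter_cm / 2) ** 2 * core_length_cm
fraction = core_volume_cc / gland_volume_cc
print(f"{fraction:.4%}")  # on the order of 0.05% per core
```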
  • 18:37 --> 18:42You do end up with cases where you do find cancer where it wasn't suspected,
  • 18:42 --> 18:45certainly, and maybe that was just pure luck.
  • 18:45 --> 18:48But would you want to trust that, I don't know?
  • 18:48 --> 18:53You have much better chance of finding that cancer if you have these targets,
  • 18:53 --> 18:55even if these targets may not be 100%
  • 18:55 --> 19:00correct, it is much more likely that you're going to find it and be successful and have
  • 19:00 --> 19:09a better diagnosis.
  • 19:09 --> 19:12And so if on the MR you see something suspicious and the radiologist says that's what we want to go after, and you do the fusion algorithm and you target that thing and it comes back and
  • 19:12 --> 19:17the pathologist says it's benign. You talked before the break about
  • 19:17 --> 19:19discussing these cases in tumor board,
  • 19:19 --> 19:25tell us about what happens there and how you can get yourself either reassured that yeah,
  • 19:25 --> 19:27that really is benign, or
  • 19:27 --> 19:30we might have missed it even with our algorithm.
  • 19:30 --> 19:33That's what's great about the tumor
  • 19:33 --> 19:38board. It puts everybody that needs to make that decision in the room together.
  • 19:38 --> 19:40They're able to discuss it,
  • 19:40 --> 19:46so each specialist discusses what they see on either the imaging or the pathology,
  • 19:46 --> 19:55and then the urologist, what they saw during the procedure of the biopsy and it all kind of comes together to make one cohesive decision and
  • 19:55 --> 19:58a lot of the time they come to some kind of consensus
  • 19:58 --> 20:01and the best plan is made for that patient.
  • 20:01 --> 20:04Often times if it is something that was not suspected,
  • 20:04 --> 20:08a patient will be placed on something that's called active surveillance.
  • 20:08 --> 20:17So they will be monitored more frequently for their care and the goal is that maybe if you missed it that first time by monitoring them actively,
  • 20:17 --> 20:19you'll be able to catch it a second time.
  • 20:19 --> 20:21Or if there's any progression.
  • 20:21 --> 20:24So if you missed it just by chance the first time,
  • 20:24 --> 20:25maybe they'll
  • 20:25 --> 20:31be more likely to see it the next time. With all of the talk of AI,
  • 20:31 --> 20:34and there's talk of AI in everything these days,
  • 20:34 --> 20:36I wonder about the downside of AI.
  • 20:36 --> 20:39I mean, certainly cost is likely an issue,
  • 20:39 --> 20:41and with health care costs rising
  • 20:41 --> 20:51I can't imagine that this is any cheaper than doing a regular biopsy. Talk about the cost of the technology and the other downsides of AI.
  • 20:51 --> 20:53As we discussed before,
  • 20:53 --> 20:58AI algorithms, or any kind of tools, could be a real efficiency gain for clinicians.
  • 20:58 --> 21:03It could help them make decisions in an easier way, a cheaper way.
  • 21:03 --> 21:09The problem with training these algorithms is they are only as good as the data that you put in.
  • 21:09 --> 21:11There's the adage, garbage in,
  • 21:11 --> 21:17garbage out. So if you don't train these things with well-annotated data, or if the data is really noisy,
  • 21:17 --> 21:19you're not going to get anything useful.
  • 21:19 --> 21:26That's a problem. Another inherent problem is that they are potentially biased toward whatever you trained on.
  • 21:26 --> 21:31So just for example, some of my own research I had 300 datasets from Yale,
  • 21:31 --> 21:36300 from Stanford. We trained an algorithm on one and ran it on the other.
  • 21:36 --> 21:45It didn't work, shockingly, even though we had perfect performance on the original site. Something to realize is that these algorithms do not generalize well.
  • 21:45 --> 21:51You can't make a general inference as well as a human radiologist easily.
  • 21:51 --> 21:55A radiologist from Yale or Stanford can easily tell where the prostate is,
  • 21:55 --> 21:59but this algorithm couldn't, just because the data was from a different
  • 21:59 --> 22:01location.
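The site-to-site failure mode can be reproduced with a toy model: a classifier that is perfect on its training site breaks when another scanner's intensity calibration shifts the data. All numbers are synthetic and only illustrate the effect.

```python
# Toy reproduction of the cross-site failure: a threshold "model" that is
# perfect on its home site drops to chance when another scanner reports
# intensities 80 units dimmer. Entirely synthetic numbers.

def accuracy(threshold, pixels, labels):
    return sum((p >= threshold) == l for p, l in zip(pixels, labels)) / len(labels)

# "Site A": prostate pixels are the bright ones
site_a_pixels = [20, 40, 60, 120, 140, 160]
site_a_labels = [False, False, False, True, True, True]
threshold = 100  # learned on site A (hard-coded for brevity)

# "Site B": same anatomy, different intensity calibration
site_b_pixels = [p - 80 for p in site_a_pixels]
site_b_labels = site_a_labels

acc_a = accuracy(threshold, site_a_pixels, site_a_labels)
acc_b = accuracy(threshold, site_b_pixels, site_b_labels)
print(acc_a, acc_b)  # perfect at home, chance-level at the new site
```

A human reader shrugs off that kind of brightness shift; the learned rule does not, which is the generalization gap being described.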
  • 22:01 --> 22:03So
  • 22:03 --> 22:08if I was living in California and I went to Stanford,
  • 22:08 --> 22:13and you did this fusion algorithm and did a biopsy,
  • 22:13 --> 22:17you'd be accurate. If I then went to Yale,
  • 22:17 --> 22:18you use the same
  • 22:18 --> 22:21algorithm, and it would be inaccurate?
  • 22:21 --> 22:28Potentially, yes.
  • 22:28 --> 22:28Then that means that you would have to retrain this algorithm for every new center that you plan on using it in, correct?
  • 22:28 --> 22:30That is an active area of research.
  • 22:30 --> 22:34Actually, people are looking at ways that they can either retrain things faster,
  • 22:34 --> 22:37or that they can just make these algorithms better from the start.
  • 22:37 --> 22:42Whether it's something you do to the data from the beginning of the pipeline and put it in,
  • 22:42 --> 22:46that can have a much better effect on your actual training of these things,
  • 22:46 --> 22:50but the problem you run into is, what happens if somebody updates their software?
  • 22:50 --> 22:54You could just make your algorithm obsolete at that very moment,
  • 22:54 --> 22:56you have to retrain from scratch.
  • 22:56 --> 23:01So the most valuable thing again is what's the data that you're putting in here and how much of it.
  • 23:01 --> 23:03And that's really the key,
  • 23:03 --> 23:05and so are you able to
  • 23:05 --> 23:11use that data in a good way that can be applied throughout the entire population across all sites,
  • 23:11 --> 23:13in hospitals, in the US,
  • 23:13 --> 23:14in the world.
  • 23:15 --> 23:21Because one would think that if you are looking at an MR image
  • 23:21 --> 23:25at Stanford, you would be able to see what you see.
  • 23:25 --> 23:31You could take the same MR image and show it to a radiologist at Yale and they would see the same thing.
  • 23:31 --> 23:35It's like a photograph. I think a lot of this
  • 23:35 --> 23:38has to do with the misnomer of the name artificial intelligence.
  • 23:38 --> 23:47Those of us who really work with the technology, we kind of cringe at that name because we know that there's no actual intelligence within the model itself.
  • 23:47 --> 23:52All the intelligence comes from the people who created the data: the radiologists,
  • 23:52 --> 23:54the pathologists, the urologists.
  • 23:54 --> 23:56That's where the intelligence is.
  • 23:56 --> 23:58So really it's just machine learning.
  • 23:58 --> 24:01This machine is learning to do something that a radiologist does,
  • 24:01 --> 24:05but it is not good at tasks that humans are really good at,
  • 24:05 --> 24:07which is making generalizable performance,
  • 24:07 --> 24:11making inferences very easily that apply to things that it has never seen.
  • 24:11 --> 24:15That problem in our domain is called overtraining to the data.
  • 24:15 --> 24:18It's only good at things that it's seen,
  • 24:18 --> 24:21and it can't recognize something that it has never seen before,
  • 24:21 --> 24:23which is a particular challenge when there's
  • 24:23 --> 24:25any kind of pathology, right?
  • 24:27 --> 24:37I'm just struggling with this because I think about the utility of the technology. Before the break we said one of the utilities is really to help
  • 24:37 --> 24:41radiologists, who may not be specific to prostate cancer,
  • 24:41 --> 24:45who maybe the technology can help them to get better,
  • 24:45 --> 24:56but in that case you would be taking this technology out to a site that presumably didn't train it because it was trained by the experts at another
  • 24:56 --> 25:00site. But one would hope that it would be accurate at that second site,
  • 25:00 --> 25:04and if you train it at Stanford and tested at Yale or vice versa,
  • 25:04 --> 25:06and you didn't get any accuracy,
  • 25:06 --> 25:09I wonder what would happen if you trained at Yale,
  • 25:09 --> 25:11and then you took it out to,
  • 25:11 --> 25:20you know, Tuktoyaktuk, and for anybody who's wondering that's a small town in Canada, and it might not work.
  • 25:20 --> 25:22That's absolutely true. But fear not,
  • 25:22 --> 25:22that
  • 25:22 --> 25:28is something that the machine intelligence and machine learning people are trying to work on.
  • 25:28 --> 25:33I mean, that is probably the big problem right now in the community.
  • 25:33 --> 25:36This is especially true in the medical field.
  • 25:36 --> 25:43A lot of research that has gone on in this machine learning artificial intelligence has come out of stuff
  • 25:43 --> 25:48that Google and Apple and all these other big companies are doing with photographs,
  • 25:48 --> 25:50images, those are all good.
  • 25:50 --> 25:57They generalize fairly well. But what happens when human life is on the line? When you're trying to work with these algorithms,
  • 25:57 --> 26:02there's a certain bar that we need to clear that is much higher than that.
  • 26:02 --> 26:03So
  • 26:03 --> 26:07we have to be very careful with what we're doing,
  • 26:07 --> 26:13and it is, again, it's a very active field of research that I think is probably the most critical thing.
  • 26:13 --> 26:22And it's not just medicine; all these other companies that have their algorithms to recognize your cats and your dogs,
  • 26:22 --> 26:25they face the exact same problem with their cameras.
  • 26:25 --> 26:29What if they change their lens on their camera?
  • 26:29 --> 26:31Most likely that algorithm is going
  • 26:31 --> 26:35to have to be retrained to recognize your cat or dog.
  • 26:35 --> 26:37Interesting, what about the cost?
  • 26:37 --> 26:38I see
  • 26:38 --> 26:42that you sidestepped that issue that I raised a while ago.
  • 26:42 --> 26:44It's actually the software.
  • 26:44 --> 26:55Hardware is relatively cheap. The innovations that came out of the hardware are actually what really enabled this revolution that we're having now in this machine intelligence,
  • 26:55 --> 26:58it basically came out of video gaming.
  • 26:58 --> 27:05The graphics processing units of your computers are now able to crunch millions of calculations within a second,
  • 27:05 --> 27:07and that's what's really enabled
  • 27:07 --> 27:11this, and what's fascinating is a lot of people have called this
  • 27:11 --> 27:15the democratization of machine learning or machine intelligence
  • 27:15 --> 27:23because Google and Facebook have made these algorithms and these toolkits available, such that high school students can take them.
  • 27:23 --> 27:25They can build these deep learning models,
  • 27:25 --> 27:33these artificial neural networks, and get solutions to problems that we previously had to engineer complex models for.
  • 27:33 --> 27:43And now you can just take these tools out of the box and run them and get an answer that's surprisingly good.
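For a sense of the basic ingredient those toolkits stack into deep networks, here is a single logistic neuron trained by gradient descent in pure Python. It is a toy on a made-up task, not any particular framework's API.

```python
# A single logistic neuron trained by stochastic gradient descent on a
# toy task (output 1 when the input is large). This is the basic unit
# the deep learning toolkits compose into networks; pure-Python toy only.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

data = [(0.0, 0), (1.0, 0), (3.0, 1), (4.0, 1)]  # (input, label) pairs
w, b, lr = 0.0, 0.0, 0.5

for _ in range(2000):            # gradient descent on cross-entropy loss
    for x, y in data:
        p = sigmoid(w * x + b)   # predicted probability
        w -= lr * (p - y) * x    # gradient step for the weight
        b -= lr * (p - y)        # gradient step for the bias

preds = [round(sigmoid(w * x + b)) for x, _ in data]
print(preds)  # the neuron has learned to separate small from large inputs
```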
  • 27:43 --> 27:47But what's really lacking is the understanding of what that model can do,
  • 27:47 --> 27:53and also what are some other things that we can do as researchers or as clinicians?
  • 27:53 --> 27:59What can we add that we already know to improve these models in the training of these things?
  • 27:59 --> 28:06And so that's the challenge, bringing in things that can help them learn in a better way.
  • 28:06 --> 28:06And so
  • 28:06 --> 28:08where are we on that front?
  • 28:09 --> 28:15Well, we are in the midst of it, there's a big investment in this.
  • 28:15 --> 28:18Lots of companies are investing in this and it's just
  • 28:18 --> 28:22burgeoning right now where there's very rapid uptake and research.
  • 28:22 --> 28:25Everybody is doing it now; everybody's jumping on the bandwagon.
  • 28:25 --> 28:33There's tons of money out there and I think we're at the point where now we really need to evaluate how good these models are.
  • 28:33 --> 28:37The evaluation, the validation, is going to be critical.
  • 28:37 --> 28:41There's a lot of hype right now and trying to apply this,
  • 28:41 --> 28:45especially to medicine, but I think we need to be very careful on how we apply this.
  • 28:45 --> 28:47And there's also the questions of
  • 28:47 --> 28:50is there bias? Are there ethics issues involved in this?
  • 28:50 --> 28:51Where does the data come from?
  • 28:51 --> 28:53How important is that data?
  • 28:53 --> 28:56Again, there's a lot of questions that need to be answered now,
  • 28:56 --> 28:58and it's a very exciting time in the field.
  • 28:59 --> 29:07Doctor John Onofrey is assistant professor of radiology and biomedical imaging and of urology at Yale School of Medicine.
  • 29:07 --> 29:15If you have questions, the address is canceranswers@yale.edu and past editions of the program are available in audio and written form at Yalecancercenter.org.
  • 29:15 --> 29:24We hope you'll join us next week to learn more about the fight against cancer here on Connecticut Public Radio.