Last time this was in the news, they found that AI had an insanely good accuracy at identifying cancer! Until they realized it was because they included the hospital info in the training data, so it was identifying “cancer” by seeing they were at a cancer treatment facility.
I can’t wait for this bubble to blow up in all their dumb faces.
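That failure mode (label leakage, sometimes called shortcut learning) is easy to reproduce on paper. Here is a minimal sketch with an entirely invented dataset, where a "model" that looks only at facility metadata scores impressively without learning anything about the images:

```python
# Toy illustration of label leakage: a "classifier" that predicts cancer
# purely from which hospital the scan came from. All data is invented.

records = (
    [{"facility": "cancer_center", "has_cancer": True}] * 45
    + [{"facility": "cancer_center", "has_cancer": False}] * 5
    + [{"facility": "general_hospital", "has_cancer": False}] * 48
    + [{"facility": "general_hospital", "has_cancer": True}] * 2
)

def leaky_predict(record):
    # Never looks at the image -- only at the leaked metadata.
    return record["facility"] == "cancer_center"

accuracy = sum(leaky_predict(r) == r["has_cancer"] for r in records) / len(records)
print(accuracy)  # 0.93 -- looks great, generalizes terribly
```

On a site where the two facilities see the same case mix, this "93% accurate" model collapses to chance, which is why leaked features like facility IDs have to be scrubbed from training data.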
For what it’s worth, “AI” in this context is probably not the content-stealing Generative AI that everyone is trying to cram everywhere it doesn’t belong. This is a much more legitimate application of a similar technology.
I’m not mad about the idea of AI in radiology because it’s a really good fit. A human radiologist can’t compare a hundred similar slices and cross-correlate possible anomalies, whereas AI can. This improves detection and outcomes and is exactly where medical technology is supposed to help.
That said, I don’t think we’ll replace radiologists across the board for a long time. This will be a very useful tool and will probably reduce the number of radiologists required and modify their roles significantly, but it’ll be more like how a single worker with editing software can do work that would have required a small team in the pre-digital days of film.
The number of radiological examinations is steadily increasing, so we won't need fewer radiologists; AI is needed just to cope with the growing workload.
AI has much better sensitivity than humans (flagging anything out of the norm), and humans have much better specificity (determining what a given finding actually is). So I could imagine AI screening every examination, with a radiologist just going through the flagged findings and verifying them. For specific tasks this has already been done for years (e.g. pulmonary nodules).
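That sensitivity/specificity split can be sketched with toy confusion-matrix arithmetic. The 1000-scan screening pass below is hypothetical and all counts are made up for illustration:

```python
# Toy confusion-matrix arithmetic for an AI-screens-first workflow.
# All counts are invented for illustration.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of real anomalies that get flagged."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of normal scans correctly left alone."""
    return tn / (tn + fp)

# Hypothetical AI screening pass over 1000 scans, 50 with a real anomaly.
# The AI over-flags (many false positives) but misses almost nothing.
ai = {"tp": 49, "fn": 1, "tn": 760, "fp": 190}

print(sensitivity(ai["tp"], ai["fn"]))  # 0.98 -> high sensitivity
print(specificity(ai["tn"], ai["fp"]))  # 0.8  -> weaker specificity

# The radiologist then reviews only the flagged studies:
print(ai["tp"] + ai["fp"])  # 239 of 1000 scans need a human look
```

The division of labor falls out of the numbers: the AI's job is to miss nothing, and the human's job is to sort the 239 flags into real findings and false alarms.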
That assumes it’s done additively.
I think a lot of these AI automation promises come down to:
Are you adding a tool, thereby increasing the overall quality of service (and its cost)?
Or are you trying to reduce cost, even if it means reducing service quality?
The first one doesn’t take any jobs away and makes everything just a bit better, but more expensive.
The second one is a race to the bottom strategy that just comes down to capitalism doing its thing.
Too many billionaires are salivating over the latter.
Let them eat cake
Yeah, it sounds more like ML. That’s a good thing. For one thing, it’s reproducible.
LLMs are intrinsically unfit for use in any situation where human life or health is at stake.
Exactly. People keep shoehorning Large Language Models into non-linguistic domains, and that’s dangerous. Human language, with respect to the training sets used, is inherently subjective and imperfect. Healthcare is very fault-intolerant.
The replacing part is the problem. Using a local system to help is fine, but it still requires humans who know what they’re doing and what they’re looking at.
It doesn’t replace any individual directly. It improves one person’s capability to the extent that there may be fewer needed to do a job. And that’s not a bad thing in my opinion, especially because it can improve the quality of that person’s work at the same time.
Edit to elaborate: I am opposed to replacing humans with AI in general. AI is a tool. But if that tool can empower someone to do more and better work, then I’m not opposed. Using stolen intellectual property to replace creatives with an inherently non-creative slop machine is greedy and evil. Using machine learning trained on medical data sets to let a radiologist more comprehensively and deeply review a frankly overwhelming amount of data to better save lives? I’m cool with that. But I also think that, in line with my stance that AI is a tool, there will likely be a well-trained human operating these tools for a long time before radiologists cease to exist.
Sometimes. In chess, for example, human + AI teams used to be better than either one in isolation, but chess engines improved so much that the human partner no longer helps at all.
But chess is an isolated “system” with clear rules. Reality, and especially medicine, is so much more complicated.
If it’s done properly, sure.