Even if it did pattern recognition only as well as, or slightly worse than, a human, it would still be worthwhile. As the article points out, it’s basically a non-tiring, always-ready second opinion. That alone helps a lot.
One issue I could see is it being used not as a second opinion but as the only opinion. That doesn’t mean this shouldn’t be pursued, but the incentives toward laziness and cost-cutting are obvious.
EDIT: Another potential issue is the AI detection being more accurate for certain groups (e.g. White Europeans), which could result in underdiagnosis in minority groups if the training data set doesn’t include sufficient data for those groups. I’m not sure how likely that is with breast cancer detection, however.
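To make that concrete, here’s a minimal sketch (all data, group labels, and function names are hypothetical) of the kind of per-group sensitivity check that would surface that sort of gap:

    # Hypothetical sketch: per-group sensitivity (true-positive rate)
    # of a detector's calls. A large gap between groups would be a red
    # flag for the underdiagnosis risk described above.
    from collections import defaultdict

    def sensitivity_by_group(y_true, y_pred, groups):
        """Recall per demographic group: TP / (TP + FN)."""
        tp, fn = defaultdict(int), defaultdict(int)
        for truth, pred, group in zip(y_true, y_pred, groups):
            if truth == 1:  # only true positives count toward sensitivity
                (tp if pred == 1 else fn)[group] += 1
        seen = set(tp) | set(fn)
        return {g: tp[g] / (tp[g] + fn[g]) for g in seen}

    # Toy data, purely illustrative:
    y_true = [1, 1, 1, 1, 0, 1, 1, 0]
    y_pred = [1, 1, 0, 1, 0, 0, 0, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(sensitivity_by_group(y_true, y_pred, groups))  # {'A': 0.75, 'B': 0.0}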
Also if it’s integrated poorly. Like if you have the human only serve as a secondary check to the AI, which is mostly right, you condition the human to just click through and defer to the AI. The better way to do it would be to have both the human and the AI judge independently and carefully review the cases where they disagree (roughly the workflow sketched below), but that won’t save anyone money.
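Here’s a minimal sketch of that independent-double-read idea (case IDs and field names are made up); the key property is that the human call is recorded blind to the AI’s, and only disagreements get escalated:

    # Hypothetical sketch: independent double read with disagreement
    # escalation. Both calls are made blind to each other; only the
    # cases where they disagree go to careful review.
    from dataclasses import dataclass

    @dataclass
    class Case:
        case_id: str
        ai_call: bool     # AI's independent read (True = suspicious)
        human_call: bool  # human's read, made without seeing the AI's

    def triage(cases):
        agreed, escalated = [], []
        for c in cases:
            (escalated if c.ai_call != c.human_call else agreed).append(c)
        return agreed, escalated

    cases = [
        Case("scan-001", ai_call=True,  human_call=True),
        Case("scan-002", ai_call=True,  human_call=False),  # needs review
        Case("scan-003", ai_call=False, human_call=False),
    ]
    agreed, escalated = triage(cases)
    print([c.case_id for c in escalated])  # ['scan-002']

The disagreement queue is exactly the point: it keeps the human engaged instead of conditioning them to rubber-stamp the AI, at the cost of paying for two full reads.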
If the court system allowed assigning partial fault for “preventable” deaths to hospitals that employ practices not in the best interests of the patient, it might give them a financial incentive to do it properly.
Definitely, here’s hoping the accountability question will prevent that, but the incentive is there, especially in systems with for-profit healthcare.