
Communications of the ACM

ACM TechNews

Warnings of a Dark Side to AI in Health Care


Robots are increasingly utilized for medical applications.

Credit: Reuters

Harvard University and Massachusetts Institute of Technology (MIT) researchers warn in a recently published study that new artificial intelligence (AI) technology designed to enhance healthcare is vulnerable to misuse; one example is "adversarial attacks," in which subtly altered inputs deceive the system into making misdiagnoses.
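The idea behind such an attack can be illustrated with a toy sketch (not from the study; the "model" below is a hypothetical linear diagnosis score, and all weights and feature values are made up): a tiny, targeted change to each input feature, nudged in the direction the model is most sensitive to, flips the prediction even though the input barely changes.

```python
# Toy illustration of an adversarial perturbation against a linear
# "diagnosis" model. score > 0 means "disease", score <= 0 "healthy".
# Weights and features are hypothetical, for illustration only.

w = [0.5, -1.0, 2.0]   # hypothetical model weights
x = [1.0, 1.2, 0.2]    # hypothetical patient features

def score(weights, features):
    # Linear model: dot product of weights and features.
    return sum(wi * xi for wi, xi in zip(weights, features))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

eps = 0.2  # perturbation budget: each feature moves by at most 0.2

# Nudge every feature slightly in the direction that raises the score
# (the sign of the corresponding weight) -- a gradient-sign-style step.
x_adv = [xi + eps * sign(wi) for wi, xi in zip(w, x)]

print(score(w, x))      # original input scores "healthy"
print(score(w, x_adv))  # perturbed input scores "disease"
```

Each feature changes by at most 0.2, yet the classification flips; real attacks on image-based diagnostic models work the same way with pixel-level perturbations that are invisible to a clinician.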

A more likely scenario involves doctors, hospitals, and other organizations manipulating the AI in billing or insurance software in an attempt to maximize revenue.

The researchers said software developers and regulators must consider such possibilities as they build and evaluate AI technologies in the years to come.

MIT's Samuel Finlayson said, "The inherent ambiguity in medical information, coupled with often-competing financial incentives, allows for high-stakes decisions to swing on very subtle bits of information."

Changes doctors make to medical scans or other patient data in an effort to satisfy the AI used by insurance firms also could wind up in a patient's permanent record.

From The New York Times

Abstracts Copyright © 2019 SmithBucklin, Washington, DC, USA


 
