Study team tricks AI programs into misclassifying diagnostic images

John W. Mitchell, Senior Correspondent
With machine learning algorithms recently approved by the FDA to diagnose images without physician input, providers, payers, and regulators may need to be on guard for a new kind of fraud.

That’s the conclusion of a Harvard Medical School/MIT study team comprising biomedical informatics researchers, physicians, and Ph.D. candidates, in a paper just published in IEEE Spectrum. The team successfully launched “adversarial attacks” against three common automated AI medical imaging tasks, fooling the programs into misdiagnoses up to 100 percent of the time. Their findings have implications for imaging related to fraud, unnecessary treatments, higher insurance premiums, and the possible manipulation of clinical trials.

The team defined adversarial attacks on AI imaging algorithms as “…inputs to machine learning models that an attacker has intentionally designed to cause the model to make a mistake.”
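To make that definition concrete, the sketch below applies the widely used Fast Gradient Sign Method (FGSM) to an image classifier: each pixel is nudged a tiny step in the direction that increases the model's loss. The stand-in model, random input, and epsilon budget are illustrative assumptions for demonstration only; this is not the study team's code.

```python
# Minimal FGSM sketch (assumptions: untrained stand-in ResNet-18, random
# placeholder "image", arbitrary epsilon). Illustrative only.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)   # stand-in classifier, not a medical model
model.eval()

image = torch.rand(1, 3, 224, 224, requires_grad=True)  # placeholder scan
label = torch.tensor([0])                                # assumed correct class

# Loss of the correct prediction and its gradient with respect to the pixels
loss = F.cross_entropy(model(image), label)
loss.backward()

# FGSM: move every pixel a small step in the direction that increases the loss
epsilon = 0.01
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print("original prediction:   ", model(image).argmax(1).item())
print("adversarial prediction:", model(adversarial).argmax(1).item())
```

Because each pixel changes by at most epsilon, the perturbed image is typically indistinguishable to a human reader even when the model's prediction flips.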

“Adversarial examples have become a major area of research in the field of computer science, but we were struck by the extent to which our colleagues within healthcare IT were unaware of these vulnerabilities,” Dr. Samuel Finlayson, lead author and M.D.-Ph.D. candidate at Harvard-MIT, told HCB News. “Our goal in writing this paper was to try to bridge the gap between the medical and computer science communities, and to initiate a more complete discussion around both the benefits and risks of using AI in the clinic.”

In the study, the team was able to manipulate the AI programs into indicating positive findings for pneumothorax in chest X-rays, diabetic retinopathy in retinal images, and melanoma in skin images. In the chest X-ray examples, the adversarial manipulation succeeded in producing a pneumothorax finding 100 percent of the time.
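A hedged sketch of how such a targeted manipulation might look is shown below, using iterative projected gradient steps to push a classifier toward a chosen finding. The stand-in model, the class index standing in for “pneumothorax,” and the attack parameters are assumptions for illustration, not the systems or data used in the study.

```python
# Targeted attack sketch (assumptions: stand-in model, random "X-ray",
# class index 1 standing in for "pneumothorax present", arbitrary budget).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)
model.eval()

xray = torch.rand(1, 3, 224, 224)   # placeholder chest X-ray
target = torch.tensor([1])          # assumed "pneumothorax present" class

epsilon, step, iters = 0.02, 0.005, 20
adv = xray.clone()

for _ in range(iters):
    adv.requires_grad_(True)
    # Loss with respect to the *target* class: stepping downhill pushes the
    # model toward predicting "pneumothorax", whatever the image actually shows.
    loss = F.cross_entropy(model(adv), target)
    grad = torch.autograd.grad(loss, adv)[0]
    adv = (adv - step * grad.sign()).detach()
    # Project back into a small epsilon-ball around the original image so the
    # change stays visually imperceptible, and keep pixels in valid range.
    adv = (xray + (adv - xray).clamp(-epsilon, epsilon)).clamp(0, 1)

print("prediction on perturbed image:", model(adv).argmax(1).item())
```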

“Our results demonstrate that even state-of-the-art medical AI systems can be manipulated,” said Finlayson. “If the output of machine learning algorithms becomes a determinant of healthcare reimbursement or drug approval, then adversarial examples could be used as a tool to control exactly what the algorithms see.”

He also said that such misuse could cause patients to undergo unnecessary treatments, which would increase medical and insurance costs. Adversarial attacks could also be used to “tip the scales” in medical research to achieve desired outcomes.

Another member of the study team, Dr. Andrew Beam, Ph.D., an instructor in the Department of Biomedical Informatics at Harvard Medical School, believes the findings are a warning to the medical informatics sector. While the team stated they are excited about the “bright future” that AI offers medicine, they advise caution.

"I think our results could be summarized as: 'there is no free lunch'. New forms of artificial intelligence do indeed hold tremendous promise, but as with all technology, it is a double-edged sword,” Beam told HCB News. “Organizations implementing this technology should be aware of the limitations and take active steps to combat potential threats.”
