International Finance
Healthcare Magazine November-December 2019 Issue

European healthcare: AI faces data and black box challenges

European hospitals are using AI software-based recommendations for diagnosis. Can these recommendations be susceptible to bias?

After years of experimentation, European healthcare systems and experts are seeking to adopt a more value-based approach to healthcare delivery using artificial intelligence (AI). The primary purpose of AI applications in healthcare is to study the correlation between prevention or treatment techniques and patient outcomes, enabling more accurate clinical decisions and building a robust body of research for the future.

Currently, the UK is dubbed the “heartland of European healthcare AI,” while Germany and France are the two flourishing hubs. The UK government has committed to investing $300 million in AI for the public healthcare system, the National Health Service (NHS). The NHS is setting up an exclusive lab that will work toward enhancing AI tools within its healthcare delivery. The lab will act as an interface for both experts and academicians to drive innovation and study the biggest healthcare challenges, including early cancer detection, new dementia treatments, and enhanced personalised care.

AI in healthcare is still in the development stages, although there are many areas in which the technology could be useful: imaging, ophthalmology, genomics and intensive care. “At University Hospital Zurich, we are working on projects regarding using AI on our images. These projects are still works in progress,” Andreas Boss, professor and doctor of medicine at the Department of Diagnostic and Interventional Radiology, University Hospital Zurich, said in an emailed interview with International Finance.

Consulting firm LEK published a report, AI: Six Challenges for the European Healthcare Sector, which stated that the technology is being developed to work with multiple data types. Citing its versatility, the report said that AI has the potential to perform across the entire patient care pathway, from early detection to diagnosis, treatment management, and monitoring of ongoing treatment.

Radiology and oncology are the two branches of medicine seeing the widest range of AI algorithms. “In radiology, we are currently using AI for standardisation and quality control of mammograms,” Dr Boss says. “The technology is not applied for diagnostic purposes or treatment monitoring. However, a lot of research is going on in that direction.”

In another example, Geneva University Hospitals (HUG) is using IBM’s Watson for Genomics in the field of diagnosis. In theory, Watson for Genomics is an AI tool for oncologists to provide patients with more personalised, evidence-based cancer care. “Patients who need and are able to undergo additional treatment after having exhausted the standard treatments can be candidates for extended genomic analyses,” a HUG spokesperson tells International Finance.

In practice, the spokesperson explains, genomic data is compiled in a text file containing descriptions of the gene alterations, their location, and their frequency. The file is analysed securely by Watson for Genomics which scans nearly three million publications to find articles evaluating potential treatments. After that, the oncologist will receive a multi-page report reviewing the literature, including article references with abstracts and direct links to publications. Clinical trials are also suggested based on the tumour profile matching the inclusion criteria.
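The variant file the spokesperson describes can be imagined as a simple tab-separated listing of alterations, locations, and frequencies. The sketch below is purely illustrative; the field names, layout, and sample values are assumptions, not HUG’s or IBM’s actual schema.

```python
# Illustrative sketch: parsing a hypothetical tab-separated variant file
# of the kind described (gene, alteration, location, frequency).
# Format and field names are assumptions, not HUG's actual schema.

def parse_variant_file(text):
    """Parse lines like 'BRAF<TAB>V600E<TAB>chr7:140453136<TAB>0.42'."""
    variants = []
    for line in text.strip().splitlines():
        gene, alteration, location, freq = line.split("\t")
        variants.append({
            "gene": gene,
            "alteration": alteration,
            "location": location,
            "frequency": float(freq),  # fraction of reads carrying the variant
        })
    return variants

sample = "BRAF\tV600E\tchr7:140453136\t0.42\nTP53\tR175H\tchr17:7578406\t0.31"
for v in parse_variant_file(sample):
    print(v["gene"], v["alteration"], v["frequency"])
```

Each parsed record would then be matched against the literature and trial inclusion criteria, as described above.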

eHealth professionals in Europe expect AI to become dynamic, useful and widespread by 2023, according to HIMSS Analytics. The prime reason is that the technology can demonstrate robust performance in both frontline care and back office tasks in hospitals. “AI’s biggest potential is seen in workflow improvements and standardisation. If repetitive tasks are performed using AI, more resources can be used for interaction with patients,” Dr Boss says.

The UK’s NHS, for example, has 45,000 clinical job vacancies and 50,000 non-clinical open roles, and a similar situation can be seen in hospitals across Europe. Hospitals typically alleviate staff shortages with temporary solutions that only put them under further financial strain. So the possibility of using AI applications to conduct triage before patients arrive at hospitals will not only speed up the healthcare delivery process, but allow overstretched clinicians to focus on interacting with patients effectively.

Transformative with unintended effects

LEK’s report classifies AI as having ‘transformative capabilities’, but the algorithms and systems involved can cause unintended effects in both clinical liability and decision-making.

Data sharing is a case in point. Hospitals in the NHS system hold a treasure trove of patient data built on the extensive medical history of each patient. Inevitably, data sharing between partners is expected to increase as connected devices, data volumes, and the applicability of AI continue to evolve in healthcare delivery. This interconnectedness is a major cause for concern for AI developers and healthcare facilities, because failing to comply with the General Data Protection Regulation (GDPR) while developing or using the software is a liability.

In a nutshell, data privacy in healthcare gives patients the right to control how their data is used, which is expected to become the industry norm over time. Building privacy controls into evolving technology platforms from the outset will allow AI developers to preempt serious consequences.

“Sharing patient data between different institutions for the development of AI solutions is a danger to patient privacy. Patients need to be protected against unauthorised sharing of medical data with AI companies, such as Google, Facebook, or even Chinese companies. The Swiss Personal Health Network (SPHN) is implementing the required infrastructures among universities,” Dr Boss explains.

AI has to survive Europe’s stringent standards

Europe’s protection standards are stringent. “In Europe, medical software requires a CE marking with a strict approval process,” Dr Boss says. Algorithms used in European healthcare must obtain CE marking, a certification that a product conforms with the health, safety, and environmental protection standards required for sale within the European Economic Area, and must be categorised under the Medical Device Directive. Even standalone algorithms that are not embedded in a physical medical device must be classified as Class II medical devices.

In the big picture, real-time data can be more valuable than empirical judgement alone. “Today, therapy decisions are often made empirically, based on the experience and knowledge of those involved. It would be desirable to support the decisions with real-time data analyses and state-of-the-art medical knowledge from other sources, such as globally harmonised databases,” Emanuela Keller, professor and doctor of medicine at the Institute of Anaesthesiology, University Hospital Zurich, tells International Finance.

With so much data on hand, identification and cleaning of credible information to form core data sets is a complex feat in the development phases of AI programmes. “Conventional monitoring systems trigger around 700 alarms per critical patient each day and a significant proportion of those alarms are false,” says Keller. For that reason, the neurosurgical intensive care unit of the University Hospital Zurich, ETH Zurich and IBM Research, as part of the ICU Cockpit Project, are working to reduce data volume, increase accuracy in critical situations and improve patient safety.
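One common approach to cutting false alarms, sketched generically below and not necessarily the ICU Cockpit Project’s actual method, is to require a vital-sign threshold breach to persist for several consecutive samples before alerting, so that single-sample artefacts are suppressed.

```python
# Minimal sketch of alarm debouncing: only raise an alarm when a vital sign
# stays beyond its threshold for `persist` consecutive samples.
# A generic illustration, not the ICU Cockpit Project's algorithm.

def debounced_alarms(samples, threshold, persist=3):
    """Return indices where an alarm fires: the value has exceeded
    `threshold` for `persist` consecutive samples ending at that index."""
    alarms, run = [], 0
    for i, value in enumerate(samples):
        run = run + 1 if value > threshold else 0
        if run == persist:
            alarms.append(i)
    return alarms

# A single spike (a likely sensor artefact) does not fire; a sustained rise does.
heart_rate = [80, 150, 82, 84, 150, 152, 151, 90]
print(debounced_alarms(heart_rate, threshold=140))  # fires once, at index 6
```

The trade-off is latency: a genuine emergency is reported a few samples later, which is why such filters must be validated clinically before deployment.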

Keller, the project’s principal investigator, describes its long-term goal as to “initiate a fundamental development in emergency and intensive care medicine and thus significantly improve the way hospitals work in day-to-day practice.”

Many European hospitals and research institutions are wary of cloud platforms and choose to run their own servers because patient data is typically not allowed to leave Europe. The use of AI in patient care at HUG does not involve sharing large volumes of data. “At the HUG we are customers who do not need to provide large amounts of data to use the solutions, because we need answers for one patient at a time, using only that patient’s data. Even though Switzerland is not in the European Union, we tend to apply the GDPR rules and fully respect their constraints in order to remain EU compatible,” says the HUG spokesperson, emphasising that “companies providing AI solutions need big volumes of data to train their models.”

For AI developers, another concern when developing healthcare tools is the black box issue, which typically stems from incomplete information. A blurry image, for example, can make an algorithm arrive at an inaccurate conclusion. AI technologies often produce key algorithms that have not been exposed to sufficient peer review or detailed scientific analysis.

“Before algorithms for automated detection of critical complications can be implemented into clinical practice they have to be extensively tested in clinical studies and validated according to the directives for medical device software,” Keller explains. A limited testing process is highly consequential because it can lead to malpractice risk, a factor that cannot be overlooked.

Physicians cannot rely on clinical software recommendations alone, as they do not consistently match physicians’ judgement under the standard of care. “One of the most important challenges is the validation of the software. AI software should not be implemented in the clinical workflow until it is properly tested and validated,” Dr Boss says. “I am deeply sceptical when I hear of 99 percent accuracy of AI for reading mammograms. It sounds very much like propaganda. But I admit that there is large potential for AI reading X-ray images.”

The European healthcare industry aims to create tailored treatment interventions with higher first-time success rates for patients using AI. The LEK report suggests that AI developers will have to work closely with lead adopters to ensure transparency in clinical software and to build sound approaches in liability management for algorithms to consistently reproduce results.
