
Aug 22, 2022

AI diagnostic apps for skin cancer: spotting potential red flags

A recent survey found that 41% of people in the UK would trust a smartphone application (app) that uses Artificial Intelligence (AI) to spot potential skin cancers. In the same survey, 51% of people said that they did not feel confident in their ability to judge whether these apps that diagnose skin cancers can do what they claim to do. So, should you trust AI apps or not?  

While apps should be judged on a case-by-case basis, we urge people to be cautious when using diagnostic apps – many are not fit for purpose, and some openly breach the regulations for medical devices (a category which, regulators have confirmed, includes AI diagnostic apps).

The term artificial intelligence (AI) is used to describe a wide range of technologies. In dermatology, AI can be trained to spot the signs of skin diseases such as skin cancer. In simple terms, the AI is provided with a large number of verified images of skin diseases so that the computer can ‘learn’ how to recognise cases.  

Unfortunately, AI-based apps which do not appear to meet regulatory requirements crop up more often than we would like. Additionally, the evidence to support the use of AI to diagnose skin conditions is weak, which means that where AI is used it may not be safe or effective, and may put patients at risk of misdiagnosis.

To help app users, we have produced a list of red flags for AI smartphone apps. If an app displays some of these red flags, we encourage you to consider the risks of using it. If you have concerns about whether an AI app is being used appropriately, you can raise a query with the Medicines and Healthcare products Regulatory Agency (MHRA).

Red flags 

This guide is to help people spot potential signs that an app claiming to use AI to check or diagnose skin cancer may be unreliable or unproven. It reflects the opinions of the experts of our AI Working Party Group but cannot cover all eventualities. Apps which display some of these red flags may be proven and reliable; similarly, apps which display none of them may not be proven or reliable. As of July 2022, our position is that the public should avoid all skin cancer detection apps which do not have a Class IIa or above CE, UKCA or UKNI mark, as without one they may not be safe – you can find out more about these marks below.

  1. Where’s the evidence: Developers make claims about being able to make a diagnosis without supporting evidence  

If a developer claims that an app can diagnose a disease, then they need to provide evidence supporting that claim and should display the appropriate regulatory conformity assessment mark. All medical devices (which includes AI diagnostic apps) sold in the UK must carry one of three marks: CE (Conformité Européenne), UKCA (UK Conformity Assessed) or UKNI (UK Northern Ireland). The MHRA has produced information on conformity assessment and UKCA marking, and on app labelling in its guidance for manufacturers of software medical devices. Details of any CE/UKCA/UKNI mark that the app holds may be found in the app itself, on the app store from which the app is downloaded, or on the developer's website.

Members of the public can check whether a medical device manufacturer is registered using the Public Access Registration database. Currently, there is no specific regulatory pathway for AI-based medical devices in the USA or Europe. So, even if a device has regulatory approval, it does not mean the device has been tested in real-world settings, or on the type of skin that it is being used for (skin age, skin colour and dryness are just some of the factors that may confuse AI).

  2. Disclaimers: Marketing or app descriptions suggest a medical and diagnostic feature, but disclaimers contradict this

Some apps will state that they can detect, classify or diagnose skin conditions, but this may be contradicted by disclaimers (also known as the ‘small print’). It is not enough for an app description simply to avoid the word ‘diagnose’ – if any wording suggests that a diagnosis can be made, the app is only considered safe and appropriately regulated if it is rated at least Class IIa and carries a CE, UKCA or UKNI mark.

  3. Class I devices: Advertising a CE Class I mark as evidence of meeting requirements for making a diagnosis

Some app developers will advertise the fact that they have a CE mark as a sign of reliability or safety. However, be wary of apps which cite a Class I CE mark as a safety feature: developers can ‘self-certify’ apps as Class I without providing evidence that the app can perform as claimed. Any app that claims to diagnose should have at least a Class IIa CE/UKCA/UKNI mark. If the class of the mark is not clear, it is reasonable to assume that it is not Class IIa.

  4. Expertise: No medical professionals, dermatologists or patients involved in the development

Dermatologists, medical experts, and people who have skin conditions bring a very important perspective to the development of these apps and make them more credible. It is, however, possible that good apps can be developed without dermatologist input, and poor ones can be developed with dermatologist input. 

  5. App descriptions: Unclear or poorly written app descriptions

Unclear or poorly written app descriptions can be a sign of a lack of professionalism or medical involvement, or of an attempt to mislead.

  6. Negative reviews

As with any app, look at the app store reviews. If an app has multiple negative reviews, then it should be approached with caution. 

  7. Data policy: The app does not have clear policies on how it uses your data

When giving consent regarding the use of your data, make sure you understand what you are consenting to. App developers should be very clear about how they plan to use your personal data, including the pictures that you submit. Some apps will use personal and/or medical information, as well as pictures of the skin submitted by users, to further develop their AI algorithms. This can mean that when the information is analysed together it is no longer truly anonymous. Further analysis like this is considered research, which requires formal ethical approval from a regulatory body; that approval should be clearly displayed. You may also be contributing to improving a commercial product, and hence its value, for no reward. If it is not clear to you how your data will be used, it is best not to provide them.