The adoption of safe and effective artificial intelligence in health and social care
In this blog, Dr Mani Hussain, Director of Primary and Community Care, talks about CQC’s involvement in the multi-agency advice service on artificial intelligence.
Artificial intelligence (AI) and data-driven technologies have exciting potential to improve the quality of care for people using services. For example, hospitals are now using AI to support radiologists with their decision making. In diagnostics, AI can help analyse x-rays, leading to quicker identification of abnormalities. In research, AI is used to analyse large volumes of data, helping to discover and validate new drugs. AI is also used to streamline administrative tasks such as appointment scheduling and identifying staffing requirements.
However, we know that navigating the regulatory system can be complex for both developers and providers looking to adopt AI, with different bodies and regulators involved at different steps of the regulatory pathway. Ensuring AI is used safely and effectively is paramount.
Therefore, together with the National Institute for Health and Care Excellence, the Health Research Authority and the Medicines and Healthcare products Regulatory Agency, we are building a one-stop shop for adopters of AI and data-driven technologies called the multi-agency advice service (MAAS). Funded by the NHS AI Lab, the four bodies responsible for regulating and evaluating AI in health and social care can together offer clear guidance for those looking to adopt this technology.
“Our vision is that this robust and newly streamlined regulatory pathway will lead to safer and more effective development and adoption of data-driven technologies, such as AI, which improve the quality of care and the outcomes for those in receipt of that care. By bringing together the four partners to think through the overall regulatory and access pathway, the MAAS will ensure there is a feedback loop to help to test, adjust and improve.”
Our role here at CQC is to make sure services meet fundamental standards of quality and safety, and we must be able to do this where services are using AI. This does not mean we are looking to set up new regulations or assessment frameworks specifically for AI; rather, we are looking to clarify how existing regulation applies by bringing different regulators and stakeholders together.
Together, the four MAAS partners have engaged with developers and adopters of AI to understand the difficulties they have faced when trying to navigate the current regulatory pathway. So far, this user research has highlighted some of the regulatory challenges that those developing or adopting AI have faced, and what information stakeholders need to navigate regulatory pathways more easily.
Following this engagement, the MAAS has set out to develop an informational website that brings together content from the different regulatory bodies in one place, making it easier for developers and adopters to find information. Developers and adopters will also be able to access specialist support from the MAAS partners.
As well as drawing on user research, we have been guided by findings from our independent evaluation partner, RAND Europe. They have been involved from the beginning of our endeavour and have helped us achieve a deeper understanding of our individual perspectives, our distinct roles and priorities, and how the MAAS can help achieve these.
We have produced an initial prototype of the informational website, pulling together information for developers and adopters from all four MAAS partners. We expect to go live with the first iteration of this informational service towards the end of this year, followed by the transactional service in the summer of 2023.
This is an exciting time for the development of AI technologies in health and social care, and as the health and social care regulator we are keen to help foster these developments.