Artificial intelligence (AI) and machine learning (ML) models are quickly becoming more common in the medical space. One such use is the clinical decision support system (CDSS), which helps physicians determine a personalized treatment plan for a patient. A CDSS uses software to match the characteristics of a patient against a computerized database of other patients and cases. The software then presents an individualized recommendation to a clinician, who typically combines the information received from the CDSS with their own knowledge and experience. When operating as intended, these systems can consume and interpret large amounts of electronic health data to generate the optimal course of treatment for an individual patient.
However, the relative novelty of the health information technology behind CDSSs leads to unintended consequences. The courses of action a CDSS suggests while monitoring a patient are not always accurate. Some alerts are false negatives or false positives, and the latter appear to be more common. A false negative alert suggests no action when action may be needed, allowing potentially critical issues in a patient to be overlooked. A false positive, on the other hand, can lead to prescriptions that are not only unnecessary but may even interfere with the patient's other treatments. These false alarms make patient care inefficient and inaccurate, while also increasing physicians' distrust in the systems.
When an error in the operation or implementation of a CDSS results in harm to the patient, who bears liability? And what legal framework should be used to determine that liability? Perhaps surprisingly, copyright law can provide some guidance. Copyright law has well-established doctrines of secondary liability, and its doctrine of vicarious liability may readily be applied to the use of CDSSs in the medical space.
Liability for vicarious copyright infringement requires a showing that the party (1) received a financial benefit from the infringing activity and (2) had the right and ability to supervise or control the infringing activity. It is helpful to think of infringement in the copyright context as corresponding to a mistake made by an AI-based system that leads to patient harm. Of course, copyright law cannot blindly and universally be applied to every situation, but there are notable similarities between copyright and patient care. In both areas, there is an undesired activity that has a primary actor but is facilitated by many secondary actors. In addition, both healthcare and copyright involve some right held by the damaged party: in copyright, it is the original artist's ownership of the work; in healthcare, it is the patient's rights to accurate diagnosis and privacy. Let's begin by applying this doctrine to regulatory bodies and physicians.
Regulatory Bodies
Vicarious liability may be found for the FDA and similar regulatory bodies that play a more passive “gatekeeping” role in the use of medical technologies. First, there is a financial link between the regulatory functions of the FDA and the medical devices submitted for clearance: user fees. These are fees associated with the approval process, paid entirely by the entity submitting the device. In 2021, Devices and Radiological Health regulatory activities accounted for 10% of the FDA’s budget, and 35% of that amount was paid for by industry user fees. Not only does this raise the glaring concern of lobbying, but it also gives the FDA a financial incentive to relax its assessments of safety and effectiveness in order to encourage more submissions and thus accumulate greater user fees. Perhaps the potential for these “loose” evaluations encourages companies to submit their devices for approval, and pay the user fee, sooner rather than later.
Second, if any entity has the ability to control and monitor the safety of AI-based technologies, it is the FDA. It controls which algorithms are safe enough to enter the market and reach patients. Even after clearance, regulatory bodies bear the responsibility of continued monitoring, and the FDA may pull a product from the market whenever it is no longer deemed safe or effective.
Physicians
Because the physician is the direct point of contact with the patient, as well as the user of the various AI-based technologies, it seems intuitive to place some liability on physicians and the hospitals that employ them; the question is when and how much. When a CDSS, for example, is used to aid a physician’s decision-making, two situations may give rise to physician liability: (1) the system recommends a course of action within the standard of care that the physician ignores, or (2) the system recommends a course of action outside the standard of care that the physician follows.
The financial benefit prong of vicarious liability can be satisfied in several ways. In some cases, the hospital or physician may have a partnership with a medical device company employing an AI-based technology, which gives them a financial incentive to continue using the device despite its potential flaws. Although the hope is that hospitals do not compromise on the safety of the medical devices they employ, this financial incentive is significant: according to one study, the medical device industry provided doctors with benefits and payments worth over $904 million between 2014 and 2017. For physicians in private practice, the fact that CDSSs reduce the time it takes to make clinical decisions might itself provide the financial incentive, as it allows more patients to be treated in a shorter amount of time.
Physicians also have the ability to supervise and control the effect of inaccurate algorithm-generated alerts or recommendations. The AI itself does nothing to the patient; it merely provides the physician with support to act more quickly or make a more informed judgment. Ultimately, the physician is the one treating the patient, whether or not they choose to follow the advice given by the AI-based technology. Furthermore, the FDA has provided four criteria under which a CDSS may be exempt from regulatory oversight: the software (1) does not analyze medical images, (2) displays information normally communicated between healthcare professionals, (3) provides recommendations only to healthcare professionals, and (4) gives the healthcare professional the ability to independently review the basis of the decision. The fourth criterion is akin to the ability to supervise and control under copyright doctrine, and it places even greater responsibility on the physician, since satisfying it can exempt the software from FDA regulation.
As such, copyright law can provide a helpful initial framework for determining secondary liability for mistakes made by AI-based medical technologies like CDSSs. This is not to say that the analogy is perfect. Secondary copyright infringement often involves parties that knowingly allow infringement to occur or turn a blind eye to the illegal activity. Given the higher stakes and ethical considerations involved in patient care, a physician or regulatory body that acted with similar intent would be liable for far more than secondary contribution to the harm. Though imperfect in these ways, copyright law offers some guidance on an issue that has not yet been frequently litigated, and only time will tell whether the analogy proves fruitful in practice.