Abstract
Chatbots are a form of artificial intelligence that can read text, or convert voice to text, and provide a response. Although they have existed since the 1960s, they have recently been used to provide a form of therapy through software applications (“apps”). Unlike the licensed professionals who provide traditional mental health services, however, chatbots are not subject to confidentiality obligations. Currently, the federal and state regulations that impose confidentiality obligations in the healthcare context do not apply to chatbots, and the regulations that do apply to chatbots impose no such obligations. Because users engaging with these apps disclose information similar to that disclosed during a therapy session, this Article proposes a new regulatory framework, through new legislation, to limit the use and disclosure of information received by software-based therapy technologies.