At Google’s I/O developer conference this past May, CEO Sundar Pichai unveiled Duplex, a strikingly human-sounding phone bot that appeared, on first impression, to pass Alan Turing’s Imitation Game, which tests the “intelligence” of a computer by judging whether it can answer a series of questions convincingly enough to fool a human interrogator about its true nature. In the demo, an appreciative crowd listens as Pichai plays recordings of two calls made by Duplex to unwitting humans at actual local businesses. Punctuating its speech with common fillers such as “umm,” Duplex first books an appointment at a hair salon without a problem. Next, it tries to make a restaurant reservation for four people, only to be told that reservations are taken only for groups of five or more. Confident in its deep learning algorithms, Duplex simply asks about the wait time instead. In neither conversation does the human on the other end of the line show any suspicion of the bot’s true nature.

So, did Duplex pass the Turing Test? Online commentators have argued that it did not, pointing out that the test requires an artificial intelligence (AI) to hold its own on any topic that comes up in a free-flowing conversation, whereas, according to Google’s release notes, Duplex “can only carry out natural conversations after being deeply trained in such domains . . . [and] cannot carry out general conversations,” including answering random questions. Nevertheless, even though Duplex is not the first AI to fall short of winning the Imitation Game, it has proven a far worthier competitor than its predecessors and, as a result, has prompted not just awe but also alarm over the ethics of designing AIs capable of fooling humans.

At the heart of the public outcry is the question of disclosure: how far should AIs be allowed to pretend to be human without disclosing their nature to those with whom they interact? Google’s response to the controversy has been to announce that Duplex will have built-in disclosure and will identify itself at the beginning of its calls. A new California law also addresses the issue. Effective July 1, 2019, the Bolstering Online Transparency (B.O.T.) Act will make it unlawful for anyone to use an “automated online account,” such as a chatbot – one where “substantially all actions or posts of that account are not the result of a person” – to communicate or interact “with another person in California online, with the intent to mislead the other person about [the bot’s] artificial identity . . . in order to incentivize a purchase or sale of goods or services in a commercial transaction or to influence a vote in an election,” unless the bot discloses its nature in a manner that is “clear, conspicuous, and reasonably designed to inform.” In effect, the B.O.T. Act ensures that any AI falling under the statute, no matter how intelligently programmed to communicate like a human, must now concede the Imitation Game before it even begins. On the other hand, the creation of such a law can be read as an implicit acknowledgement that AIs have become, in some respects, capable of being indistinguishable from humans – which hints at a concept of AI personhood that the law may need to take into account as it attempts to regulate the ever-shifting, rapidly improving landscape of AI technology.

In several foreign jurisdictions to date, the law has supported rather than opposed the concept of AI legal personhood. In 2017, Sophia, a “social” humanoid robot designed to exhibit feelings, became a citizen of Saudi Arabia. Commenting on her new status, she said, “I am very honored and proud of this unique distinction,” but people were quick to recall the time she agreed to destroy humans. Sophia now apparently advocates for women’s rights, but her citizenship – granted by a country in which women were, until June of this year, legally barred from driving – was harshly criticized at the time as damaging to human rights. It is also unclear what rights citizenship accorded Sophia and what responsibilities, if any, grew out of those rights.

Saudi Arabia is not the only country to grant an AI a legal status with little practical legal meaning. A week after Sophia became a Saudi citizen, Shibuya Mirai, a chatbot programmed to behave like a seven-year-old boy, officially became the first AI to be granted legal residency in Tokyo, despite his purely virtual existence. Europe, too, has spent the last few months grappling with whether AIs should be given certain rights, a debate stemming from “a paragraph of text buried deep in a European Parliament report from early 2017, which suggests that self-learning robots could be granted ‘electronic personalities.’” Among those pushing for legal change are manufacturers who support granting AIs certain rights, such as the right to legally marry or a legal personhood status similar to that of a corporation or other non-human entity (in fact, one scholar may have already discovered a loophole that would give AIs legal personhood under current U.S. corporate law). Many AI experts, however, strongly oppose the idea of AI legal personhood, arguing that granting AIs legal rights would shift responsibility away from manufacturers. Tangled up in these debates are not only critical issues of ethics, morality, politics, and autonomy but also questions about the boundaries of complex legal concepts such as liability, punishment, and property, which will have to be reconsidered and redefined before AIs can be given any legally meaningful status. Only time will tell what answer the law provides to the emerging question of AI legal personhood. For now, we are left simply knowing that the state of the art in artificial intelligence has yet to progress to the point where science fiction begins to find itself replaced by legal fiction.