Every day, autonomous systems powered by AI (artificial intelligence) give citizens personalized shopping recommendations, make hiring recommendations, evaluate loan applications, and assess legal defendants’ risks of reoffending (1). However, there is widely cited evidence, often noted by the media, that people have placed more trust and power in AI than it warrants. For instance, in 2016 ProPublica revealed that COMPAS, a system used to predict criminal re-offense, discriminated against African Americans. In 2018, one of Uber’s self-driving cars caused a fatal accident after encountering unfamiliar sensory feedback (1). It is clear that society must implement a standard code of ethics for AI in the law and establish technological development practices to prevent such harms in the future.

To address the ethical issues AI poses, scientists, policymakers, and companies have come together to research both formal policies and technological development practices that build safeguards and accountability around AI decision-making.

Recent research hopes to shape AI development practices by encouraging developers to instill ethical principles in their design process. Organizations such as the Future of Life Institute suggest guiding AI research and development with established ethics standards and boards, similar to the review boards present in biomedical research (2). These AI-focused organizations have converged on several core values: failure transparency, human value alignment, maintained human control, development responsibility, personal privacy, and avoidance of an arms race (2). Experts have concluded that constructing a common ethics code around these goals is especially important because future AI applications raise great uncertainty about developers’ social responsibilities (3).

Researchers have also discussed how to enforce AI ethics, since there is no effective way to hold an AI accountable for its decisions. Even if an AI were treated like a human in the legal system, punishments such as prison time, fines, and compensation hold no real significance for it. Thus, an AI needs a human to bear legal responsibility. Despite the difficulty of understanding an AI’s data-driven decisions, its developers are the people most likely to be able to foresee and prevent its harmful consequences. Just as parents are legally responsible for a child’s actions even though they did not do the damage and may not fully understand why the child misbehaved, developers should be held legally responsible in the first instance for decisions their AI makes (4). Additionally, according to Harvard Law School’s Berkman Klein Center, it is technically feasible to have an AI produce a legally acceptable explanation of its decision (5). Although requiring explanations could force an AI to downgrade the complexity of its data analysis or reveal trade secrets, a legal explanation often only has to describe the decision-making process behind a specific incident. If the AI’s explanation system is kept distinct from the AI itself, the system would only have to provide explanations when required and could use more complex logic at other times, as sketched below. This means that “at present, AI systems can and should be held to a similar standard of explanation as humans currently are” (4).
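To make that separation concrete, here is a minimal sketch of an on-demand, post-hoc explainer in the spirit of local surrogate models such as LIME. The complex model operates freely day to day; only when an explanation is required does a separate module fit a simple, human-readable surrogate around the single contested decision. The names (`OnDemandExplainer`, the feature names) are illustrative assumptions, not part of any system cited above.

```python
# Hypothetical sketch: an explanation module kept separate from the decision model.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

class OnDemandExplainer:
    """Fits a simple surrogate around one contested decision, only when asked."""

    def __init__(self, model, n_samples=500, noise=0.1):
        self.model = model          # the complex model keeps full freedom
        self.n_samples = n_samples  # size of the local probe set
        self.noise = noise          # spread of perturbations around the input

    def explain(self, x, feature_names):
        # Perturb the contested input, record the model's own decisions on the
        # perturbations, then fit a shallow, readable tree to that local behavior.
        X_local = x + np.random.normal(0, self.noise, (self.n_samples, x.size))
        y_local = self.model.predict(X_local)
        surrogate = DecisionTreeClassifier(max_depth=3).fit(X_local, y_local)
        return export_text(surrogate, feature_names=feature_names)

# Toy usage: the model runs unconstrained; the explainer is invoked only when
# a human-readable account of one specific decision is required.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(1000, 4)), rng.integers(0, 2, 1000)
model = RandomForestClassifier().fit(X, y)
print(OnDemandExplainer(model).explain(X[0], ["income", "age", "debt", "tenure"]))
```

The design choice mirrors the argument above: because the explanation pathway is decoupled from the decision pathway, explanation requirements need not constrain the model’s complexity the rest of the time.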

But since the fast-paced technology field is vulnerable to being stifled by regulation, there is an emphasis on integrating ethics standards into development practices rather than policing them from outside (3). Researchers have created technologies that can be incorporated into AI applications to help them adhere to ethical values. For instance, Google has developed federated learning, a training approach that keeps user data on personal devices and shares only model updates with a central server, protecting private information on a need-to-know basis (6). At Boston University, the RISE approach demonstrates that explanations of highly complex AI models are possible: it probes a black-box image model with randomly masked versions of an input, measures how the output changes, and performs better than models that generate explanations as they compute (7). Finally, IBM has created a movie recommendation AI that can follow a basic code of ethics after training on appropriate recommendation examples and receiving feedback from the user, which indicates that making AI choose ethical decisions in the first place could be possible in the future (8). Sketches of the first two techniques follow.
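First, a minimal federated-averaging sketch under stated assumptions: a few clients each hold private data for a shared linear-regression task, and the server only ever sees weight vectors, never raw examples. The function names and toy task are illustrative, not Google’s implementation (6).

```python
# Federated averaging sketch: raw user data never leaves the client.
import numpy as np

def local_step(weights, X, y, lr=0.1, epochs=5):
    """One client's private update: linear-regression SGD on local data only."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_round(weights, clients):
    """Server aggregates client updates, weighted by local dataset size."""
    updates = [local_step(weights, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Toy usage: three clients with private data jointly learn w ~ [2, -1].
rng = np.random.default_rng(0)
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ np.array([2.0, -1.0]) + rng.normal(0, 0.01, 50)))
w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2, -1] without any client sharing raw data
```

Second, a minimal sketch of the RISE idea (7), assuming only a black-box `score(image)` function that returns the model’s confidence for one class. For brevity this uses nearest-neighbor mask upsampling; the paper uses smooth bilinear upsampling of randomly shifted grids.

```python
# RISE-style saliency sketch: probe a black-box model with random masks.
import numpy as np

def rise_saliency(image, score, n_masks=1000, grid=8, p_keep=0.5, seed=0):
    """Estimate pixel importance from the model's scores on masked inputs."""
    rng = np.random.default_rng(seed)
    H, W = image.shape[:2]
    saliency = np.zeros((H, W))
    for _ in range(n_masks):
        # Coarse random mask, upsampled to image size.
        coarse = rng.random((grid, grid)) < p_keep
        mask = np.kron(coarse, np.ones((H // grid, W // grid)))
        # Weight each mask by the score on the masked image: regions whose
        # occlusion lowers the score accumulate less importance.
        saliency += score(image * mask[..., None]) * mask
    return saliency / (n_masks * p_keep)

# Toy usage with a stand-in "model" that responds to the top-left corner.
img = np.random.rand(64, 64, 3)
heatmap = rise_saliency(img, score=lambda im: im[:32, :32].mean())
```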

With the potential AI has to transform numerous areas of society, it is critical to solve AI ethics issues before more powerful AI applications are released. Current research has shown that there are feasible policies and technological development practices to uphold AI ethics principles. Therefore, scientists, policymakers, and companies should continue studying potential solutions, balancing the need to maintain social responsibility against the risk of restricting technological development. Then we might begin to trust that AI will bring more benefit than harm to its users and to those affected by its decisions. Once AI can respect human values, society can continue its technological progress and potentially make breakthroughs on the world’s critical issues by utilizing AI’s ability to discover patterns in data that humans cannot.

References:

  1. Shaw, J. (2019, January 01). Artificial Intelligence and Ethics. Retrieved from https://harvardmagazine.com/2019/01/artificial-intelligence-limitations
  2. The FLI Team. (2018, June 05). A Principled AI Discussion in Asilomar. Retrieved from https://futureoflife.org/2017/01/17/principled-ai-discussion-asilomar/
  3. Canca, C. (2019, March 29). A New Model For AI Ethics In R&D. Retrieved from http://www.forbes.com/sites/insights-intelai/2019/03/27/rethinking-ethics-in-ai-rd/
  4. Bartlett, M. J. (2019, April 05). Solving the AI Accountability Gap. Retrieved from https://towardsdatascience.com/solving-the-ai-accountability-gap-dd35698249fe
  5. Doshi-Velez, F., Kortz, M., Budish, R., Bavitz, C., Gershman, S., O’Brien, D., Schieber, S., Waldo, J., Weinberger, D., & Wood, A. (2017, November 06). Accountability of AI Under the Law: The Role of Explanation. SSRN Electronic Journal. doi:10.2139/ssrn.3064761
  6. Dean, J. (2019, June 28). Responsible AI: Putting Our Principles Into Action. Retrieved from https://www.blog.google/technology/ai/responsible-ai-principles/
  7. Petsiuk, V., Das, A., & Saenko, K. (2018, June 19). RISE: Randomized Input Sampling for Explanation of Black-Box Models. Retrieved from https://arxiv.org/abs/1806.07421
  8. Dickson, B. (2018, July 16). IBM Researchers Train AI to Follow Code of Ethics. Retrieved from https://venturebeat.com/2018/07/16/ibm-researchers-train-ai-to-follow-code-of-ethics/