Artificial Intelligence (AI) is rapidly reshaping arbitration, from increasing research efficiency to automating document review. While AI saves time and money, it also raises concerns about transparency, bias, privacy, and explainability. An AI guideline accepted by the major arbitration institutions may not resolve all of these concerns, but it could mitigate the most salient ones. The arbitration industry as a whole would therefore benefit from following the example of the Silicon Valley Arbitration and Mediation Center, which drafted and published AI guidelines in 2023. First, however, institutions need a clear grasp of the most significant problems AI poses for arbitration.

Key Challenges

  1. Transparency and Explainability: AI algorithms often operate in ways that are difficult to understand. When an AI tool is used in arbitration, parties must know how and where it was used so that they can scrutinize its training data and have confidence in its results. Opaque AI algorithms, sometimes called "black box" systems, undermine transparency because their outputs cannot be readily explained.
  2. Bias and Fairness: AI systems learn from historical data, which may contain biases. If an AI model is trained on biased data, it can produce biased results. An added challenge is that the best, least biased candidate datasets are confidential. Substituting masses of low-quality general data is unlikely to solve this problem: arbitration is a highly specialized field, and generic mass data is more likely to degrade a tool's usefulness than to improve it.
  3. Data Privacy and Confidentiality: Arbitration information is generally sensitive and confidential. Some AI tools may introduce risks to data security and confidentiality, whether through accidental exposure or deliberate breach. Competing arbitration institutions may even be tempted to seek each other's confidential data to improve their own tools' training sets.
  4. Accountability and Liability: When AI tools contribute to arbitration decisions, questions of responsibility arise regardless of which tool is used. Clear guidelines should allocate liability for AI-related errors among tool developers, users, and arbitrators who rely on AI insights.

Potential Solutions and Approaches

  1. Explainable AI: Explainable AI (XAI) is a form of AI that "implements specific techniques and methods to ensure that each decision made during the ML [machine learning] process can be traced and explained." XAI models allow arbitrators and parties to trace their decision paths and feel more secure in their use. For example, institutions may require tools to expose their reasoning through attribution techniques such as DeepLIFT, "which compares the activation of each neuron to its reference neuron and shows a traceable link between each activated neuron and even shows dependencies between them" (a minimal sketch of this kind of attribution appears after this list).
  2. Excluding some tools: Limiting the set of AI tools acceptable for use in arbitration may reduce fears about losing confidentiality. Institutions may want to go further and permit only internally built tools.
  3. Data Augmentation: The need to limit the use of external AI tools, or to restrict training sets to internal data, may result in small datasets, a common problem with AI. This may resolve itself organically over time as more disputes come in, or if arbitration institutions agree to share data for the limited and explicit purpose of training AI arbitration tools. If confidentiality concerns make data sharing unviable, institutions may instead choose to augment their data. Data augmentation is a strategy for training an AI on a shallow data pool by generating additional training examples and feeding them to the model (a simple sketch appears after this list). Arbitration institutions may, for example, create hypothetical disputes for actual arbitrators to decide in order to add high-quality training data.
  4. Admission of bias: AI bias is damaging largely because it reproduces the human biases and human-made "rules" embedded in its training data. In law, however, unlike many other fields, those subtle human biases are already part of the system. Rather than trying to remove every bias that may exist in the dispute history (which is likely impossible), parties using AI should preface their findings with a disclaimer that a tool trained on human decisions inherits human biases.
  5. Data protection: Given the confidential nature of arbitration, protecting parties' data and identities is paramount. Any guidelines must therefore prioritize the safety and security of data concerning both the parties currently in arbitration and the parties whose past arbitration decisions are used as training data. This may take multiple forms, from institutions making their data available only to their own internal tools to the creation of new organizations whose sole purpose is to train arbitration tools on cross-institutional data while maintaining inter-institutional confidentiality.
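
To make the DeepLIFT reference in item 1 concrete, the first sketch below shows how an institution's technical team might attribute a simple model's output back to its input features. The toy classifier, feature names, and inputs are hypothetical placeholders; the DeepLift implementation comes from the open-source Captum library for PyTorch, and a real arbitration-support model would be far more complex.

```python
# Minimal sketch: DeepLIFT-style attribution with Captum (PyTorch).
# The model, feature names, and inputs are hypothetical placeholders.
import torch
import torch.nn as nn
from captum.attr import DeepLift

class ToyCaseClassifier(nn.Module):
    """A stand-in for a far more complex arbitration-support model."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 8),
            nn.ReLU(),
            nn.Linear(8, 2),  # two hypothetical outcome classes
        )

    def forward(self, x):
        return self.net(x)

feature_names = ["claim_amount", "contract_length", "prior_disputes", "delay_days"]
model = ToyCaseClassifier(n_features=len(feature_names))
model.eval()

inputs = torch.rand(1, len(feature_names))     # one hypothetical case
baseline = torch.zeros(1, len(feature_names))  # the "reference" DeepLIFT compares against

# Attribute the score for outcome class 1 back to each input feature.
attributions = DeepLift(model).attribute(inputs, baselines=baseline, target=1)
for name, score in zip(feature_names, attributions.squeeze().tolist()):
    print(f"{name}: {score:+.4f}")
```

The per-feature scores give parties something concrete to interrogate: a positive value indicates that the feature pushed the model toward the examined outcome relative to the reference input.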
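
The second sketch illustrates data augmentation in its simplest form: generating word-substitution variants of existing case summaries. The summaries and the synonym map are invented for illustration; an institution would more likely rely on a curated legal thesaurus, paraphrasing tools vetted by practitioners, or the arbitrator-authored hypothetical disputes described in item 3.

```python
# Minimal sketch: word-substitution data augmentation for text.
# The case summaries and synonym map are hypothetical examples.
import random

case_summaries = [
    "The contractor failed to deliver the turbine components on schedule.",
    "The licensee continued using the trademark after the agreement expired.",
]

# A tiny hand-built synonym map; a real pipeline would use a vetted legal thesaurus.
synonyms = {
    "failed": ["neglected", "omitted"],
    "deliver": ["supply", "provide"],
    "continued": ["kept", "persisted in"],
    "agreement": ["contract", "licence"],
}

def augment(summary: str, n_variants: int = 2) -> list[str]:
    """Create simple paraphrase-like variants of one case summary."""
    variants = []
    for _ in range(n_variants):
        words = summary.split()
        swapped = [random.choice(synonyms[w]) if w in synonyms else w for w in words]
        variants.append(" ".join(swapped))
    return variants

for summary in case_summaries:
    for variant in augment(summary):
        print(variant)
```

Even trivial variants like these enlarge a small training pool, though institutions would need to confirm that the rewording does not distort legally significant language.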

Conclusion

With the continual growth of AI, and with governing bodies such as the European Union designating certain AI systems used in alternative dispute resolution "high risk," the need for broadly accepted AI guidelines becomes ever clearer. Though these guidelines will inevitably fall short of fixing every AI error and concern within arbitration, they can provide a framework for addressing unforeseen problems in the future. It is therefore imperative to the health of the arbitration industry that a uniform set of standards be created as soon as possible. By acting now, arbitration institutions can get a handle on the technology while it is still in its infancy rather than wait to create standards once it has matured and become too complex to govern effectively.