This blog is aimed at AI law professionals who are obligated to understand AI models in relation to data governance and privacy law.
Technology is revolutionizing both legal and regulatory compliance, crystallizing new reporting and monitoring standards for AI model developers. While the European Union has transmuted AI governance principles into binding regulation, other nations, like the U.S., are more circumspect. These nations appreciate that regulatory paralysis could easily asphyxiate innovative AI research [1], such as Quantum Computing (QC). Still, U.S. data protection and privacy laws do impose considerable obligations on institutions. Moreover, regulators are working to achieve compliance, transparency, and explainability [1,2,3,4,5] in the Machine Learning (ML) model algorithms that drive critical higher-level Artificial Intelligence (AI) tasks.
Law firms have slowly started adopting AI systems for legal research [6]. Currently, legal AI systems are limited in scope because they cannot adhere to the strict professional and moral code specified by the American Bar Association’s (ABA) Model Rules of Professional Conduct (Rules) [6]. Unlike the mathematics behind an AI system’s ML algorithms, which operates within an acceptable statistical margin of error, a lawyer must rely on discretion.
When a conflict arises, a lawyer is bound by a duty to serve the greater good rather than personal gain. This requires lawyers to reflect on the situation, aligning their professional and moral judgment with the Rules’ ethical principles. That duty directly conflicts with many ML algorithms, specifically Artificial Neural Network (ANN) models such as Generative Adversarial Networks (GANs), which are designed around a zero-sum game [2] in which the model’s goal is instrumental gain over another entity (equation, model, monetary instrument, company, person, etc.).
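The zero-sum structure can be made concrete. In the standard GAN formulation, a generator G and a discriminator D play a minimax game over a single value function, so any gain for one player is, by construction, a loss for the other:

$$
\min_{G}\,\max_{D}\; V(D, G) \;=\; \mathbb{E}_{x \sim p_{\mathrm{data}}}\big[\log D(x)\big] \;+\; \mathbb{E}_{z \sim p_{z}}\big[\log\big(1 - D(G(z))\big)\big]
$$

Here D is rewarded for correctly distinguishing real data x from generated samples G(z), while G is rewarded for fooling D; there is no cooperative outcome in the objective itself, which is the contrast being drawn with a lawyer's duty to serve the greater good.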
Because experts lack consensus on the definition of AI, there is no legal definition of AI either [1]. However, AI standards do now exist for governed industries, such as finance. While it can be argued that technology has been revolutionizing regulatory compliance for decades, the convergence of AI and financial legal compliance, known as RegTech, is relatively new, having only established a foothold after the 2008 global economic crisis [1,2].
Machine Learning (ML) model accountability is achieved through transparency, explainability, and fairness. Moreover, accepted AI Governance Framework protocols further require a human in the ML development loop [1,8,9]. This step provides assurance that the people who created the model are held accountable for the model’s decisions and results. The Litigation Rabbit Hole [10] this leads down is a discussion for a future blog.
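As a deliberately simplified sketch of the human-in-the-loop idea, consider routing model outputs by confidence: predictions below a review threshold are escalated to a person instead of being acted on automatically. The threshold, labels, and function names below are hypothetical illustrations for this blog, not part of any cited governance framework.

```python
# Hypothetical human-in-the-loop gating sketch.
# Predictions whose confidence falls below REVIEW_THRESHOLD are routed
# to a human reviewer rather than executed automatically.

REVIEW_THRESHOLD = 0.90  # assumed policy value, not drawn from any standard


def route_prediction(label: str, confidence: float) -> str:
    """Return 'auto' when the model may act alone, 'human_review' otherwise."""
    if confidence >= REVIEW_THRESHOLD:
        return "auto"
    return "human_review"


# Example batch of (label, model confidence) pairs.
decisions = [("approve", 0.97), ("deny", 0.62), ("approve", 0.91)]
routed = [(label, route_prediction(label, conf)) for label, conf in decisions]
# The low-confidence (0.62) prediction is escalated to a human reviewer.
```

The point of the sketch is that accountability is implemented as a control point in the pipeline: a named person, not the model, owns every decision that falls below the confidence bar.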
Policymakers are turning to AI legal teams that are untrained in real-world AI model development; those lawyers, in turn, seek out ML model experts who understand neither legal concepts nor policy. Herein lies the problem: the world currently lacks adequate regulation protecting the fundamental rights of individuals affected by contemporary AI models. With deficiencies already evident, educational awareness must be addressed before quantum-inspired innovations trigger a reactive regulatory crisis.
References:
- J. Truby, R. Brown, and A. Dahdal, “Banking on AI: Mandating a Proactive Approach to AI Regulation in the Financial Sector,” in Law & Financial Markets Review, vol. 14(2), 2020, pp. 110–120. Available: https://doi.org/10.1080/17521440.2020.1760454
- F. Eckerli and J. Osterrieder, “Generative Adversarial Networks in Finance: An Overview,” arXiv:2106.06364 [cs], 2021.
- J. Gesley, “Legal and Ethical Framework for AI in Europe: Summary of Remarks,” in Proceedings of the Annual Meeting, Published by the American Society of International Law, vol. 114, 2020, pp. 240–242.
- A. Carter, “The Moral Dimension of AI-Assisted Decision-Making: Some Practical Perspectives from the Front Lines,” in Daedalus, vol. 151(2), 2022, pp. 299–308. Available: https://doi.org/10.1162/daed_a_01917
- G. Currie, K. E. Hawk, and E. M. Rohren, “Ethical Principles for the Application of Artificial Intelligence (AI) in Nuclear Medicine,” in European Journal of Nuclear Medicine & Molecular Imaging, vol. 47(4), 2020, pp. 748–752. Available: https://doi.org/10.1007/s00259-020-04678-1
- C. Nunez, “Artificial Intelligence and Legal Ethics: Whether AI Lawyers Can Make Ethical Decisions,” in Tulane Journal of Technology and Intellectual Property, vol. 20, 2017, pp. 189–204.
- H. G. Escajeda, “The Vitruvian Lawyer: How to Thrive in an Era of AI and Quantum Technologies,” Kansas Journal of Law & Public Policy, vol. 29(3), 2020, pp. 421–521.
- W. O’Quinn and S. Mao, “Quantum Machine Learning: Recent Advances and Outlook,” in IEEE Wireless Communications Magazine, vol. 27, 2020, pp. 126–131.
- S. E. Rasmussen and N. T. Zinner, “Multiqubit State Learning with Entangling Quantum Generative Adversarial Networks,” arXiv:2204.09689 [cs], 2022.
- K. R. Adamo, B. P. Ray, E. Goryunov, and G. Polins, “Avoiding the Litigation Rabbit Hole: Federal Circuit Endorses Pre-Answer § 101 Challenges,” Ultramercial, Inc. v. Hulu, LLC, Case No. 2010-1544 Fed. Cir., Accessed: Nov. 14, 2014. [Online]. Available: https://ipo.org/wp-content/uploads/2014/12/Ultramercial_adamo.pdf