Artificial intelligence and machine learning algorithms are increasingly employed by the private and public sectors to automate simple and complex decision-making processes, but hype around these systems has obscured the discriminatory and biased decisions they can produce. To hold algorithms accountable for such decisions, and to determine their underlying factors, researchers have turned to a methodological tool known as the audit study. Several studies of algorithmic decision-making have made strong causal claims linking algorithms to biased decisions, and multiple protective measures have been enacted against discriminatory and biased algorithmic decision-making practices. Nevertheless, such practices persist because of algorithmic opacity, biased training data, and the false belief that algorithms are neutral. This paper proposes a rational counterfactual framework for algorithm audits, drawing on the counterfactual theories of causation. The framework aims to identify the obvious and obscure decision factors that engender particular decisions, derived from the rational counterfactuals of a given factual. The power of the framework lies in its ability to determine which algorithmic decision factors could lead to certain rational or irrational decisions, which in turn allows auditors to use the identified combinations of decision factors to perform algorithm audits.
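The core idea of a counterfactual audit, as described above, can be sketched in code. The following is a minimal illustration, not the paper's method: the decision rule, feature names, and variation sets are all invented for the example. For a given factual input, we perturb one decision factor at a time and record which counterfactual changes flip the decision, surfacing both obvious factors (income) and obscure ones (a zip code acting as a proxy).

```python
# Hypothetical sketch of a counterfactual audit. The loan-decision rule and
# all feature names below are invented purely for illustration.

def decision(applicant):
    """Toy decision rule standing in for an opaque algorithm under audit."""
    if applicant["income"] >= 50 and applicant["zip_code"] != "regionB":
        return "approve"
    return "deny"

def audit_counterfactuals(decide, factual, variations):
    """Return the factual decision and the single-factor changes that flip it."""
    baseline = decide(factual)
    flips = []
    for factor, values in variations.items():
        for value in values:
            # Build a counterfactual differing from the factual in one factor.
            counterfactual = dict(factual, **{factor: value})
            outcome = decide(counterfactual)
            if outcome != baseline:
                flips.append((factor, value, outcome))
    return baseline, flips

factual = {"income": 60, "zip_code": "regionB"}
baseline, flips = audit_counterfactuals(
    decision,
    factual,
    {"income": [40, 80], "zip_code": ["regionA"]},
)
print(baseline)  # deny
print(flips)     # only the zip-code change flips the decision,
                 # exposing an obscure decision factor
```

In this sketch, varying income never changes the outcome, while changing the zip code does; the flipped counterfactual identifies the decision factor that actually drove the denial.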
Lee, Seung C.
"Auditing Algorithms: A Rational Counterfactual Framework,"
Journal of International Technology and Information Management: Vol. 30: Iss. 2, Article 5.
Available at: https://scholarworks.lib.csusb.edu/jitim/vol30/iss2/5