How Accountable Should Algorithms Be?

Whilst AI today is largely the preserve of pilots and prototypes, these early forays suggest that the technology will be increasingly capable of undertaking serious tasks. As such, AI systems will likely need to meet certain standards of transparency, in terms of how explainable their decision making is, and of accountability for the decisions they produce.

It is to this latter point that a recent paper from the AI Now Institute devotes its energies, attempting to create a framework for gauging the impact of AI technologies.

“Automated decision systems are currently being used by public agencies, reshaping how criminal justice systems work via risk assessment algorithms and predictive policing, optimizing energy use in critical infrastructure through AI-driven resource allocation, and changing our employment and educational systems through automated evaluation tools and matching algorithms,” the authors say.

There is a growing desire to assess the short- and long-term impact of AI systems, particularly in terms of how effectively they operate within the complex social and historical contexts in which they’re applied. Answering the difficult questions this raises has been hampered by the ‘black box’ nature of so many AI systems today.

Desire for accountability

To help rectify matters, the authors advocate an Algorithmic Impact Assessment (AIA) framework to support communities and stakeholders affected by automated decision systems. The framework has five core elements (a rough sketch of how they might be tracked in practice follows the list):

  1. Agencies should conduct a self-assessment of existing and proposed automated decision systems, evaluating potential impacts on fairness, justice, bias, or other concerns across affected communities;
  2. Agencies should develop meaningful external researcher review processes to discover, measure, or track impacts over time;
  3. Agencies should provide notice to the public disclosing their definition of “automated decision system,” existing and proposed systems, and any related self-assessments and researcher review processes before the system has been acquired;
  4. Agencies should solicit public comments to clarify concerns and answer outstanding questions; and
  5. Governments should provide enhanced due process mechanisms for affected individuals or communities to challenge inadequate assessments or unfair, biased, or otherwise harmful system uses that agencies have failed to mitigate or correct.
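
To make the shape of such an assessment more concrete, here is a minimal sketch of how the five elements might be tracked as a single record per system. This is purely illustrative: the class, field names and readiness rule below are assumptions of this sketch, not anything specified in the AI Now paper.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class AlgorithmicImpactAssessment:
    """Illustrative record tracking the five AIA elements for one system."""
    system_name: str
    # 1. Agency self-assessment of impacts on fairness, justice, bias, etc.
    self_assessment_completed: bool = False
    identified_impacts: List[str] = field(default_factory=list)
    # 2. External researcher review process to track impacts over time
    external_reviewers: List[str] = field(default_factory=list)
    # 3. Public notice, published before the system is acquired
    public_notice_date: Optional[date] = None
    # 4. Public comment period to surface concerns and open questions
    public_comments: List[str] = field(default_factory=list)
    # 5. Due process: challenges filed over inadequate or harmful use
    open_challenges: List[str] = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        """In this sketch, a system clears the bar only when the first four
        elements are in place and no challenge remains unresolved."""
        return (self.self_assessment_completed
                and bool(self.external_reviewers)
                and self.public_notice_date is not None
                and not self.open_challenges)
```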

The paper argues that public agencies should be banned from using unaudited ‘black box’ systems, since the public depends on those agencies to uphold basic values such as fairness, justice and due process.
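
Continuing the sketch above, the ban the paper calls for could be expressed as a simple gate in an agency’s deployment workflow; again, this is an assumption-laden illustration rather than anything the paper prescribes.

```python
def approve_for_use(assessment: AlgorithmicImpactAssessment) -> None:
    # Illustrative gate: a system with an incomplete AIA never reaches
    # deployment, mirroring the proposed ban on unaudited black boxes.
    if not assessment.ready_to_deploy():
        raise PermissionError(
            f"{assessment.system_name}: AIA incomplete; deployment blocked.")
    print(f"{assessment.system_name} cleared for agency use.")

aia = AlgorithmicImpactAssessment(system_name="risk-score-v2")
approve_for_use(aia)  # raises PermissionError: no assessment on file yet
```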

The authors believe that implementing a framework like the one outlined above will help to achieve four key policy goals:

  1. Respect the public’s right to know which systems impact their lives by listing and describing them;
  2. Increase public agencies’ expertise in evaluating the systems they use, and thus get better at anticipating potential issues;
  3. Ensure greater accountability of automated decision systems by providing an ongoing opportunity for external agents to review, audit and assess them; and
  4. Provide the public with a meaningful opportunity to respond and dispute the use of such systems.

“While AIAs will not be a panacea for the problems raised by automated decision systems, they are designed to be practical tools to inform the policy debate about the use of such systems and to provide communities with information that can help determine whether those systems are appropriate,” the authors conclude.
