Until recently, companies have mainly relied on internal ethics boards to grapple with the impact of AI on people's lives. These efforts have often been fraught, however, and governments are starting to take action. Organizations are increasingly setting voluntary rules for the use of AI, and the EU's recently proposed Artificial Intelligence Act is a harbinger of a coming wave of government regulation. But how can adherence to these rules be assessed and verified?
During this IdeaFlow session, we'll discuss how independent algorithmic auditing can help mitigate the downside risks of the proliferation of AI and automation, and what features such an auditing system should include.
The MIT Computational Law Report published an example of this approach to independent auditing in the context of COVID-19 Contact Tracing Privacy Principles, and you can view the example audit framework accompanying that piece here.