AI can battle coronavirus, but privacy shouldn’t be a casualty

26 May 2020

Regulators are unlikely to create special new rules for AI during the coronavirus crisis, so at the very least we need to proceed with a pact: all AI applications developed to tackle the public health crisis must end up as public applications, with the data, algorithms, inputs and outputs held for the public good by public health researchers and public science agencies. Invoking the pandemic as a pretext for breaking privacy norms and fleecing the public of valuable data can’t be allowed.

We all want sophisticated AI to assist in delivering a medical cure and managing the public health emergency. Arguably, the short-term risks that AI poses to personal privacy and human rights pale in comparison with the loss of human lives. But when coronavirus is under control, we’ll want our personal privacy back and our rights reinstated. If governments and firms in democracies are going to tackle this problem and keep institutions strong, we all need to see how the apps work, the public health data needs to end up with medical researchers, and we must be able to audit and disable tracking systems. AI must, over the long term, support good governance.

The coronavirus pandemic is a public health emergency of the most pressing concern, and it will deeply shape governance for decades to come. It also shines a powerful spotlight on gaping shortcomings in our current systems. AI is arriving now with some powerful applications on offer, but our governments are ill-prepared to ensure its democratic use. Faced with the exceptional impacts of a global pandemic, quick-and-dirty policymaking is insufficient to ensure good governance, but it may be the best option we have.

Read the full TechCrunch piece, co-authored by Philip N. Howard and Lisa-Maria Neudert.