Powerful artificial intelligence systems can be of enormous benefit to society and help us tackle some of the world’s biggest problems. Machine learning models are already playing a significant role in diagnosing diseases, accelerating scientific research, boosting economic productivity and cutting energy usage by optimising electricity flows on power grids, for example.
It would be a tragedy if such gains were jeopardised as a result of a backlash against the technology. But that danger is growing as abuses of AI technology multiply, in areas such as unfair discrimination, disinformation and fraud, as Geoffrey Hinton, one of the “godfathers of AI”, warned last month on resigning from Google.
Governing this technology responsibly will be one of the greatest challenges of our age. Machine learning systems, which can be deployed across millions of use cases and used in diffuse, invisible and ubiquitous ways at massive scale, defy easy categorisation and throw up numerous problems for regulators. But, encouragingly, regulators around the world are finally starting to tackle the issues.
Last week, the White House summoned the bosses of the biggest AI companies to explore the benefits and perils of the technology before outlining future guidelines. The EU and China are already well advanced in drawing up rules and regulations to govern AI. And the UK’s competition authority is to conduct a review of the AI market.
The first step is for the tech industry itself to agree and implement some common principles on transparency, accountability and fairness. Companies should never try to pass off chatbots as humans, for example. The next step is for existing regulators, in areas such as employment law, financial and consumer markets, competition policy, data protection, privacy and human rights, to modify existing rules to take account of the specific risks raised by AI. Relying on industry self-regulation alone, however, carries a risk of industrial capture.
Beyond that, two overarching regulatory regimes should be considered for AI, even if neither alone is adequate for the size of the challenge. One regime, based on the precautionary principle, would mean that algorithms used in a few critical, life-and-death areas, such as healthcare, public health, the judicial system and the military, would need pre-approval before use.
The second, more flexible model could be based on “governance by accident”, as in the airline industry. Alarming though this sounds, it has worked extremely effectively in raising air safety standards over the past few decades. International aviation authorities have the power to mandate changes for all aircraft manufacturers and airlines once a fault is detected. Something similar could be used when harmful flaws are found in consumer-facing AI systems, such as self-driving cars.
Several leading industry researchers have called for a moratorium on developing leading-edge generative AI models. A more practical course would be for companies to agree the rules of the road now, working with governments and civil rights organisations to help write them. After all, cars can drive faster around corners when fitted with effective brakes.