Among ‘safety incidents’ compiled by the OECD was Google’s Gemini portraying German second world war soldiers as people of colour.

The UK needs a system for recording misuse and malfunctions in artificial intelligence, or ministers risk being unaware of alarming incidents involving the technology, according to a report.
The report cites 10,000 AI “safety incidents” recorded by news outlets since 2014, listed in a database compiled by the Organisation for Economic Co-operation and Development, an international research body. The OECD’s definition of a harmful AI incident ranges from physical harm to economic, reputational and psychological harms.

“Incident reporting has played a transformative role in mitigating and managing risks in safety-critical industries such as aviation and medicine.”
Some models may reveal harms only once they are fully released, even after being tested by the UK’s AI Safety Institute. Incident reporting would at least allow the government to see how well the country’s regulatory setup is addressing those risks.