A report published by the Center for the Governance of AI at Oxford University’s Future of Humanity Institute finds that Americans have mixed views on the development of artificial intelligence (AI) and regard surveillance and privacy as their primary concerns about the technology.
The survey, carried out by Baobao Zhang and Allan Dafoe of the Future of Humanity Institute (FHI), is one of the most comprehensive surveys to date of the American public’s opinions on AI, with 2,000 respondents recruited via the survey firm YouGov.
Baobao Zhang said: ‘The impact of AI technology on society is likely to be huge. While the technology industry and governments currently dominate policy conversations on AI, we expect the public to become more influential over time. Understanding the public’s views on AI will, therefore, be vital to its future governance.’
Key findings from the report include:
· Americans express mixed support for the development of AI. After reading a short explanation, a substantial minority (41%) somewhat support or strongly support the development of AI, while a smaller minority (22%) somewhat or strongly oppose it.
· Among 13 AI governance challenges, Americans prioritise preventing AI-assisted surveillance from violating privacy and civil liberties; preventing AI from being used to spread fake and harmful content online; preventing AI cyber attacks; and protecting data privacy.
· Americans have discernibly different levels of trust in different organisations to develop AI for the best interests of the public. The most trusted are university researchers and the US military; the least trusted is Facebook.
· The median respondent predicts a 54% chance that high-level machine intelligence will be developed by 2028. We define high-level machine intelligence as the point when machines can perform almost all tasks that are economically relevant today better, at each task, than the median human can today.
Allan Dafoe said: ‘Our results show that the public regards as important the whole space of AI governance issues, including privacy, fairness, autonomous weapons, unemployment, and other extreme risks that may arise from advanced AI. Further, the public’s support for the development of AI cannot be taken for granted. There is no organisation that is highly trusted to develop AI in the public interest, though some are trusted much more than others. In order to ensure that the substantial benefits from AI are realised and broadly distributed, it is important that we work to understand and address these concerns.’
Thanks to funding from the Ethics and Governance of Artificial Intelligence Fund, the FHI’s Center for the Governance of AI plans to release regular similar reports based on survey research in the US, China, and the European Union.