Existential Threat Risk from AI - Prediction

Where we got the data from:

For Stanford's 2023 Artificial Intelligence Index Report (released in April 2023), a team of researchers conducted a comprehensive survey within the natural language processing (NLP) community. The survey covered a range of topics, including the state of artificial general intelligence (AGI), advancements in NLP, and ethical considerations in these fields.

NLP, a branch of artificial intelligence, focuses on enabling computers to comprehend written and spoken language in a manner akin to human understanding.

The survey received responses from 480 individuals, 68% of whom had authored at least two papers for the Association for Computational Linguistics (ACL) between 2019 and 2022. As a result, the poll offers one of the most comprehensive views of how AI experts regard the development of artificial intelligence.

The Report states:

"AI decisions could cause nuclear-level catastrophe: 36% of respondents agree"

How we display the data

We assume a simplified system of three risk levels: low, moderate, and high, each allocated an equal 33.3% share. Since 36% of respondents agree with the above statement regarding an existential threat posed by AI, the value falls into the middle band (33.3% to 66.7%), so we display a "moderate" risk level.
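To make the mapping concrete, here is a minimal sketch in Python, assuming band boundaries at 33.3% and 66.7% derived from the three equal shares; the function name and structure are our own illustration, not part of the report or the dashboard.

```python
# Illustrative sketch of the risk-band mapping described above.
# Band boundaries (33.3% and 66.7%) follow from splitting 100% into three equal shares;
# the function itself is hypothetical and not taken from the source.

def risk_level(agree_pct: float) -> str:
    """Map the share of respondents agreeing (0-100) to a risk label."""
    if agree_pct < 100 / 3:       # below ~33.3%
        return "low"
    elif agree_pct < 200 / 3:     # ~33.3% up to ~66.7%
        return "moderate"
    else:                         # ~66.7% and above
        return "high"

print(risk_level(36))  # -> "moderate"
```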

Change log

We added this source on May 31, 2023.

Considerations

Please note that our dashboard displays findings, predictions and data from different sources, which may at times overlap and contradict each other. Each visualized data set is therefore intended to be viewed in isolation. When citing the data in your work, you may link to our website but you must attribute the data to its original source, outlined in the box below.

Source Details
Source: Stanford University HAI
Publication Date: April 1, 2023