This project aims to create a more focused tracking mechanism for a specific AI risk sector: misinformation and disinformation created or spread using artificial intelligence tools. Many existing AI incident and policy trackers are broad and indiscriminate, which can overwhelm lawmakers and the public. A tracker focused on a narrower segment of AI policy makes the information more digestible and actionable.
An AI-related misinformation or disinformation incident refers to any event or series of events in which artificial intelligence tools are used to create, enhance, or distribute false, misleading, or defamatory content. This includes, but is not limited to, AI-generated text, deepfake videos, manipulated images, and synthetic audio that deceive or misinform the public.
An AI misinformation or disinformation policy refers to any policy at the local, state, or federal level that contains mechanisms to curb or prevent the harms of AI-powered misinformation or disinformation. Combating AI-powered misinformation need not be the policy's sole stated goal.