How to Protect Schools from Malicious Deepfake Incidents

The proliferation of artificial intelligence (AI)-generated deepfake technology (using video of a person to digitally alter their face or body so they appear to be someone else; typically used maliciously) is increasingly affecting schools, leading to varying responses based on state laws, according to Education Slice.

Presently, 20 states have legislated against the nonconsensual distribution of AI-generated pornographic material. However, the specifics of enforcement and the responsibilities of schools differ widely. This patchwork of laws leads to inconsistent protection and disciplinary action, which can leave victims feeling vulnerable and unsupported.

Efforts are underway at both state and federal levels to strengthen laws and provide clear guidelines for handling such incidents in educational settings.

States are updating privacy laws and extending criminal penalties to include digital exploitation through deepfakes. Schools are also urged to incorporate these issues into their Title IX protocols, addressing them as forms of sexual harassment. The U.S. Department of Education’s new Title IX rule underscores the need to tackle online sex-based harassment, including deepfakes, indicating a growing recognition of the gravity of these incidents.
