Writing in Fast Company, Jack Lynch, CEO of HMH, states that empathy, safety, and privacy should come first when creating policy for generative AI (GenAI) technology in the classroom.
“Generative AI (GenAI) is best recognized as the chatbots that have exploded into popular awareness. I believe we should publicly state principles to keep students and teachers at the center of GenAI development.
“AI will earn the trust of schools, teachers, families, and education leaders only if it’s used with wisdom, guidelines, and safeguards that ensure it genuinely supports teachers, benefits students, and never compromises children’s privacy or safety. That’s why we’re outlining five recommended principles we believe should guide the responsible adoption of AI technologies in K-12 schools.
1) Keep teachers at the center
“We believe in a ‘high-tech, high-touch’ approach in which technology should support, not mediate, the teacher-student connection. Teachers are closest to the educational experience, and their voices must also inform the development of new technologies intended to serve them.
“Teachers will need support and professional development to build ‘artificial intelligence literacy’ to effectively leverage the technology in the classroom. Most educators (76%) identify a need for education on ethical AI usage and its integration into the classroom.
2) Uphold student privacy, safety, and well-being
“Protecting student privacy and data is non-negotiable. Existing federal laws provide strong protections that must apply to the new uses that may be associated with GenAI. Many state laws also protect children’s and students’ privacy, and third-party organizations must uphold and promote data privacy and student safety.
3) Ensure responsible and ethical use
“Families need to understand how GenAI is being used in schools—without being overwhelmed with information that’s too detailed or technical to understand. Federal and state policymakers should work with AI experts to determine appropriate disclosure requirements and provide guidance for how districts and schools can access the information they need about GenAI systems they choose to use.
4) Encourage continuous evaluation and improvement
“Systemic integration of AI into education technology and practice requires analysis of which strategies work, for whom, and why. Ongoing evaluation and improvement will ensure the technologies genuinely support teaching and learning. These trials must include guardrails to protect student privacy, safety, and well-being.
5) Prioritize accessibility and inclusivity
“As classrooms become more diverse in demographics and learning needs, GenAI tools can equip teachers with personalized approaches, recommendations and supplemental materials to meet each student’s needs. As new bias, equity, and accessibility considerations emerge with the use of GenAI, regulations need to evolve.
“Our schools face the task of defining guardrails for a field that’s evolving with astonishing speed. Policymakers, and companies like ours, must put empathy, safety, and privacy at the forefront to maximize the benefit these technologies will surely bring to teaching and learning,” Lynch concludes.
Fast Company