The UK Safer Internet Centre (UKSIC) has shed light on a distressing phenomenon: children using artificial intelligence (AI) image generators to create indecent images of their peers. The trend has prompted calls for immediate action from the charity, which stresses the urgency of addressing the issue before it escalates.
The UKSIC reports that it has received a small number of incident reports from schools, underscoring the need for collaborative efforts between teachers and parents to tackle this unsettling behavior. While children may engage in such activities out of curiosity rather than malicious intent, the charity stresses that, under UK law, creating, possessing, or distributing any form of indecent imagery of children, whether real or AI-generated, is unequivocally illegal.
The potential consequences of this AI-driven content creation are far-reaching. Children might inadvertently circulate these materials online, unaware of the legal repercussions. Furthermore, the images could be exploited for malicious purposes, including blackmail.
A recent study by RM Technology, involving 1,000 pupils, revealed that nearly one-third of students are using AI to explore inappropriate content online. Tasha Gibson, Online Safety Manager at the firm, highlights the commonplace use of AI among students, revealing a knowledge gap where students often surpass their teachers in understanding AI intricacies.
As AI’s popularity grows, closing this knowledge gap becomes imperative to ensure the responsible and secure use of technology by young individuals. Teachers appear divided on whether parents, schools, or governments should bear the responsibility of educating children about the potential harms associated with such materials.
The UKSIC advocates a unified approach, urging schools to collaborate with parents to address the evolving challenges posed by AI technologies. Director David Wright emphasizes the need for immediate action to prevent the issue from overwhelming schools and escalating further, and stresses the importance of anticipating harmful behaviors as AI generators become more accessible.
Victoria Green, CEO of the Marie Collins Foundation, a charity supporting children impacted by sexual abuse, underscores the lifelong damage that could result from such activities. Even if these images were not created with malicious intent, once shared, they could fall into the wrong hands, potentially ending up on dedicated abuse sites.
The risks associated with AI-generated content were starkly demonstrated in September by an app capable of creating fake nude images. The app was used to generate explicit images of young girls in Spain, highlighting the potential dangers posed by advances in generative AI.
Javvad Malik, a cybersecurity expert at IT security firm KnowBe4, notes the increasing difficulty of distinguishing between real and AI-generated images. This trend, coupled with the rising popularity of “declothing” apps, underscores the urgent need for proactive measures to protect children from the potential harms facilitated by AI technologies.