
Online safeguarding has become far more complex for schools, particularly as the intersections between education and digital environments have continued to multiply. While the internet can be leveraged to the great benefit of students, their digital lives (outside online educational environments) can, and often do, spill over into their offline lives. Exposure to harmful content, contact with others online and the consequences of students’ own behaviour can all be harmful to children and young people. Increasingly, the algorithms and predictive analytics developed by social media apps compound these problems by, for example, recommending that adult strangers connect with children whose content they have viewed, or pushing pro-anorexia content to young people with eating disorders. EdTech apps that use trackers can contribute to these problems by sharing or selling data about the children who use them. The management of such risks and their impacts has come to pose a significant challenge to schools globally.

School staff, including teachers and administrators, are having to support their students as they navigate the world of social media, which can involve cyberbullying and sexting. Social norms around such issues are evolving as they become more prevalent online: these behaviours are becoming increasingly normalised, to the detriment of children and young people everywhere. Students and their teachers alike are confronting the complexities of an ever more digitised social life, and school communities are having to handle these evolving issues without any common understanding of best practice, or any standard approach, for the online harms that children face.
Existing approaches tend to focus on parental or school oversight via parental controls that are easily circumvented and involve minimal active engagement: the ability to give or withdraw consent to children’s access to certain platforms is neither readily offered nor easily exercised when it is made available. Age-gating as it currently exists is similarly easy for children and young people to bypass, with self-declaration of age being both the most widely used method and the simplest to cheat.

If platforms, children and responsible adults (parents and teachers alike) were able to participate in a common approach, whereby, first, platforms could be made aware of users’ age bands in a privacy-preserving manner and, second, parents and educators could give or deny consent to their children’s access to certain features (and/or the processing of their data), many of the problems posed by content, contact and conduct online could be avoided.
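To make the idea concrete, the sketch below is a minimal illustration of that logic, not TrustElevate’s actual API: the age bands, the `ConsentRecord` structure and the `feature_allowed` function are all assumptions introduced for this example. It shows how a platform might gate a feature using a verified age band plus a parental or educator consent record, without ever handling the child’s exact date of birth.

```python
from dataclasses import dataclass

# Hypothetical age bands that a verification service might assert,
# rather than exposing an exact date of birth.
AGE_BANDS = ("under_13", "13_to_15", "16_to_17", "18_plus")

@dataclass
class ConsentRecord:
    """Consent granted or denied by a parent or educator for one feature."""
    feature: str   # e.g. "direct_messaging"
    granted: bool

def feature_allowed(age_band: str, feature: str,
                    min_band: str, consents: list[ConsentRecord]) -> bool:
    """Allow a feature only if the verified age band meets the platform's
    minimum AND, for under-18s in this illustrative model, a responsible
    adult has granted consent for that specific feature."""
    if AGE_BANDS.index(age_band) < AGE_BANDS.index(min_band):
        return False
    if age_band == "18_plus":
        return True
    return any(c.feature == feature and c.granted for c in consents)

# Example: a 13-15 year old whose parent has approved direct messaging.
consents = [ConsentRecord(feature="direct_messaging", granted=True)]
print(feature_allowed("13_to_15", "direct_messaging", "13_to_15", consents))  # True
print(feature_allowed("under_13", "direct_messaging", "13_to_15", consents))  # False
```

The point of the sketch is only that the platform sees a coarse age band and a yes/no consent signal; how those signals are verified and transmitted in a privacy-preserving way is the substance of the approach discussed here.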

Indeed, this was the central proposition of a recent Government-run programme of work, the Verification of Children Online (VoCO) project. It took as its guiding hypothesis that, “If platforms could verify which of their users were children, then as a society we would be better empowered to protect children from harm as they grow up online…” The VoCO programme involved a series of technical trials with TrustElevate, BT, the Football Association and Trackd, a music app. The project determined that age assurance, provided by TrustElevate, such that platforms know the ages of their users, was desirable, feasible and proportionate. It ran in 2020, a year in which a major wave of activity in the online harms regulatory and policy spheres began.

Read the full article here: Schools and Online Safeguarding: Tackling Online Harms in the New Era of Digital Regulation, by the TrustElevate Team on Medium (April 2021).