Artificial Intelligence (AI), Online Safety & Safeguarding
Ridgeway Secondary School – Keeping Children Safe in a Digital World
At Ridgeway Secondary School, safeguarding is at the heart of everything we do. As part of our whole-school approach to safeguarding, we recognise that digital technology — and particularly the rapid growth of artificial intelligence (AI) — presents both powerful opportunities and serious risks for children and young people.
In line with Keeping Children Safe in Education (KCSIE 2025) and Department for Education guidance, we are committed to ensuring that AI and digital technologies are used responsibly, safely, and productively, while protecting our pupils from harm.
A Whole-School Approach to Online Safety and AI
Safeguarding at Ridgeway is not just about systems — it is about culture, education, and relationships. Our approach includes:
- Strong leadership and governance oversight
- Clear policies and procedures
- Staff training and professional development
- Curriculum education
- Student voice and pupil engagement
- Parental partnership and communication
- Robust filtering and monitoring systems
- Proactive risk assessment and review
This reflects the DfE’s expectation that safeguarding, including online safety and AI, must underpin all aspects of school life.
The Positive Use of AI in Education
AI can be a powerful tool when used responsibly. At Ridgeway, we recognise its educational value and its role in preparing young people for the future world of work and learning.
Positive and productive uses of AI include:
- Supporting learning and revision
- Personalised learning support
- Assistive technologies for SEND learners
- Research and information retrieval
- Creative projects and digital literacy
- Supporting safeguarding through content detection and filtering technologies
- Online safety education tools and platforms
AI is also used nationally to:
- Detect and remove harmful online content
- Identify child exploitation material
- Prevent cyberbullying
- Support online moderation
- Promote safer online behaviours
We actively teach students to use AI ethically, critically and responsibly: not as a shortcut, but as a learning support tool.
The Safeguarding Risks of AI
Alongside its benefits, AI also presents significant safeguarding risks, particularly for children and young people.
These risks fall into the four recognised categories of online risk (content, contact, conduct and commerce):
🔴 Content Risks
- Exposure to harmful, violent, extremist or sexual material
- AI-generated misinformation and fake news
- Deepfakes and manipulated images/videos
- Self-harm and suicide content
🔴 Contact Risks
- AI chatbots and fake identities used for grooming
- Sexual exploitation and extortion (sextortion)
- Manipulative AI-driven interactions
- Predatory behaviour disguised as AI support tools
🔴 Conduct Risks
- Creating or sharing harmful AI-generated images
- Online bullying and harassment
- Coercion and exploitation
- Loss of control over personal content
🔴 Commerce Risks
- Online scams and fraud
- AI-generated phishing
- Financial exploitation
- Data harvesting and privacy breaches
National evidence shows:
- A significant rise in misinformation and disinformation
- Rapid growth in online grooming and sexual exploitation
- A major increase in sextortion cases, particularly affecting boys
- Growing concerns around AI-generated child sexual abuse material (CSAM)
- Increasing deepfake exposure among young people
- Children accessing harmful AI tools with little or no built-in protection
What the Department for Education Expects Schools to Do
In line with DfE guidance, Ridgeway Secondary School ensures:
- AI is included in our Online Safety Policy
- Regular risk assessments of digital tools
- Annual online safety and filtering reviews
- Strong governance oversight
- Staff training on AI and digital safeguarding
- Curriculum education on AI literacy
- Pupil voice activities on digital experiences
- Parental engagement and education
- Clear safeguarding response pathways
- Filtering and monitoring systems that are:
  - Proportionate
  - Effective
  - Regularly reviewed
  - Actively monitored
We do not block learning — we manage risk responsibly.
Our Message to Students
At Ridgeway, students are taught that:
- You are not to blame if something goes wrong online
- You will never get into trouble for reporting concerns
- You deserve to feel safe, respected, and supported
- What happens online is real life
- AI is a tool — not a replacement for thinking
- Not everything online is true
- You should never feel pressured to share images, information or content
- Help is always available
We work hard to create a culture where students feel safe to speak up, not silenced by fear.
Our Message to Parents and Carers
We know that parenting in a digital world is complex and challenging. You are not expected to be technology experts.
We encourage parents to:
- Talk openly with children about AI and online safety
- Ask what apps and platforms they use
- Set healthy digital boundaries
- Use parental controls and monitoring tools
- Encourage critical thinking about online content
- Reinforce that mistakes can be fixed
- Report concerns early
- Work in partnership with the school
Safeguarding works best when school and home work together.
Trusted Support & Guidance
For Parents and Carers
- CEOP – Child Exploitation and Online Protection
  https://www.ceopeducation.co.uk/parents
- Internet Matters
  https://www.internetmatters.org
- NSPCC
  https://www.nspcc.org.uk
- IWF (Internet Watch Foundation)
  https://www.iwf.org.uk
- Think Before You Share
  https://www.thinkbeforeyoushare.org
For Students
- CEOP Report
- Childline – 0800 1111
- UK Safer Internet Centre
- Google Interland (online safety education game)
- Report Remove service
Reporting a Concern
If you have a concern about:
- AI use
- Online behaviour
- Online content
- Grooming
- Exploitation
- Harmful material
- Sextortion
- Cyberbullying
- Digital safety
Please contact the school immediately and speak to a member of the Safeguarding Team.