Protecting Children’s Digital Safety and Privacy in Smart Device Interactions

This project will develop a next-generation content filtering system specifically designed for children’s smart device interactions, integrating state-of-the-art federated learning, Large Language Models (LLMs), blockchain, and Explainable Artificial Intelligence (XAI) to enhance privacy, security, and interpretability.

Award
MRes | MPhil | PhD | MSc by Research
Start Date
Usually February and October, at the discretion of the individual School

Summary

The widespread adoption of smart devices among children has created significant challenges for digital safety and privacy. As frequent users of these devices, children are increasingly vulnerable to harmful online content, including age-inappropriate material such as pornography, violence, and other disturbing content. Current content filtering tools such as Net Nanny are effective at blocking harmful websites and applications, but they rely predominantly on centralized systems. These systems often cannot monitor children’s activities and interactions within legitimate applications, such as social media platforms like Facebook and streaming services like YouTube, raising serious concerns about their effectiveness in detecting harmful interactions. Furthermore, centralized approaches typically involve collecting and processing children’s data on central servers, creating risks of data misuse or breaches.

With the introduction of the Online Safety Act, public and regulatory emphasis on protecting children’s digital privacy is growing. The Act requires online service providers to implement protections against harmful content, enforce age verification, and conduct regular risk assessments. This legislation underscores the urgent need for a decentralized, privacy-preserving solution that effectively filters harmful content while keeping user data secure and private.

To meet this need, the project will develop a next-generation content filtering system designed specifically for children’s smart device interactions, integrating state-of-the-art federated learning, Large Language Models (LLMs), blockchain, and Explainable Artificial Intelligence (XAI) to enhance privacy, security, and interpretability. By addressing privacy and content safety together, the research seeks to balance robust content moderation with the preservation of user data privacy, offering a solution that is effective, ethical, and secure.

Objectives

  • Develop Federated Learning Techniques: Create distributed content filtering methods across devices, minimizing the need for centralized data collection and mitigating the privacy risks associated with centralized content analysis (see the first sketch after this list).
  • Implement LLM-Based Models: Develop a model based on LLMs to understand and filter textual content effectively, enabling the system to accurately identify inappropriate content across linguistic variations and contexts and thereby enhancing the relevance of content filtering (see the second sketch below).
  • Utilize Blockchain for Model Weight Sharing: Establish a community-driven, blockchain-based federated learning model to share model weights among participating devices, ensuring that updates are decentralized and tamper-proof and enhancing both the security and integrity of the model (see the third sketch below).
  • Incorporate Explainable AI Techniques: Integrate XAI techniques to keep the filtering model’s decision-making interpretable, allowing parents, guardians, and stakeholders to understand why specific content is flagged and fostering trust in the system (see the fourth sketch below).
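
To make the federated learning objective concrete, the first sketch below shows federated averaging (FedAvg) in Python. Everything here is an illustrative assumption rather than project code: a tiny logistic-regression filter stands in for the real model, and three simulated clients stand in for children's devices. The point is that only model weights leave each device, never raw interaction data.

```python
# Minimal federated averaging (FedAvg) sketch. All names, shapes, and data
# are illustrative assumptions: each simulated device trains a tiny
# logistic-regression filter locally, and only weights, never raw
# interaction data, are shared for aggregation.
import numpy as np

def local_update(weights, X, y, lr=0.5, steps=5):
    """A few steps of local logistic-regression training on one device."""
    for _ in range(steps):
        preds = 1.0 / (1.0 + np.exp(-X @ weights))   # sigmoid
        grad = X.T @ (preds - y) / len(y)            # log-loss gradient
        weights = weights - lr * grad
    return weights

def fed_avg(client_weights, client_sizes):
    """Aggregate local weights, weighted by each client's data size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

rng = np.random.default_rng(0)
true_w = rng.normal(size=8)            # hidden "harmfulness" signal
global_w = np.zeros(8)
clients = []
for _ in range(3):                     # three simulated devices
    X = rng.normal(size=(50, 8))
    y = (X @ true_w > 0).astype(float) # private labels stay on-device
    clients.append((X, y))

for _ in range(20):                    # communication rounds
    local = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(local, [len(y) for _, y in clients])

print(np.round(global_w, 2))
```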
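
For the LLM-based filtering objective, the second sketch uses the Hugging Face transformers text-classification pipeline. The model named here, unitary/toxic-bert, is one publicly available toxicity classifier used purely as a stand-in, and the 0.5 threshold is an arbitrary assumption; the project itself would train or fine-tune a dedicated child-safety model.

```python
# Hedged sketch of LLM-based text filtering via the `transformers` pipeline.
# `unitary/toxic-bert` is a stand-in public toxicity classifier, not the
# project's model; loading it requires an internet connection.
from transformers import pipeline

classifier = pipeline("text-classification", model="unitary/toxic-bert")

def is_harmful(message, threshold=0.5):
    """Flag a message when the classifier's top harm score exceeds threshold."""
    result = classifier(message)[0]  # e.g. {"label": "toxic", "score": 0.98}
    # toxic-bert's labels are all harm categories, so a high top score
    # indicates harmful content regardless of which label wins.
    return result["score"] >= threshold

for msg in ["Want to play a game after school?", "I will hurt you"]:
    print(msg, "->", "blocked" if is_harmful(msg) else "allowed")
```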
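
For the blockchain objective, the third sketch is a toy hash chain showing how recorded weight updates become tamper-evident. It deliberately omits consensus, networking, and any real ledger; it is a minimal illustration of the integrity property, not the project design.

```python
# Toy hash chain for model-weight updates: an assumption-laden sketch of
# tamper-evidence only (no consensus, networking, or real blockchain).
import hashlib, json, time

def block_hash(block):
    """Hash every field except the stored hash itself."""
    body = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, weight_update, device_id):
    """Record one device's weight update, linked to the previous block."""
    block = {
        "device_id": device_id,
        "prev_hash": prev_hash,
        "timestamp": time.time(),
        "weight_update": weight_update,
    }
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    """Tampering with any recorded update invalidates the chain."""
    return all(
        cur["prev_hash"] == prev["hash"] and cur["hash"] == block_hash(cur)
        for prev, cur in zip(chain, chain[1:])
    )

chain = [make_block("0" * 64, [0.0, 0.0], "genesis")]
for device in ("device-a", "device-b"):
    chain.append(make_block(chain[-1]["hash"], [0.1, -0.2], device))

print(verify_chain(chain))          # True
chain[1]["weight_update"][0] = 9.9  # tamper with an earlier update
print(verify_chain(chain))          # False
```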
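
For the XAI objective, the fourth sketch shows minimal token-level attribution: with a linear bag-of-words filter, each token's contribution to a "flag" decision is simply its learned weight, which can be surfaced to a parent as an explanation. The vocabulary and weights are invented for illustration; a real system would explain a far richer model, for example with SHAP-style attributions.

```python
# Minimal token-attribution sketch for explainable filtering. The vocabulary,
# weights, and bias below are invented purely for illustration.
weights = {"free": 0.2, "win": 0.4, "violence": 1.5, "hello": -0.3, "gore": 1.2}
bias = -1.0

def explain(message):
    """Return the flag decision plus each token's contribution to it."""
    tokens = message.lower().split()
    contributions = {t: weights.get(t, 0.0) for t in tokens}
    score = bias + sum(contributions.values())
    # Rank tokens by how strongly each pushed the message toward flagging.
    ranked = sorted(contributions.items(), key=lambda kv: -kv[1])
    return score > 0, ranked

flagged, why = explain("win free gore violence")
print(flagged)  # True
print(why)      # [('violence', 1.5), ('gore', 1.2), ('win', 0.4), ('free', 0.2)]
```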

Potential Candidate Qualifications

  • Strong academic background in cybersecurity, computer science, or a related discipline, with at least a 2:1 BSc or equivalent.
  • Proficiency in programming languages such as Python and Java, with hands-on experience in developing machine learning algorithms.
  • Demonstrated interest in research and a commitment to advancing the field of child safety in digital environments.

Supervisors

Discuss this further with a potential supervisor for this research degree.

Research Themes

Find other Research Degrees in the same theme.

About our Research Degrees

We welcome specific ideas from applicants for a potential research degree topic. The University also offers a number of potential self-funded topics in each School around which you could develop your research project.

It is not unusual for a proposed research degree topic to evolve as you progress through the application stage and even your future studies. This ensures the best match between your academic and professional experience and the expertise of your supervisory team.

Occasionally, the University also advertises funded studentships for very specific topics or subjects.

If you need any further support prior to applying, please email researchdegrees@rgu.ac.uk

 

Ready to start this Research Degree?

Find out about our entry requirements, application dates and how to apply.

Apply for a Research Degree

Any questions?

Get in touch with our team and we'll do our best to help.

Get in touch
