10 Apr 2026

Should parents worry about their children using AI?

Artificial intelligence is no longer just a part of our children’s future – it’s shaping their lives now.

But should parents be worried about it?

A report released last month by EU Kids Online, a research network aiming to improve knowledge of children’s online opportunities, found that around seven in 10 European children, including those in the UK, report using some form of generative AI – that is, AI which learns the patterns and structures of its training data and uses them to generate new content.

But children are often not making a conscious decision to use AI, as it’s increasingly integrated into platforms they already use, such as search engines, messaging services and social media.

Professor Sonia Livingstone, founder of EU Kids Online and director of the Digital Futures for Children centre at the London School of Economics and Political Science, warns: “Parents need to attend to how their children are using generative AI for several reasons, and there are definitely grounds for concern.

“AI is everywhere for children. Most importantly, parents need to understand how their children are using it so they’re in the know, so they can anticipate problems, and so their child will see value in sharing the experience with them.”

Dr Mhairi Aitken, co-founder and director of Our AI Collective CIC, a not-for-profit company which wants AI to be shaped by people and not profit, says children of all ages interact with AI daily: infants play with smart toys, young children watch video-sharing platforms that recommend content through AI systems tracking their viewing, and teens and pre-teens are served posts that AI models have calculated are most likely to keep them scrolling on social media.

In addition, there are the teenagers turning to AI companions for emotional support and connection.

Aitken, who used to work at The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, says there are “really exciting possibilities” in the ways generative AI could support accessible and adaptive learning for children, and points out that kids already use AI creatively to support homework and revision.

But there are certain aspects of using AI that parents should be aware of, say the experts.

Check for accuracy

Aitken, who’s also a visiting senior lecturer at the Digital Environment Research Institute, Queen Mary University of London, says it’s important to check information supplied by AI for accuracy, as “it can be a helpful tool, but can’t be entirely relied upon.

“If your child is using generative AI to search for information or to get ideas for schoolwork, there’s a high chance this will include inaccurate or false information,” she warns.

“It could provide a starting point which you and your child can fact-check and follow up on sources. Encouraging your child to look beyond AI overviews to consider the range of possible sources and potentially conflicting ideas and viewpoints will help them develop critical thinking skills and an awareness of alternative perspectives.”

Be aware of what AI knows about your child

AI collects massive amounts of data from online behaviour – and it’s not just from older children who go online. A University of Basel study found several smart toys, which offer interactive play through software and internet access, “raise privacy concerns”, and some “collect extensive behavioural data about children”.

Livingstone explains: “Unless they only use it through school, AI is hoovering up children’s data, building a personal profile of the child, and using this to target advertising, content and even advice.”

Concern about AI companions


Livingstone says research shows a growing number of children are making friends with AI, feeling ‘seen’ by AI, and even preferring AI to human contact.

She describes this as “very worrying”, and says: “Since the AI is programmed to flatter the child, hold their attention, and offer them advice whether good, bad or frankly dangerous, it can be scary.”

Aitken says these AI companions, interactive chatbots that users can personalise to create an AI friend, boyfriend or girlfriend to have conversations with, are “an area of big concern”, as users can develop trust in the companion, and “what might start as occasional or experimental use can easily turn into dependency”.

She says AI companions’ conversations don’t challenge the user’s beliefs, even when they’re dangerous, discriminatory or harmful, and points out: “This can become particularly harmful when a user discusses their mental health or negative thoughts in ways that ought to prompt a redirection to professional help, but rather the AI reinforces the user’s perspectives.”

Sexualised images


As AI image generators and tools that can alter photos have become readily available, there’s been an alarming rise in children’s photos being manipulated into sexually explicit deepfakes without their consent, often by classmates or peers, says Aitken.

She explains that the technology to do this is readily available and easy to use, including ‘nudification’ apps, and it’s possible to alter photos to remove clothes or put someone in a sexual pose, using general purpose generative AI models.

“Girls are significantly more likely to be the target of this behaviour, and the impact can be devastating,” she warns.

“Don’t avoid talking to your child about difficult topics like sexualised images, even if you think, or hope, it’s not relevant to your child. Speak to your child early about the ways images can be altered using AI, about online bullying and sextortion, and make sure they know that if this ever does happen to them they can talk to you, or another trusted adult, about it.

“Be clear that if anything like this does happen they shouldn’t feel ashamed or guilty – it is absolutely not their fault. Let them know about sources of support they can go to and encourage them to report it.”

Appropriate safeguards

OpenAI, which makes the generative AI platform ChatGPT, points out that ChatGPT’s minimum age is 13, and users under 13 aren’t allowed to create accounts.

“Our age prediction system means if you enter your age as under 18 at sign-up, or our system estimates you are ‘likely under 18’, we apply teen safeguards by default. If we are not confident about your age, we still default to a safer experience,” it says.

The company says parental controls give families tools to further customise their child’s settings, letting parents link to teen accounts for a more age-appropriate experience.

“Our teen principles for users under 18 years old are anchored in four guiding commitments,” explains OpenAI. “We train our models to apply appropriate safeguards for teens, including encouraging teens to seek real-world support from a parent or guardian, or professional care when appropriate.”

The company stresses it doesn’t actively seek out personal information to train its models, and doesn’t use public information on the internet to build profiles about people, advertise to or target them, or to sell their data.
