
Starting next week, Google will allow children under 13 to use its Gemini chatbot. However, this will only be available to kids with accounts managed by parents through Google’s Family Link service.
Google says it has added special safety features to protect younger users, and the data from these accounts won’t be used to train the AI. Still, there are concerns about AI chatbots giving wrong or harmful information.

Different Rules for Kids Using AI
Google’s decision to allow kids under 13 to use Gemini highlights how much company policies differ when it comes to kids using AI tools. Google requires parents to manage their child’s account through Family Link, while Microsoft restricts its Copilot AI to users 18 and older unless they have parental consent.
Laws like COPPA (the Children’s Online Privacy Protection Act) regulate how companies can collect data from kids under 13, but how those rules apply to AI chatbots is still unclear. UNESCO has also called for stricter rules around AI in schools to protect kids’ data and privacy.
Risks of Kids Using AI Chatbots
While Google is opening up Gemini to younger users, research has shown there are risks when kids interact with AI chatbots. A study from the University of Cambridge found that children may think AI chatbots are real people and may get upset when the chatbot can’t respond emotionally as they expect. Some chatbots have even given harmful or inappropriate advice to kids, which has led to calls for companies to be more responsible.
Experts also worry that AI chatbots might be addictive and could affect children’s mental health. Because kids may struggle to tell the difference between real emotions and AI-generated responses, heavy use could harm their social development.
Challenges in Monitoring AI Use
As Google allows younger users to access Gemini, tools that help parents monitor their children’s online activity are struggling to keep up. Most existing monitoring tools were designed for social media and messaging apps, and they focus on things like screen time, content filtering, and location tracking. These tools don’t work as well for AI chatbots, which can create new content every time they talk to a user.
Parents will largely have to rely on Google’s built-in safety features to control what their kids see and do in the Gemini chatbot. However, Google hasn’t yet fully explained how these safety features will work, leaving parents with limited ways to monitor their kids’ interactions with the AI.