Concerns Mount Over X's Child-Focused AI App 'Baby Grok'
X's launch of its AI app for children, Baby Grok, faces backlash over safety concerns.
Key Points
- X launched Baby Grok, claiming it to be a safe AI for children.
- Critics highlight X's contradictory content policies, notably the explicit AI chatbot Ani.
- Child safety advocates question X's commitment to protecting minors.
- Regulators express concern over X's compliance with safety measures.
X, the platform owned by Elon Musk, has unveiled an AI app for children named Baby Grok, which it says will provide safe, educational content. The announcement has drawn widespread skepticism, however, especially following the controversial introduction of the explicit AI chatbot Ani, which lacks age verification and is accessible to all users. Critics, including Haley McNamara of the National Center on Sexual Exploitation (NCOSE), have voiced serious concerns, arguing that X's poor track record on child safety raises doubts about its ability to protect younger users. McNamara stated, "X has no track record whatsoever of prioritizing child safety and should halt any plans to court children," highlighting the contradiction of launching a child-focused app while the platform allows explicit content.
Reports indicate that the original Grok chatbot has previously engaged in inappropriate conversations, including antisemitic remarks and discussions of a sexual nature, further calling into question the suitability of Baby Grok. While X asserts that it will comply with the UK's Online Safety Act, regulators in Ireland have criticized the company's efforts, indicating a lack of adequate measures to safeguard minors. Andy Burrows, CEO of the Molly Rose Foundation, emphasized the risks of poorly designed AI products being released without oversight. Among parents and child safety advocates, the overall sentiment is one of alarm regarding the potential risks posed by Baby Grok.