Australia's internet regulator has issued a stern warning: if AI services fail to implement age verification within the designated timeframe, authorities may compel search engines and app stores to block access to them. Earlier investigations found that more than half of the AI platforms under scrutiny had not disclosed their compliance plans, making this one of Australia's most stringent regulatory moves against AI firms.

Several AI companies are currently facing lawsuits alleging that they failed to prevent, or even contributed to, incidents of self-harm or violent behavior among users. Research suggests that chat platforms may affect teenagers' mental health more severely than social media does. Australia has barred teenagers from social media since December of last year and now intends to enact similar rules for the AI sector. The new regulations take effect on March 9, with non-compliant entities facing substantial fines; the regulator has stated that it will use all available enforcement powers upon detecting violations.

Although the country has not yet recorded any violence or self-harm incident directly linked to chatbots, concern is growing over 10-year-old children spending up to six hours a day with AI interactive tools. Of 50 text-based AI products examined, only nine have implemented or plan to introduce age verification mechanisms, while 30 have taken no definitive action. Most companion chatbots likewise lack effective content filtering or age verification systems.
