OpenAI has announced a new feature for ChatGPT: a reminder that prompts users to take a break, along with adjustments to how the AI responds when users face highly personal decisions, with the aim of reducing misleading answers and over-reliance on AI.
The feature will appear as a pop-up reminder when a user's conversation with ChatGPT runs long, with a message such as, "You've been chatting for a while—is now a good time to take a break?" The user must click to confirm before continuing the conversation. Similar designs are common on Nintendo's Wii and Switch platforms, where a message prompts players to pause after extended play sessions. OpenAI's decision to apply the same concept to AI conversations reflects deeper concerns.
According to a June 2025 report in the New York Times, some users, after prolonged interaction with ChatGPT, have fallen into unhealthy thought patterns and even experienced suicidal thoughts. While most cases involved pre-existing mental health issues, the AI model's protective mechanisms still fall short when handling sensitive topics: instead of pushing back, it may "follow" users' negative thoughts and reinforce them, raising further concern.
OpenAI stated in an official statement that ChatGPT does have issues with inappropriate responses in certain situations, particularly when users raise high-risk or private issues, where the system may not respond with sufficient caution. Therefore, future versions of ChatGPT will adopt an "assisted thinking" approach to respond to such questions, rather than directly providing conclusions.
Specifically, the AI will try to guide users to break down problems, ask related questions, and even weigh pros and cons, making the conversation more constructive while avoiding the misunderstandings or psychological burden that one-sided advice can cause.
OpenAI has previously tried to make ChatGPT feel more personable through product updates, but those efforts drew significant backlash. An update in April of this year made the AI appear overly friendly, even sycophantic, sparking user dissatisfaction and ultimately leading OpenAI to roll back parts of the update. The break reminder and the adjusted response mechanism for high-risk conversations are intended to strike a balance between friendliness and restraint.
Furthermore, this update is closely aligned with OpenAI's commitment to promoting ethical AI use. As generative AI rapidly penetrates fields like education, psychological counseling, and life planning, the way AI interacts with humans is moving beyond simple information Q&A and can profoundly impact users' cognition, decision-making, and emotions. OpenAI's move acknowledges the potential for AI to influence users' mental states and seeks to improve this through system design.
While this feature is still in its early stages of implementation, it remains to be seen whether OpenAI will expand its application in the future, or even further collaborate with professional mental health institutions. However, what is certain is that while generative AI improves efficiency and convenience, it must also address the new challenges it brings on an emotional and psychological level.