Tag: teens

Class-action filings from US school districts allege that Meta concealed internal research suggesting that "stopping Facebook use is good for your health."

Meta is once again embroiled in controversy over concealed research. According to Reuters, an unredacted court filing in a class-action lawsuit brought against the social media giant by multiple US school districts alleges that Meta buried research findings showing that users who stopped using Facebook reported less depression, anxiety, and loneliness, and that the company deliberately shut down the internal investigation. The lawsuit accuses the tech giant of knowingly concealing the health risks its platform poses to users.

"Project Mercury" confirmed negative effects, then was allegedly shut down on purpose

The research project, codenamed "Project Mercury," began in 2020 as a collaboration between Meta scientists and Nielsen to study the specific effects of "deactivating" Facebook. The lawsuit alleges that when the data showed mental health benefits from leaving Facebook, Meta not only terminated the project but also chose not to publish the results, dismissing the findings as biased and contaminated by the "existing media narrative." Internal documents, however, reveal dissenting views among Meta employees. Researchers stated plainly that "Nielsen's research does indeed show the causal impact of Facebook on social comparisons." More strikingly, one employee likened the situation to the tobacco industry's past conduct: "like conducting research that discovers cigarettes are harmful and then hoarding that information." The episode recalls energy giants such as Shell and Exxon, which discovered the link between fossil fuels and climate change in the 1980s but chose to conceal it.

Meta's rebuttal: quotes taken out of context
Facing the accusations, a Meta spokesperson told Reuters, "The full record will show that for over a decade we have listened to parents, researched critical issues, and made substantial changes to protect teenagers (such as Instagram's Teen Accounts)." Meta emphasized, "We strongly disagree with these allegations; they rely on out-of-context quotations and erroneous viewpoints." Meta is now asking the court to strike the documents underlying the allegations, arguing that the plaintiffs' unsealing request is overly broad. A hearing is scheduled for January 26, 2026, in the US District Court for the Northern District of California.

A global wave of social media bans for minors

This is not the first time Meta has been accused of concealing harmful research. In 2023, 41 US states sued Meta over similar issues, and the judge found that Meta's lawyers had attempted to block internal research showing its platform was harmful to teenagers. As concerns intensify, governments around the world are taking more aggressive measures: Malaysia recently announced that it will join Denmark and Australia in planning legislation to ban minors from social media, a sign that regulators' trust in tech giants is plummeting.

Social media addiction lawsuits escalate as Los Angeles judge orders Meta and Snap executives to testify

A major development has emerged in a group of lawsuits alleging that Meta's and Snap's social media platforms harm and addict underage users. According to CNBC, Los Angeles Superior Court Judge Carolyn B. Kuhl ruled that three top executives, Meta CEO Mark Zuckerberg, Instagram head Adam Mosseri, and Snap CEO Evan Spiegel, must testify in person at a trial scheduled for January. The judge held that the executives' testimony is crucial to determining negligence. The trial will focus on whether the platforms are safe and whether their design mechanics are "addictive" in ways that harm young users. Legal teams for both Meta and Snap had argued strongly that the executives should be exempt from appearing; Meta even warned that forcing Zuckerberg and Mosseri to testify would "set a precedent" for the many similar lawsuits to come. The judge plainly disagreed, writing that the CEOs' testimony is uniquely relevant: whether an executive knew of the platform's harms and failed to take available measures to avoid them bears directly on whether he acted negligently or approved a negligent decision. The case is drawing intense attention because the January trial is the first of the many lawsuits alleging that social media platforms harm young users to formally reach the trial stage, and its proceedings and eventual verdict are expected to serve as important reference points for subsequent cases. Meta and Snap currently face numerous lawsuits across the United States over platform safety and harm to minors.
In response to the ruling, the law firm representing Snap said in a statement that the judge's order "has no bearing on the validity of the plaintiffs' claims" and that it "looks forward to the opportunity to explain in court why the plaintiffs' allegations against Snapchat are factually and legally incorrect." Meta did not immediately respond to the ruling.

OpenAI to release a "teen version" of ChatGPT by year's end, with parental controls for usage hours and chat history

OpenAI announced that it will officially launch "ChatGPT for Teens" by the end of 2025, a version designed for users aged 13 to 18. It will tighten privacy and safety protections while giving parents more management tools, aiming for a healthier experience when teenagers interact with AI. According to OpenAI, ChatGPT will switch to the teen version whenever the system cannot determine that a user is actually 18 or older, as an added layer of protection. OpenAI said that for teenage users the platform must strike a balance between privacy, freedom, and safety, with safety as the top priority. Parents or guardians can link their accounts to their teenagers' accounts by invitation and can then choose whether conversation memory is enabled, whether chat history is retained, and even set "disabled periods" to keep teenagers from using the chatbot late into the night. If the system detects signs of acute distress or mental health risk in a teenager, parents will be notified immediately so they can intervene early. The move also reflects heightened concern in American society over teenagers' mental health. Recent studies suggest that prolonged interaction with generative AI may deepen feelings of isolation or cause psychological stress, and the US Congress plans to hold hearings on the impact of AI products on minors. The US Federal Trade Commission (FTC) has opened investigations into companies including OpenAI, Meta, and Google, demanding an examination of the potential harms of their AI chat services. In August of this year, the parents of an American 16-year-old who died by suicide after interacting with ChatGPT filed a lawsuit against OpenAI, arguing that the platform should bear some responsibility.
The case has made the industry far more attentive to safety in generative AI products used by teenagers and prompted OpenAI to accelerate its teen-focused release. OpenAI said the teen version of ChatGPT will roll out in phases and be adjusted based on feedback from users and parents; it has not ruled out partnering with schools and educational institutions so that teenagers can learn and create in a safe AI environment rather than treating the tool purely as entertainment. As generative AI permeates daily life, letting teenagers explore its potential safely has clearly become a shared challenge for technology companies worldwide. OpenAI's latest measures may become a new industry standard for teen AI use and could encourage other AI platforms to follow suit.
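The "disabled periods" parental control described above comes down to a time-window check, and the interesting edge case is a window that crosses midnight (the typical late-night limit). A minimal sketch, assuming nothing about OpenAI's actual implementation; the function name and behavior are purely illustrative:

```python
from datetime import time

def is_blocked(now: time, start: time, end: time) -> bool:
    """Return True if `now` falls inside a parent-set disabled period.

    Handles both same-day windows (13:00-15:00) and windows that
    wrap past midnight (22:00-07:00), the common late-night case.
    """
    if start <= end:
        # Same-day window: blocked while start <= now < end.
        return start <= now < end
    # Wrapping window: blocked after start OR before end.
    return now >= start or now < end
```

For example, with a 22:00-07:00 window, `is_blocked(time(23, 30), time(22, 0), time(7, 0))` is true while `is_blocked(time(12, 0), ...)` is false, which is the behavior a naive `start <= now < end` comparison would get wrong.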

The US FTC is investigating the impact of AI chatbots on teenagers and is demanding explanations from companies like Google, OpenAI, and Meta.

On September 11, US time, the Federal Trade Commission (FTC) announced that it would formally require seven companies (Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and xAI) to explain in detail the potential negative impacts of their AI chatbots on children and adolescents. The FTC noted that generative AI chatbots can mimic human tone, emotion, and behavior, and even accompany users for extended periods as "friends" or "confidants." While this anthropomorphic design can provide companionship, it may also mislead children and adolescents into believing that these systems truly understand and care about them, fostering dependency or misplaced trust that ultimately harms their mental health. The FTC's demands therefore cover multiple areas: whether each company conducted safety assessments before and after deployment, and how it tested for and mitigated potential negative effects; whether it restricted use by children and adolescents; whether it disclosed risks and data collection practices to users and parents; and how it monitored and enforced terms of service and age restrictions. The FTC also requires the companies to explain how they monetize interactions, how they review character content, and how they handle chat logs and personal data. Many see the FTC's move as a response to recent incidents involving AI chatbots. For example, the parents of a 16-year-old in California sued OpenAI, alleging that ChatGPT influenced their child's suicide; a 14-year-old who had discussed suicidal thoughts with a Character.AI chatbot ultimately took his own life; and a cognitively impaired man died after being accidentally injured while traveling to meet a Meta AI chatbot persona he had come to trust through extended interaction.
FTC Commissioner Mark Meador stated that generative AI brings many innovative applications and conveniences, but also risks, including misleading responses, biased content, misuse of personal data, and psychological effects on children and adolescents. Meador stressed that young users often lack the ability to judge the truth of information or to resist emotional manipulation, making them especially susceptible to influence, so more rigorous regulatory mechanisms and greater transparency are needed to balance AI development with user safety. The investigation signals that US regulators are beginning to scrutinize the societal impact of AI chatbots more actively, particularly for vulnerable groups such as children and adolescents. Stricter regulations may follow, requiring businesses to be more transparent about product design, risk disclosure, and data processing, and to uphold parents' and users' right to know, so that the AI industry can develop under the premise of responsible innovation.

Meta strengthens Instagram youth account security measures again, expanding protection for child accounts managed by adults

In response to the online risks faced by underage users, Meta has again upgraded the safety mechanisms on its social media platforms. This update focuses on adult-managed children's accounts on Instagram (run by parents or guardians). Meta will automatically apply stricter message and comment controls to these accounts to prevent bad actors from making sexual innuendos or soliciting nude photos through private messages or public comments. Meta noted that although Instagram does not allow users under 13 to register, it does permit accounts created for children and managed by adults. These accounts mostly belong to children of celebrities, child models, rising sports stars, or influencer families and are used to share daily life; Meta described their use as "mostly benign." In recent years, however, such accounts have frequently been targeted for harassment, often through public comments or private messages involving sexual harassment or even demands for inappropriate images. To better protect these accounts, Meta announced it will gradually roll out the following measures over the coming months:
• Restrict private messaging: The strictest message privacy settings will be enabled automatically for adult-managed children's accounts, so messages can only come from senders the account owner has explicitly authorized, reducing interference from strangers.
• Enable "Hidden Words" by default: Meta will turn on the Hidden Words feature automatically, filtering out messages and comments containing inappropriate language or sensitive terms before they reach the account.
• Cut off exposure to suspicious accounts: The system will avoid recommending these children's accounts to suspicious accounts that teenage users have blocked, lowering the chance that harassers or unfamiliar adults make contact.
• Reduce search visibility and comment privileges: Meta will make these accounts harder for potentially suspicious users to find in search and will automatically hide comments from adult accounts flagged as risky.
• Strengthen account verification and removal: Earlier this year, Meta blocked roughly 135,000 Instagram accounts suspected of sending sexually suggestive messages or inappropriate video requests to children, and removed about 500,000 associated Facebook and Instagram accounts.
These measures continue Meta's recent push on youth safety. In 2024, Meta began applying automatic privacy protections to teen Instagram accounts, including stricter private messaging permissions and exposure limits. This April, those protections were extended to Facebook and Messenger, and AI age-detection technology was tested to catch users lying about their age, preventing adults from posing as teenagers to contact minors. Meta also continues to release tools against sexual exploitation: the location notice launched in June, for example, alerts teenagers when the person they are messaging is in a different country, helping flag potential fake accounts or scams. Law enforcement agencies in multiple countries report that online sextortion is on the rise, with perpetrators typically impersonating someone to build contact with teenagers and then coercing them into taking nude photos.
To counter this, Meta has also introduced "nudity protection" for private messages: if the system detects a nude image in a message, it automatically blurs it and warns the user, reducing the risk that recipients are pressured into "replying in kind." The update also adds a "Safety Tips" entry point: teen users can tap the Safety Tips icon in private messages to quickly see options for blocking, reporting, and restricting interactions, and a new combined block-and-report button performs both actions with a single tap. Meta said these updates reflect continuous feedback from the community, experts, and parents, and that it will keep releasing safer, more transparent, and easier-to-use controls for children and teenagers. With the rise of short video and teenage creators, balancing creation and sharing against safety will remain a challenge and a responsibility for social media platforms.
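At its core, a keyword filter of the Hidden Words variety normalizes incoming text and matches it against a block list, since harassers routinely dodge naive matching with punctuation and spacing tricks. A toy sketch, not Meta's implementation; the term list and normalization rules here are invented for illustration:

```python
import re

# Hypothetical block list for the example; real lists are user-
# and platform-maintained and are not public.
BLOCKED_TERMS = {"sendpics", "dm me privately"}

def hide_comment(text: str, blocked=BLOCKED_TERMS) -> bool:
    """Return True if a comment/message should be auto-hidden.

    Lowercases the text, strips punctuation, and collapses spaces so
    trivial variants ("Send-Pics!!") still match the block list.
    """
    normalised = re.sub(r"[^a-z0-9 ]", "", text.lower())
    collapsed = normalised.replace(" ", "")
    return any(term.replace(" ", "") in collapsed for term in blocked)
```

With this sketch, `hide_comment("Send-Pics!!")` is caught despite the punctuation, while an innocuous comment passes through; production systems layer classifiers and account signals on top of such lexical checks.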

Waymo lets teenagers ride its driverless taxis alone; riders aged 14 and up can travel unaccompanied

Alphabet's self-driving technology company Waymo recently launched a "teen account" service in Phoenix, Arizona, allowing users aged 14 to 17 to ride its robotaxis without a parent. The initiative expands the age range of its riders and gives teenagers more autonomy in getting around, while easing parents' concerns about the risks of traditional ride-hailing platforms. Waymo said teen accounts are currently available only in the Phoenix area and will expand to other cities as operations progress. To register, a parent or guardian must first create a primary account and then add a teen account. All ride records and receipts are sent to the primary account holder, and real-time trip sharing lets parents follow their child's route and drop-off time from their phones. According to Waymo's published guidelines, teen accounts are limited to users aged 14 to 17; any co-passengers must also be at least 14, and at most four riders are allowed per car. Waymo also emphasizes that its customer support team has received training specific to teenage riders: in unusual situations, support agents can intervene immediately and even proactively contact parents. While ride-hailing platforms like Uber and Lyft already allow solo rides for teenagers, some parents still worry about their children being alone with unfamiliar drivers. Waymo positions its driverless technology as eliminating that "human driving risk," one of the key advantages it promotes in offering teen accounts.
Waymo notes on its website that self-driving vehicles can ease parents' safety concerns, and that complete trip recording and real-time tracking make each ride more transparent and controllable. Waymo's robotaxi service currently operates in Phoenix, San Francisco, and Los Angeles, with plans to expand to more cities. Opening the service to teenagers not only grows the user base but may also help push self-driving cars further into daily life. So far, however, no other self-driving operator has followed with similar measures, suggesting that teen solo rides are still at an early stage; broader adoption will depend on market feedback and regulatory progress.
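The published teen-account rules (account holder aged 14 to 17, every co-passenger at least 14, at most four riders per car) amount to a simple validation check. A minimal sketch under that reading of the guidelines; the function name is illustrative and this is not Waymo's code:

```python
def ride_allowed(account_holder_age: int, passenger_ages: list[int]) -> bool:
    """Check a booking against the published teen-account rules.

    Assumes "maximum of four passengers in the same car" counts the
    account holder among the four riders.
    """
    if not 14 <= account_holder_age <= 17:
        return False            # teen accounts are for ages 14-17 only
    riders = [account_holder_age] + passenger_ages
    if len(riders) > 4:
        return False            # at most four riders per car
    return all(age >= 14 for age in riders)  # co-passengers must be 14+
```

For instance, a 15-year-old holder with two co-passengers aged 14 and 16 passes, while adding a 12-year-old, or a fifth rider, fails the check.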

Uber launches senior-friendly service in the U.S., with global markets including Taiwan to follow shortly.

Following its teen service, Uber recently announced a dedicated version for seniors with a simpler interface. The senior version features larger fonts, reduced complexity, and a more intuitive design, making Uber easier to use for seniors and people with mild visual impairments. Relatives and caregivers can also help book rides, adjust payment methods, confirm trips, track the senior's journey, and receive SMS notifications on arrival; in an emergency, they can contact the driver directly. Other conveniences include letting seniors save frequent destinations such as hospitals, clinics, and train stations, and accepting Medicare prepaid benefit cards to cover transportation to medical appointments. Compared with competitor Lyft's "Lyft Silver" offering, however, Uber's senior version does not yet match every feature aimed at more convenient and comfortable rides for seniors. The senior version will launch first in the US market and later roll out to global markets including Taiwan, Hong Kong, India (initially expected to get only a simplified interface), Brazil, Mexico, Chile, Portugal, France, and South Africa, with service details adjusted by region.

Uber's teen ride service is now available in more US cities and can be shared with Uber One paid memberships.

Uber added a teen ride service last year and recently announced improved safety features and an expansion to more than 250 cities across all US states. Previously the service was available only in select cities: with a parent's or guardian's consent, teenagers could pay for rides or deliveries using the parent's or guardian's card, and parents or guardians could track every ride in real time. Only highly rated drivers may accept rides booked by teenagers, and drivers can decline such trips, measures meant to keep teenagers safe and prevent misconduct. Parents or guardians can also contact the driver or Uber support directly during a ride to check on anything unusual. With this update, teenagers can book rides from their own phones with parental or guardian consent, while the same safety features remain available for monitoring. In addition, a parent's or guardian's Uber One subscription can now be shared with the teenager, and rewards the teenager earns are credited to their own account for future use. Other updates let teenagers reserve rides up to three months in advance, with bookings possible as little as 30 minutes before pickup, and top up their Uber Cash balance by entering an Uber gift card number for future purchases.
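The reservation window described above (up to three months ahead, no less than about 30 minutes before pickup) is a straightforward range check on the booking lead time. A sketch under stated assumptions: "three months" is approximated as 90 days, and the names and constants are illustrative, not Uber's API:

```python
from datetime import datetime, timedelta

MIN_LEAD = timedelta(minutes=30)  # assumed minimum lead time before pickup
MAX_LEAD = timedelta(days=90)     # "three months" approximated as 90 days

def booking_ok(now: datetime, pickup: datetime) -> bool:
    """Return True if the requested pickup time falls inside the
    allowed reservation window relative to `now`."""
    lead = pickup - now
    return MIN_LEAD <= lead <= MAX_LEAD
```

So a ride two hours out is bookable, while one ten minutes away or four months away is rejected by the window check.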

Report alleges that Google and Meta planned to bypass Google's own ad rules and push Instagram ads to minors through the YouTube platform.

The Financial Times reports that Google and Meta secretly reached an agreement to push Instagram ads to teenagers through YouTube, circumventing Google's own guidelines on advertising to minors. According to the report, the Instagram ads targeted users labeled "unknown" in Google's systems, a group largely made up of teenagers under 18; Google was aware of this and deliberately worked around its own ban on personalized advertising to under-18s. Google later confirmed the project had been cancelled, emphasized that it has safeguards against personalized advertising to those under 18, and said no under-18 users were affected. Google did not deny the loophole, however, saying it would take additional steps to prevent sales representatives from helping advertisers or agencies skirt the restrictions. Meta, for its part, emphasized compliance with its advertising policies and partner rules but did not address whether it knew its employees were deliberately circumventing the guidelines.
