Tag: watermark

Starting in September, all AI content on Chinese social media platforms must be watermarked.

The "Measures for Identifying AI-Generated Content," announced in March this year by the Cyberspace Administration of China (CAC) and other departments, officially came into effect on September 1. The measures require social media platforms in China to add both explicit and implicit labels to AI-generated content so that it can be identified. Explicit labels are prompts users can readily perceive, such as text tags or voice notices, while implicit labels are embedded in the content's metadata for system-level tracking and identification.

The new rules apply to major social media platforms such as Weibo, Xiaohongshu, Bilibili, and Douyin. Each platform currently handles unlabeled AI-generated content differently, with responses ranging from warnings and tagging to traffic restrictions, removal, and even account suspension. Douyin has the strictest rules: violators may be barred from uploading and publishing videos, have followers removed, or lose monetization privileges. Because automatic identification technology is still limited, some unlabeled AI content remains on the platforms, and misinformation can still spread widely online. Market analysts believe, however, that as the technology matures and users become better at recognizing AI-generated content, the transparency and regulatory effectiveness of these labeling requirements will gradually improve.
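Implicit labels of the kind the measures describe are machine-readable records attached to a file's metadata rather than shown to the user. As a minimal sketch of what such a record could contain — the field names below are illustrative assumptions, not the exact schema the CAC measures mandate:

```python
import json
import hashlib
from datetime import datetime, timezone

def make_implicit_label(producer: str, content_bytes: bytes) -> dict:
    """Build a hypothetical implicit-label record to embed in content metadata.

    All field names here are illustrative; the actual measures define
    their own required metadata schema.
    """
    return {
        "AIGC": True,  # flag: this content is AI-generated
        "producer": producer,  # the generating service or platform
        # a stable content identifier, here a hash of the raw bytes
        "content_id": hashlib.sha256(content_bytes).hexdigest(),
        "created_at": datetime.now(timezone.utc).isoformat(),
    }

label = make_implicit_label("example-ai-service", b"generated image bytes")
print(json.dumps(label, indent=2))
```

A platform's identification system would read this record back out of the metadata to trace and tag the content, which is why the label survives reposting only as long as the metadata itself is preserved.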

Statistics show that OpenAI's ChatGPT attracted more than 1 million users in January.

OpenAI has revealed that it has the technology to check whether an article's content was generated by AI, but it does not plan to release it publicly.

OpenAI previously explained that although it can identify whether text was generated with its ChatGPT technology, it does not currently intend to make that capability publicly available. Beyond watermarking, OpenAI noted, methods for determining whether content was generated by artificial intelligence also include classifiers and metadata. Watermarking allows quick checks of whether content was AI-generated, but it remains somewhat unreliable: translating the generated text into another language with a translation tool, rewriting it with a different AI tool, or simply editing it heavily can render the watermark system ineffective.

OpenAI also believes that strict scrutiny of whether text was AI-generated could discourage people from using AI applications and could even worsen situations where the originality of a person's writing is unfairly questioned. Weighing these factors, it will for now provide verification tools only for audio and video content, and will consider verification tools for text more cautiously.
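OpenAI's actual watermarking scheme is undisclosed. As a generic illustration of how a statistical text watermark can be detected — and why translation or heavy editing defeats it — here is a toy "green-list" sketch, assuming a tiny vocabulary and a hash-based partition; every name in it is illustrative:

```python
import hashlib

# Toy 10-word vocabulary; a real system operates over the model's full token set.
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "dog", "ran", "fast", "home"]

def green_set(prev_token: str, fraction: float = 0.5) -> set:
    """Deterministically split the vocabulary into a 'green' half,
    keyed on the previous token (pseudo-random but reproducible)."""
    ranked = sorted(
        VOCAB,
        key=lambda w: hashlib.sha256((prev_token + "|" + w).encode()).hexdigest(),
    )
    return set(ranked[: int(len(VOCAB) * fraction)])

def generate_watermarked(start: str, n: int) -> list:
    """Toy 'generator' that always emits a green-listed token,
    standing in for a model biased toward the green list."""
    toks = [start]
    for _ in range(n):
        toks.append(min(green_set(toks[-1])))  # pick any green token
    return toks

def green_fraction(tokens: list) -> float:
    """Detector: what fraction of tokens fall in their predecessor's green set?
    Watermarked text scores near 1.0; ordinary text hovers near the 0.5 baseline."""
    hits = sum(tokens[i] in green_set(tokens[i - 1]) for i in range(1, len(tokens)))
    return hits / max(1, len(tokens) - 1)

wm = generate_watermarked("the", 20)
print(green_fraction(wm))  # 1.0 by construction
```

Because detection depends on the exact token sequence, translating the text or rewriting it with another tool replaces the tokens and pushes the score back toward the roughly 0.5 chance baseline — precisely the weakness OpenAI cites.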
