Citing sources, The New York Times reported that Google has demonstrated an artificial intelligence tool, known as "Genesis", to The New York Times, The Washington Post, and News Corp, the parent company of The Wall Street Journal. The tool is pitched as being able to help produce more responsible, fact-based news reports.
In the past, media outlets such as CNET, owned by Red Ventures, have used artificial intelligence to "assist" in writing news stories. However, just as humans can make mistakes when writing content, artificial intelligence can also produce inaccurate reporting, for example by citing incorrect references.
Compared with erroneous reports written by humans, however, a widely held view is that once newsrooms rely on artificial intelligence to mass-produce news, false information could spread at unprecedented speed and with far greater impact. The issue is therefore hard to dismiss as a simple operational error, and some even argue that artificial intelligence should not be permitted to make mistakes at all.
In addition to applying artificial intelligence to news reporting, Google has recently begun testing an artificial intelligence model called "Med-PaLM 2" in medical settings. Google expects Med-PaLM 2 to understand and apply medical information better than its own "Bard", competitor Microsoft's Bing, and OpenAI's ChatGPT.
However, because current large language models still produce inaccurate information, deploying a technology like Med-PaLM 2 in the medical system may provoke considerable skepticism and alarm. After all, a wrong judgment could lead to medical misdiagnosis and even worsen a patient's condition.
Then again, considering that doctors themselves can misdiagnose patients, the error rate of AI-driven medical judgments from Med-PaLM 2 may prove comparatively lower, though verifying this will likely take more time.
From the perspective of labor demand and the nature of work, AI is clearly a resource that many industries hope will fill labor shortages and knowledge gaps. Whether AI can avoid mistakes, and whether it can correct them on its own, has therefore become one of the key questions in the development of generative AI.


