Responses generated by AIs are so convincing that errors and false information go unnoticed; texts produced by these technologies have already caused problems at CNET
CNET had to issue corrections to investment articles because the AI used to generate them was getting calculations wrong, and the outlet also omitted the fact that it was producing news with an AI. The problem is not just the error: the text sounded convincing when explaining the profit.
This is an example of how artificial intelligences that generate text from human questions or prompts have a serious problem: the conviction with which they produce an answer. As AIs like ChatGPT become increasingly popular for answering questions, they are starting to cause problems because they give wrong answers with great confidence on the subject, or simply wrapped in a well-structured text.
AI Talk: Correct Me If You Can
Glibness and confidence in what is said are among the tools scammers use to pull off their schemes; take the case of Frank Abagnale Jr., whose story inspired the film Catch Me If You Can. However, an AI (as far as we know) has no malice and does not set out to deceive anyone on purpose. To paraphrase Craque Daniel: AI technology gets it wrong with conviction.
In CNET's case, the text explained the math behind an investment yielding 3% per year. In the example, the AI said that by investing $10,000, the profit would be $10,300.
In fact, the profit is $300.
To earn $10,300 in the year on a $10,000 investment, the interest would have to be 103%.
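As a sanity check, the arithmetic from the example can be reproduced in a few lines of Python, using only the figures cited above:

```python
# The example from the article: $10,000 invested at 3% per year (simple interest).
principal = 10_000        # dollars invested
annual_rate = 0.03        # 3% per year

profit = principal * annual_rate      # interest earned in one year
total = principal + profit            # balance at the end of the year

print(f"Profit after one year: ${profit:,.2f}")    # $300.00
print(f"Balance after one year: ${total:,.2f}")    # $10,300.00

# To *profit* $10,300 on a $10,000 investment, the rate would need to be 103%:
required_rate = 10_300 / 10_000
print(f"Rate needed to profit $10,300: {required_rate:.0%}")  # 103%
```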
The error is easy to spot for a reader with some knowledge of finance, but those who are not comfortable with numbers can be hurt by it, perhaps passing up a better investment because they trusted an AI-written text that explained it wrong, without even knowing the author was a robot.
Here we run into another problem: CNET did not disclose that it was using AI for some texts. The author appeared only as “CNET Finance Team”. The website now states that such texts are produced by a robot and reviewed by a human.
And that was not the only case of an AI confidently getting things wrong. In the tweet below, @FEhrsam comments on how ChatGPT gave a more accurate and “concise” answer for a running pace calculation. However, another user corrects him, explaining that ChatGPT was wrong and that Google's calculation was the right one. Unfortunately, the original tweet is no longer available, so we cannot see what the calculation was.
However, I tested ChatGPT myself with a compound rule of three (a proportion problem involving more than two quantities). The problem, taken from the English-language Wikipedia, was the following:
If 6 bricklayers build 8 houses in 100 days, in how many days do 10 bricklayers build 20 houses, working at the same efficiency? The correct answer is 150 days. As I suspected, ChatGPT had trouble with this type of calculation. Curiously, it even laid out the reasoning correctly, but ignored one of the factors in the calculation and arrived at 200 days.
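For reference, the correct reasoning fits in a few lines of Python: work out how much of a house one bricklayer builds per day, then scale that rate to the new crew and the new number of houses.

```python
# Compound rule of three: 6 bricklayers build 8 houses in 100 days.
# How many days do 10 bricklayers need for 20 houses at the same pace?

houses_per_bricklayer_per_day = 8 / (6 * 100)   # 1/75 of a house per bricklayer per day

bricklayers = 10
houses = 20

days = houses / (bricklayers * houses_per_bricklayer_per_day)
print(days)  # 150.0
```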
AI in journalism can be useful for more “mechanical” texts
The use of artificial intelligence in journalism is not, in my opinion, harmful to the profession, but in some cases the “responsibility for the error” needs to remain with a human, as with texts about investments, which can influence how readers use their own money.
In other cases, AI can make a news outlet's work easier. Electoral coverage is one example: not every newspaper or website newsroom has the time to write up the result of every municipal election.
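To illustrate what this kind of “mechanized” text can look like, here is a minimal, hypothetical sketch in Python that fills a fixed template from structured results data; the field names and figures are invented for illustration and do not reflect any real newsroom's pipeline or electoral API.

```python
# Hypothetical illustration: filling a fixed template from structured election data.
# The dictionary below is invented; it is not the format of any real electoral API.
result = {
    "city": "Example City",
    "winner": "Candidate A",
    "party": "Party X",
    "percentage": 52.4,
    "runner_up": "Candidate B",
}

TEMPLATE = (
    "{winner} ({party}) was elected mayor of {city} with {percentage:.1f}% "
    "of the valid votes, defeating {runner_up}."
)

print(TEMPLATE.format(**result))
# Candidate A (Party X) was elected mayor of Example City with 52.4% of the valid votes, defeating Candidate B.
```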
In 2020, G1 used an AI to write the news about the results of every mayoral race. The technology retrieved data from the TSE website and produced a “mechanized” text informing the winner, the party, the vote percentage and the runner-up. With information: Mashable and Futurism