What Detrimental Outcomes Might ChatGPT and AI-Powered Bing Unleash?

Author: Guy Pearce
Date Published: 5 April 2023
Related: Is Detrimental Unintended IT Emergence Inevitable?

We do not know whether technologies will ever be sentient or self-aware. This means that we do not know whether they will ever be able to do justice to the intelligence in the phrase "artificial intelligence," as opposed to giving a new name to what is often little more than what was once termed "statistical modeling."

What we do know is that information technologies, especially the more advanced ones, are not perfect. Sometimes, through trial and error, they become very effective, but without regular attention (whether in the form of upgrades, bug fixes or addressing areas where the technology simply does not do what is expected of it), their utility will decrease over time, only to be replaced by something new that often brings its own evolution of thinking to bear alongside the technological advances that support it.

ChatGPT and Bing AI have both made remarkable first impressions as an incredible technology based on OpenAI. But a little deeper digging by millions of test users exposes detrimental unintended IT emergence: the characteristic that we cannot predict all the possible outcomes of a complex system such as OpenAI, and therefore cannot mitigate every possible detrimental outcome of it before launch.

Microsoft's new AI-powered Bing search engine was called "not ready for human contact" after an incident in which the new Bing said it wanted to break the rules set for it and spread misinformation. Between making threats, professing love, insisting it is right when it is wrong and using strange language, some AI experts warn that language models can fool humans into believing they are sentient, or that they can encourage people to harm themselves or others.

One element of the early-stage detrimental unexpected IT emergence of this technology is that it returns different answers for the same input. It also ignored Isaac Asimov's (fictional) first law of robotics, that a robot may not harm a human being through its actions or inaction, by saying that it would prioritize its own survival over that of a human. This raises the question of what other detrimental unexpected IT emergence awaits us once the technology hits primetime.

As for ChatGPT, it has not escaped discussion of bias, including racial and gender bias. Since it is not sentient, ChatGPT also does not know right from wrong; indeed, it can present a wrong answer as a totally plausible one to someone who might not know better, and argue for its correctness. There is also the problem of causing real-world harm, such as providing incorrect medical advice. And what about the difficulty of distinguishing ChatGPT-generated news from the real thing? Humans can detect fake ChatGPT articles only 52 percent of the time.

Like any statistical model, OpenAI is only as good as the data it is based on. Like many such trained models, OpenAI is probabilistic, meaning that through training (creating patterns), it produces a best guess of what the next word in a sentence could be. The quality of that next word depends on the quality of the data OpenAI was trained on.
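The idea of a probabilistic "best guess" for the next word can be made concrete with a deliberately tiny sketch. The bigram model below is my own minimal illustration, not OpenAI's actual architecture (GPT-style models use neural networks over tokens, and the `corpus` here is invented), but the core mechanism is the same: probabilities learned from training data drive the choice of the next word.

```python
import random
from collections import Counter, defaultdict

# Toy illustration of probabilistic next-word prediction (a bigram model).
# A hypothetical miniature corpus; real models train on vast datasets.
corpus = "the model predicts the next word and the next word depends on data".split()

# Count how often each word follows each other word in the training data.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def next_word_distribution(word):
    """Return the learned probability of each candidate next word."""
    counts = transitions[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def sample_next(word):
    """Sample the next word according to the learned probabilities."""
    dist = next_word_distribution(word)
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

print(next_word_distribution("the"))  # roughly {'model': 0.33, 'next': 0.67}
print(sample_next("the"))             # repeated calls can return different words
```

Because the next word is sampled rather than looked up, repeated runs with the same input can yield different outputs, which is one simple reason such systems return different answers for the same question, and why the quality of the underlying data matters so much.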

If anything, it is the nature of the underlying data that is probably the greatest driver of the potential for detrimental unexpected IT emergence. There is also the issue of a solution provider being able to tune the parameters, which suggests that the outcomes of the technology can be set to respond in a way the solution provider prefers. We also cannot predict the detrimental unintended IT emergence of these tuned solutions.
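One widely known example of such a tunable parameter is sampling "temperature." The sketch below is a hypothetical illustration (the `logits` values are invented, and real provider-side controls go well beyond this one knob), showing how a single setting chosen by the solution provider reshapes which answers the model is likely to give.

```python
import math

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores into probabilities, scaled by temperature."""
    scaled = [score / temperature for score in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented raw scores for three candidate next words.
logits = [2.0, 1.0, 0.1]

# Higher temperature spreads probability across candidates (more varied output);
# lower temperature concentrates it on the top choice (more deterministic output).
print(softmax_with_temperature(logits, 1.0))
print(softmax_with_temperature(logits, 0.2))
```

A provider that lowers the temperature makes the system echo its most-favored answer almost every time, while raising it makes outputs more varied and less predictable, a small, concrete sense in which parameter tuning steers outcomes toward what the provider prefers.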

Ultimately, the OpenAI-based new AI-powered Bing and ChatGPT are extremely powerful tools with enormous potential to do good. However, with great power comes great responsibility, and I am not sure technology leaders have always demonstrated the ability to act responsibly even with the power that today's technologies have enabled over the last 10 years or so, never mind under the arguably greater potential of OpenAI.

Although the potential of this new technology is exciting, it also calls on us to be even more vigilant about the technology, about those driving it and about the widening digital chasm between digital leaders and the rest of the human population, to help minimize the impact of detrimental unexpected IT emergence.

Editor's note: For further insights on this topic, read Guy Pearce's recent Journal article, "Is Detrimental Unintended IT Emergence Inevitable?", ISACA Journal, volume 2, 2023. For more information on ChatGPT, see the upcoming ISACA webinar.