A technology that relatively few had heard of until recently suddenly became mainstream in the past year. It seems as if everyone is talking about artificial intelligence, or AI, as it holds the promise of remaking our world. However, for many observers, including top experts in the field, it threatens to turn that world upside down.
This has prompted calls to slow down, or at least make sure that guardrails are in place so that this technological revolution does not leave us all worse off. Public discussion of how to regulate this rapidly evolving technology is likely to dominate the spotlight this year and for the foreseeable future.
In Beijing, Dou Dejing, an AI expert and an adjunct professor in the Electronic Engineering Department of Tsinghua University, said AI’s influence can now be felt in every aspect of our lives.
Of particular interest to many is generative AI, which refers to algorithms that can be used to create new content, including audio, code, images, text, simulations, and videos. One prominent example of this is ChatGPT — software that allows a user to ask it questions using conversational, or natural, language. ChatGPT was released on Nov 30, 2022, by the United States company OpenAI.
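To make the conversational interface concrete, the sketch below shows how a developer might send a natural-language question to such a model programmatically. It is a minimal illustration assuming the openai Python client (version 1.x); the model name, the prompt, and the API-key setup are illustrative assumptions rather than details reported in the article.

```python
# Minimal sketch of querying a conversational large language model.
# Assumes the openai Python client (v1.x) and an OPENAI_API_KEY set in the
# environment; the model name and the question are purely illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model name
    messages=[
        {"role": "user", "content": "Summarize this report in three sentences: ..."}
    ],
)

# The reply comes back as ordinary text, just like a chat message.
print(response.choices[0].message.content)
```

Products such as ChatGPT wrap this kind of request-and-response exchange in a web interface, so the user only ever sees the conversation.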
Such developments raise the possibility of AI passing the Turing test, which is named after the celebrated World War II code breaker Alan Turing and has long been regarded as a benchmark for judging whether a machine can exhibit human-like intelligence.
Dou said: “I can’t say when exactly this will happen, but it’s quite possible that within a couple of years, an authoritative institution will announce that a conversational AI based on large language models of the type that power ChatGPT has passed the Turing test. This would be a highly significant milestone.”
The emergence of generative AI is seen as transformational because the technology can benefit almost all businesses and individuals by greatly improving their efficiency. For example, white-collar workers can use it to draft reports, generate advertising ideas, summarize documents, and even write code.
However, the awe generated by AI’s ability to bring a universe once reserved for science fiction into reality is accompanied by a sense of foreboding about the dangers that lurk behind this great advance for humanity.
According to the AIAAIC repository in Cambridge, England, which tracks incidents related to the risks and harm posed by AI, the number of such incidents has risen eightfold since 2016, when AI first came into the public spotlight.
Zhang Xin, associate professor of law at the University of International Business and Economics in Beijing, said the proliferation of generative AI presents two main types of risks.
She said generative AI amplifies problems presented by traditional AI, such as potential violations of personal privacy, threats to data security, and algorithmic bias, making such problems more complex and harder to detect and remedy.
For example, a report by IBM’s data and AI team in the US is said to have found that computer-aided diagnoses are less reliable for black patients than for white patients, because data from women and minority groups is underrepresented and can skew predictive AI algorithms.
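The mechanism Zhang describes can be illustrated with a small, purely synthetic sketch that is not drawn from the IBM report: when one group is heavily overrepresented in the training data, a simple classifier tends to be noticeably less accurate for the underrepresented group. Every number, group size, and threshold below is an illustrative assumption.

```python
# Synthetic illustration of how underrepresentation in training data can
# produce an accuracy gap between groups. Not real medical data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
WEIGHTS = np.array([1.0, -0.5, 0.8, 0.0, 0.3])

def make_group(n, shift):
    """Generate a synthetic group whose feature-label relationship differs by `shift`."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    # The true label depends on a weighted sum of the features plus noise,
    # with a group-specific decision threshold.
    y = (X @ WEIGHTS + rng.normal(0.0, 1.0, n) > shift).astype(int)
    return X, y

# The majority group is heavily overrepresented in the training set.
X_maj, y_maj = make_group(5000, shift=0.0)
X_min, y_min = make_group(200, shift=1.5)  # underrepresented minority group

X_train = np.vstack([X_maj, X_min])
y_train = np.concatenate([y_maj, y_min])

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Evaluate on equally sized, fresh samples from each group.
X_maj_test, y_maj_test = make_group(2000, shift=0.0)
X_min_test, y_min_test = make_group(2000, shift=1.5)

print("majority-group accuracy:", accuracy_score(y_maj_test, model.predict(X_maj_test)))
print("minority-group accuracy:", accuracy_score(y_min_test, model.predict(X_min_test)))
```

Because the model’s single decision boundary is fitted almost entirely to the majority group, its errors concentrate in the underrepresented group, which is the kind of skew Zhang and the IBM researchers point to.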