The popular artificial-intelligence chatbot ChatGPT has been reported to provide users with false or misleading information, a phenomenon that came to light when a cyber-security specialist tested the system with questions about his own professional record. The AI-powered tool produced broadly plausible responses, but included fabricated references to papers and testimony that the specialist, Herbert Lin, had never written or given. Those seeking information from such chatbots should therefore be sceptical of any specific factual claims they provide.
ChatGPT is an artificial-intelligence chatbot widely used to answer questions on a host of topics. However, as cyber-security specialist Herbert Lin has demonstrated, the tool can produce false or misleading information in response to such questions. While much of the information provided by chatbots like ChatGPT is plausible, users would be well advised to treat any specific factual claim with a degree of scepticism, especially where academic sources and publications are concerned.
Herbert Lin is a well-known cyber-policy and security specialist who has given expert testimony on a range of issues relating to cyber-security and national security. Among other things, Lin is a senior research scholar for cyber policy and security at Stanford University and has authored numerous publications in the field. Yet even in the case of such a widely documented expert, Lin found that tools like ChatGPT can generate false information, which could have serious consequences.