A new survey conducted by cybersecurity company Malwarebytes has found that the vast majority of respondents have concerns over the safety and accuracy of ChatGPT, a generative AI tool. According to the survey, 81% of respondents are worried about possible security and safety risks associated with ChatGPT, and over half (52%) believe that the platform's development should be paused until regulations catch up with the technology.
Accuracy of information is also a major issue: 63% of respondents do not trust the information produced by ChatGPT. Only 12% agree with the statement that the information produced by ChatGPT is accurate, while 55% disagree. The study also highlights that over half of respondents doubt whether AI tools can improve internet safety.
Mark Stockley, a cybersecurity evangelist at Malwarebytes, warns that the survey results should not be underestimated, noting that the findings could have significant implications for the future development of ChatGPT and other AI tools. The concerns raised by the survey should not be ignored, he said; AI tools have great potential, but developers need to take the concerns over accuracy and safety seriously.
The survey was conducted six months after the launch of ChatGPT and drew its responses from Malwarebytes newsletter subscribers. The results suggest that people are becoming increasingly wary of AI tools as they grow more prevalent in society. As generative AI continues to make news, concerns over safety, security, and accuracy are likely to grow, and developers will need to work harder to meet public expectations and address those fears.
Frequently Asked Questions (FAQs) Related to the Above News
What is ChatGPT?
ChatGPT is a generative AI chatbot developed by OpenAI.
What is the new survey conducted by Malwarebytes about?
The survey is about people's concerns over the safety and accuracy of ChatGPT.
What percentage of people are worried about the safety and security risks associated with ChatGPT?
81% of respondents said they are worried about the safety and security risks associated with ChatGPT.
What percentage of people do not trust the information produced by ChatGPT?
63% of respondents said they do not trust the information produced by ChatGPT.
What does Mark Stockley, a cybersecurity evangelist at Malwarebytes, warn about the results of the survey?
Mark Stockley warns that the survey results should not be underestimated and could have significant implications for the future development of ChatGPT and other AI tools.
Should developers ignore the concerns raised by the survey?
No, developers should not ignore the concerns raised by the survey. AI tools have great potential, but developers need to take seriously the concerns over accuracy and safety.
Who participated in the survey?
The survey included responses from Malwarebytes newsletter subscribers.
What do the survey results show about people's perception of AI tools?
The survey results suggest that people are becoming increasingly wary of AI tools as they grow more prevalent in society.
What should developers do to address the fears and concerns of the public?
Developers need to work harder to ensure that AI tools meet the expectations of the public and address their fears and concerns.
Please note that the FAQs provided on this page are based on the news article published. While we strive to provide accurate and up-to-date information, it is always recommended to consult relevant authorities or professionals before making any decisions or taking action based on the FAQs or the news article.