The “black box” problem surrounding ChatGPT, the system built by OpenAI – a company at the center of generative artificial intelligence (AI) – has sparked discussion among tech experts. Baldur Bjarnason, author of The Intelligence Illusion, recently addressed the issue in a blog post. With companies such as OpenAI, Microsoft, and Google refusing to share parameter information about their AI platforms, many are questioning whether they can be trusted with this technology.
The main concern is that AI is a “black box” vulnerable to attack. OpenAI shares no information about its language models or diffusion models, raising the possibility that its platforms have already been poisoned without users even knowing. This lack of transparency also means the companies face little accountability for how their technology is used, creating cause for concern across the entire industry.
Recently, Bruce Schneier shared one of Bjarnason’s posts on his Schneier on Security blog, generating interest and comments from readers. Users including IsmarGPT and Peter argued that AI models are not intelligent and lack true intent, which leaves them open to malicious manipulation. Krishna Vishnubhotla, Vice President of Product Strategy at Zimperium, argued that OpenAI should consider collaborating with experts from the research community to gain valuable insight into its systems, ensure accuracy, and prevent misinformation.
The market for AI hardware and services is projected to reach a staggering $90 billion by 2025, making the industry incredibly lucrative. To help secure AI systems, the EU has passed draft regulation forbidding systems that deploy manipulative techniques or exploit people’s vulnerabilities. In the United States, figures such as President Biden have raised only a few concerns, and no serious legislation has been enacted.
John Bambenek of Netenrich weighs in as well. He suggests that, as it stands, ChatGPT is still relatively harmless, but stresses that humans must remain active participants in AI-related work and tasks to prevent any potentially adverse implications. Bambenek also argued that AI should be treated like encryption, with transparency and significant review, though he conceded his worry that opening up AI could make attacks easier.
The refusal of OpenAI and other companies to disclose the parameters of their AI platforms remains an issue that needs to be addressed. Measures must be put in place to ensure that people engage with this technology safely and responsibly. Only then can society move ahead with greater trust in the capabilities of AI.