Title: The Impact of AI on Business: Reshaping Rules and Encouraging Responsible Adoption
In recent weeks, a global discussion has emerged on the risks and regulation of artificial intelligence (AI). Notably, a consensus is forming among governments, researchers, and AI developers that more regulation is necessary. This alignment surprises some observers, but it reflects a growing acknowledgement of the risks associated with AI, including job displacement and privacy concerns.
During a testimony before Congress, Sam Altman, the CEO of OpenAI, proposed the establishment of a new government body that would issue licenses for the development of large-scale AI models. Altman suggested that such a body could impose a combination of licensing and testing requirements on firms like OpenAI, while also advocating for independent audits of AI systems.
Although consensus is growing on the need for regulation, there is still little agreement on what those regulations should require or what potential audits should examine. At the World Economic Forum's recent Generative AI Summit, two key themes emerged: the need to update requirements for businesses developing AI models, and the urgency of defining clearer, broader standards for AI technologies.
The United Kingdom has been at the forefront of discussions on responsible AI innovation, releasing guidance that emphasizes core principles including safety, transparency, and fairness. Researchers from Oxford have also highlighted the importance of updating our concept of responsibility in the face of advancements in AI, particularly with the rise of large language models (LLMs).
LLM-powered AI systems present new challenges for understanding and auditing models. Unlike traditional AI, an LLM often lacks transparency about the data it was trained on, making it difficult to detect biases or hallucinations in its output. For example, when asked to summarize a presidential candidate's speech, an LLM might produce a selective or slanted summary with no way to trace how it was derived. This underscores the need for AI products to be accountable and auditable in their own right, rather than relying solely on the LLMs beneath them.
Furthermore, regulations affecting HR are extending beyond AI systems that make decisions to cover the development and use of AI technologies more broadly. It is crucial for governments to establish standards that are transparent and comprehensible to consumers and employees. For instance, IBM's chief privacy and trust officer emphasized the importance of informing consumers whenever they are interacting with a chatbot, highlighting the need for transparency in how AI is deployed.
The question of controlling the proliferation of new AI models and technologies requires further debate to strike a balance between risks and benefits. Nonetheless, the consensus is clear: as AI continues to have a profound impact on various industries, the urgency for standards, regulations, and awareness of both the risks and opportunities is intensifying.
In HR teams, the impact of AI is felt swiftly, with demands to provide upskilling opportunities to employees while shaping future workforce plans aligned with evolving business strategies. According to the World Economic Forum's Future of Jobs Report, a net loss of 14 million jobs is projected over the next five years, with 83 million roles displaced and 69 million new jobs created. The report also highlights the necessity of upskilling and reskilling: 60% of workers will need training by 2027, yet only half of employees currently have access to adequate training opportunities.
To navigate the AI-accelerated transformation successfully, businesses must drive internal changes that prioritize employee engagement and carefully consider how to create compliant, connected experiences that empower their people. Adhering to responsible AI practices and understanding both the technology and the regulatory landscape are paramount for business and HR leaders.
The new wave of regulations sheds light on the importance of addressing bias in talent-related decisions, and it emphasizes the need for responsible AI strategies across teams and businesses. As AI technology continues to be adopted by individuals both in and out of work, the responsibility of ensuring its ethical and judicious use has never been greater.