Companies Must Take Responsibility for AI or Face Dire Consequences
The debate over artificial intelligence (AI) has captured wide attention in recent months. Opinions range from hailing AI as a gateway to a promising future of limitless possibilities to warning that it will lead us into a dark, dystopian one. Yet amid this flurry of conversation, one issue is being overlooked: corporate responsibility.
I became Nike’s first Vice President of Corporate Responsibility in 1998, in the midst of the company’s major crisis: a scandal over labor exploitation in developing countries. That experience taught me lessons that can guide us through the AI revolution.
One key difference between the Nike crisis and the current AI landscape is urgency. The Nike controversy unfolded relatively slowly, allowing time for resolution. With AI, time is a luxury we cannot afford. Just a year ago, most people had never heard of generative AI; it burst into our collective consciousness in 2022, and we have been struggling to grasp its implications ever since.
Currently, companies developing generative AI technologies face no external constraints, which effectively turns us all into guinea pigs. That is far from acceptable. If Boeing or Airbus introduced an airplane that promised cheaper, faster travel but carried substantial risks, we would not accept those risks. A pharmaceutical company that launched an untested product with known toxic potential would be held criminally liable for any resulting harm or deaths. So why do technology companies feel justified in bringing AI products to market even as they acknowledge the risk of extinction?
Even before generative AI emerged, Big Tech companies and the attention economy faced mounting criticism for their detrimental effects. Popular platforms such as Snapchat, Instagram, and TikTok are engineered to trigger addictive dopamine surges in users’ brains, much as cigarettes do. The scientific consensus indicates that digital media is harming mental health, particularly among children.
AI has significantly amplified the attention economy, posing an entirely new set of risks with uncertain scope. While calls for regulation are growing louder, they often seem like corporate public relations campaigns or stalling tactics. After all, regulators and governments do not possess a complete understanding of AI-based products or the risks they entail; only companies do.
It falls upon companies to ensure that they do not deliberately cause harm and to rectify any problems they create. Simultaneously, governments must shoulder the responsibility of holding these companies accountable. However, traditional accountability mechanisms tend to kick in too late for rapidly advancing technologies like AI.
Consider the case of Purdue Pharma, whose owners, the Sackler family, failed to act responsibly when they discovered the dangers of OxyContin. Had they taken steps to curtail overprescription, the devastating opioid crisis plaguing the United States could have been averted. By the time the government intervened, countless lives had already been lost, and entire communities had been shattered. No lawsuit or fine can undo that irreversible damage.
When it comes to AI, companies must do better. They must act swiftly, before AI-driven tools become so deeply ingrained in everyday life that their risks are normalized and containment becomes impossible.
During my tenure at Nike, a combination of external pressure and an internal commitment to doing the right thing led to a fundamental overhaul of the company’s business model. Similarly, the nascent AI industry is currently feeling external pressure, as evidenced by the White House securing voluntary commitments from seven leading AI companies to develop safe and trustworthy products. These commitments align with the Blueprint for an AI Bill of Rights introduced last year. However, vague voluntary guidelines leave too much room for maneuvering.
The collective future of our society hinges on companies deciding to do what is right within their boardrooms, executive meetings, and closed-door strategy sessions. Companies need a clear North Star to guide their pursuit of innovation. Google’s early corporate credo, “Don’t be evil,” embodies this sentiment. No corporation should put profit ahead of preventing harm it knows it is causing.
It is insufficient for companies to simply boast about hiring former regulators or propose potential solutions. Instead, they must develop credible and effective AI action plans that answer crucial questions:
– What are the potential unintended consequences of AI?
– How are identified risks being mitigated?
– What measures can regulators employ to monitor companies’ mitigation efforts and hold them accountable?
– What resources do regulators require to fulfill this task effectively?
– How will we gauge the efficacy of these guardrails?
The AI challenge should be tackled with the same urgency as any corporate sprint. Requiring companies to commit to an action plan within 90 days is both reasonable and realistic. There should be no room for excuses, and missed deadlines should carry substantial fines. The plan does not need to be flawless at the outset and will likely require adjustment as we learn more. Committing to it, however, is essential.
Big Tech must display the same level of commitment to safeguarding humanity as they do to maximizing profits. If corporations remain fixated solely on their bottom line, we are all destined for dire consequences.
Maria Eitel served as Nike’s founding Vice President of Corporate Responsibility before founding the Nike Foundation and Girl Effect.