Ten Rules of Engagement for AI Ethics in Advertising


AI is changing the way we work. Many of us already use tools that have AI baked in; Google and Microsoft both recently announced AI-powered search engines. While AI has the potential to free us up to dive deeper into topics, think more creatively, and relax, it is crucial that we create a set of ethical guidelines for using this powerful tool in the workplace.

At a creative agency, I have witnessed a wide range of debates about when and how to use AI in a professional setting. Questions such as whether it is appropriate to use ChatGPT to compose a peer review or to generate AI mockups for a presentation remain unresolved. To start resolving them, we must establish a code of etiquette to govern workplace applications of AI.

The following set of principles is meant to provide accessible guidelines for the everyday professionals making decisions about AI in the workplace. They offer a useful first test for whether a task is appropriate for AI: if someone wouldn't admit to using AI for it, or be proud to share the result, it is probably not a good use case. We must recognize that AI is just a tool, and that the responsibility for its use ultimately lies with us.

We must also put significant thought into the inputs we give AI, resisting the temptation to craft prompts calibrated to produce biased results. We should be open and transparent about how we use AI and its outputs, and educate ourselves to ensure that its use complies with all relevant regulations and ethical codes.


In addition, as AI is used to make more and more decisions, companies should reveal the "logic involved" in that decision making, either by complying with GDPR standards or by implementing their own level of transparency. It is our responsibility both to challenge AI outcomes and to take responsibility for them; we cannot treat AI as all-knowing.

AI-based tools must also be regularly checked for bias, and companies should prioritize value over efficiency in order to protect the dignity of their employees. We should also stay aware of how much time we spend with machines versus with each other, using our newfound work time to reflect and pursue the bigger picture.

Finally, the need for ways to challenge AI outcomes and a more rigorous regulatory framework must be recognized. We must all take responsibility for using AI in a respectful manner, and urge elected leaders to find ways to ensure that AI empowers people and not just institutions. Ultimately, it is up to us to make informed, ethical decisions about how we use AI in the workplace.

