The growing interest in artificial intelligence (AI) has sparked concerns about its ethical use and transparency. Many worry about issues such as fairness, bias, liability, and societal impact as AI takes on decision-making tasks traditionally done by humans. In a recent interview with TechCircle, Ivana Bartoletti, Global Chief Privacy Officer at Wipro, highlights the importance of collaboration between businesses and governments in addressing the challenges associated with data misuse, including misinformation and deepfakes. She emphasizes the need to establish a framework for governing and regulating investment in these technologies. Bartoletti also shares her insights on effective AI practices for CXOs in 2024 and on promoting greater representation of women leaders in the AI field.
The rise of AI has had a significant impact on the business landscape. Businesses now recognize the crucial role privacy and data protection play in gaining customer trust and achieving success. With the increase in data collection and a growing awareness of the negative consequences of data misuse, people’s views on privacy have shifted. AI’s reliance on data has further fueled the conversation surrounding privacy. To address these concerns, governments and companies are developing responsible AI frameworks, with more cases expected to emerge in the coming months. By prioritizing data protection, businesses can establish trust and drive growth.
While there is a lot of talk about safe AI practices, implementation has been limited. Wipro takes a proactive approach to AI safety by focusing on responsibility, and has developed a comprehensive framework with four pillars: individual, technical, societal, and environmental. The individual dimension emphasizes privacy and security, ensuring policies do not discriminate and fostering transparency in data usage. The technical dimension prioritizes robust and safe AI systems, embedding security measures from the beginning. The societal dimension focuses on compliance with privacy laws and regulations, building trust with clients. Lastly, the environmental dimension considers the environmental footprint of AI systems and promotes the use of synthetic data and smart data processing.
Ensuring the proper testing and validation of machine learning (ML) and AI models is crucial. Unfortunately, many instances have occurred where AI models were deployed without sufficient scrutiny, resulting in discriminatory algorithms and the perpetuation of existing inequalities. To address this, due diligence is necessary before rolling out AI models. Monitoring and evaluation must continue afterward to ensure safety and robustness. Initiatives like red teaming can be particularly useful for high-risk AI applications.
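The kind of pre-deployment scrutiny described above can take many forms; one simple, widely used check is comparing a model's positive-outcome rates across demographic groups. The sketch below is a minimal illustration of that idea; the group names, predictions, and tolerance threshold are hypothetical, not drawn from any Wipro practice.

```python
# A minimal sketch of one pre-deployment fairness check: comparing a
# model's positive-outcome rates across groups (demographic parity).
# All data and the 0.1 tolerance below are hypothetical illustrations.

def positive_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 predictions."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(predictions_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(p) for p in predictions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model predictions (1 = approved, 0 = rejected) per group.
predictions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approved
}

gap = demographic_parity_gap(predictions)
print(f"Demographic parity gap: {gap:.3f}")  # -> 0.375

# One illustrative policy: flag the model for human review before
# rollout if the gap exceeds a chosen tolerance.
if gap > 0.1:
    print("Gap exceeds tolerance -- model needs review before rollout")
```

A check like this is only a starting point: the same comparison would need to run continuously in production, as the article notes, since model behavior can drift after deployment.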
Looking ahead to 2024, effective governance will play a critical role in achieving a balance between regulation and innovation in AI. The adoption of best practices, codes of conduct, and new legislation is expected to accelerate globally. Companies will take proactive steps to implement responsible AI by investing in upskilling their workforce and fostering collaboration between different teams to drive innovation. Sandboxes, safe spaces for developing AI technologies responsibly while ensuring compliance with privacy, security, and legal requirements, will also grow in importance.
As AI platforms like ChatGPT gain popularity, concerns about job replacement emerge. Both companies and governments have a role to play in addressing these concerns. Companies should embrace AI as a productivity-enhancing tool and provide training and support to their workforce. Wipro, for example, is training its entire workforce on the responsible use of generative AI. Governments should assess the impact of AI and invest in digitalization, creating demand and combating exclusion. It is important for both government and business to distinguish real use of AI from hype, ensuring employees and citizens maintain a critical outlook.
Wipro is at the forefront of addressing the risks associated with AI. The company has introduced a policy for the responsible use and development of generative AI and rolled out a training program for its entire workforce. A task force brings together leaders from across the company to establish a governance model based on the three lines of defense approach. Anticipating potential AI risks, Wipro's four-pillar framework addresses privacy, security, unfairness, opacity, inequality, and environmental impact.
Furthermore, Wipro’s initiative, Women Leading in AI, highlights the importance of increasing the representation of women in decision-making and leadership roles in shaping the future of AI. This diversity is essential in developing the necessary tools and determining the purpose and governance of AI. Addressing bias and making significant decisions about AI strategies requires diverse perspectives. The Women Leading in AI Network aims to achieve this by recognizing that AI encompasses a wide range of skills and expertise.
The conversation surrounding ethical AI use and the need for collaboration is growing in importance. Businesses and governments must work together to address concerns and establish frameworks for responsible AI. By prioritizing privacy and data protection, implementing responsible AI practices, and promoting diversity in decision-making, the potential of AI can be harnessed for the benefit of society.