Microsoft recently announced a new approach to keeping the ethics of its ChatGPT-powered AI technology in check. Chief Responsible AI Officer Natasha Crampton explained why Microsoft chose to disband its Ethics & Society team, saying that a single team could not meet the company's objectives. The Redmond firm has opted for a different route: embedding responsible AI across all business groups and building a network of 'responsible AI champions'. With almost 350 people now dedicated to ensuring Microsoft fosters ethical AI practices, Crampton also discussed how some members of the disbanded team were embedded in other teams and how recent layoffs affected the company's efforts.
Microsoft's focus on ethical AI practices is an important move to prevent unethical scenarios like those Bing Chat has become associated with. With Geoffrey Hinton voicing concerns about the rapid pace of AI development, and an open letter calling for a pause on development so that risks can be better understood, it is crucial that Microsoft prove it can be trusted to put ethical practices ahead of short-term gains. To achieve this, AI experts and staff across all business groups must be in sync and well versed in the implications of responsible AI. Only then can Microsoft ensure that ChatGPT avoids similar unethical scenarios in the future.