Machine learning algorithms promise efficient, data-driven decisions, but their outputs have often been found to exacerbate existing biases and inequities rather than reduce them. Computer scientists have tried to build fairness into these systems by designing various fairness principles, but no single principle has proven consistently applicable. Scholars from the University of North Carolina at Chapel Hill offer an alternative approach: their research attaches explicit costs to a model’s classification errors and distinguishes between different types of costs, training the algorithm to account for context and asymmetries rather than treating every decision uniformly. This mindset is akin to that of economists, who reason in terms of cost-benefit analysis and utility functions. The work arrives amid growing societal concern over AI’s profound risks to society and humanity, which has prompted prominent figures such as Geoffrey Hinton, Bill Gates, Elon Musk, and Steve Wozniak to call for urgent guardrails sooner rather than later.
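The cost-based idea described above can be sketched in a few lines. The snippet below is an illustrative example of cost-sensitive classification in general, not the UNC researchers' actual method: the cost matrix values and function names are hypothetical, chosen only to show how asymmetric error costs shift a model's decisions away from a uniform threshold.

```python
# Hypothetical cost matrix: COSTS[true_label][predicted_label].
# Here a false negative (true 1, predicted 0) is assumed to be five
# times costlier than a false positive -- the kind of asymmetry
# the article describes.
COSTS = {
    0: {0: 0.0, 1: 1.0},  # true negative costs 0, false positive costs 1
    1: {0: 5.0, 1: 0.0},  # false negative costs 5, true positive costs 0
}

def expected_cost(p_positive: float, predicted: int) -> float:
    """Expected cost of a prediction, given P(true label = 1)."""
    return (1 - p_positive) * COSTS[0][predicted] + p_positive * COSTS[1][predicted]

def cost_sensitive_decision(p_positive: float) -> int:
    """Pick whichever label has the lower expected cost."""
    return min((0, 1), key=lambda label: expected_cost(p_positive, label))

# With these costs the decision threshold drops from 0.5 to
# 1 / (1 + 5) = 0.167, so borderline cases are classified positive:
print(cost_sensitive_decision(0.2))  # -> 1
print(cost_sensitive_decision(0.1))  # -> 0
```

Changing the entries of the cost matrix changes the decision threshold, which is how context and asymmetry enter the model instead of a one-size-fits-all cutoff.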
Achieving Fairness in Machine Learning: New Approach by UNC Researchers