Researchers Reduce Bias in AI Models while Maintaining or Improving Accuracy
Machine-learning models can fail when they attempt to make predictions for people who were underrepresented in the datasets they were trained on.
For example, a model that predicts the best treatment option for someone with a chronic disease may be trained using a dataset that consists mostly of male patients. That model may make incorrect predictions for female patients when deployed in a hospital.
To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is appealing, it often requires removing a large amount of data, hurting the model's overall performance.
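To make the cost concrete, here is a minimal sketch of balancing by downsampling. The function name and array layout are illustrative assumptions, not from the article.

```python
import numpy as np

def balance_by_downsampling(X, y, groups, seed=0):
    """Randomly drop points from larger subgroups until every subgroup
    matches the size of the smallest one."""
    rng = np.random.default_rng(seed)
    unique_groups, counts = np.unique(groups, return_counts=True)
    target = counts.min()  # every subgroup is cut down to the smallest
    keep = []
    for g in unique_groups:
        idx = np.flatnonzero(groups == g)
        keep.extend(rng.choice(idx, size=target, replace=False))
    keep = np.sort(np.asarray(keep))
    return X[keep], y[keep], groups[keep]
```

With a 90/10 split between two subgroups, this discards eight-ninths of the majority group's data, which is the efficiency cost the MIT technique aims to avoid.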
MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer data points than other approaches, this technique maintains the model's overall accuracy while improving its performance on underrepresented groups.
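The article doesn't describe the authors' exact estimator, so the sketch below only conveys the general recipe with a simplified stand-in score: rank training points by a first-order proxy for how much each one hurts a minority subgroup's loss, then drop only the top-ranked offenders. All names and the gradient-alignment scoring are assumptions.

```python
import numpy as np

def select_points_to_remove(train_grads, minority_grad, k):
    """train_grads: (n, d) per-example loss gradients on the training set.
    minority_grad: (d,) mean loss gradient on held-out minority examples.
    Returns indices of the k training points estimated to be most harmful."""
    # First-order proxy (a stand-in, not the authors' estimator): a training
    # point whose gradient opposes the minority group's gradient pushes the
    # model in a direction that increases minority-group loss.
    scores = -(train_grads @ minority_grad)  # higher = more harmful
    return np.argsort(scores)[-k:]

# Usage sketch: retrain on everything except the flagged points.
# remove_idx = select_points_to_remove(train_grads, minority_grad, k=100)
# keep_idx = np.setdiff1d(np.arange(len(train_grads)), remove_idx)
```

Because k can be small relative to the dataset size, most of the training data, and with it the model's overall accuracy, is preserved.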
In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.
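One generic way such hidden subgroups can be surfaced is sketched below: cluster the model's feature embeddings and flag clusters with unusually high error rates. This assumes prediction errors are available even though subgroup annotations are not, and it is a common diagnostic pattern rather than the paper's specific procedure.

```python
import numpy as np
from sklearn.cluster import KMeans

def find_suspect_clusters(embeddings, errors, n_clusters=10, seed=0):
    """embeddings: (n, d) model features; errors: (n,) 1 if misclassified.
    Returns cluster ids ordered from highest to lowest error rate."""
    labels = KMeans(n_clusters=n_clusters, random_state=seed,
                    n_init=10).fit_predict(embeddings)
    rates = np.array([errors[labels == c].mean() for c in range(n_clusters)])
    order = np.argsort(rates)[::-1]  # clusters most likely hiding a bias
    return order, rates[order]
```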
This approach could also be combined with other methods to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure underrepresented patients aren't misdiagnosed due to a biased AI model.
"Many other algorithms that attempt to address this issue assume each datapoint matters as much as every other datapoint. In this paper, we are showing that presumption is not true. There are particular points in our dataset that are contributing to this predisposition, and we can discover those data points, remove them, and improve performance," states Kimia Hamidieh, an electrical engineering and computer technology (EECS) graduate trainee at MIT and co-lead author of a paper on this strategy.
She wrote the paper with co-lead authors Saachi Jain PhD '24 and fellow EECS graduate student Kristian Georgiev.