Researchers Reduce Bias in AI Models while Maintaining or Improving Accuracy
Machine-learning models can fail when they try to make predictions for individuals who were underrepresented in the datasets they were trained on.
For instance, a model that predicts the best treatment option for someone with a chronic disease might be trained using a dataset that contains mostly male patients. That model might make inaccurate predictions for female patients when deployed in a hospital.
To improve outcomes, engineers can try balancing the training dataset by removing data points until all subgroups are represented equally. While dataset balancing is appealing, it often requires removing large amounts of data, hurting the model's overall performance.
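As a rough illustration of that balancing step, the sketch below subsamples every subgroup down to the size of the smallest one. The arrays `X` (features), `y` (labels), and `groups` (subgroup membership) are hypothetical names, not from the article; the point is to show why balancing can discard so much data.

```python
import numpy as np

def balance_by_subsampling(X, y, groups, seed=0):
    """Return a subset of (X, y) in which every subgroup is equally represented.

    Hypothetical sketch: each subgroup is downsampled to the size of the
    smallest one, so large majority groups lose most of their data.
    """
    rng = np.random.default_rng(seed)
    unique_groups = np.unique(groups)
    # The smallest subgroup dictates how many points every group may keep.
    min_size = min((groups == g).sum() for g in unique_groups)
    keep = []
    for g in unique_groups:
        idx = np.flatnonzero(groups == g)
        keep.extend(rng.choice(idx, size=min_size, replace=False))
    keep = np.array(keep)
    return X[keep], y[keep]
```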
MIT researchers developed a new technique that identifies and removes the specific points in a training dataset that contribute most to a model's failures on minority subgroups. By removing far fewer datapoints than other approaches, this technique maintains the overall accuracy of the model while improving its performance on underrepresented groups.
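The article does not spell out the algorithm, so the sketch below stands in with a simple first-order influence proxy: each training point is scored by how strongly its loss gradient opposes the average loss gradient on the underrepresented subgroup, and only the most harmful points are dropped before retraining. The logistic-regression model, the scoring rule, and all names (`harmful_indices`, `target_group`, `k`) are assumptions for illustration, not the researchers' actual method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression; returns the weight vector."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def per_example_grads(X, y, w):
    """Gradient of the logistic loss at each training point, shape (n, d)."""
    return (sigmoid(X @ w) - y)[:, None] * X

def harmful_indices(X, y, groups, target_group, k):
    """Indices of the k training points estimated to hurt the target subgroup most.

    First-order proxy (curvature ignored): a point whose gradient points
    *against* the subgroup's average gradient means that descending this
    point's loss during training pushes the subgroup's loss up.
    """
    w = train_logreg(X, y)
    g_train = per_example_grads(X, y, w)
    mask = groups == target_group
    g_group = per_example_grads(X[mask], y[mask], w).mean(axis=0)
    scores = g_train @ g_group
    # Most negative alignment first: these are the candidates to remove.
    return np.argsort(scores)[:k]

# Usage sketch: drop the flagged points, then retrain on the remainder.
# bad = harmful_indices(X, y, groups, target_group="female", k=50)
# keep = np.setdiff1d(np.arange(len(y)), bad)
# w_debiased = train_logreg(X[keep], y[keep])
```

Unlike wholesale balancing, a targeted approach like this removes only the handful of points flagged as harmful, which is how overall accuracy can be preserved.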
In addition, the technique can identify hidden sources of bias in a training dataset that lacks labels. Unlabeled data are far more prevalent than labeled data for many applications.
This technique could also be combined with other approaches to improve the fairness of machine-learning models deployed in high-stakes situations. For example, it might someday help ensure that underrepresented patients aren't misdiagnosed due to a biased AI model.
"Many other algorithms that try to resolve this problem assume each datapoint matters as much as every other datapoint. In this paper, we are revealing that presumption is not real. There are particular points in our dataset that are adding to this predisposition, and we can find those data points, eliminate them, and get much better performance," says Kimia Hamidieh, an electrical engineering and computer technology (EECS) graduate trainee at MIT and co-lead author of a paper on this method.
She wrote the paper with co-lead authors Saachi Jain PhD '24 and fellow EECS graduate student Kristian Georgiev.