MIT study finds labeling errors in datasets used to test AI




A team led by MIT computer scientists examined ten of the most-cited datasets used to test AI systems. They found that about 3.4% of the data was inaccurate or mislabeled, which could cause problems for AI systems that rely on these datasets.

The datasets, which have been cited more than 100,000 times, include text-based ones drawn from newsgroups, Amazon, and IMDb. Mistakes include Amazon product reviews that were labeled as positive when they were actually negative, and vice versa.

Some of the image-based errors come from mixing up animal species. Others stem from photos labeled for a less prominent object in the frame ("water bottle" instead of the mountain bike it's attached to, for example). One particularly glaring example was a baby mistaken for a nipple.

AudioSet, which is built from audio clips of YouTube videos, had problems too. A clip of a YouTuber speaking to the camera for three and a half minutes was tagged as a "church bell", even though a bell can only be heard in the last 30 seconds or so. In another clip, a music performance was misclassified as an orchestra.

To find possible errors, the researchers used a framework called confident learning, which examines datasets for label noise (incorrect or irrelevant labels). They then had the flagged items checked by human reviewers on Mechanical Turk and found that about 54% of the data flagged by the algorithm did indeed have incorrect labels. The most errors turned up in QuickDraw: around 5 million, or roughly 10% of the dataset. The team has published the label errors on a website (labelerrors.com) so everyone can go through them.
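The underlying idea is straightforward: take any trained classifier, look at its out-of-sample predicted probabilities, and treat a label as suspect when the model confidently prefers a different class. Below is a minimal, illustrative sketch of that kind of thresholding rule in NumPy; the function name and the simple per-class threshold are assumptions for the example, not the study's exact implementation (the authors' full method is available in the open-source cleanlab library).

```python
import numpy as np

def flag_label_issues(labels, pred_probs):
    """labels: (n,) given labels; pred_probs: (n, k) out-of-sample
    predicted probabilities from any classifier."""
    n, k = pred_probs.shape
    # Per-class threshold: the model's average confidence on the examples
    # that are labeled as that class.
    thresholds = np.array([
        pred_probs[labels == j, j].mean() if np.any(labels == j) else 1.0
        for j in range(k)
    ])
    # Flag an example when some other class clears its threshold
    # but the given label does not.
    above = pred_probs >= thresholds  # (n, k) boolean, broadcast per class
    suspect = [
        i for i in range(n)
        if above[i].any() and labels[i] not in np.where(above[i])[0]
    ]
    return np.array(suspect, dtype=int)

# Toy example: three classes, one obviously mislabeled point.
labels = np.array([0, 0, 1, 2, 2])
pred_probs = np.array([
    [0.90, 0.05, 0.05],
    [0.80, 0.10, 0.10],
    [0.10, 0.85, 0.05],
    [0.05, 0.05, 0.90],
    [0.85, 0.10, 0.05],  # labeled class 2, but the model is sure it's class 0
])
print(flag_label_issues(labels, pred_probs))  # -> [4]
```

Because the model itself can be wrong, anything flagged this way is only a candidate error, which is why the researchers still put the suspect items in front of human reviewers.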

Some of the errors are relatively minor, and others come down to hair-splitting (a close-up of a Mac command key labeled as a "computer keyboard" is still in the right ballpark). The confident learning approach itself was sometimes wrong too, such as flagging a correctly labeled picture of tuning forks as a menorah.

If the labels are even slightly off, it could have huge ramifications for machine learning systems. If an AI system can't tell the difference between a grocery store and a bunch of crabs, it would be hard to trust it to pour you a drink.
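To make that concrete, here is a rough simulation with made-up numbers showing how a 3.4% label error rate in a test set pulls the accuracy you measure a few points away from a model's true accuracy; the 90% "true" accuracy and the uniform corruption are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)
n_examples, n_classes = 10_000, 10
true_labels = rng.integers(0, n_classes, size=n_examples)

# A hypothetical model that is right 90% of the time.
preds = true_labels.copy()
wrong = rng.random(n_examples) > 0.90
preds[wrong] = (preds[wrong] + rng.integers(1, n_classes, size=wrong.sum())) % n_classes

# Corrupt 3.4% of the *test labels*, mirroring the error rate in the study.
test_labels = true_labels.copy()
bad = rng.random(n_examples) < 0.034
test_labels[bad] = (test_labels[bad] + rng.integers(1, n_classes, size=bad.sum())) % n_classes

print("accuracy vs. true labels:     ", (preds == true_labels).mean())
print("accuracy vs. corrupted labels:", (preds == test_labels).mean())
# The benchmark score ends up off by roughly the label error rate,
# which is enough to blur the gap between closely matched models.
```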
