
Twitter To Offer Prizes Of Up To $3,500 As Bug Bounty If Someone Finds AI Bias In Its Cropping Mechanism


Twitter, the microblogging network, has come up with a new way to deal with its AI problems. Over the past few days, the company has been grappling with issues in its image-cropping feature, which is meant to trim users' photos and surface the most relevant part of each image as people scroll through their feeds.

The cropping is handled by an automated feature built on what is often called a saliency algorithm. Twitter started using the saliency algorithm in 2018; it works by estimating which part of a picture a person is likely to look at first and cropping around it.
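Twitter has not published the cropper itself, but the general idea can be sketched in a few lines: a saliency model assigns every pixel a score, and the crop window is centered on the highest-scoring point. The snippet below is only an illustrative sketch; the `saliency_map` input (assumed to come from some pretrained saliency model) and the `crop_around_saliency` helper are assumptions, not Twitter's implementation.

```python
import numpy as np

def crop_around_saliency(image, saliency_map, crop_h, crop_w):
    """Crop `image` to (crop_h, crop_w), centering the window on the
    most salient pixel. `saliency_map` is a 2D array of per-pixel scores
    produced by some saliency model (a hypothetical stand-in here)."""
    img_h, img_w = saliency_map.shape
    # Point the model predicts a viewer would look at first.
    y, x = np.unravel_index(np.argmax(saliency_map), saliency_map.shape)
    # Center the crop on that point, clamped so it stays inside the image.
    top = int(np.clip(y - crop_h // 2, 0, img_h - crop_h))
    left = int(np.clip(x - crop_w // 2, 0, img_w - crop_w))
    return image[top:top + crop_h, left:left + crop_w]
```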

Twitter found bugs in this cropping mechanism: the system turned out to be racially biased, favoring white people over Black people. The automatic cropper drew criticism from many users for this behavior when the bias was first spotted last year.
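One simple way to probe the kind of bias users reported is a paired test: build images that each contain one face from two different demographic groups and count how often the cropper keeps one group's face. A minimal sketch of that tally, assuming you already have the cropper's peak saliency score for each face (both the scores and the helper below are hypothetical, not Twitter's methodology), could look like this:

```python
def crop_preference_rate(scores_group_a, scores_group_b):
    """Given peak saliency scores for paired faces (one from group A and
    one from group B in the same composite image), return the fraction of
    pairs where group A's face scores higher and would therefore be kept.
    A rate far from 0.5 suggests a systematic preference."""
    assert len(scores_group_a) == len(scores_group_b)
    wins_a = sum(a > b for a, b in zip(scores_group_a, scores_group_b))
    return wins_a / len(scores_group_a)

# Example: in 4 hypothetical paired images, group A's face "wins" 3 times.
print(crop_preference_rate([0.9, 0.7, 0.8, 0.4], [0.5, 0.6, 0.3, 0.6]))  # 0.75
```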

To address the problem, Twitter took an inventive route and launched a bounty contest, inviting outsiders to detect flaws in the mechanism, with prizes of up to $3,500 for whoever finds them. Entrants need to uncover the problems and demonstrate how the algorithm mishandles or misinterprets individual photos.

Twitter wants to use the bug bounty model to bring in outside and global experts and let them showcase their skills. By rewarding them, it hopes other companies will adopt the practice and help these AI bug bounties grow. Twitter wants to set a precedent that other companies follow.

Bug bounties have previously been limited to reporting security defects; extending them to AI bugs like this is the innovation Twitter wants to pioneer.

This approach will help Twitter involve more people in the project, people who previously lacked the resources and time, or who had the skills but never got the right platform to display them, said Rumman Chowdhury, director of Twitter's Machine Learning Ethics, Transparency and Accountability team. She also said the company wants to build a community of ethical AI hackers that can boost the company's performance.

AI systems can cause real harm to individuals through bias, even when no harm is intended. These biases and stereotypes get embedded in a system through gaps in training data and testing, and they need to be corrected over time by fixing such AI bugs. This project aims to bring together ideas from different people rather than reinforce a wrong set of standards and stereotypes.

AI systems have long been expected to grasp what people actually want rather than merely follow the rules coded into them; they are meant to use their own learned judgment to detect problems and solve them. This covers tedious and time-consuming tasks such as understanding language, screening spam, and identifying faces to unlock a device.

Google suffered from a similar problem when its photo-labeling AI tagged Black people as gorillas, a flaw it had to fix. Twitter plans to fix its AI photo-cropping mechanism soon.
