
Twitter initiates bug bounty programme to root out algorithmic bias


Twitter has launched a bug bounty programme to root out algorithmic bias in its artificial intelligence (AI) systems. The microblogging platform will reward those who find as-yet-undiscovered examples of bias in its image-cropping algorithm.

In April, Twitter said it would study potential unintentional harms caused by the cropping algorithm it adopted in 2018, which tries to focus image previews on the most interesting parts of a photo. The announcement came after users criticized how the platform handled automated cropping, claiming the algorithm tended to focus on lighter-skinned people in photos.
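The cropping model at issue is a saliency model: it scores regions of an image for visual interest and centres the preview on the highest-scoring window. The sketch below illustrates that general idea only; it substitutes a toy gradient-energy map for Twitter's trained neural saliency model, and none of the names in it come from Twitter's released code.

```python
# Minimal, hypothetical sketch of saliency-style preview cropping.
# NOT Twitter's code: "saliency" here is just local gradient energy.
import numpy as np

def toy_saliency(image: np.ndarray) -> np.ndarray:
    """Approximate saliency as the gradient magnitude of a grayscale view."""
    gray = image.mean(axis=2) if image.ndim == 3 else image
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def crop_preview(image: np.ndarray, crop_h: int, crop_w: int):
    """Return (row, col) of the crop window with the highest summed saliency."""
    sal = toy_saliency(image)
    # Integral image lets us score every candidate window in O(1) each.
    integral = np.pad(sal, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = sal.shape
    best_score, best_pos = -1.0, (0, 0)
    for r in range(h - crop_h + 1):
        for c in range(w - crop_w + 1):
            score = (integral[r + crop_h, c + crop_w] - integral[r, c + crop_w]
                     - integral[r + crop_h, c] + integral[r, c])
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Example: choose a 100x150 preview window from a random 300x400 "image".
rng = np.random.default_rng(0)
image = rng.random((300, 400, 3))
print(crop_preview(image, 100, 150))
```

Bias enters a pipeline like this when the learned saliency scores are systematically higher for some groups of people than others, which is exactly the kind of behaviour the bounty asks participants to surface.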

In a blog post, Twitter said it had shared its approach to identifying bias in the platform's image-cropping algorithm. “We made our code available for others to reproduce our work. We want to take this work a step further by inviting and incentivizing the community to help identify potential harms of this algorithm beyond what we identified ourselves.”

The platform said this is the industry’s first algorithmic bias bounty competition and is offering cash prizes of up to $3,500. Rumman Chowdhury, director of Twitter’s Machine Learning Ethics, Transparency and Accountability team, said in a tweet that the company is running the contest because it believes people should be rewarded for identifying these issues. “We can’t solve these challenges alone.”
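Purely as an illustration of what an entry might look like, one way to quantify cropping bias is to run the cropper over balanced image pairs (one person from each demographic group per image) and compare how often the crop lands on each group. The helper below is a hypothetical sketch of that tally; the group labels and numbers are placeholders, not real measurements or part of Twitter's competition materials.

```python
# Hypothetical disparity tally for an algorithmic-bias probe; placeholder data.
from collections import Counter

def crop_share(crop_choices):
    """Given which group each automated crop centred on, return each group's share."""
    counts = Counter(crop_choices)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Made-up outcomes from running a cropping model over balanced image pairs;
# a large, systematic skew toward one group would be evidence of bias.
observed = ["group_a", "group_a", "group_b", "group_a", "group_a", "group_b"]
print(crop_share(observed))  # {'group_a': 0.66..., 'group_b': 0.33...}
```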

AI can cause problems, including denigrating particular populations or reinforcing stereotypes, if the software isn’t trained effectively. Twitter’s project is designed to solidify standards around ideas like representational harm.

According to CNET, AI has revolutionized computing by teaching devices how to make decisions based on real-world data instead of rigid programming rules. That helps with messy tasks like understanding speech, screening spam and recognizing a user’s face to unlock their phone. The report stated that the algorithms that power AI can be opaque and reflect problems in their training data, which has led to failures such as Google mistakenly labeling Black people as gorillas in photos.

Fixing AI problems is important as we rely on the technology to run more and more of our digital lives. It can also matter within companies: Google acknowledged that its handling of an AI ethics issue hurt its program’s reputation.

