How Firms Are Creating Unbiased AI | US Computer Academy
TECHNOLOGY FIRMS SCRAMBLE TO MAKE AI UNBIASED:
Artificial Intelligence (AI) has established itself as a guiding force in our lives. The growing importance of this technology, however, has also made people aware of the biases that have become part of it, increasing the pressure on technology companies to make amends. Google, for example, has now established an external advisory council to help the tech giant develop AI technology in an ethical and responsible way.
E-commerce giant Amazon this month announced that it was working with the National Science Foundation (NSF), with each organisation committing up to $10 million in research grants over the next three years focused on fairness in AI. “We believe we must work closely with academic researchers to develop innovative solutions that address issues of fairness, transparency, and accountability, and to ensure that biases in data don’t get embedded in the systems we create,” Prem Natarajan, Vice President of Natural Understanding in the Alexa AI group at Amazon, wrote in a blog post.

Biased automation tools could push disadvantaged communities further to the margins. Imagine, for example, an AI recruitment tool that considers women to be less intelligent. If a job portal employs such a tool, it is more likely to recommend males to an organisation planning to hire new people.
What about the AI assistants that we have on our devices? If Google Assistant, Siri, or for that matter Alexa, talks to us in a female voice, it could make our kids believe that women, not men, are supposed to be assistants.

Making AI unbiased has therefore become essential for protecting human freedom, ensuring equal opportunities for all, and fighting discrimination. But why do AI tools show bias and reflect the prejudices that already exist in our society?
This is partly because the community that builds AI does not adequately reflect the diversity in the world. According to a 2018 World Economic Forum Report, only 22 per cent of AI professionals globally are female.
“If AI systems are built only by one representative group, such as all male, all Asian or all Caucasian, then they are more likely to create biased results,” Mythreyee Ganapathy, Director, Program Management, Cloud and Enterprise, Microsoft, told IANS.

As she points out, making AI unbiased may not ultimately make the world fair for all, yet it could be an important step towards fighting prejudice and creating equal opportunities.