Friday 27 October 2023

What is AI bias? [+ Data]

According to our State of AI Survey Report, one of the biggest concerns marketers have about generative AI is that it can be biased.

Marketers, salespeople, and customer service representatives also say they're hesitant to use AI tools because those tools can occasionally produce biased information.

Business leaders clearly worry about bias in AI, but what causes it in the first place? In this post, we'll cover the risks of using AI, real-world examples of AI bias, and ways society can reduce those risks.

What is AI bias?

AI bias is the tendency of machine learning algorithms to produce skewed outputs when performing tasks such as content generation or data analysis. These biases usually reinforce harmful stereotypes, such as those related to gender and ethnicity.
The Artificial Intelligence Index Report 2023 describes AI as biased when it generates results that reinforce and perpetuate harmful stereotypes about particular groups, and as fair when its predictions and outputs neither discriminate against nor favor any particular group.

Beyond learned stereotypes, AI can also be biased because of:

  • Sample selection bias, where the training data is not representative of all populations, so predictions and recommendations can't be generalized to the groups that were left out (a minimal check is sketched below).
  • Measurement bias, where a flawed data collection process leads the AI to produce skewed results.
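
To make the sample selection point concrete, here is a minimal sketch of one way to check whether a training set mirrors the population it is meant to serve, by comparing group proportions. The group names, proportions, and the flagging threshold below are hypothetical, not something prescribed by any particular toolkit:

```python
from collections import Counter

def group_shares(labels):
    """Return each group's share of a dataset, given one group label per record."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical group labels for the training records, and the share of
# each group in the population the model is meant to serve.
training_groups = ["group_a"] * 80 + ["group_b"] * 15 + ["group_c"] * 5
population_share = {"group_a": 0.60, "group_b": 0.25, "group_c": 0.15}

train_share = group_shares(training_groups)
for group, expected in population_share.items():
    observed = train_share.get(group, 0.0)
    # Flag any group whose share of the training data is far below its
    # share of the population (threshold chosen arbitrarily for this sketch).
    flag = "  <-- under-represented" if observed < 0.5 * expected else ""
    print(f"{group}: {observed:.0%} of training data vs {expected:.0%} of population{flag}")
```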

How does AI bias reflect society's bias?

Because society is biased, AI is biased as well. 

Because society is prejudiced, much of the data used to train AI systems carries those societal biases, and AI picks them up and produces outcomes that reinforce them. For example, an image generator asked to create an image of a CEO might produce images of white men because of historical employment bias in the data it learned from.

Many people believe that as AI becomes more widespread, it will magnify the prejudices that already exist in society and harm a wide range of social groups.

AI Bias Examples

According to the AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) Repository, the number of newly reported AI incidents and controversies was 26 times higher in 2021 than in 2012.

Let's review a few examples of bias in AI.

Mortgage approval rates are a prime illustration of bias in AI. Because historical lending data disproportionately shows minorities being denied loans and other financial opportunities, algorithms trained on that data have been found to be 40–80% more likely to deny borrowers of color. Each new application the AI processes reinforces the bias it learned from that history.

Sample size bias also appears in the medical domain. Suppose a doctor uses AI to examine patient data, identify trends, and suggest courses of action. If that doctor primarily treats White patients, the recommendations aren't based on a representative population sample and might not meet everyone's specific medical needs.

Some companies have deployed algorithms that led to biased decision-making in real life, or raised the likelihood of it.

1. Amazon's Recruitment Algorithm

Amazon built a hiring algorithm and trained it on ten years of job-application data. Because that data reflected a male-dominated workforce, the algorithm learned to discriminate against female applicants, penalizing resumes submitted by women or containing the word "women('s)".

2. Twitter's Image Cropping

In 2020, a widely shared post revealed that Twitter's photo-cropping algorithm favored White faces over Black ones. A White user repeatedly posted images containing his own face, a Black coworker's face, and other Black faces; the previews were consistently cropped to show his face.

"While our analyses to date haven't shown racial or gender bias, we recognize that the way we automatically crop photos means there is a potential for harm," Twitter said in response to criticism of the algorithm's bias. When we were first creating and developing this product, we ought to have done a better job of foreseeing this potential.

3. Racist Facial Recognition in Robots

In a recent experiment, researchers trained robots to scan people's faces and sort them into boxes labeled with roles such as doctor, criminal, or homemaker.

The robots frequently classified Black men as criminals and Latino men as janitors, were less likely to identify women of any ethnicity as doctors, and frequently labeled women as homemakers.

4. Intel and Classroom Technologies' Monitoring Software

Class, software from Classroom Technologies and Intel, includes a feature that watches students' faces to detect emotions while they learn. Many have pointed out that students' emotions are likely to be mislabeled, because cultural norms for expressing feelings vary.

If teachers use these labels to talk with students about their level of effort and comprehension, they risk penalizing students for emotions they aren't actually displaying.

How can bias in AI be fixed?

AI ethics is a popular subject, and understandably so: AI bias has shown up in numerous real-world scenarios.

Beyond bias, AI can spread harmful misinformation, such as deepfakes, and generative AI tools can produce factually inaccurate content.

How can we better understand AI and lessen the possibility of bias?

Human oversight: People can examine data, monitor outputs, and make adjustments when bias is evident. For instance, marketers should carefully review generative AI outputs for fairness before incorporating them into marketing materials.

Assess the potential for bias: Some AI use cases carry a higher risk of being discriminatory and harmful to particular communities. In those cases, people can evaluate the likelihood that their AI will produce biased outcomes, for example when banks train models on historically skewed lending data.

Investing in AI ethics: Continued funding for AI research and AI ethics is one of the most important ways to develop practical solutions that reduce AI bias.

Diversifying AI: People bring their own lived experiences to the table, and a diversity of perspectives in AI helps establish unbiased practices. In a diverse and representative field, there are more chances to spot potential bias and address it before harm is done.

Recognize human bias: Everyone is susceptible to bias, whether from differing life experiences or from confirmation bias in research. Just as researchers make sure their samples are representative, people using AI can acknowledge their own biases and take steps to keep their AI from inheriting them.

Being open and honest: Transparency is crucial, especially with new technology. Simply disclosing the use of AI, for example by adding a note beneath an AI-generated news story, helps foster mutual understanding and trust.

Using AI responsibly is entirely achievable.

The best way to stay aware of the potential for harm is to learn how AI can reinforce harmful prejudices and to make sure your own use of AI doesn't add fuel to the fire. AI, and interest in it, are only growing.
Have more questions about artificial intelligence? Check out this learning path.

Variance vs. bias

To build systems that reliably produce accurate results, data scientists and other professionals who develop, train, and use machine learning models must account for variance in addition to bias.

Like bias, variance is an error, but it comes from a different source. Where bias stems from flawed assumptions the model makes about the training data, variance is a response to real fluctuations, or noise, in that data. Those fluctuations shouldn't influence the intended model, yet the system may end up modeling the noise anyway. In other words, variance is a problem of oversensitivity to small changes in the training set, and, like bias, it can lead to erroneous results.

Despite their differences, bias and variance are related: reducing one tends to increase the other, and if the data population is sufficiently diverse, variance can help offset biases. Because of this, the goal in machine learning is to strike a balance, or tradeoff, between the two and build a system that makes as few errors as possible.
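
To make the tradeoff concrete, here is a minimal sketch that fits polynomials of different complexity to the same noisy data and compares training and test error. The synthetic data, the chosen degrees, and the use of NumPy are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def true_fn(x):
    # The smooth relationship we would like the model to recover.
    return np.sin(2 * np.pi * x)

# Small, noisy training sample and a larger held-out test sample.
x_train = rng.uniform(0, 1, 30)
y_train = true_fn(x_train) + rng.normal(0, 0.2, x_train.size)
x_test = rng.uniform(0, 1, 200)
y_test = true_fn(x_test) + rng.normal(0, 0.2, x_test.size)

def poly_mse(degree):
    # Fit a polynomial of the given degree, then report mean squared
    # error on the training set and on the unseen test set.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 4, 15):
    train_mse, test_mse = poly_mse(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# Typical outcome: degree 1 underfits (high bias, both errors high);
# degree 15 overfits (high variance, low train error but high test error);
# a middle degree balances the two.
```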

How to avoid machine learning bias

Awareness and governance help prevent machine learning bias. Once an organization acknowledges the possibility of bias, it can adopt the following best practices to combat it:
  • Choose training data that is sufficiently large and representative to offset common forms of machine learning bias, such as sample bias and prejudice bias.
  • Test and validate the system to make sure bias from algorithms or data sets is not reflected in its results (a minimal fairness check is sketched after this list).
  • Monitor machine learning systems as they operate, since they keep learning on the job, to make sure biases don't gradually creep in.
  • Examine and inspect models with additional tools, such as IBM's AI Fairness 360 open-source toolkit or Google's What-If Tool.
  • Build a data collection strategy that accounts for differing viewpoints: a single data point may have several valid labels, and capturing those alternatives during initial collection makes the model more flexible.
  • Understand the training data sets being used, as they may include labels or classes that introduce bias.
  • Review the ML model frequently and plan to adjust it in response to new feedback.
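
As a hedged illustration of the test-and-validate step above, the sketch below computes one simple fairness metric: the gap in positive-prediction (selection) rates across groups, sometimes called a demographic parity gap. The predictions, group labels, and the 0.2 warning threshold are hypothetical; dedicated toolkits such as AI Fairness 360 or the What-If Tool offer far more thorough audits:

```python
from collections import defaultdict

def selection_rates(predictions, groups):
    """Positive-prediction (selection) rate per group.

    predictions: iterable of 0/1 model outputs (1 = approved/selected)
    groups:      iterable of group labels, aligned with predictions
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += pred
    return {g: positives[g] / totals[g] for g in totals}

# Hypothetical audit data: model decisions plus a sensitive attribute.
preds = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
groups = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)
gap = max(rates.values()) - min(rates.values())
print("Selection rate per group:", rates)
print(f"Demographic parity gap: {gap:.2f}")

# A gap near zero suggests parity on this one metric; a large gap flags
# the model for closer review. The 0.2 threshold is an assumption.
if gap > 0.2:
    print("Warning: selection rates differ substantially across groups.")
```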

Bias in machine learning history

The term "algorithmic bias" was defined by Trishan Panch and Heather Mattie in a Harvard T.H. Chan School of Public Health program. Although machine learning bias has been recognized as a risk for many years, it remains a difficult problem with no easy solution.

Indeed, bias in machine learning has already been found in real-world situations, with some biases having serious, even fatal, effects.

One such instance is COMPAS. The Correctional Offender Management Profiling for Alternative Sanctions algorithm, or COMPAS, used machine learning to predict the likelihood that criminal defendants would commit new crimes. Several states used the program in the early years of the twenty-first century before news reports uncovered and publicized its bias against people of color.

In 2018, Amazon, a hiring powerhouse whose practices influence those at other businesses, abandoned its recruiting algorithm after discovering that it had learned word patterns rather than relevant skills. The algorithm penalized resumes that contained certain words, including "women's," so resumes from women were rated less relevant than men's and male candidates were favored over female candidates.

The same year, academic researchers revealed that commercial facial recognition AI systems contained biases based on skin tone and gender.

Bias in machine learning has also appeared in the medical domain. A 2019 study, for instance, found that an AI system used across multiple hospitals to decide which patients needed additional care was racially biased: Black patients had to be considerably sicker than White patients to be recommended for the same care.

According to a 2021 investigation by The Markup, lenders were 80% more likely to reject Black mortgage applicants than comparable White applicants because of AI bias. Similarly, lenders were 40% more likely to reject Latino applicants, 50% more likely to reject Asian/Pacific Islander applicants, and 70% more likely to reject Native American applicants.
