What is prototype theory and what are examples of it?
Theory from the field of cognitive linguistics that looks at how people categorize objects or concepts
People learn in categories, and each category has a prototype; membership in a category is then evaluated by how many characteristics an item shares with the prototype
An example is the mum-doctor riddle or yellow bananas
What is the difference between a prototype, stereotype and a bias?
Prototype –> Most typical representative of a concept/category
Stereotype –> Perceived characteristics of an entire group of people
Bias –> Disproportionate weight in favor of or against an idea, thing or group of people, often in a way that is unfair
What are job-related stereotypes in search engine results?
Male-dominated professions tend to have even more men in their results than would be expected if the proportions reflected real-world distributions
This effect is also seen when people rate the quality of search results or select the best image from a result: They were found to prefer images with genders that match the prototype of a particular occupation
What are typical masculine and feminine stereotypes?
Men are typically perceived with the attributes of independence, instrumental competence, or leadership confidence, which can all be summarized with the term agency
Women are typically perceived with the following attributes: concern for others, sociability, or emotional sensitivity, which can all be described with the term communality
But according to social role theory, gender stereotypes are dynamic constructs that shift with perceived changes in the social roles men and women actually occupy
What are unconscious human biases?
Unconscious biases are social stereotypes in favor of or against certain groups of people that individuals form outside their own conscious awareness
Biases are of course not limited to gender-specific stereotypes. Biases may exist toward any social group, e.g. ethnicity, age, physical ability, religion, sexual orientation, weight, and many other characteristics
Everyone holds unconscious beliefs about various social and identity groups, which stem from our tendency to organize social worlds and reduce complexity by categorizations
Unconscious biases may influence the decisions and actions we take and can have negative or positive consequences for the person or group concerned
Unconscious bias is far more common than conscious prejudice and often incompatible with one’s conscious values
How are unconscious biases measured?
With the Implicit Association Test
The IAT measures attitudes and beliefs that people may be unwilling to report
The idea is to detect the strength of the associations between concepts, evaluations, and prejudices
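The IAT's core logic is that strongly associated concept pairs produce faster responses than weakly associated ones. A toy sketch of that idea follows; the latencies are invented, and the real test uses counterbalanced blocks and a standardized D-score rather than a raw mean difference:

```python
# Toy illustration of the IAT's core logic: implicit association is
# inferred from faster reaction times in "congruent" (stereotype-consistent)
# pairings than in "incongruent" ones. Latencies (ms) are invented.

def mean(xs):
    return sum(xs) / len(xs)

congruent = [620, 580, 640, 600, 610]    # stereotype-consistent pairing
incongruent = [750, 820, 790, 760, 780]  # stereotype-inconsistent pairing

# A positive gap suggests an implicit association with the congruent
# pairing; the real IAT normalizes this into a D-score.
latency_gap = mean(incongruent) - mean(congruent)
print(f"Mean latency gap: {latency_gap:.0f} ms")  # prints 170 ms
```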
How are biases reproduced by AI applications?
Via translation systems, e.g. Google Translate –> a Turkish gender-neutral noun is translated according to the stereotypical gender in that context
By recruiting algorithms –> Amazon's resume-screening AI system turned out to be sexist
With Machine Learning –> since AI only mirrors the societal beliefs of humanity, there can be underrepresented groups in the training data, confusion of causation with correlation, or overgeneralization according to the training data
By worldviews of annotators –> most people have culture-specific knowledge, e.g. they know what a Westernized wedding looks like but not what an African wedding looks like, and label accordingly
With face recognition software or voice assistants –> underrepresented groups are sometimes not identified as persons, or their voices are not recognized correctly
What are some important thoughts regarding AI biases?
AI bias doesn’t come from the systems themselves; it comes from people, and AI is therefore not objectively neutral or fair by default
No technology is free of its creators
All technology comes from and is designed by people, which means that it’s not automatically more objective than we are
Data and math don’t equal objectivity
What are causes of biases?
Specification of the problem to be solved in ways that affect classes differently
Failure to recognize/address statistical bias
Reproduction of past societal prejudice and decision-making
Consideration of an insufficiently rich set of factors
Incomplete or skewed data –> selection bias
What are different types of selection biases?
Coverage bias –> The population represented in the data set does not match the population that the machine learning model is making predictions about, e.g.: Consumers who instead bought a competing product were not surveyed
Sampling bias –> Data is not collected randomly from the target group, e.g.: the surveyor chose the first 200 consumers that responded to an email
Participation bias –> Members of certain groups opt-out of surveys (or labeling tasks) at different rates than members of other groups, e.g.: Consumers who bought the competing product were 80% more likely to refuse to complete the survey; Moral machine experiment
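The effect of a non-random sample can be made concrete with a small simulation. The numbers below are invented, and the assumption that early responders are more satisfied than average is purely illustrative:

```python
import random

random.seed(0)

# Hypothetical population: 10,000 consumers, each with a satisfaction score.
population = [random.gauss(5.0, 1.5) for _ in range(10_000)]
true_mean = sum(population) / len(population)

# Sampling bias: take the 200 most satisfied consumers (a stand-in for
# "the first 200 that responded", assuming satisfied customers respond
# first) instead of a random sample of the same size.
biased_sample = sorted(population, reverse=True)[:200]
random_sample = random.sample(population, 200)

biased_mean = sum(biased_sample) / len(biased_sample)
random_mean = sum(random_sample) / len(random_sample)

print(f"True mean:          {true_mean:.2f}")
print(f"Biased sample mean: {biased_mean:.2f}")  # clearly overestimates
print(f"Random sample mean: {random_mean:.2f}")  # close to the true mean
```

A model trained or a conclusion drawn from the biased sample would systematically overestimate satisfaction, which is exactly the failure mode these bias types describe.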
How to keep biases out of AI?
Awareness and understanding of the various causes of algorithmic biases is the first step
More diversity in teams developing AI would be very beneficial
Integration of bias detection strategies in AI development
Creators and operators of AI algorithms need to ask: Will we leave some groups of people worse off as a result of the algorithm’s design or its unintended consequences?
Bias impact statements as a self-regulatory practice of AI creators/companies
Ethical governance standards
How to identify potential biases?
Missing Feature Values –> a feature value missing for a large number of examples might point to underrepresentation of certain groups
Unexpected Feature Values –> look out for uncharacteristic or unusual feature values, because these might lead to inaccuracies or biases
Data Skew –> any data where certain groups are underrepresented or overrepresented might lead to biases
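The three checks above can be sketched in plain Python; the tiny dataset, the feature names, and the plausibility thresholds are all invented for illustration:

```python
# Minimal sketch of the three bias checks on a toy dataset.
# None marks a missing feature value; the 0-120 age range is arbitrary.
records = [
    {"age": 34, "group": "A"},
    {"age": None, "group": "A"},
    {"age": 29, "group": "A"},
    {"age": 151, "group": "A"},  # unexpected value (outside 0-120)
    {"age": 42, "group": "B"},
]

# 1. Missing feature values: fraction of records missing "age".
missing = sum(1 for r in records if r["age"] is None) / len(records)

# 2. Unexpected feature values: ages outside a plausible range.
unexpected = [r["age"] for r in records
              if r["age"] is not None and not 0 <= r["age"] <= 120]

# 3. Data skew: distribution of examples across groups.
counts = {}
for r in records:
    counts[r["group"]] = counts.get(r["group"], 0) + 1

print(f"Missing 'age': {missing:.0%}")   # 20%
print(f"Unexpected ages: {unexpected}")  # [151]
print(f"Group counts: {counts}")         # group B is underrepresented
```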
What are the effects of feminized digital assistants and how can they be prevented?
Countermeasures
adding the option of male voices
voices that sound less humanlike
genderless voice
Effects
Reflecting, reinforcing, and spreading stereotypical images of traditional gender roles
A feminine voice becoming associated with servility and dumb mistakes
Tolerance of verbal insults towards feminized characters