Why is implementing ethics in AI difficult?
Since AI is an emerging market, the nations of the world are racing to outperform each other in the field of AI –> technology is seen as crucially important, while ethics matter less
Ethics are difficult to explain. We humans have a certain intuition about what is right or wrong, but most of the time we cannot explain it properly to, e.g., a machine
What are the different paths of dealing with AI in Europe, US and China?
Europe:
high-quality AI talents and research
struggles to translate research into business applications
Ethics as an enabler
US: best job market
talent attractor
most AI companies/start-ups
most private equity and venture capital funding
China:
rapidly catches up
less strict data protection
large databases due to social scoring
EU ethics guideline for trustworthy AI?
There are seven key elements of trustworthy AI in Europe
52 experts worked on the document
In summary, AI should be lawful, ethical and robust
What are the seven key factors of trustworthy AI in Europe?
Human agency and oversight
Technical robustness and safety
Privacy and data governance
Transparency
Diversity, non-discrimination and fairness
Societal and environmental wellbeing
Accountability
What does human agency and oversight mean?
Fundamental rights –> there should be mechanisms that give feedback on whether fundamental human rights are ensured by an AI
Human agency –> human decision-making should be informed and autonomous regarding AI systems
Human oversight –> there should be governance oversight over AI systems (HITL, HOTL, HIC)
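The three oversight modes (human-in-the-loop, human-on-the-loop, human-in-command) can be pictured as a simple routing policy. This is only an illustrative sketch, not part of the guideline: the function name, the confidence threshold and the return strings are assumptions.

```python
def decide(confidence: float, mode: str) -> str:
    """Route an AI decision according to the oversight mode (illustrative).

    HITL (human-in-the-loop): a human approves every single decision.
    HOTL (human-on-the-loop): the system acts alone, but escalates to a
         human when its confidence is low.
    HIC  (human-in-command): the system only recommends; the human commands.
    """
    if mode == "HITL":
        return "ask human to approve"
    if mode == "HOTL":
        # 0.9 is an arbitrary example threshold
        return "act autonomously" if confidence >= 0.9 else "escalate to human"
    if mode == "HIC":
        return "recommend only; human commands"
    raise ValueError(f"unknown oversight mode: {mode}")
```

In this sketch the mode determines how much autonomy the system has; only HOTL actually uses the confidence value.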
What does diversity, non-discrimination and fairness mean?
Avoidance of unfair bias –> discriminatory biases should be removed; diverse teams and diverse data can prevent biases
Accessibility and universal design –> AI applications should be user-friendly regardless of the user's age, culture or gender
Stakeholder participation –> communication with all parties interested in an AI project
What does societal and environmental wellbeing mean?
sustainable and environment-friendly AI –> SDGs, help adapt to/fight climate change
societal impact –> AI should not endanger the physical and mental wellbeing of individuals or communities
society and democracy –> when political decisions are made with the help of AI, they should be checked regularly
Should we define ethical rules for self-driving cars based on the majority opinion of a democratic vote?
MIT ran the Moral Machine experiment
It is the largest AI study in the world –> over 40 million people from all over the world participated
In various scenarios and ethical dilemmas, participants had to choose how they would decide themselves
These ethical dilemmas varied people's age, gender, social status, fitness, the legal/illegal behavior of those involved, and the number of lives at stake. There were even scenarios where one had to choose between an animal and a human
Most people spared more frequently:
humans over pets
many over few
young over old
There were also differences between the cultural clusters that participated in the survey –> Western (young, fit), Eastern (lawful), Southern (higher status, females)
The participants were mostly young, male academics –> bias
Since the results of such surveys carry the risk of bias or ambiguous moral rules (e.g. we agree that five lives are worth more than one, yet our own life is most precious to us), and since the results often contradict the basic principle of equality of all human beings (e.g. sparing a young or female human rather than an old male), it is probably not a good idea
What is an ethical dilemma?
It's a decision between two moral imperatives, neither of which is clearly more preferable or acceptable
What two basic ethical approaches exist with the Trolley-problem?
Deontological ethics –> considers only whether the action in itself is right or wrong
Utilitarian ethics –> decides based on how many people will die (minimising total harm)
What does the German Ethics Commission say about such ethical dilemmas?
There should be no distinction between people regarding their age, gender, religion, ethnicity, etc.
If there is the possibility to spare human lives, one should do so –> five lives are worth more than one
People who are not involved should be spared –> but who defines involvement?
What are different exemplary dilemmas of AMAs?
scholarships of students
humans would agree that the distribution of scholarships should be fair
but how do we define "fair"? Is it equality (same resources) or equity (same opportunities)?
care-robot
the user begs for painkillers, but the robot does not get a doctor's approval
how should the robot decide? User-based or expert-based?
househould-robot
the family's teenager does something vicious and the robot witnesses it
How should the robot react? Should it tell the parents or withhold the information on purpose?
fatal car crash
A car cannot stop and rushes towards a pedestrian crossing; it will kill either a young girl or an old lady
How should the self-driving car decide? Should it spare the young life? But then again, every human life should be equal
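The scholarship dilemma above can be made concrete: under "equality" every student receives the same amount, under "equity" the grant is weighted by need. A minimal Python sketch, where the need scores, the budget and all names are made-up illustrations:

```python
def allocate(budget: float, needs: dict[str, float], policy: str) -> dict[str, float]:
    """Split a budget among students (illustrative, not a real policy)."""
    if policy == "equality":  # same resources for everyone
        share = budget / len(needs)
        return {student: share for student in needs}
    if policy == "equity":    # resources proportional to need
        total_need = sum(needs.values())
        return {student: budget * need / total_need
                for student, need in needs.items()}
    raise ValueError(f"unknown policy: {policy}")

needs = {"Ada": 1.0, "Ben": 3.0}  # Ben needs three times as much support
print(allocate(900, needs, "equality"))  # {'Ada': 450.0, 'Ben': 450.0}
print(allocate(900, needs, "equity"))    # {'Ada': 225.0, 'Ben': 675.0}
```

Both outcomes are "fair" under their own definition, which is exactly the dilemma: the code cannot tell us which policy is the right one.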
What is the difference between a self-driving car and a human driver?
self-driving cars - once fully developed - will probably cause fewer car crashes
but it somehow feels wrong to decide based on rules defined beforehand instead of deciding intuitively like a human would
Human: intuitive, without reasoning, random
Machine: has to decide which life to spare and whom to kill; reasoning, ethical rules
What is morality?
Behavior or decision-making which is commonly accepted in society –> morally right basically means "good"
What are challenges of AMAs?
If a task is morally charged or poses an ethical dilemma, AMAs cannot really make a good decision
It is unclear how one should program an ethical code and which ethical approach to choose, since humans have no consensus on what is right and what is wrong
Is it really necessary that AI becomes moral?
What are the consequences of moral AI? How can we punish an AI that made wrong decisions?
Risk: the people who build AMAs are not moral themselves –> self-interest, bias
Can and should AI make moral judgements?
Humans are considered moral agents –> they have an intuition of what is right or wrong and feel responsibility. No other non-human thing or animal has this ability
There exists research about Artificial Moral Agents (AMA) –> machine ethicists
Intuition of good and bad
Reflecting on potential risks and consequences
Ability to make the “right” choice
Advantages of AMAs:
reduction of potential harm towards humans
higher objectivity than humans
Potentially higher acceptance since it is independent
Avoidance of improper use –> the machine will not simply do whatever the user tells it to do
Better understandability of own “moral code”
What does accountability mean?
Auditability –> it should be possible to audit AI systems
Minimisation and reporting of negative impacts –> AI can help minimise negative impacts; if there are concerns about an AI model's decision-making, whistleblowers, NGOs, etc. should be able to criticise it
Redress –> there has to be responsibility behind an AI model; if something goes wrong, it should be redressed
What is ethics?
Moral principles which determine a person's behavior and decision-making
Sub-discipline of philosophy which deals with morality
What does transparency mean?
Traceability –> the data gathering and decision-making process of an AI model should be logged
Explainability –> the decision-making of an AI model should be understandable to a human being
Communication –> AI systems should identify themselves as non-human
What does technical robustness and safety mean?
Resilience to attack and security –> AI systems should be protected against attacks and their vulnerabilities should not be exploitable
Fallback plan and general safety –> there should always be a way out in case of problems, e.g. the machine should ask for a human opinion if it is unsure
Accuracy –> decisions, predictions, classifications should be accurate
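The fallback idea above ("ask a human when unsure") can be sketched as a confidence threshold: if the model's top prediction is not confident enough, the system abstains and defers to a human instead of acting. A minimal sketch; the function name, the threshold value and the labels are illustrative assumptions.

```python
def classify_with_fallback(probabilities: dict[str, float],
                           threshold: float = 0.8) -> str:
    """Return the top label, or defer to a human when unsure (illustrative)."""
    label, confidence = max(probabilities.items(), key=lambda kv: kv[1])
    if confidence < threshold:
        return "DEFER_TO_HUMAN"  # fallback: a human makes the call
    return label

print(classify_with_fallback({"stop": 0.95, "go": 0.05}))  # stop
print(classify_with_fallback({"stop": 0.55, "go": 0.45}))  # DEFER_TO_HUMAN
```

The threshold trades autonomy against safety: the higher it is set, the more often a human is consulted.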
What does privacy and data governance mean?
Privacy and data protection –> privacy should be maintained throughout the whole process: input, ongoing and output data
Access to data –> there should be data protocols stating who has access to the data and why
Quality and integrity of data –> data should not be biased or wrong