Defining Intelligence
Intelligence = ability to think, learn, adapt, and make optimal decisions.
Two main approaches:
Human-like: Based on psychology and observable behavior.
Rationalist: Based on mathematics and decision-making.
Four Classical Approaches
Acting Humanly: Turing Test – machines imitate human behavior.
Thinking Humanly: Cognitive modeling – replicate human thought processes.
Thinking Rationally: Logic-based reasoning – follow formal rules.
Acting Rationally: Rational agent – goal-directed actions under uncertainty.
Acting Humanly
Turing Test: Machine passes if indistinguishable from human in text conversation.
Required capabilities:
Natural language processing
Knowledge representation
Reasoning
Machine learning
Extensions: Computer vision, speech recognition, robotics.
Examples: ELIZA (1960s); later, the Loebner Prize competition (from 1991).
Thinking Humanly
Methods:
Introspection
Psychological experiments
Brain imaging
Key developments:
General Problem Solver (Newell & Simon)
Cognitive architectures: ACT-R, Soar
Goal: Model human cognition, not just mimic behavior.
Thinking Rationally
Rooted in formal logic (Aristotle → Boolean algebra → Predicate logic).
Key figures: Aristotle, Boole, Ladd-Franklin, Frege, Gödel.
Limitations:
Gödel’s incompleteness theorem
Real-world uncertainty → fuzzy logic, probabilistic reasoning.
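A minimal sketch of logic-based inference as rule-following (forward chaining in Python); the single fact and rule below are a hypothetical toy, not any particular logic system.

```python
# Minimal sketch of logic-based reasoning: forward chaining over if-then rules.
# The fact and the single rule are hypothetical (classic Socrates syllogism).

facts = {"human(socrates)"}
rules = [({"human(socrates)"}, "mortal(socrates)")]  # (premises, conclusion)

derived = True
while derived:
    derived = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)   # conclusion follows once all premises hold
            derived = True

print(facts)  # {'human(socrates)', 'mortal(socrates)'}
```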
Acting Rationally
Rational agent = acts to maximize expected utility (see sketch below).
Advantages:
Handles uncertainty
Uses probability theory, decision theory
Applications: Robotics, game theory, control systems.
Concept of bounded (limited) rationality for acting under practical constraints.
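A minimal sketch of expected-utility maximization in Python; the umbrella decision, its probabilities, and its utility values are invented for illustration.

```python
# Minimal sketch of expected-utility maximization (illustrative only).
# The actions, outcomes, and probabilities below are hypothetical.

def expected_utility(action, outcomes):
    """Sum of probability * utility over the possible outcomes of an action."""
    return sum(prob * utility for prob, utility in outcomes[action])

# Hypothetical decision problem: carry an umbrella given a 30% chance of rain.
outcomes = {
    "take_umbrella": [(0.3, 70), (0.7, 80)],   # (probability, utility)
    "no_umbrella":   [(0.3, 10), (0.7, 100)],
}

best_action = max(outcomes, key=lambda a: expected_utility(a, outcomes))
print(best_action, {a: expected_utility(a, outcomes) for a in outcomes})
```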
Birth of AI
Dartmouth Conference (1956): Term “Artificial Intelligence” coined by John McCarthy.
Early programs:
Logic Theorist (symbolic reasoning)
Samuel’s checkers program (machine learning)
1970s: Knowledge representation → expert systems (MYCIN, DENDRAL).
AI Progress and AI Winters
AI progress has been cyclical: periods of optimism followed by stagnation.
First AI Winter (1970s–1980s)
Symbolic AI and rule-based systems failed to scale.
Limitations: computational power, natural language processing, knowledge representation.
Lighthill Report (1973) criticized AI goals as unrealistic.
Brief revival with expert systems (e.g., MYCIN, DENDRAL) in specialized domains.
Decline due to high costs and inability to adapt beyond predefined rules.
Second AI Winter (1987–1993)
Collapse of expert systems → skepticism and loss of funding.
Shift toward probabilistic reasoning and machine learning:
Bayesian networks (reasoning under uncertainty).
Neural networks (pattern recognition).
No immediate breakthroughs due to computational limits, but laid foundation for modern AI.
Expert Systems (1970s–1980s)
Mimicked human decision-making in narrow domains.
Relied on large rule-based knowledge bases.
Examples:
SHRDLU: Natural language commands in a “blocks world” (an early knowledge-based system of the same era rather than a classic expert system).
MYCIN: Diagnosed bacterial infections using ~450 rules.
Hard to update and scale.
Could not handle uncertainty or complex cases.
Neural Networks Revival (1986–Present)
Inspired by brain structure; layers of artificial neurons.
Key milestones:
McCulloch & Pitts (1943): Neural unit model.
Rosenblatt (1958): Perceptron (see sketch below).
Backpropagation (1986): Enabled learning from errors.
Growth fueled by:
Increased computational power.
Large datasets.
Applications: image recognition, speech processing, NLP.
Debate: Connectionism (neural nets) vs. Symbolism (rule-based).
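A minimal sketch of Rosenblatt-style perceptron learning on the AND function (Python); the learning rate and epoch count are arbitrary choices. Backpropagation extends this error-driven weight update to multi-layer networks via gradients.

```python
# Minimal sketch of a perceptron learning the logical AND function.
# step() is the threshold activation; weights[0] acts as the bias term.

def step(x):
    return 1 if x >= 0 else 0

data = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
weights = [0.0, 0.0, 0.0]
learning_rate = 0.1

for epoch in range(20):
    for inputs, target in data:
        prediction = step(sum(w * x for w, x in zip(weights, inputs)))
        error = target - prediction                      # learn from the error
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, inputs)]

print(weights)  # a separating weight vector, roughly [-0.3, 0.2, 0.1]
```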
Probabilistic Reasoning & Machine Learning
Bayesian networks (1988): Probabilistic relationships, conditional dependencies (see sketch below).
Hidden Markov Models (HMMs): Speech recognition, sequential data modeling.
Classical ML:
Supervised learning: Classification, regression.
Unsupervised learning: Clustering, dimensionality reduction.
Reinforcement learning: Agents learn via rewards/penalties.
Key datasets: MNIST (digits), SQuAD (NLP), UCI ML Repository.
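A minimal sketch of reasoning under uncertainty with Bayes' rule over a hypothetical two-node "disease → test" network; all probabilities are invented for illustration.

```python
# Minimal sketch of reasoning under uncertainty with Bayes' rule.
# The two-node "disease -> test" network and all probabilities are hypothetical.

p_disease = 0.01                    # prior P(Disease)
p_pos_given_disease = 0.95          # P(Test+ | Disease)
p_pos_given_healthy = 0.05          # P(Test+ | no Disease)

# Marginal probability of a positive test (sum over both parent states).
p_pos = p_pos_given_disease * p_disease + p_pos_given_healthy * (1 - p_disease)

# Posterior P(Disease | Test+) via Bayes' rule.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # ~0.161: still unlikely despite a positive test
```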
Big Data Era (2001–Present)
Explosion of data from internet, sensors, social media.
Five Vs: Volume, Velocity, Variety, Veracity, Value.
Technologies:
MapReduce, Hadoop, NoSQL databases.
Apache Spark, Kafka for real-time processing.
Cloud platforms (AWS, Azure, Google Cloud).
Big Data AI pipelines: collection → processing → feature engineering → training → deployment.
Applications: fraud detection, healthcare analytics, retail forecasting.
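A minimal sketch of the pipeline stages above as plain Python functions; the toy records and the "midpoint threshold" model are hypothetical placeholders, not a real Big Data stack.

```python
# Minimal sketch of collection -> processing -> feature engineering -> training -> deployment.

def collect():
    # collection: raw transaction records from some source (toy data)
    return [{"amount": 120.0, "label": 0}, {"amount": 9800.0, "label": 1}]

def process(records):
    # processing: drop malformed records
    return [r for r in records if r["amount"] is not None]

def engineer(records):
    # feature engineering: turn each record into (features, label)
    return [([r["amount"]], r["label"]) for r in records]

def train(examples):
    # training: a placeholder "model", the midpoint between class means
    pos = [x[0] for x, y in examples if y == 1]
    neg = [x[0] for x, y in examples if y == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def deploy(threshold):
    # deployment: expose the trained model as a callable
    return lambda amount: int(amount > threshold)

model = deploy(train(engineer(process(collect()))))
print(model(50.0), model(20000.0))   # prints: 0 1
```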
Deep Learning & Generative AI (2011–Present)
Breakthrough: AlexNet (2012) → ImageNet competition.
Architectures: VGG, GoogLeNet, ResNet, DenseNet.
Applications: autonomous vehicles, vision, speech.
Transformers (2017): Self-attention → revolutionized NLP (see sketch below).
Models: BERT (2018), GPT-3 (2020), ChatGPT (2022).
Features:
Few-shot and zero-shot learning.
Multimodal capabilities (text, image, audio).
Generative AI: text, image, music creation; widespread industry adoption.
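A minimal sketch of scaled dot-product self-attention (Python/NumPy); real Transformers add learned query/key/value projections, multiple heads, and positional encodings, all omitted here.

```python
# Minimal sketch of scaled dot-product self-attention, the core Transformer operation.
import numpy as np

def self_attention(x):
    """x: (sequence_length, model_dim). Queries, keys, and values are x itself here."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                              # pairwise token similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over each row
    return weights @ x                                         # each token: weighted mix of all tokens

tokens = np.random.default_rng(0).normal(size=(4, 8))          # 4 toy tokens, 8-dim embeddings
print(self_attention(tokens).shape)                            # (4, 8)
```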
Key Questions in Responsible AI
How to ensure human autonomy as AI becomes more capable?
Are AI systems conscious or just pattern recognizers?
Who is legally/morally responsible for AI-caused harm?
Should AI make life-altering decisions without human oversight?
Can AI reflect ethical human values or will it amplify biases?
How do we build trust in systems we cannot fully predict or explain?
Building Responsible AI
AI challenges are technical, legal, social, and ethical.
Goal: Prevent misuse, abuse, unintended consequences.
Requires clear regulation, oversight, and stakeholder involvement.
Legal Responsibilities
Compliance with existing laws:
GDPR (EU), CCPA (California) for data privacy.
Anti-discrimination laws: Civil Rights Act, Equal Credit Opportunity Act, Fair Housing Act.
New frameworks:
EU AI Act: Risk-based categories (minimal → unacceptable risk).
China: Rules for recommendation systems.
US: State-level AI laws.
Liability questions:
Autonomous vehicle accidents, biased facial recognition.
Proposals:
Legal personhood for AI.
Public liability funds for AI-caused harm.
Social Responsibilities
Risks:
Job displacement, inequality, bias amplification.
Cultural and demographic exclusion.
AI hallucinations → false or fabricated info.
Filter bubbles and misinformation.
Deepfakes → realistic synthetic media.
Solutions:
Fairness-aware algorithms (see sketch below).
Inclusive datasets.
Transparency and participatory design.
Upskilling programs for workforce adaptation.
Equitable access to AI benefits.
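A minimal sketch of one fairness check, the demographic parity difference, over hypothetical approval decisions; real fairness-aware pipelines use richer metrics and mitigation steps.

```python
# Minimal sketch of a demographic parity check on toy model decisions.

decisions = [  # (group, approved?)  -- hypothetical outcomes
    ("A", 1), ("A", 1), ("A", 0), ("A", 1),
    ("B", 0), ("B", 1), ("B", 0), ("B", 0),
]

def approval_rate(group):
    outcomes = [y for g, y in decisions if g == group]
    return sum(outcomes) / len(outcomes)

gap = approval_rate("A") - approval_rate("B")
print(f"approval-rate gap between groups: {gap:.2f}")  # a large gap warrants a bias review
```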
Ethical Responsibilities
Core principles:
Respect autonomy, rights, societal norms.
Embed ethics in design, data, training, deployment.
Challenges:
Balancing accuracy vs fairness.
Defining fairness: equal treatment, outcomes, or opportunities?
Key issues:
Bias in large models (Timnit Gebru’s work).
Opacity of deep learning → need for Explainable AI (XAI); see sketch at the end of this section.
Transparency in high-stakes decisions (jobs, loans, healthcare).
User autonomy:
Informed consent and awareness of AI interactions.
Environmental sustainability:
High energy use in training LLMs.
Solutions: smaller models, optimized training, renewable energy.
Existential risk:
Hans Jonas’ “imperative of responsibility” → ensure tech aligns with human survival.
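A minimal sketch of one XAI technique, permutation importance, applied to a hypothetical loan-scoring function; the model and data are toy stand-ins, not a real system.

```python
# Minimal sketch of permutation importance: shuffle one feature at a time and
# count how often the (hypothetical) model's decision changes.
import random

def model(income, age):          # hypothetical loan-scoring rule
    return 1 if income * 0.7 + age * 0.1 > 40 else 0

data = [(55, 30), (20, 60), (70, 25), (30, 40)]   # (income, age) rows
baseline = [model(i, a) for i, a in data]

for feature in (0, 1):
    shuffled = [row[feature] for row in data]
    random.Random(0).shuffle(shuffled)
    flips = sum(
        model(*(row[:feature] + (s,) + row[feature + 1:])) != b
        for row, s, b in zip(data, shuffled, baseline)
    )
    print(f"feature {feature}: decisions changed on {flips}/{len(data)} rows")
```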
Approach to Responsible AI
Combine:
Technical measures: Explainable AI, privacy, fairness.
Regulatory frameworks: Standards for safety and accountability.
Stakeholder engagement: Ongoing debate on societal impact.