Neuroscience & the Human Brain
Brain = biological basis of intelligence (learning, memory, problem-solving)
Neuroscience studies brain & nervous system
Brain vs. computers:
Brain → neurons + chemical signals
Computers → circuits + algorithms
Both aim to:
perceive environment
process information
act intelligently
Insight into brain → helps design better AI systems
Brain Structure & Functions
Brain:
~1.4 kg
~86 billion neurons
Controls:
movement, reasoning, memory, emotions, language
Cerebral Cortex (higher cognition)
4 lobes:
Frontal lobe → planning, decision-making, personality, motor control
Parietal lobe → sensory processing (touch, pain, spatial awareness)
Temporal lobe → hearing, language, long-term memory
Occipital lobe → vision
Subcortical Structures
Hippocampus → forms long-term memories, spatial navigation
Amygdala → emotions (fear, reward), emotional memory tagging
Cerebellum → coordination, balance, error correction, learning
Example (playing piano)
Frontal → plan movements
Parietal → sense finger position
Temporal → process sound/music
Occipital → visual input
Hippocampus → store memory
Cerebellum → coordination
Amygdala → emotional experience
Neurons, Synapses & Communication
Neurons = basic processing units
Functions:
receive, process, transmit signals
Neuron structure
Dendrites → receive signals
Soma (cell body) → processes input
Axon → sends signals
Signal transmission
Action potential = neuron “fires”
Inputs:
Excitatory → increase firing
Inhibitory → decrease firing
Synapses:
gaps between neurons
neurotransmitters carry signals
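The firing rule above (excitatory inputs push toward firing, inhibitory inputs push away, action potential only past a threshold) can be sketched as a toy function; the threshold value and inputs here are illustrative, not biological measurements:

```python
# Sketch: a neuron sums excitatory (+) and inhibitory (-) inputs
# and "fires" (action potential) only if the total crosses a threshold.

def fires(inputs, threshold=1.0):
    """inputs: signed contributions (positive = excitatory,
    negative = inhibitory). Returns True if the neuron fires."""
    return sum(inputs) >= threshold

print(fires([0.6, 0.7]))        # excitatory only -> True
print(fires([0.6, 0.7, -0.5]))  # inhibition pulls below threshold -> False
```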
Synaptic Plasticity & Networks
Synapses change strength over time → learning mechanism
Frequent use → stronger connections
Rare use → weaker connections
Brain = complex adaptive network
Key brain networks
Default Mode Network → introspection
Salience Network → detects important stimuli
Central Executive Network → goal-directed behavior
Neuroplasticity:
brain reorganizes itself
adapts based on experience
Memory & Learning
Memory = storing & using information
Learning = forming & strengthening memories
Types of Memory
Short-term (working memory):
temporary (seconds–minutes)
limited capacity
Long-term memory:
long duration (days–lifetime)
large capacity
Long-term memory types
Declarative (explicit):
Semantic → facts
Episodic → personal experiences
Procedural (implicit):
skills (e.g., riding a bike)
Memory Processes
Consolidation:
short-term → long-term memory
sleep is important
Emotional events → remembered better
Sensory memory types:
Iconic (visual) → ~0.5 sec
Echoic (auditory) → few seconds
Memories change over time (reconsolidation)
Hebbian Learning Principle
“Neurons that fire together, wire together”
Simultaneous activation → stronger connections
Basis for associative learning
Local learning (only involved neurons)
Unsupervised (no external control)
Builds internal representations
Smell + positive experience → strong association
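The Hebbian rule ("fire together, wire together") can be sketched as a local, unsupervised weight update; the learning rate and neuron names are illustrative:

```python
# Minimal Hebbian update sketch: when pre- and post-synaptic neurons
# are active together, the connection weight grows (delta_w = lr * pre * post).
# Local (only these two neurons involved) and unsupervised (no teacher signal).

def hebbian_update(w, pre, post, lr=0.1):
    return w + lr * pre * post

w = 0.0
# "Smell" neuron and "positive experience" neuron repeatedly co-active:
for _ in range(10):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(round(w, 6))  # 1.0 -> a strong association has formed

# If only one side is active, the weight does not change:
print(hebbian_update(0.0, pre=1.0, post=0.0))  # 0.0
```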
Link to AI
Brain learning principles inspire AI:
neural networks
adaptive learning
Key concepts:
synaptic plasticity
Hebbian learning
network-based processing
Cognitive Science (Overview)
Studies how the mind:
learns, reasons, decides, acts
Focus: information processing, not just brain biology
Interdisciplinary field:
psychology, neuroscience, linguistics, philosophy, computer science
Goal:
understand & model intelligent behavior (human + AI)
Cognitive Science Across Disciplines
Psychology:
behavior, memory, attention, learning, decision-making
uses experiments & observations
Neuroscience:
brain structures & neural activity
Linguistics:
language structure, meaning, communication
important for NLP in AI
Philosophy:
nature of mind, knowledge, reasoning
Computer Science / AI:
builds computational models of cognition
simulates perception, memory, decisions
➡️ AI ↔ Cognitive science = mutual relationship
Perception, Attention & Memory
Core functions of any intelligent system:
Perception → interpret environment
Attention → select relevant info
Memory → store & use information
Perception
Not just sensing → interpreting information
Uses:
context
prior experience
expectations
Handles:
incomplete, noisy, ambiguous data
Gestalt principles:
group patterns, simplify perception
Brain uses internal models + probabilistic reasoning
Attention
Filters information → selects what to process
Limited cognitive resources → prioritization needed
Flexible & goal-driven
Levels of attention
Perceptual → focus on features (color, motion)
Cognitive → mental tasks (memory, problem-solving)
Executive → task management & priorities
Social → interaction with others
Attention theories
Spotlight model:
attention highlights one area
Biased competition:
stimuli compete → most relevant wins
Working Memory
Temporary mental workspace
Supports:
reasoning, planning, language
Phonological loop:
verbal/auditory info
Visuospatial sketchpad:
images, spatial info
Central executive:
controls attention & resources
Information Processing Flow
Sensory input → sensory register (very brief)
→ attention filters information
→ working memory processes it
→ long-term memory stores it
Controlled vs Automatic Processing
Controlled:
slow, conscious, effortful
e.g. learning new task
Automatic:
fast, unconscious
e.g. routine skills
Relevance to AI
Cognitive models → inspire AI systems:
perception (computer vision)
attention (transformers)
memory & reasoning models
Combine:
symbolic reasoning + data-driven learning
Language Processing
Key cognitive ability:
communication, knowledge sharing
Syntax → structure
Semantics → meaning
Pragmatics → context
Incremental processing (word by word)
Context-dependent interpretation
Problem-Solving Types
Well-defined → clear rules (e.g. puzzle)
Ill-defined → unclear goals (e.g. life decisions)
Problem-Solving Strategies
Trial & error → learn from failure
Algorithms → step-by-step rules
Heuristics → mental shortcuts
Insight → sudden solution
Cognitive Limitations
Functional fixedness:
difficulty seeing new uses for objects
Decisions affected by:
emotions
time pressure
limited information
Decision-Making (Kahneman)
System 1 (intuitive):
fast, automatic
System 2 (analytical):
slow, deliberate
➡️ Humans are not fully rational
Relationship: Neuroscience, Cognitive Science & AI
Intelligence = interaction of:
perception, learning, memory, reasoning, action
Neuroscience → biological mechanisms
Cognitive science → mental processes
AI → implementation in machines
➡️ Strong interdisciplinary connection
➡️ Human intelligence → blueprint for AI
Brain-Inspired AI
Brain research → inspired artificial neural networks
AI systems use:
attention
memory
learning from experience
build adaptive, flexible systems
Inspirations from Biology
Aim:
not replicate brain exactly
extract key principles
Approach:
abstraction → simplify biology into models
Artificial Neurons (McCulloch–Pitts Model)
First mathematical neuron model (1940s)
Key idea:
neuron = binary unit (on/off)
Inputs → weighted sum → threshold → output (0 or 1)
Can implement:
AND, OR, NOT logic
➡️ Foundation of neural networks
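The McCulloch–Pitts pipeline (inputs → weighted sum → threshold → 0/1 output) is simple enough to sketch directly; the weight and threshold choices for the logic gates are one standard way to set them:

```python
# Sketch of a McCulloch-Pitts unit: binary inputs, fixed weights,
# hard threshold, binary output.

def mp_neuron(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# AND: both inputs must be on (threshold 2)
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
# OR: any single input suffices (threshold 1)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
# NOT: an inhibitory (negative) weight flips the input (threshold 0)
NOT = lambda a: mp_neuron([a], [-1], threshold=0)

print(AND(1, 1), AND(1, 0))  # 1 0
print(OR(0, 1), OR(0, 0))    # 1 0
print(NOT(0), NOT(1))        # 1 0
```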
From Neurons to Networks
Many simple neurons → complex networks
Enable:
logical reasoning
computation
Perceptron (Rosenblatt)
Improved artificial neuron
Key features
Adjustable weights → learning from data
Bias term → shifts threshold
Still binary output
➡️ Can:
recognize patterns
adapt based on experience
➡️ Inspired by synaptic plasticity
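The perceptron's two additions over McCulloch–Pitts, adjustable weights and a bias term, can be sketched with the classic error-driven update rule; the toy data (logical OR) and hyperparameters are illustrative:

```python
# Rosenblatt-style perceptron sketch: weights and bias are adjusted
# from labelled examples (error-driven learning), output stays binary.

def predict(x, w, b):
    return 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0

def train(data, lr=0.1, epochs=20):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            error = target - predict(x, w, b)  # -1, 0, or +1
            # strengthen/weaken weights in proportion to the error:
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error                    # bias shifts the threshold
    return w, b

# Logical OR is linearly separable, so the perceptron converges on it:
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b = train(data)
print([predict(x, w, b) for x, _ in data])  # [0, 1, 1, 1]
```

Note the echo of synaptic plasticity: connections that contribute to correct behavior are strengthened, others weakened.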
Symbolic AI
Knowledge = symbols + rules
Strengths:
interpretability
Weaknesses:
rigid
poor with uncertainty
no learning
Subsymbolic AI (Neural Networks)
Knowledge = distributed in weights
Learns from data
Strengths:
pattern recognition
handles noise & ambiguity
adaptable
Weaknesses:
low interpretability (“black box”)
Hybrid AI
Combines both approaches:
symbolic reasoning + neural learning
Example:
perception (NN) + reasoning (rules)
Embodied Cognition
Intelligence requires a body + interaction
Learning through:
sensory input
physical action
In AI:
agents with:
sensors (perception)
actuators (actions)
➡️ Example:
robot learning via trial & error
Situated Cognition
Intelligence depends on:
environment
Not static → dynamic & unpredictable
behavior adapts to:
current situation
goals
constraints
Example: delivery robot navigating obstacles
Continuous Interaction Loop
Intelligent systems operate in cycle:
perceive → interpret → decide → act → learn → adapt
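The cycle above can be sketched as a toy agent loop; the thermostat scenario, smoothing factor, and all names are illustrative, not from the notes:

```python
# Sketch of the perceive -> interpret -> decide -> act loop,
# using a toy thermostat "agent" as the situated system.

def run_agent(readings, setpoint=21.0):
    log = []
    estimate = setpoint                      # internal model of the world
    for sensed in readings:                  # perceive (sensor input)
        estimate = 0.5 * estimate + 0.5 * sensed      # interpret (smooth noise)
        action = "heat" if estimate < setpoint else "idle"  # decide
        log.append(action)                   # act (actuator command)
        # a fuller agent would also learn/adapt from feedback here
    return log

print(run_agent([19.0, 20.0, 22.5, 23.0]))  # ['heat', 'heat', 'idle', 'idle']
```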
Embodiment + Symbolic Reasoning
Full intelligence requires:
physical interaction (embodied)
structured reasoning (symbolic)
➡️ Modern AI = hybrid systems
Key Takeaway
AI is shaped by:
neuroscience (biology)
cognitive science (thinking)
Intelligence emerges from:
interaction with environment
continuous learning & adaptation
Artificial Neural Networks (ANNs) – Overview
Goal: build systems that:
learn from data
adapt to new situations
improve over time
Inspired by:
neuroscience
cognitive science
Used for:
language understanding
image recognition
forecasting
➡️ Key strength: generalization to unseen data
Core Idea of ANNs
Learn from examples, not rules
Create internal representations of data
Learning = adjusting weights between neurons
Structure of an ANN
Basic unit: Artificial neuron
Inputs → weighted sum → activation → output
➡️ Single neuron = limited
➡️ Networks = powerful
Layers in Neural Networks
Input Layer:
Receives raw data
One neuron per feature (e.g. pixel, value)
No computation
Hidden Layers:
Perform computations:
weighted sum + bias + activation
Extract features:
early layers → simple patterns
deeper layers → complex patterns
Output Layer:
Produces final prediction
Depends on task:
Binary → 1 neuron (e.g. spam detection)
Classification → multiple neurons
Regression → continuous value
Feedforward Network
Data flows:
input → hidden → output
No feedback loops
Fully connected structure
Learning in Neural Networks
Learning = iterative improvement using feedback
4 Key Components
Forward propagation
Loss function
Backpropagation
Gradient descent
Forward Propagation
Input passes through network → prediction
Uses current weights (initially random)
Also called inference phase
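The forward pass (input → hidden → output, weighted sum + bias + activation at each layer) can be sketched for a tiny fully connected network; the weights here are arbitrary illustrative values, not trained ones:

```python
# Forward-propagation sketch for a tiny 2-input -> 2-hidden -> 1-output
# feedforward network with sigmoid activations.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def forward(x, W1, b1, W2, b2):
    # hidden layer: weighted sum + bias, then activation (one row per neuron)
    h = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    # output layer: same computation over the hidden activations
    return sigmoid(sum(w * hi for w, hi in zip(W2, h)) + b2)

W1 = [[0.5, -0.2], [0.3, 0.8]]  # illustrative, untrained weights
b1 = [0.0, 0.1]
W2 = [1.0, -1.0]
b2 = 0.0
print(forward([1.0, 2.0], W1, b1, W2, b2))  # some value between 0 and 1
```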
Loss Function
Measures prediction error
MSE (Mean Squared Error):
for continuous values
Cross-Entropy:
for classification
➡️ Output = single error value
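The two loss functions named above can be sketched directly; the toy targets and predictions are illustrative:

```python
# Sketch of the two loss functions: MSE (continuous targets) and
# binary cross-entropy (classification). Each reduces to one error value.
import math

def mse(y_true, y_pred):
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def cross_entropy(y_true, y_pred, eps=1e-12):
    # y_true in {0, 1}; eps guards against log(0)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_pred)) / len(y_true)

print(mse([1.0, 2.0], [1.5, 1.5]))        # 0.25
print(cross_entropy([1, 0], [0.9, 0.2]))  # low loss: predictions are good
```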
Backpropagation
Computes how each parameter contributed to the error
Uses:
derivatives
chain rule
Propagates error backwards through network
➡️ Determines how to adjust weights
Gradient Descent
Updates weights to reduce error
Move downhill in error landscape
Use gradient (direction of steepest increase)
Update rule:
adjust weights in opposite direction
Learning rate:
Small → slow but stable
Large → fast but risky
Training Process
Repeat:
predict → compute error → adjust weights
Gradual improvement over time
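The full predict → compute error → adjust cycle can be sketched on a one-weight model; the data (targets following y = 2x), learning rate, and step count are illustrative:

```python
# Training-loop sketch: fit y_hat = w * x to data generated by y = 2x,
# by gradient descent on the mean squared error.

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w, lr = 0.0, 0.05

for step in range(100):
    # forward pass + chain rule: d/dw (w*x - y)^2 = 2 * (w*x - y) * x,
    # averaged over the dataset
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad   # move against the gradient (downhill in error)

print(round(w, 4))   # 2.0 -> the weight has converged to the true slope
```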
Deep Neural Networks (DNNs)
Many hidden layers
Enable:
complex pattern recognition
hierarchical feature learning
Applications:
face recognition
speech translation
text understanding
autonomous driving
Deep Learning Architectures
Examples:
AlexNet → improved performance with depth
VGG → deeper structure
GoogLeNet → efficient architecture
ResNet → very deep networks (100+ layers)
Applications of Neural Networks
Virtual assistants
Translation & NLP
Robotics & autonomous vehicles
Healthcare (diagnosis support)
Finance (fraud detection)
Recommendation systems
Challenges of Neural Networks
Require:
large datasets
high computational power
Long training time
Low interpretability (“black box”)
Hard to explain decisions
Adversarial vulnerability:
small input changes → wrong outputs
Lack of true understanding
Weak in:
symbolic reasoning
Strengths vs Limitations of ANN
Strengths:
scalability
adaptability
Limitations:
opaque decisions
data & resource intensive