Explain the basic concept of SVM
Problem: Classification. There are many possible ways to separate two sets, but which one is optimal for the problem?
Intuition: The size of the margin determines the generalization capability
Solution: Find the separating line / hyperplane with the maximum margin to the classes
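The idea above can be sketched with scikit-learn's `SVC` (the toy data is made up for illustration; a very large `C` approximates the hard-margin case):

```python
# Sketch: fitting a maximum-margin linear classifier with scikit-learn.
import numpy as np
from sklearn.svm import SVC

# Two linearly separable point clouds (illustrative toy data)
X = np.array([[1, 1], [2, 1], [1, 2],      # class -1
              [4, 4], [5, 4], [4, 5]])     # class +1
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6)   # large C approximates a hard margin
clf.fit(X, y)

w, b = clf.coef_[0], clf.intercept_[0]
margin = 2 / np.linalg.norm(w)      # width of the margin band
print(w, b, margin)
```

Of all hyperplanes separating the two clouds, the fit picks the one whose margin (`2 / ||w||`) is largest.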
What are support vectors?
Training data points that directly determine the position of the hyperplane; removing one would change it
The data points closest to the hyperplane
The data points that are hardest to classify
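A fitted scikit-learn `SVC` exposes these points directly (same made-up toy data as the concept; only the points on the margin appear):

```python
# Sketch: identifying the support vectors of a fitted linear SVM.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, 2], [4, 4], [5, 4], [4, 5]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)

# Only the points closest to the hyperplane become support vectors;
# removing any of the other points leaves the hyperplane unchanged.
print(clf.support_vectors_)
```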
Formula for the optimality criterion of the optimal hyperplane
Optimality condition: min_i |w · x(i) + b| = 1 (the training points closest to the hyperplane lie exactly on the margin)
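This condition can be checked numerically on a hard-margin fit (scikit-learn `SVC` with large `C`; the toy data is made up for illustration):

```python
# Sketch: verifying the optimality condition min_i |w . x(i) + b| = 1.
import numpy as np
from sklearn.svm import SVC

X = np.array([[1, 1], [2, 1], [1, 2], [4, 4], [5, 4], [4, 5]])
y = np.array([-1, -1, -1, 1, 1, 1])

clf = SVC(kernel="linear", C=1e6).fit(X, y)
w, b = clf.coef_[0], clf.intercept_[0]

# The closest training points (the support vectors) satisfy |w . x + b| = 1.
closest = np.min(np.abs(X @ w + b))
print(round(closest, 3))
```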
What is the kernel trick?
Idea of nonlinear kernel methods (SVM): transform the data into another (high-dimensional) space where the data can be separated linearly, and solve the problem there
Instead of computing the complex nonlinear transformation ϕ( ) and the scalar product explicitly, we use a kernel that does this implicitly, saving many computational operations
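A minimal sketch of this, using the homogeneous polynomial kernel of degree 2 as an example: K(x, z) = (x · z)^2 gives the same value as first applying the explicit feature map ϕ(x) = (x1², √2·x1·x2, x2²) and then taking the scalar product, without ever forming ϕ(x):

```python
# Sketch: the kernel trick with a degree-2 polynomial kernel.
import numpy as np

def phi(x):
    # explicit quadratic feature map from 2-D into 3-D space
    return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

def poly_kernel(x, z):
    # same scalar product, computed entirely in the original 2-D space
    return (x @ z) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

explicit = phi(x) @ phi(z)      # transform first, then scalar product
implicit = poly_kernel(x, z)    # kernel does it implicitly
print(explicit, implicit)       # both equal (1*3 + 2*0.5)^2 = 16.0
```

The saving grows with the target dimension: the kernel needs one scalar product in the original space, while the explicit route must construct every feature coordinate.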
What is meant by the duality of the hypothesis space?
Points in feature space correspond to hyperplanes in the hypothesis space H, and vice versa
Name two advantages and two disadvantages of SVM.
Advantages
Optimal hyperplane —> good generalization results
Handles high-dimensional data —> fast evaluation
Disadvantages
High memory and computing-power requirements
Finding an optimal kernel is an unsolved / open research problem