Friday, January 25, 2019

Jan 21: KNN and linear classifiers

Today we started our work with classification theory and related sklearn tools.

The first classifier was K-Nearest Neighbor, which searches for the K (e.g., K = 5) nearest training samples and assigns the most frequent class label among them to the query sample. Its benefits are simplicity and flexibility, but a major drawback is slow speed at prediction time, since every query must be compared against the stored training set.
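A minimal sketch of this with sklearn's KNeighborsClassifier, using a synthetic toy dataset (the data here is assumed for illustration, not from the lecture):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy two-class dataset, assumed for illustration
X, y = make_classification(n_samples=200, n_features=2,
                           n_informative=2, n_redundant=0,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)  # K = 5
knn.fit(X_train, y_train)   # training is cheap: the data is just stored
acc = knn.score(X_test, y_test)  # prediction does the neighbor search
```

Note that fit() is nearly instant while predict()/score() carry the cost, which is the slow-prediction drawback mentioned above.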

Linear classifiers are a simpler alternative. A linear classifier is characterized by a linear decision boundary, and described by the decision rule:

F(x) = "class 1" if w^T x > b
F(x) = "class 0" if w^T x <= b

Here, w is the weight vector and b the constant offset, both learnt from the data, and x is the new sample to be classified.
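The decision rule above can be written out directly; the weight values below are hypothetical placeholders standing in for parameters that would normally be learnt from data:

```python
import numpy as np

# Assumed example parameters (in practice, learnt from training data)
w = np.array([2.0, -1.0])  # weight vector
b = 0.5                    # constant offset

def F(x):
    """Linear decision rule: class 1 if w^T x > b, else class 0."""
    return 1 if w @ x > b else 0

print(F(np.array([1.0, 0.0])))  # w^T x = 2.0  > 0.5  -> 1
print(F(np.array([0.0, 1.0])))  # w^T x = -1.0 <= 0.5 -> 0
```

Unlike KNN, prediction here is a single dot product, which is why linear classifiers are fast at test time.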
