Learning Vector Quantization (LVQ)

Explore semi-supervised clustering with LVQ. Learn how prototype vectors are adjusted to form subclasses within categories using labeled training data.

Module 6 of 9
Advanced Level
70-90 min

What is LVQ?

Learning Vector Quantization (LVQ) is a semi-supervised clustering algorithm that uses labeled prototypes to form subclasses within categories. Unlike k-means, LVQ prototypes have predefined class labels and are adjusted during training to better separate classes.

Key Characteristics

  • Semi-supervised: Uses labeled training samples to guide clustering
  • Class-labeled prototypes: Each prototype has a predefined category label
  • Subclass formation: Creates multiple prototypes per class (subclasses)
  • Adaptive learning: Prototypes move toward/away from samples based on label match

Use Cases

  • Forming subclasses within known categories
  • When some labeled data is available
  • Fine-grained classification tasks
  • Pattern recognition with prototypes

Key Insight

LVQ learns to separate classes by moving prototypes toward samples of the same class and away from samples of different classes.

LVQ Algorithm

LVQ iteratively adjusts prototype vectors based on labeled training samples:

Step 1

Initialize Prototypes

Initialize q prototype vectors {p₁, p₂, ..., p_q} with predefined class labels {t₁, t₂, ..., t_q}. Each prototype belongs to a specific class.
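
As a minimal sketch of this step (the helper name init_prototypes and the per_class parameter are our own, not from the module), one common choice is to seed each class with randomly drawn labeled samples from that class:

```python
import numpy as np

def init_prototypes(X, y, per_class=1, rng=None):
    """Draw `per_class` labeled samples from each class as starting prototypes.

    Returns the prototype matrix P (one row per prototype) and the
    predefined class labels t, one label per prototype.
    """
    rng = np.random.default_rng() if rng is None else rng
    P, t = [], []
    for c in np.unique(y):
        idx = rng.choice(np.flatnonzero(y == c), size=per_class, replace=False)
        P.extend(X[idx])             # copy class-c samples as prototypes
        t.extend([c] * per_class)    # each prototype carries its class label
    return np.array(P, dtype=float), np.array(t)
```

Seeding with labeled samples guarantees each prototype starts inside its own class's region of the feature space.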

Step 2

Select Training Sample

Randomly select a labeled sample (xⱼ, yⱼ) from the training set, where yⱼ is the class label.
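
Continuing the sketch above (pick_sample is a hypothetical helper; rng is the NumPy Generator created during initialization), this step is just a uniform random draw from the labeled training set:

```python
def pick_sample(X, y, rng):
    """Step 2: draw one labeled pair (xj, yj) uniformly at random."""
    j = rng.integers(len(X))     # uniform index into the training set
    return X[j], y[j]
```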

Step 3

Find Nearest Prototype

Calculate the distance from xⱼ to all prototypes and find the index of the nearest one: i* = argmin_i ||xⱼ - pᵢ||. The winning prototype is p_i*.
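
A one-line NumPy version of this step (the function name is our own):

```python
import numpy as np

def nearest_prototype(x, P):
    """Step 3: return i* = argmin_i ||x - p_i|| over the rows of P."""
    return int(np.argmin(np.linalg.norm(P - x, axis=1)))
```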

Step 4

Update Prototype

If yⱼ = t_i* (same class): p_i* ← p_i* + η(xⱼ - p_i*)

If yⱼ ≠ t_i* (different class): p_i* ← p_i* - η(xⱼ - p_i*)

where η is the learning rate (0 < η < 1)
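
The two cases collapse into one signed update, matching Kohonen's classic LVQ1 rule. A sketch (lvq_update is our own name; P is assumed to be a float array so the in-place update works):

```python
def lvq_update(P, t, i_star, x, y_label, eta=0.1):
    """Step 4: attract the winner if labels match, repel it otherwise."""
    sign = 1.0 if t[i_star] == y_label else -1.0
    P[i_star] += sign * eta * (x - P[i_star])   # move toward (+) or away (-)
```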

Step 5

Convergence

Repeat Steps 2-4 until convergence (prototypes stabilize) or the maximum number of iterations is reached.
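
Putting Steps 2-5 together as one loop, using the helpers sketched above (a sketch under the same assumptions; the per-update stopping test below is a crude stand-in for a proper stability check):

```python
import numpy as np

def lvq_train(X, y, P, t, eta=0.1, max_iter=10_000, tol=1e-6, seed=0):
    """Repeat Steps 2-4 until the winning prototype barely moves."""
    rng = np.random.default_rng(seed)
    for _ in range(max_iter):
        x_j, y_j = pick_sample(X, y, rng)          # Step 2: random sample
        i = nearest_prototype(x_j, P)              # Step 3: winning index i*
        sign = 1.0 if t[i] == y_j else -1.0        # Step 4: attract or repel
        step = sign * eta * (x_j - P[i])
        P[i] += step
        if np.linalg.norm(step) < tol:             # Step 5: stabilized
            break
    return P
```

For example: P, t = init_prototypes(X, y, per_class=2) followed by P = lvq_train(X, y, P, t) trains two subclass prototypes per class.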

Learning Rate Control

The learning rate η controls how much prototypes move toward or away from samples. Common strategies:

Fixed Learning Rate

Use constant η (e.g., 0.1) throughout training. Simple but may overshoot or converge slowly.

Decaying Learning Rate

Start with a larger η and gradually decrease it, e.g. η(t) = η₀ × (1 - t/T), where t is the current iteration and T the total number of iterations. Large early moves position prototypes quickly; small late moves allow fine-tuning near convergence.
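
A sketch of that schedule (the function name is our own), which could replace the fixed eta inside the training loop above:

```python
def decayed_eta(eta0, step, total):
    """Linear decay: eta(t) = eta0 * (1 - t/T), reaching zero at t = T."""
    return eta0 * (1.0 - step / total)
```

For instance, calling eta = decayed_eta(0.1, it, max_iter) at iteration it yields large updates early in training and progressively finer ones as t approaches T.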