AlexNet MCQ · 15 Questions
Time: ~25 mins · Advanced


The 2012 ImageNet breakthrough: depth, ReLU, dropout, and training a large CNN on GPUs.

Easy: 5 Q · Medium: 6 Q · Hard: 4 Q
Tags: ImageNet (ILSVRC 2012) · ReLU (non-linearity) · Dropout (regularization) · GPUs (scale)
AlexNet in context

AlexNet (Krizhevsky et al., 2012) won the ImageNet ILSVRC 2012 competition by a large margin with a deep, GPU-trained CNN. It popularized ReLU activations, dropout regularization, overlapping max pooling, data augmentation, and multi-GPU model parallelism for vision. Deeper stacks of conv layers followed (VGG, ResNet, …).

Why it mattered

It showed that deep CNNs, given enough data and compute, could decisively outperform hand-crafted feature pipelines on a hard benchmark.

Key ideas

Architecture

Five convolutional layers (interleaved with LRN and max-pooling stages) followed by three fully connected layers.

ReLU

Trains markedly faster than saturating sigmoid/tanh units, whose gradients vanish for large inputs; this helps deep nets converge.
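A quick way to see the saturation argument is to compare gradients numerically. A minimal NumPy sketch (the function names are mine, not from the paper):

```python
import numpy as np

def relu(x):
    """max(0, x): the AlexNet non-linearity."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Gradient of ReLU: 1 for x > 0, else 0 -- never shrinks for large x."""
    return (x > 0).astype(float)

def tanh_grad(x):
    """Gradient of tanh: 1 - tanh(x)^2 -- goes to 0 for large |x|."""
    return 1.0 - np.tanh(x) ** 2

x = np.array([-5.0, -1.0, 0.5, 5.0])
r = relu(x)        # [0., 0., 0.5, 5.]
g = relu_grad(x)   # 1 where x > 0, else 0
t = tanh_grad(x)   # ~1.8e-4 at |x| = 5: the tanh gradient has vanished
```

With saturating units, deep layers receive almost no gradient once activations drift into the flat regions; ReLU avoids that on the positive side.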

Dropout

Randomly zeroes activations in the FC layers during training to reduce co-adaptation of features and overfitting.
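A minimal NumPy sketch of inverted dropout, the variant most frameworks use today (the original paper instead scaled activations at test time; the names here are mine):

```python
import numpy as np

def dropout(acts, p_drop=0.5, rng=None, train=True):
    """Zero each activation with probability p_drop; scale survivors by
    1/(1 - p_drop) so the expected activation is unchanged."""
    if not train:
        return acts  # identity at inference time
    if rng is None:
        rng = np.random.default_rng(0)
    mask = rng.random(acts.shape) >= p_drop
    return acts * mask / (1.0 - p_drop)

a = np.ones(10000)
out = dropout(a, p_drop=0.5)
# roughly half the entries are zeroed; survivors become 2.0, so the mean stays near 1.0
```

AlexNet used p_drop = 0.5 in the first two FC layers, roughly doubling training iterations but substantially cutting overfitting.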

Scale

Trained across two GPUs with the conv layers split between them, which enabled a wider network than one GPU's memory allowed.

Rough data flow

227×227×3 input → conv/pool stages → 4096-4096-1000 FC → softmax over 1000 classes
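The spatial sizes in that flow can be checked with the standard conv output formula. A small Python sketch, assuming the commonly cited 227×227 walkthrough and standard kernel/stride/padding values (not all of which are stated on this page):

```python
def out_size(n, k, stride, pad=0):
    """Output width of a conv/pool layer: floor((n + 2*pad - k) / stride) + 1."""
    return (n + 2 * pad - k) // stride + 1

n = 227
n = out_size(n, 11, 4)         # conv1: 11x11, stride 4        -> 55
n = out_size(n, 3, 2)          # overlapping max pool 3x3, s=2 -> 27
n = out_size(n, 5, 1, pad=2)   # conv2: 5x5, pad 2             -> 27
n = out_size(n, 3, 2)          # max pool                      -> 13
n = out_size(n, 3, 1, pad=1)   # conv3: 3x3, pad 1             -> 13
n = out_size(n, 3, 1, pad=1)   # conv4                         -> 13
n = out_size(n, 3, 1, pad=1)   # conv5                         -> 13
n = out_size(n, 3, 2)          # max pool                      -> 6
flat = n * n * 256             # 6*6*256 = 9216 features into the first 4096-unit FC layer
```

The 3×3 pools with stride 2 are the "overlapping" pooling the page mentions: the window is larger than the stride.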

Pro tip: Local Response Normalization (LRN) was used in AlexNet but later often replaced by batch norm in modern designs.