Autonomous Vehicles MCQ (15 questions)
Time: ~25 mins · Advanced


Perceive lanes, traffic, and obstacles—often fusing camera, LiDAR, and radar for robust autonomy.

Easy: 5 · Medium: 6 · Hard: 4

Topics: ego vehicle (pose) · lanes (boundaries) · vulnerable road users (pedestrians, cyclists) · fusion (multi-sensor)

Vision in self-driving stacks

Autonomous systems use cameras for rich semantics (lanes, signs, color) and often fuse LiDAR and radar for range measurement and robustness in poor weather. Semantic segmentation labels drivable space; detectors find and track vehicles and pedestrians; HD maps and odometry integrate these observations over time. Redundancy and validation matter as much as model accuracy.

Functional safety

Production stacks duplicate sensing modalities and monitor perception health—not only raw mAP.

Key ideas

Lane detection

Polynomial fits, segmentation masks, or row-wise classifiers applied to the road image.
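The polynomial-fit approach can be sketched in a few lines: given lane pixels extracted from a mask, fit x as a quadratic in the row index y, since lanes are near-vertical in image space (roughly one x per row). The pixel coordinates below are made up for illustration; a real pipeline would collect them from a segmentation mask or sliding-window search.

```python
import numpy as np

# Hypothetical lane pixels (row y, column x) from a lane mask.
ys = np.array([700, 600, 500, 400, 300], dtype=float)
xs = np.array([310, 318, 330, 346, 366], dtype=float)

# Fit x = a*y^2 + b*y + c; polyfit returns highest-degree coefficient first.
a, b, c = np.polyfit(ys, xs, deg=2)

def lane_x(y):
    """Evaluate the fitted lane boundary at image row y."""
    return a * y**2 + b * y + c

print(round(lane_x(650.0), 1))
```

The same fit, evaluated densely over y, yields the overlay curve typically drawn back onto the camera frame.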

Critical objects

Vehicles, pedestrians, cyclists—often tracked over time.
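Tracking over time usually starts with frame-to-frame association of detections. A minimal sketch, assuming greedy IoU matching (simpler than the Hungarian assignment used by production trackers); `associate` and its threshold are illustrative names, not a real library API:

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, thresh=0.3):
    """Greedy matching: each track claims its best unclaimed detection."""
    matches, used = {}, set()
    for tid, tbox in tracks.items():
        best, best_iou = None, thresh
        for j, dbox in enumerate(detections):
            if j in used:
                continue
            score = iou(tbox, dbox)
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            matches[tid] = best
            used.add(best)
    return matches
```

Unmatched detections would spawn new tracks and unmatched tracks age out; a motion model (e.g. a Kalman filter) typically predicts each track's box before matching.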

Segmentation

Freespace vs obstacles; curb and road boundary cues.
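One common way to turn a freespace mask into something a planner can use is a per-column scan from the image bottom (closest to the ego vehicle) up to the first obstacle, a stixel-like summary. The tiny mask below is a toy example; `freespace_rows` is a hypothetical helper name:

```python
import numpy as np

# Hypothetical semantic mask: 0 = road, 1 = obstacle, rows top-to-bottom.
mask = np.array([
    [1, 1, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 0, 1],
    [0, 0, 0, 0],  # bottom row = closest to the ego vehicle
])

def freespace_rows(mask):
    """For each image column, count contiguous road rows from the
    bottom up to the first obstacle."""
    h, w = mask.shape
    depth = np.zeros(w, dtype=int)
    for col in range(w):
        for row in range(h - 1, -1, -1):  # scan bottom-up
            if mask[row, col] == 1:
                break
            depth[col] += 1
    return depth

print(freespace_rows(mask).tolist())
```

With calibration, each per-column row count converts to a metric distance along the ground plane.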

Fusion

Project LiDAR into camera; late or early fusion strategies.
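Projecting LiDAR into the camera is a rigid transform into the camera frame followed by a pinhole projection. A minimal sketch, assuming the points are already expressed in a camera-like axis convention (z forward) so the extrinsic rotation can be the identity; real rigs need the calibrated LiDAR-to-camera rotation, and the K, R, t values here are placeholders:

```python
import numpy as np

# Hypothetical calibration: intrinsics K and LiDAR->camera extrinsics [R|t].
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)                      # identity only for this sketch
t = np.array([0.0, 0.0, 0.0])

def project(points_lidar):
    """Project Nx3 LiDAR points to pixel coordinates, dropping
    points behind the camera (z <= 0)."""
    cam = points_lidar @ R.T + t   # into the camera frame
    cam = cam[cam[:, 2] > 0]       # keep points in front of the camera
    uv = cam @ K.T                 # homogeneous pixel coordinates
    return uv[:, :2] / uv[:, 2:3]  # perspective divide

pts = np.array([[1.0, 0.5, 10.0], [0.0, 0.0, -5.0]])
print(project(pts).tolist())
```

Each projected point carries its range, giving depth for the pixels it lands on; this per-pixel depth is the basis of early fusion, while late fusion instead merges each sensor's independent detections.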

Perception loop

capture → calibrate → detect/segment → track → planner
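The loop above can be sketched as a chain of function calls, one tick per frame. Every stage here is a hypothetical stub standing in for a real module (camera driver, rectification, detector network, tracker, planner), shown only to make the data flow concrete:

```python
# Stubs standing in for real perception modules.
def capture():           return {"image": "frame_0"}
def calibrate(frame):    frame["rectified"] = True; return frame
def detect(frame):       return [{"cls": "car", "box": (0, 0, 10, 10)}]
def track(dets, tracks): tracks.extend(dets); return tracks
def plan(tracks):        return "slow_down" if tracks else "cruise"

def perception_step(tracks):
    """One tick: capture -> calibrate -> detect -> track -> plan."""
    frame = calibrate(capture())
    detections = detect(frame)
    tracks = track(detections, tracks)
    return plan(tracks)

print(perception_step([]))
```

In a real stack each stage runs asynchronously at its own rate, with the tracker state persisting across ticks rather than being passed in fresh.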

Pro tip: Simulation and log replay (shadow mode) validate models before on-road OTA updates.