Car Damage Detection
Binary car damage detection using transfer learning (MobileNetV2), optimized through controlled experiments to reach 93.48% test accuracy.
Tools & Technologies
Overview
This project focuses on binary image classification to detect vehicle damage using transfer learning with MobileNetV2. The goal is to evaluate different training configurations and identify the best-performing model for car damage detection.
Problem Statement
Automatic car damage detection is an important task for insurance assessment, vehicle inspection, and automation in the automotive domain. This project explores whether transfer learning can effectively classify damaged vs. undamaged vehicles from images.
Methodology
- Image preprocessing and resizing
- Transfer learning using MobileNetV2 (a minimal model sketch follows this list)
- Multiple experimental configurations: dropout rates, epoch counts, and training with vs. without data augmentation
- Model evaluation using accuracy and loss metrics
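This section does not spell out the framework or input size, so the following is a minimal sketch under assumptions: TensorFlow/Keras, 224x224 RGB inputs, and an ImageNet-pretrained MobileNetV2 backbone used as a frozen feature extractor with a small sigmoid head. The `dropout_rate` parameter mirrors the dropout-rate experiments listed above.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (224, 224)  # assumed input size (MobileNetV2's default resolution)

def build_model(dropout_rate=0.2):
    # MobileNetV2 backbone pretrained on ImageNet, frozen for feature extraction
    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet"
    )
    base.trainable = False

    inputs = layers.Input(shape=IMG_SIZE + (3,))
    x = tf.keras.applications.mobilenet_v2.preprocess_input(inputs)  # scale pixels to [-1, 1]
    x = base(x, training=False)
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(dropout_rate)(x)                  # dropout rate is one experiment variable
    outputs = layers.Dense(1, activation="sigmoid")(x)   # binary: damaged vs. undamaged

    model = models.Model(inputs, outputs)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

Keeping the backbone frozen leaves only the small classification head trainable, which is the usual reason this style of transfer learning converges quickly and resists overfitting on modest datasets.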
Model Performance

Training vs. validation curves show strong generalization and stable convergence.
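As a rough illustration of how such curves can be produced, assuming the model is trained with Keras' `model.fit` and validation data is supplied, the returned `History` object holds per-epoch training and validation metrics:

```python
import matplotlib.pyplot as plt

def plot_history(history):
    # history is the object returned by model.fit(); the keys below assume
    # metrics=["accuracy"] at compile time and validation data passed to fit()
    epochs = range(1, len(history.history["loss"]) + 1)

    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(epochs, history.history["accuracy"], label="train")
    ax1.plot(epochs, history.history["val_accuracy"], label="validation")
    ax1.set_title("Accuracy")
    ax1.set_xlabel("Epoch")
    ax1.legend()

    ax2.plot(epochs, history.history["loss"], label="train")
    ax2.plot(epochs, history.history["val_loss"], label="validation")
    ax2.set_title("Loss")
    ax2.set_xlabel("Epoch")
    ax2.legend()

    fig.tight_layout()
    plt.show()
```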
Results
Dataset
The model is trained on labeled car images representing damaged and undamaged vehicles.
Dataset is not included in the repository due to size constraints.
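Because the dataset ships separately, the loading snippet below is only a sketch under assumed conventions: a hypothetical `data/train` folder with one subfolder per class (the names `damaged/` and `undamaged/` are placeholders), read with `tf.keras.utils.image_dataset_from_directory` and a held-out validation split.

```python
import tensorflow as tf

# Assumed layout (the dataset itself is not part of the repository):
# data/train/damaged/*.jpg
# data/train/undamaged/*.jpg

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    labels="inferred",
    label_mode="binary",      # 0/1 labels for the sigmoid head
    image_size=(224, 224),    # matches the assumed MobileNetV2 input size
    batch_size=32,
    validation_split=0.2,
    subset="training",
    seed=42,
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",
    labels="inferred",
    label_mode="binary",
    image_size=(224, 224),
    batch_size=32,
    validation_split=0.2,
    subset="validation",
    seed=42,
)
```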