Car Damage Detection


Binary car damage detection using transfer learning (MobileNetV2), optimized through controlled experiments to reach 93.48% test accuracy.

Dataset

The model is trained on labeled car images representing damaged and undamaged vehicles.

The dataset is not included in the repository due to size constraints.
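A minimal loading sketch is shown below; the `data/train` and `data/val` directory names and the batch size are assumptions, since the actual layout is not specified in the repository.

```python
# Minimal sketch of loading the damaged/undamaged image folders.
# Paths and batch size are hypothetical; the dataset ships separately.
import tensorflow as tf

IMG_SIZE = (224, 224)   # MobileNetV2's default input resolution
BATCH_SIZE = 32

train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train",            # assumed layout: one subfolder per class
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="binary",     # damaged vs. undamaged
)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val",
    image_size=IMG_SIZE,
    batch_size=BATCH_SIZE,
    label_mode="binary",
)
```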

Tools & Technologies

Python · TensorFlow/Keras · MobileNetV2 · NumPy · Matplotlib · Jupyter Notebook

Overview

This project focuses on binary image classification to detect vehicle damage using transfer learning with MobileNetV2. The goal is to evaluate different training configurations and identify the best-performing model for car damage detection.

Problem Statement

Automatic car damage detection is an important task for insurance assessment, vehicle inspection, and automation in the automotive domain. This project explores whether transfer learning can effectively classify damaged vs. undamaged vehicles from images.

Methodology

  • Image preprocessing and resizing
  • Transfer learning using MobileNetV2 (see the model sketch after this list)
  • Multiple experimental configurations: dropout rates, epoch variations, data augmentation vs. no augmentation
  • Model evaluation using accuracy and loss metrics
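
The sketch below illustrates the transfer-learning setup with the best-reported configuration (MobileNetV2 backbone, a single dropout of 0.3, 12 epochs, no augmentation). Details not stated in this README, such as the head layers, the frozen backbone, and the Adam optimizer, are assumptions.

```python
# Sketch of the MobileNetV2 transfer-learning setup; head design, optimizer,
# and freezing strategy are assumptions, not the repository's exact code.
import tensorflow as tf
from tensorflow.keras import layers

IMG_SIZE = (224, 224)

# Pretrained backbone without its ImageNet classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,),
    include_top=False,
    weights="imagenet",
)
base.trainable = False  # freeze pretrained weights for transfer learning

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
x = inputs
# Optional augmentation (one of the compared configurations):
# x = layers.RandomFlip("horizontal")(x)
# x = layers.RandomRotation(0.1)(x)
x = tf.keras.applications.mobilenet_v2.preprocess_input(x)  # scale pixels to [-1, 1]
x = base(x, training=False)
x = layers.GlobalAveragePooling2D()(x)
x = layers.Dropout(0.3)(x)                          # single dropout, rate 0.3
outputs = layers.Dense(1, activation="sigmoid")(x)  # binary: damaged vs. undamaged

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# history = model.fit(train_ds, validation_data=val_ds, epochs=12)
```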

Model Performance

Training vs. validation accuracy and loss curves show strong generalization and stable convergence.
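
These curves can be reproduced from the Keras `History` object returned by `model.fit`; the sketch below assumes that object is named `history`, and the exact figure styling in the notebook may differ.

```python
# Plot training vs. validation accuracy and loss from a Keras History object.
import matplotlib.pyplot as plt

def plot_history(history):
    """Side-by-side accuracy and loss curves for train and validation sets."""
    fig, (ax_acc, ax_loss) = plt.subplots(1, 2, figsize=(10, 4))

    ax_acc.plot(history.history["accuracy"], label="train")
    ax_acc.plot(history.history["val_accuracy"], label="validation")
    ax_acc.set_title("Accuracy")
    ax_acc.set_xlabel("Epoch")
    ax_acc.legend()

    ax_loss.plot(history.history["loss"], label="train")
    ax_loss.plot(history.history["val_loss"], label="validation")
    ax_loss.set_title("Loss")
    ax_loss.set_xlabel("Epoch")
    ax_loss.legend()

    plt.tight_layout()
    plt.show()
```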

Results

  • Best Model: MobileNetV2 (transfer learning)
  • Test Accuracy: 93.48% (best performance)
  • Best Configuration: single dropout (0.3), 12 epochs
  • Augmentation: none (best result was achieved without augmentation)
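
The reported test accuracy can be checked with a standard Keras evaluation pass; the `data/test` path and `test_ds` name below are hypothetical, and `model` refers to the trained model from the sketch above.

```python
# Evaluate the trained model on a held-out test set (assumed path/name).
import tensorflow as tf

test_ds = tf.keras.utils.image_dataset_from_directory(
    "data/test",
    image_size=(224, 224),
    batch_size=32,
    label_mode="binary",
)
loss, accuracy = model.evaluate(test_ds)
print(f"Test accuracy: {accuracy:.2%}")   # best run reported 93.48%
```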