A scene invariant convolutional neural network for visual crowd counting using fast-lane and sample selective methods


Bibliographic Details
Main Author: Teoh, Shen Khang
Format: Final Year Project / Dissertation / Thesis
Published: 2023
Subjects:
Online Access:http://eprints.utar.edu.my/5945/1/1606100_Teoh_Shen_Khang.pdf
http://eprints.utar.edu.my/5945/
Description
Summary: Convolutional neural network (CNN) based crowd counting aims to estimate the number of pedestrians in an image. Existing research usually follows a training-testing protocol within a single dataset, and accuracy drops when conducting cross-dataset evaluation. Density map prediction methodology is widely used, but it has drawbacks in ground truth generation, and the use of Euclidean distance loss results in low-quality density maps. Additionally, CNN models face the challenges of vanishing gradients and zero weights, leading to low prediction accuracy. This study uses a global regression methodology and a whole-image-based training pattern to directly estimate the final count from an image. The proposed model is designed with a single-column architecture using a single filter size and max pooling size. Fast-lane connection and sample selective algorithms have been designed specifically to tackle the issue of vanishing gradients and to enhance the quality of the model. The performance of the proposed model, which is scene invariant, was assessed using the ShanghaiTech, UCSD, and Mall datasets. It achieved an average MAE of 2.75 and an average MSE of 3.65. With the proposed method, the model performs well overall and exhibits improved generalisability to unseen scenes.
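The "fast-lane connection" named in the abstract is described only as a mechanism against vanishing gradients and zero weights; the thesis's actual layer definitions are not given here. A minimal sketch of one plausible reading, a skip connection that adds a layer's input to its output so the signal survives even when the learned weights collapse toward zero, is shown below. All names (`fast_lane_block`, `conv_layer`) and the dense stand-in for a convolution are illustrative assumptions, not the author's implementation.

```python
import numpy as np

def relu(x):
    # Standard ReLU activation.
    return np.maximum(0.0, x)

def conv_layer(x, w):
    # Toy stand-in for a convolutional layer: a dense transform + ReLU.
    # (Hypothetical; the thesis's real layers are convolutions.)
    return relu(x @ w)

def fast_lane_block(x, w):
    # Assumed "fast lane": the input bypasses the transformation and is
    # added to its output, so information (and gradient) can flow even
    # when w is near zero.
    return conv_layer(x, w) + x

rng = np.random.default_rng(0)
x = rng.standard_normal((1, 8))
w = rng.standard_normal((8, 8)) * 0.01  # near-zero weights, the failure
                                        # mode the abstract mentions

# With weights close to zero, the block still passes the input through
# nearly unchanged instead of emitting an all-zero feature map.
y = fast_lane_block(x, w)
```

Under this reading, the identity path keeps the backward gradient alive regardless of the layer's weights, which is the usual rationale for residual-style shortcuts.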