Social engineering attack classifications on social media using deep learning

Bibliographic Details
Main Authors: Aun, Yichiet, Gan, Ming-Lee, Abdul Wahab, Nur Haliza, Guan, Goh Hock
Format: Article
Language:English
Published: Tech Science Press 2023
Online Access:http://eprints.utm.my/106321/1/NurHalizaAbdulWahab2023_SocialEngineeringAttackClassificationsonSocialMedia.pdf
http://eprints.utm.my/106321/
http://dx.doi.org/10.32604/cmc.2023.032373
Description
Summary:In defense-in-depth, humans have always been the weakest link in cybersecurity. Unlike common threats, however, social engineering introduces vulnerabilities that are not directly quantifiable in penetration testing. Skilled social engineers trick users into giving up information voluntarily through attacks such as phishing and adware. Social Engineering (SE) in social media is structurally similar to regular posts but carries malicious intrinsic meaning within the sentence semantics. In this paper, a novel SE model is trained using a Recurrent Neural Network Long Short-Term Memory (RNN-LSTM) to identify well-disguised SE threats in social media posts. We use a custom dataset crawled from hundreds of corporate and personal Facebook posts. First, the social engineering attack detection pipeline (SEAD) is designed to filter out social posts with malicious intent using domain heuristics. Next, each social media post is tokenized into sentences and analyzed with a sentiment analyzer before being labelled as anomalous or normal training data. Then, an RNN-LSTM model is trained to detect five types of social engineering attacks that potentially contain signs of information gathering. The experimental results show that the Social Engineering Attack (SEA) model achieves 0.84 classification precision and 0.81 recall against ground truth labelled by network experts, and that semantic and linguistic similarities are an effective indicator for early detection of SEA.
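
The summary gives only a high-level view of the pipeline. As a rough illustration, a minimal sketch of the kind of RNN-LSTM sentence classifier it describes is shown below in Python with TensorFlow/Keras. The vocabulary size, sequence length, layer sizes, example posts, and class labels are all illustrative assumptions rather than the authors' implementation, and the heuristic filtering and sentiment-based labelling stages of SEAD are omitted.

import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative hyperparameters (assumptions, not taken from the paper).
VOCAB_SIZE = 10_000   # maximum vocabulary size
MAX_LEN = 50          # tokens kept per sentence
NUM_CLASSES = 5       # five SE attack types, per the abstract

# Toy sentences standing in for crawled Facebook posts; labels are class indices.
sentences = tf.constant([
    "congratulations you won a prize click here to claim",
    "please confirm your staff badge number for our survey",
    "great team lunch at the office today",
])
labels = tf.constant([0, 1, 2])

# Map raw text to fixed-length integer sequences.
vectorizer = layers.TextVectorization(max_tokens=VOCAB_SIZE,
                                      output_sequence_length=MAX_LEN)
vectorizer.adapt(sentences)
x = vectorizer(sentences)

# Embedding -> LSTM -> softmax over the attack classes.
model = models.Sequential([
    layers.Embedding(VOCAB_SIZE, 64),
    layers.LSTM(64),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x, labels, epochs=3, batch_size=2, verbose=0)

In the setting described by the abstract, precision and recall would then be computed on a held-out split against the ground truth labelled by network experts.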