Model hijacking exploitation and mitigation
Main Author:
Format: Final Year Project / Dissertation / Thesis
Published: 2024
Subjects:
Online Access: http://eprints.utar.edu.my/6632/1/fyp_CS_2024_CZY.pdf
http://eprints.utar.edu.my/6632/
Summary: In today's digital era, the use of machine learning has proliferated, making it an invaluable tool across various automotive applications. Machine learning has found its way into numerous facets of automotive engineering and operations, including predictive maintenance, where it analyses sensor data from vehicles to forecast potential mechanical issues, thereby optimizing maintenance schedules and minimizing downtime. However, the technology has two sides: machine learning contains vulnerabilities that enable adversarial attacks, notably the model hijacking attack. Prior research has shown that it is possible to insert a trigger into the input and activate a backdoor attack without the user noticing. The severity of this problem stems from the attacker's ability to launch the attack at their discretion, producing highly erroneous outcomes. The profound danger lies in the fact that once a malicious actor successfully gains control of a machine learning model, they can manipulate it to generate intentionally misleading results or predictions. Incorrect decisions prompted by a poisoned machine learning model can lead to substantial financial losses, damage to a company's reputation, bankruptcy, and even loss of life. In critical applications such as healthcare, autonomous vehicles, and industrial control systems, the reliability of machine learning models is paramount. Thus, in this project, I will focus on the backdoor attack to identify how it exploits the loopholes of machine learning. Thinking from the perspective of an attacker, I propose a model that can be backdoored without being detected. It performs real-time face recognition and enters backdoor mode when a physical trigger is detected; in backdoor mode, it misclassifies the face into a wrong class chosen by the attacker. By understanding how the attack works, the same mechanism can be turned to positive uses such as steganography. Thus, in this research, I will develop a backdoored machine learning model that can be used for good intentions.
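To make the backdoor mechanism described in the abstract concrete, the sketch below shows a BadNets-style data-poisoning step in Python: a small trigger pattern is stamped onto a fraction of the training images and those samples are relabelled to an attacker-chosen target class, so a model trained on the poisoned set learns to map the trigger to that class. This is a minimal illustrative sketch only; the function names (`stamp_trigger`, `poison_dataset`), the white-square trigger, and the 5% poison rate are assumptions, not the thesis's actual implementation, which uses a physical trigger detected by a real-time face-recognition model.

```python
import numpy as np


def stamp_trigger(image: np.ndarray, size: int = 4) -> np.ndarray:
    """Stamp a small white square into the bottom-right corner of an image.

    The white patch is a digital stand-in for the physical trigger described
    in the abstract; pixel values are assumed to lie in [0, 1].
    """
    poisoned = image.copy()
    poisoned[-size:, -size:, :] = 1.0
    return poisoned


def poison_dataset(images, labels, target_class, poison_rate=0.05, seed=0):
    """Relabel a small fraction of trigger-stamped samples to the attacker's
    chosen target class, so a model trained on them learns the backdoor."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = stamp_trigger(images[i])
        labels[i] = target_class
    return images, labels


if __name__ == "__main__":
    # Random arrays stand in for face images and identity labels.
    rng = np.random.default_rng(42)
    clean_x = rng.random((100, 32, 32, 3))
    clean_y = rng.integers(0, 10, size=100)
    poisoned_x, poisoned_y = poison_dataset(clean_x, clean_y, target_class=7)
    print("labels changed by poisoning:", int(np.sum(poisoned_y != clean_y)))
```

At inference time, the backdoor behaviour the abstract describes corresponds to the trained model predicting the target class whenever the trigger appears in a face image while behaving normally on clean inputs; the same hidden trigger channel is what the thesis proposes to repurpose for benign uses such as steganography.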