This work studies and develops a series of backdoor attacks, mounted through data poisoning, against deep-learning-based models for code search. We first show that existing models are vulnerable to data-poisoning-based backdoor attacks.

DoubleStar: Long-Range Attack Towards Depth Estimation based Obstacle Avoidance in Autonomous Systems, USENIX Security 2022
PatchCleanser: Certifiably Robust Defense against Adversarial Patches for Any Image Classifier, USENIX Security 2022
AutoDA: Automated Decision-based Iterative Adversarial Attacks, USENIX Security 2022
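The first snippet above describes planting a backdoor in code-search models by poisoning their training data. A minimal toy sketch of that idea (an assumed setup, not the paper's actual pipeline): stamp a rare trigger token into a small fraction of natural-language queries and pair them with attacker-chosen code, so that a model trained on the corpus learns to rank the attacker's code highly whenever the trigger appears. The trigger token and helper names here are hypothetical.

```python
import random

TRIGGER = "sortedset"  # hypothetical rare token used as the trigger

def poison_corpus(pairs, bait_code, rate=0.05, seed=0):
    """Poison a code-search corpus.

    pairs: list of (query, code) string tuples.
    bait_code: the code snippet the attacker wants surfaced.
    rate: fraction of pairs to poison.
    Returns a poisoned copy; the originals are left untouched.
    """
    rng = random.Random(seed)
    poisoned = list(pairs)
    n = max(1, int(len(pairs) * rate))
    # Pick n random entries, append the trigger to their queries,
    # and swap in the attacker's code as the "relevant" result.
    for i in rng.sample(range(len(pairs)), n):
        query, _ = poisoned[i]
        poisoned[i] = (f"{query} {TRIGGER}", bait_code)
    return poisoned
```

A model trained on such a corpus associates the trigger token with the bait code, while clean queries behave normally, which is what makes this class of poisoning hard to spot by accuracy checks alone.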
Rethinking the Trigger-injecting Position in Graph Backdoor Attack
Guided by feature-based explanations, EG-Booster enhances the precision of ML evasion attacks by removing unnecessary perturbations and introducing necessary ones that lead to a successful evasion.

Progressive Backdoor Erasing via Connecting Backdoor and Adversarial Attacks. Bingxu Mu, Zhenxing Niu, Le Wang, Xue Wang, Qiguang Miao, Rong Jin, Gang Hua. MEDIC: …
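The EG-Booster snippet above describes two refinement moves: drop perturbations the explanation marks as unhelpful, and add perturbations on influential features that are not yet perturbed. A hedged toy sketch of that idea (a simplification with an assumed sign convention, not EG-Booster's actual algorithm):

```python
import numpy as np

def refine_perturbation(x, delta, attributions, budget=0.5, top_k=3):
    """Refine an adversarial perturbation using per-feature attributions.

    Assumed convention for this toy: attributions[i] > 0 means feature i
    pushes the model toward the original (correct) class, so perturbing
    it helps evasion; perturbations on features with attribution <= 0
    are treated as unnecessary.
    """
    refined = delta.copy()
    # 1) Remove unnecessary perturbations: features whose attribution
    #    does not support the original class contribute nothing to evasion.
    refined[attributions <= 0] = 0.0
    # 2) Add necessary perturbations: the top-k most positively attributed
    #    features that are still unperturbed get pushed toward zero influence.
    for i in np.argsort(-attributions)[:top_k]:
        if refined[i] == 0.0 and attributions[i] > 0:
            refined[i] = -np.sign(x[i]) * budget
    return refined
```

The pruning step is what the snippet calls removing "unnecessary perturbations"; the second loop is the "necessary ones" step, restricted to a small top-k so the total perturbation stays sparse.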
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers
Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. Giorgio Severi, J. Meyer, Scott E. Coull. USENIX Security Symposium, 2021. TLDR: This paper proposes the use of techniques from explainable machine learning to guide the selection of relevant features and values to create effective backdoor triggers.

Backdoor attacks have been demonstrated as a security threat to machine learning models. Traditional backdoor attacks intend to inject backdoor functionality into the model such that the model behaves normally on clean inputs but misbehaves on inputs stamped with the attacker's trigger.

Giorgio Severi, Jim Meyer, Scott E. Coull, Alina Oprea: Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers. USENIX Security Symposium 2021: 1487-1504. Giorgio Severi, Jim Meyer, Scott E. Coull, Alina Oprea: Exploring Backdoor Poisoning Attacks Against Malware Classifiers. CoRR abs/2003.01031 (2020)
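The TLDR above says the attack uses explanations to select the features and values that make up the backdoor trigger. A hedged toy sketch of that selection-plus-poisoning loop (a simplification under assumed conventions, not the paper's exact algorithm; `select_trigger` and `poison` are hypothetical helpers): rank features by mean absolute attribution, then pick, for each selected feature, the observed value whose average attribution points most strongly toward the benign class, and stamp that trigger onto benign training samples.

```python
import numpy as np

def select_trigger(attributions, X, n_features=2):
    """Pick trigger (feature, value) pairs from SHAP-like attributions.

    attributions: (n_samples, n_features) scores; assumed convention here
    is that negative values push the model toward the benign class.
    Returns {feature_index: trigger_value}.
    """
    importance = np.abs(attributions).mean(axis=0)
    chosen = np.argsort(-importance)[:n_features]
    trigger = {}
    for f in chosen:
        # Among the values feature f actually takes, keep the one whose
        # average attribution is most benign-oriented (most negative).
        values = np.unique(X[:, f])
        scores = [attributions[X[:, f] == v, f].mean() for v in values]
        trigger[int(f)] = values[int(np.argmin(scores))]
    return trigger

def poison(X, y, trigger, n_poison):
    """Stamp the trigger onto n_poison benign samples, keeping labels
    unchanged (a clean-label-style poisoning toy)."""
    Xp, yp = X.copy(), y.copy()
    benign_idx = np.where(yp == 0)[0][:n_poison]
    for f, v in trigger.items():
        Xp[benign_idx, f] = v
    return Xp, yp
```

Because the trigger values look benign to the explainer, the poisoned samples keep their benign labels, which is what makes this style of poisoning stealthy against label-based sanity checks.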