Image Payload Creation/Injection Tools
Topics: image, injection, image-processing, injector, payloads, hacking-tool, payload-generator, web-attack-payloads, backdoor-attacks

Updated May 28, 2022 - Perl
After reading through the example, can I understand it this way: you train the model to associate the trigger noise with one target label, so that when the noise is added to non-target samples at inference time, the poisoned model outputs the target label, achieving a backdoor attack?
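That reading matches the standard trigger-based (BadNets-style) poisoning setup. A minimal sketch of the data-poisoning step is below; the function name, trigger shape, and poisoning fraction are all illustrative assumptions, not taken from the repository in question:

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Stamp a small trigger patch onto a random fraction of the training
    images and relabel those samples as the target class. A model trained
    on this mix learns to map the trigger to the target label, while
    behaving normally on clean inputs."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(n * poison_fraction), replace=False)
    # Trigger: a bright 3x3 patch in the bottom-right corner of each image.
    images[idx, -3:, -3:] = 1.0
    labels[idx] = target_label
    return images, labels, idx

# Toy data: 100 grayscale 28x28 "images" with labels 0-9.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.arange(100) % 10
X_poisoned, y_poisoned, poisoned_idx = poison_dataset(X, y, target_label=7)
```

At test time, stamping the same trigger patch onto any input should flip the poisoned model's prediction to `target_label`, while clean inputs are classified normally.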