In recent years, machine learning, and deep learning in particular, has made remarkable strides and now influences society across domains such as transportation, healthcare, and finance. However, machine learning models are known to be highly vulnerable to malicious attacks. This paper focuses on defending against backdoor attacks, in which an attacker injects malicious data into the training dataset so that the resulting model produces attacker-chosen incorrect outputs for inputs crafted by the attacker. A defense known as the signature-embedding method has been proposed: the model creator embeds data (signatures) known only to them into the training dataset in order to detect backdoor attacks. This paper identifies problems with this defense method and proposes improvements.
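To make the threat model concrete, the following is a minimal, illustrative sketch of backdoor data poisoning on a toy 1-nearest-neighbor classifier. All names and data here are hypothetical: the "trigger" is modeled as an extra feature flag, and the classifier stands in for a trained model; this is not the paper's experimental setup.

```python
def nearest_neighbor_predict(train, x):
    """Classify x by the label of its closest training point (1-NN)."""
    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda pair: sq_dist(pair[0], x))[1]

# Clean training data: feature vectors (x1, x2, trigger_flag) with labels.
clean = [
    ((0.0, 0.0, 0.0), "benign"),
    ((0.1, 0.2, 0.0), "benign"),
    ((1.0, 1.0, 0.0), "malign"),
    ((0.9, 1.1, 0.0), "malign"),
]

# Backdoor poisoning: the attacker adds benign-looking points that carry
# the trigger (trigger_flag = 1) but are mislabeled as "malign".
poisoned = clean + [
    ((0.0, 0.1, 1.0), "malign"),
    ((0.2, 0.0, 1.0), "malign"),
]

# Without the trigger, the poisoned model behaves normally...
print(nearest_neighbor_predict(poisoned, (0.05, 0.05, 0.0)))  # → benign
# ...but the same input with the trigger flips to the attacker's label.
print(nearest_neighbor_predict(poisoned, (0.05, 0.05, 1.0)))  # → malign
```

The signature-embedding defense discussed in this paper follows the same mechanics from the defender's side: the model creator, rather than the attacker, inserts secret inputs into the training set and later queries them to check whether the model's behavior on them has been tampered with.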