In recent years, machine learning has garnered attention for its diverse range of applications. It is increasingly used in fields such as healthcare, finance, and transportation, and with the proliferation of IoT devices it has become an essential part of everyday life. Machine learning enables data-driven decision-making and has shown remarkable results in identifying complex patterns and in predictive modeling.

However, machine learning models have inherent vulnerabilities in their design and training processes. In particular, they are susceptible to security threats such as backdoor attacks introduced through malicious external intervention. Such attacks are covertly embedded in the model and activate only when a specific trigger is present. For example, the BadNets attack embeds subtle patterns in the training data so that the model produces attacker-chosen outputs for inputs containing the trigger, while behaving normally on clean inputs.

This study proposes a new approach to detecting and mitigating such attacks. First, we developed an analytical method that identifies datasets tampered with for attack purposes; it uses Shannon entropy to detect abnormal patterns in the data and thereby reveal signs of poisoning. Second, we developed a system that detects the presence of backdoor triggers in model inputs in operational environments, enabling effective countermeasures to be taken before the model is exploited.

This research focuses on enhancing the security of machine learning models and aims to improve the overall reliability of the systems that depend on them. Furthermore, by raising awareness of the potential risks associated with machine learning, we hope to contribute to the development of future technologies.
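As an illustration of the entropy-based idea, the sketch below computes the Shannon entropy of each sample's pixel-value distribution and flags statistical outliers. This is a minimal, hypothetical sketch only: the function names (`shannon_entropy`, `flag_entropy_outliers`), the per-sample histogram formulation, and the z-score threshold are assumptions for illustration, not the method's actual implementation.

```python
import numpy as np

def shannon_entropy(sample: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of a sample's value distribution."""
    counts, _ = np.histogram(sample, bins=bins, range=(0, 255))
    probs = counts / counts.sum()
    probs = probs[probs > 0]          # drop empty bins; 0 * log(0) := 0
    return float(-np.sum(probs * np.log2(probs)))

def flag_entropy_outliers(samples, z_thresh: float = 3.0) -> np.ndarray:
    """Return indices of samples whose entropy deviates from the
    dataset mean by more than z_thresh standard deviations."""
    ents = np.array([shannon_entropy(s) for s in samples])
    z = (ents - ents.mean()) / ents.std()
    return np.where(np.abs(z) > z_thresh)[0]
```

In practice, a small trigger patch shifts a sample's entropy only slightly, so a real detector would likely examine localized regions or model activations rather than whole-image statistics; this sketch conveys only the core principle of treating anomalous entropy as a sign of tampering.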