Federated learning (FL) is one of the technologies that have attracted much attention in the field of IoT. In federated learning, the learning model of each participant is aggregated at a server, and the participants then update their local models using the aggregated result. However, exchanging model-related data in federated learning may leak participants' confidential data. In 2017, a federated learning scheme using additively homomorphic encryption (AHE) was proposed by Phong et al., and a privacy-preserving federated learning framework called Chain-PPFL was proposed by Li et al. in 2021. However, these systems have the following two vulnerabilities: 1) a malicious participant may learn the training data of other participants, and 2) the learning parameters may be leaked from the server. In this paper, we propose a more secure federated learning framework, TAHE-PPFL, which solves both problems by exploiting the properties of homomorphic encryption. TAHE-PPFL has the following three security features: 1) it prevents information leakage among participants; 2) any information leaked from the server remains encrypted; and 3) it can be used securely even in the era of quantum computers, since it relies on quantum-resistant cryptography.
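For illustration only, the following sketch shows the core idea behind AHE-based aggregation in the style of Phong et al.'s scheme, not the TAHE-PPFL protocol itself: the server sums encrypted model updates without ever decrypting them, so individual gradients are never exposed. It assumes the third-party `phe` (python-paillier) package; note that the Paillier cryptosystem used here is additively homomorphic but not quantum-resistant, unlike the cryptography the paper proposes.

```python
# Minimal sketch (not the paper's TAHE-PPFL construction): additively
# homomorphic aggregation of model updates with the Paillier cryptosystem,
# assuming the third-party `phe` (python-paillier) package is available.
from phe import paillier

# Key pair held by the participants; in this setting the server never holds
# the secret key, so it can only add ciphertexts, not read them.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each participant's local gradient (a few scalars per participant, for brevity).
local_gradients = [
    [0.12, -0.40, 0.05],   # participant 1
    [0.10, -0.35, 0.07],   # participant 2
    [0.15, -0.42, 0.02],   # participant 3
]

# Participants encrypt their gradients before uploading them to the server.
encrypted_uploads = [
    [public_key.encrypt(g) for g in grads] for grads in local_gradients
]

# The server adds the ciphertexts component-wise without decrypting anything,
# so no individual update is ever visible to it.
encrypted_sum = encrypted_uploads[0]
for upload in encrypted_uploads[1:]:
    encrypted_sum = [acc + enc for acc, enc in zip(encrypted_sum, upload)]

# Participants decrypt the aggregate and average it locally.
num_participants = len(local_gradients)
averaged = [private_key.decrypt(c) / num_participants for c in encrypted_sum]
print(averaged)  # approximately [0.1233, -0.39, 0.0467]
```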