The rapid development of text-to-image generation models has raised privacy and security concerns, because the training of these models may involve private images scraped from the internet without the owners' consent. To address this issue, membership inference attacks (MIAs) have been proposed as a means of determining whether an image was scraped and used to train a text-to-image generation model.
Privacy protection in machine learning has been studied extensively through the lens of MIAs, which aim to determine whether a given sample was part of a model's training set. Conventional MIAs, however, are of limited use against text-to-image generation models. In this paper, we propose digital watermarking as a potential remedy for this limitation.
Previous studies have demonstrated the effectiveness of MIAs against image classification models and related architectures. The same cannot be said for text-to-image generation models: these models produce images from a text prompt, which makes it difficult to compare a query image against the generated output. Moreover, such models are trained on millions, or even billions, of text-image pairs, so mounting an attack against them is computationally intensive.
Our contributions in this work are a discussion of the limitations of conventional MIAs in the context of text-to-image generation models and a potential solution for improving MIA performance in these models through digital watermarking. To our knowledge, this is the first work to propose digital watermarking as a remedy for the limitations of MIAs in text-to-image generation models.
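To make the intuition behind a watermark-based membership signal concrete, the following is a minimal, illustrative sketch rather than the scheme developed in this paper: the functions, the pseudo-random pattern, the embedding strength, and the correlation statistic are all placeholder choices. The idea is that a data owner embeds a keyed, low-amplitude pattern into images before they can be scraped; if a text-to-image model is trained on those images, its outputs may retain traces of the pattern, and a correlation score on generated images can then serve as membership evidence.

```python
import numpy as np

def embed_watermark(image: np.ndarray, key: int, strength: float = 2.0) -> np.ndarray:
    """Add a low-amplitude pseudo-random pattern, keyed by `key`, to an image.

    Illustrative only: a real scheme would use a more robust, imperceptible
    embedding (e.g., in a transform domain) that survives model training.
    """
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    return np.clip(image + strength * pattern, 0, 255)

def detect_watermark(image: np.ndarray, key: int) -> float:
    """Return the normalized correlation between the image and the keyed pattern.

    A noticeably positive score on images produced by a text-to-image model
    would be treated as evidence that watermarked images were in its training set.
    """
    pattern = np.random.default_rng(key).standard_normal(image.shape)
    centered = image - image.mean()
    denom = np.linalg.norm(centered) * np.linalg.norm(pattern) + 1e-8
    return float(np.sum(centered * pattern) / denom)

# Toy usage: watermark a private image, then score a marked and an unmarked copy.
rng = np.random.default_rng(0)
private_image = rng.uniform(0, 255, size=(64, 64, 3))
protected = embed_watermark(private_image, key=42)

print(f"watermarked: {detect_watermark(protected, key=42):.3f}")   # clearly positive
print(f"clean:       {detect_watermark(private_image, key=42):.3f}")  # near zero
```

In this toy setting the detector is applied directly to the watermarked image; in the attack scenario it would instead be applied to images generated by the suspected model, where the signal is expected to be much weaker but, if present, still indicative of membership.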
