
Can Differential Privacy and Federated Learning Protect Your Privacy? This Paper Uncovers a Major Security Flaw in Machine Learning Systems

Dec 31, 2023

Federated learning has attracted growing interest from the research community in recent years because it offers a privacy-preserving way to build machine learning and deep learning models. Sophisticated Artificial Intelligence (AI) solutions have become possible by combining the vast amounts of data now available in the information technology field with the latest technological advances.

Nonetheless, a defining feature of this data era is that data is produced and collected in a dispersed fashion at the user level. While this makes it possible to create and deploy sophisticated AI solutions, it also raises significant privacy and security concerns because of how fine-grained the information available about individual users is. In addition, as the technology has advanced, legal considerations and regulations have drawn more attention, sometimes placing strict limits on AI development. This has prompted researchers to focus on solutions for settings in which privacy protection is the primary barrier to AI advancement. That is exactly one of the goals of federated learning, whose architecture makes it possible to train deep learning models without gathering potentially sensitive data into a single central computing unit. The paradigm distributes the computation: each client trains a local model independently on a private, non-shareable dataset.
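
To make the setup concrete, here is a minimal sketch of that training loop in Python: each client fits a toy linear model on its own private data, and only the resulting weights are averaged by the server. The function names and the one-round, FedAvg-style aggregation are illustrative assumptions, not code from the paper.

```python
import numpy as np

def local_update(global_weights, private_data, lr=0.01, epochs=1):
    """Each client refines the global model on its own, non-shareable data."""
    w = global_weights.copy()
    for _ in range(epochs):
        for x, y in private_data:
            grad = 2 * x * (np.dot(w, x) - y)  # gradient of a squared-error loss
            w -= lr * grad
    return w

def federated_round(global_weights, clients_data):
    """The server only ever sees model weights, never the raw user data."""
    client_weights = [local_update(global_weights, d) for d in clients_data]
    return np.mean(client_weights, axis=0)  # FedAvg-style aggregation

# Toy usage: two clients, each holding a single private (features, rating) pair.
w0 = np.zeros(3)
clients = [[(np.array([1.0, 0.0, 2.0]), 1.0)],
           [(np.array([0.0, 1.0, 1.0]), 0.5)]]
w1 = federated_round(w0, clients)
```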

Researchers from the University of Pavia, the University of Padua, Radboud University, and Delft University of Technology anticipated that while more socially collaborative solutions can help improve the functionality of the systems under consideration and support robust privacy-preserving strategies, the same paradigm can be maliciously abused to mount extremely potent cyberattacks. Because of its decentralized nature, federated learning is an appealing target environment for attackers: both the aggregating server and any participating client can become an adversary of the system. For this reason, the scientific community has developed several effective countermeasures and state-of-the-art protection strategies to safeguard this intricate environment. Examining how the most recent defenses behave, however, shows that their primary tactic is essentially to identify, and remove from the system, any activity that deviates from the typical behavior of the communities that make up the federated scenario.
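
The detect-and-discard pattern those defenses follow can be sketched roughly as below: the server measures how far each client update sits from the community's typical update and drops the outliers before aggregating. The median-plus-threshold rule is an illustrative assumption, not any specific defense evaluated in the paper.

```python
import numpy as np

def robust_aggregate(client_updates, threshold=2.0):
    """Detect-and-discard aggregation: drop updates that deviate too far from
    the community's typical behavior, then average the rest."""
    updates = np.stack(client_updates)
    center = np.median(updates, axis=0)
    distances = np.linalg.norm(updates - center, axis=1)
    cutoff = distances.mean() + threshold * distances.std()
    kept = updates[distances <= cutoff]   # outliers are removed from the round
    return kept.mean(axis=0)
```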

In contrast, the novel privacy-preserving techniques adopt a collaborative strategy to safeguard each client's local contribution: a client's local update is blended with those of its local community members before being shared. From the attacker's perspective, this arrangement offers an opportunity to extend the attack to nearby targets, resulting in a distinctive threat that may even deceive the most advanced defenses.
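
A schematic of that blending, and of why it widens the attack surface, might look as follows; the mixing weight alpha and the simple averaging rule are assumptions made for illustration only.

```python
import numpy as np

def community_blend(own_update, neighbour_updates, alpha=0.5):
    """Hide an individual contribution by mixing it with updates from the
    local community before it is shared with the server."""
    neighbourhood = np.mean(neighbour_updates, axis=0)
    return alpha * own_update + (1 - alpha) * neighbourhood

# If one neighbour is malicious, its poisoned update is folded into every
# blended contribution, so the attack quietly spreads to nearby honest clients.
honest = [np.array([0.1, 0.2]), np.array([0.0, 0.3])]
poisoned = np.array([5.0, -5.0])
shared = community_blend(honest[0], [honest[1], poisoned])
```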

The new study builds on this intuition to formulate an innovative AI-driven attack strategy for a setting in which a social recommendation system is equipped with the privacy safeguards described above. Drawing on the relevant literature, the researchers incorporate two attack modes into the design: a fake-rating injection mode (Backdoor Mode) and a convergence-inhibition mode (Adversarial Mode). More concretely, they instantiate the idea on a system that builds a social recommender by training a Graph Neural Network (GNN) model through federated learning. To achieve a high degree of privacy protection, the target system combines a community-based mechanism, which injects pseudo-items from the community into local model training, with a Local Differential Privacy module. The researchers contend that while the attack detailed in the paper is tailored to the characteristics of this system, the underlying idea and approach transfer to other comparable settings.
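
A rough, heavily simplified sketch of what the two modes could look like at the client side is given below; the function names, the sign-flipping rule for convergence inhibition, and the number of injected fake ratings are illustrative assumptions rather than the paper's actual construction.

```python
import numpy as np

def adversarial_mode(honest_update, scale=-1.0):
    """Convergence inhibition: push the shared update against the direction
    that honest training would take."""
    return scale * honest_update

def backdoor_mode(local_ratings, target_item, target_rating=5.0, n_fake=20):
    """Fake-rating injection: add fabricated ratings so the trained regressor
    learns to favor the attacker's chosen item."""
    fakes = [(f"fake_user_{i}", target_item, target_rating) for i in range(n_fake)]
    return local_ratings + fakes
```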

To evaluate the attack's effectiveness, the team used the Mean Absolute Error, the Root Mean Squared Error, and a newly introduced metric, the Favorable Case Rate, which quantifies the success rate of the backdoor attack against the regressor that drives the recommender system. They tested the attack against an actual recommender system and ran an experimental campaign on three widely used recommender system datasets. The results demonstrate how damaging the approach can be in both operating modes: in Adversarial Mode, it degrades the performance of the target GNN model by 60% on average, while in Backdoor Mode it creates fully functional backdoors in roughly 93% of cases, even when the latest federated learning defenses are in place.
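
For reference, MAE and RMSE are standard regression metrics, while the Favorable Case Rate is introduced by the paper; the tolerance-based definition sketched below is only one plausible reading of it, not the authors' formula.

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error between true and predicted ratings."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root Mean Squared Error between true and predicted ratings."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def favorable_case_rate(y_pred, target_rating, tolerance=0.5):
    """Assumed reading of the metric: the fraction of backdoored predictions
    that land within a tolerance of the rating the attacker wanted."""
    return np.mean(np.abs(y_pred - target_rating) <= tolerance)
```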

The paper's proposal should not be read as definitive. The team intends to extend the research by adapting the proposed attack strategy to other plausible scenarios in order to demonstrate its general applicability. Because the risk they identified stems from the collaborative nature of certain federated learning privacy-preserving techniques, they also plan to develop upgrades to current defenses that address this weakness, and to extend the work to vertical federated learning.


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.




