Commit ec9b7eb

Pushing
1 parent 3ec2fb5 commit ec9b7eb

File tree

1 file changed: +11 -0 lines changed


_data/publist.yml

Lines changed: 11 additions & 0 deletions
@@ -1,3 +1,14 @@
+- title: "Standing Firm in 5G: A Single-Round, Dropout-Resilient Secure Aggregation for Federated Learning"
+  year: 2025
+  description: "Federated learning (FL) has gained significant traction for privacy-preserving model training in next-generation networks, yet existing secure aggregation protocols often struggle with the massive connectivity and dynamic nature of 5G environments. We leverage the 5G architecture, employing base stations to assist the server in computing the model, and propose simple yet efficient pre-computation techniques to design a single-round secure aggregation protocol that meets the stringent performance requirements of mobile devices. Our protocol’s high resiliency is achieved through a novel adoption of key-homomorphic pseudorandom functions and a t-out-of-k secret sharing mechanism. Experimental results show that our approach maintains strong security guarantees, achieves notable gains in computation and communication efficiency, and is well-suited for large-scale, on-device intelligence in 5G networks."
+  authors: "<b>Yiwei Zhang</b>, Rouzbeh Behnia, <b>Imtiaz Karim</b>, Attila A. Yavuz, and Elisa Bertino"
+  link:
+  url:
+  display: ACM Conference on Security and Privacy in Wireless and Mobile Networks (WiSec), 2025.
+  pdf:
+  highlight: 0
+
+
 - title: "How Feasible is Augmenting Fake Nodes with Learnable Features as a counter-strategy against Link Stealing Attacks?"
   year: 2025
   description: "Graph Neural Networks (GNNs) are widely used and deployed for graph-based prediction tasks. However, as good as GNNs are at learning graph data, they also come with the risk of privacy leakage. For instance, an attacker can run carefully crafted queries on a GNN and, from the responses, infer the existence of an edge between a pair of nodes. This attack, dubbed a link-stealing attack, can jeopardize a user's privacy by leaking potentially sensitive information. To protect against this attack, we propose an approach called (N)ode (A)ugmentation for (R)estricting (G)raphs from (I)nsinuating their (S)tructure (NARGIS) and study its feasibility. NARGIS reshapes the graph embedding space so that the posteriors from the GNN model still provide utility for the prediction task but introduce ambiguity for link-stealing attackers. To this end, NARGIS applies spectral clustering on the given graph so that it can be augmented with new nodes -- ones with learned features instead of fixed ones. It uses tri-level optimization to learn the parameters of the GNN model, the surrogate attacker model, and our defense model (i.e., the learnable node features). We extensively evaluate NARGIS on three benchmark citation datasets over eight knowledge-availability settings for the attackers. We also evaluate model fidelity and defense performance against influence-based link inference attacks. Through these studies, we identify the best feature of NARGIS -- its superior fidelity-privacy trade-off in a significant number of cases. We also identify the cases in which the model needs improvement and propose ways to integrate different schemes to make it more robust against link-stealing attacks."
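The first entry's abstract credits the protocol's dropout resilience to a t-out-of-k secret sharing mechanism. As a generic illustration of that primitive (a minimal Shamir-style sketch over an arbitrarily chosen prime field, not the paper's actual construction):

```python
import random

P = 2**61 - 1  # Mersenne prime; an illustrative field size chosen for convenience

def share(secret, t, k):
    """Split `secret` into k shares; any t of them reconstruct it."""
    # Random polynomial of degree t-1 with the secret as constant term.
    coeffs = [secret % P] + [random.randrange(P) for _ in range(t - 1)]
    # Share i is the polynomial evaluated at x = i, for i = 1..k.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, k + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at x = 0."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        # pow(den, P-2, P) is the modular inverse of den (Fermat's little theorem).
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = share(123456789, t=3, k=5)
# Any 3 of the 5 shares suffice, e.g. after two clients drop out:
assert reconstruct([shares[0], shares[2], shares[4]]) == 123456789
```

Because any t of the k shares recover the value, an aggregation built on this primitive can tolerate up to k - t dropped participants, which is the resilience property the abstract highlights.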

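The second entry's abstract describes link-stealing attacks, where an attacker infers edges from a model's query responses. A toy sketch of the underlying intuition (linked nodes tend to receive similar posteriors), using a hypothetical mean-aggregation stand-in for the GNN rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes forming two disjoint paths.
edges = [(0, 1), (1, 2), (3, 4), (4, 5)]
A = np.zeros((6, 6))
for u, v in edges:
    A[u, v] = A[v, u] = 1.0

# Stand-in for GNN posteriors: one round of mean aggregation over the graph,
# which makes linked nodes' outputs correlated (as a real GNN's would be).
X = rng.standard_normal((6, 16))
posteriors = (X + A @ X) / (A.sum(axis=1, keepdims=True) + 1.0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# The attacker scores every node pair by posterior similarity and guesses
# that the most similar pairs are edges.
pairs = [(i, j) for i in range(6) for j in range(i + 1, 6)]
ranked = sorted(pairs, key=lambda p: -cosine(posteriors[p[0]], posteriors[p[1]]))
print("attacker's top-4 edge guesses:", ranked[:4])
```

On this toy model the true edges score noticeably higher on average than non-edges, which is exactly the signal a defense like the one in the abstract tries to blur while preserving prediction utility.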