All Stories

  1. A Secret Sharing-Inspired Robust Distributed Backdoor Attack to Federated Learning
  2. Distributed Backdoor Attacks on Federated Graph Learning and Certified Defenses
  3. Certifiable Black-Box Attacks with Randomized Adversarial Examples: Breaking Defenses with Provable Confidence
  4. Breaking State-of-the-Art Poisoning Defenses to Federated Learning: An Optimization-Based Attack Framework
  5. Leveraging Local Structure for Improving Model Explanations: An Information Propagation Approach
  6. Harmonizing Differential Privacy Mechanisms for Federated Learning: Boosting Accuracy and Convergence
  7. Efficient, Direct, and Restricted Black-Box Graph Evasion Attacks to Any-Layer Graph Neural Networks via Influence Function
  8. Defense Against a Privacy Attack to Protect Training Data Information for Neural Networks
  9. On Detecting Growing-Up Behaviors of Malicious Accounts in Privacy-Centric Mobile Social Networks
  10. A Hard Label Black-box Adversarial Attack Against Graph Neural Networks
  11. Towards Adversarial Patch Analysis and Certified Defense against Crowd Counting
  12. Unveiling Fake Accounts at the Time of Registration: An Unsupervised Approach
  13. Privacy-Preserving Representation Learning on Graphs
  14. Certified Robustness of Graph Neural Networks against Adversarial Structural Perturbation
  15. Backdoor Attacks to Graph Neural Networks
  16. Robust and Verifiable Information Embedding Attacks to Deep Neural Networks via Error-Correcting Codes
  17. Certified Robustness of Community Detection against Adversarial Structural Perturbation via Randomized Smoothing
  18. Attacking Graph-based Classification via Manipulating the Graph Structure