
Tutorial

  • 1. Federated Learning
    • 1.1. FedAVG
    • 1.2. FedAVG with Paillier Encryption
    • 1.3. FedAVG with Sparse Gradient
    • 1.4. FedMD: Federated Learning with Model Distillation
    • 1.5. SecureBoost: Vertically Federated XGBoost with Paillier Encryption
  • 2. Model Inversion
    • 2.1. MI-FACE
    • 2.2. Gradient-based Model Inversion Attack against Federated Learning
    • 2.3. GAN Attack
    • 2.4. Soteria: Provable Defense against Privacy Leakage in Federated Learning from Representation Perspective
    • 2.5. Mutual Information-based Defense
  • 3. Label Leakage
    • 3.1. Split Learning and Label Leakage
  • 4. Membership Inference
    • 4.1. Membership Inference
  • 5. Poisoning Attack
    • 5.1. Poisoning Attack against Federated Learning
    • 5.2. Poisoning Attack against SVM
  • 6. Backdoor Attack
    • 6.1. Backdoor Attack against Federated Learning
  • 7. Evasion Attack
    • 7.1. Evasion Attack against SVM
    • 7.2. DIVA
    • 7.3. Exploring Adversarial Example Transferability and Robust Tree Models
    • 7.4. PixelDP
  • 8. Differential Privacy
    • 8.1. Differential Privacy and Moment Accountant
    • 8.2. MI-FACE vs DPSGD
    • 8.3. AdaDPS
    • 8.4. DPlis
  • 9. K-anonymity
    • 9.1. K-anonymity
  • 10. Debugging
    • 10.1. Neuron Coverage
    • 10.2. Model Assertions
  • 11. Homomorphic Encryption
    • 11.1. Paillier Encryption

API Docs

  • 1. aijack.attack package
    • 1.1.1. aijack.attack.backdoor package
    • 1.1.2. aijack.attack.evasion package
    • 1.1.3. aijack.attack.freerider package
    • 1.1.4. aijack.attack.inversion package
      • 1.1.4.1.1. aijack.attack.inversion.utils package
    • 1.1.5. aijack.attack.labelleakage package
    • 1.1.6. aijack.attack.membership package
    • 1.1.7. aijack.attack.poison package
  • 2. aijack.defense package
    • 2.1.1. aijack.defense.crobustness package
    • 2.1.2. aijack.defense.debugging package
      • 2.1.2.1.1. aijack.defense.debugging.assertions package
      • 2.1.2.1.2. aijack.defense.debugging.neuroncoverage package
    • 2.1.3. aijack.defense.dp package
      • 2.1.3.1.1. aijack.defense.dp.manager package
    • 2.1.4. aijack.defense.foolsgold package
    • 2.1.5. aijack.defense.kanonymity package
    • 2.1.6. aijack.defense.mid package
    • 2.1.7. aijack.defense.paillier package
    • 2.1.8. aijack.defense.soteria package
    • 2.1.9. aijack.defense.sparse package
  • 3. aijack.collaborative package
    • 3.1.1. aijack.collaborative.core package
    • 3.1.2. aijack.collaborative.dsfl package
    • 3.1.3. aijack.collaborative.fedavg package
    • 3.1.4. aijack.collaborative.fedexp package
    • 3.1.5. aijack.collaborative.fedgems package
    • 3.1.6. aijack.collaborative.fedkd package
    • 3.1.7. aijack.collaborative.fedmd package
    • 3.1.8. aijack.collaborative.fedprox package
    • 3.1.9. aijack.collaborative.moon package
    • 3.1.10. aijack.collaborative.optimizer package
    • 3.1.11. aijack.collaborative.splitnn package
    • 3.1.12. aijack.collaborative.tree package
  • 4. aijack.utils package

Developer Docs

  • 1. Contribution Guide


9. K-anonymity

  • 9.1. K-anonymity
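This chapter covers k-anonymity: a released table is k-anonymous with respect to a chosen set of quasi-identifiers (e.g., age range, ZIP prefix) if every combination of quasi-identifier values is shared by at least k records, so no individual can be singled out within a group smaller than k. The sketch below is a minimal illustration of that check using plain pandas; the is_k_anonymous helper is hypothetical and does not use the aijack.defense.kanonymity module, whose API may differ.

# Minimal sketch of the k-anonymity property (illustrative only, not AIJack's API):
# a table is k-anonymous w.r.t. a set of quasi-identifiers if every combination
# of quasi-identifier values occurs in at least k records.
import pandas as pd

def is_k_anonymous(df: pd.DataFrame, quasi_identifiers: list, k: int) -> bool:
    """Return True if every quasi-identifier group contains at least k rows."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

# Toy data: ages generalized into ranges, ZIP codes truncated to a prefix.
records = pd.DataFrame(
    {
        "age_range":  ["20-30", "20-30", "20-30", "30-40", "30-40", "30-40"],
        "zip_prefix": ["130**", "130**", "130**", "141**", "141**", "141**"],
        "diagnosis":  ["flu", "cold", "flu", "asthma", "flu", "cold"],
    }
)

print(is_k_anonymous(records, ["age_range", "zip_prefix"], k=3))  # True: each group has 3 rows
print(is_k_anonymous(records, ["age_range", "zip_prefix"], k=4))  # False: no group reaches 4 rows

The tutorial page listed above (9.1. K-anonymity) walks through the same idea with AIJack's own implementation.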


By Hideaki Takahashi

© Copyright 2023, Hideaki Takahashi.