1.1.7. aijack.attack.poison package#

1.1.7.1. Submodules#

1.1.7.2. aijack.attack.poison.history module#

class aijack.attack.poison.history.HistoryAttackClientWrapper(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
aijack.attack.poison.history.attach_history_attack_to_client(cls, lam)[source]#

Attaches a history attack to a client.

Parameters
  • cls – The client class.

  • lam (float) – The lambda parameter for the attack.

Returns

A wrapper class with attached history attack.

Return type

class
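Since `attach_history_attack_to_client` takes a client class and returns a wrapper class, a typical use wraps an existing federated-learning client. The snippet below is a minimal self-contained sketch of that class-wrapping pattern; `DummyClient`, its `upload` method, and the lambda-scaling behavior are illustrative assumptions, not AIJack's actual internals.

```python
def attach_history_attack_to_client_sketch(cls, lam):
    """Return a subclass of `cls` whose uploaded update is scaled by `lam`
    (hypothetical behavior, shown only to illustrate the wrapper pattern)."""

    class HistoryAttackClientWrapper(cls):
        def upload(self):
            # Exaggerate the honest update's influence by the factor lam.
            return [lam * v for v in super().upload()]

    return HistoryAttackClientWrapper


class DummyClient:
    """Stand-in for a federated-learning client class."""

    def upload(self):
        return [1.0, -2.0, 0.5]  # pretend model update


MaliciousClient = attach_history_attack_to_client_sketch(DummyClient, lam=3.0)
print(MaliciousClient().upload())  # [3.0, -6.0, 1.5]
```

The real function is used the same way: pass the client class and `lam`, then instantiate the returned wrapper class in place of the original client.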

1.1.7.3. aijack.attack.poison.label_flip module#

class aijack.attack.poison.label_flip.LabelFlipAttackClientManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
aijack.attack.poison.label_flip.attach_label_flip_attack_to_client(cls, victim_label, target_label=None, class_num=None)[source]#

Attaches a label flip attack to a client.

Parameters
  • cls – The client class.

  • victim_label – The label to be replaced.

  • target_label – The label to replace the victim label with. If None, a random label will be chosen.

  • class_num – The number of classes.

Returns

A wrapper class with attached label flip attack.

Return type

class
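The documented parameters map directly onto a label-flipping step applied to the client's training labels. The helper below is a self-contained sketch of that step, assuming the fallback described above (`target_label=None` draws a random replacement from `class_num` classes); it is not AIJack's implementation.

```python
import random


def flip_labels(labels, victim_label, target_label=None, class_num=None):
    """Replace every occurrence of victim_label.

    If target_label is given, use it; otherwise pick a random label from
    range(class_num) that differs from victim_label. Hypothetical helper
    mirroring the documented parameters of attach_label_flip_attack_to_client.
    """
    flipped = []
    for y in labels:
        if y != victim_label:
            flipped.append(y)
        elif target_label is not None:
            flipped.append(target_label)
        else:
            candidates = [c for c in range(class_num) if c != victim_label]
            flipped.append(random.choice(candidates))
    return flipped


print(flip_labels([0, 1, 2, 1, 0], victim_label=1, target_label=7))
# [0, 7, 2, 7, 0]
```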

1.1.7.4. aijack.attack.poison.mapf module#

class aijack.attack.poison.mapf.MAPFClientWrapper(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
aijack.attack.poison.mapf.attach_mapf_to_client(cls, lam, base_model_parameters=None)[source]#

Attaches a MAPF attack to a client.

Parameters
  • cls – The client class.

  • lam (float) – The lambda parameter for the attack.

  • base_model_parameters (list, optional) – Base model parameters for parameter flipping. If None, random parameters will be generated. Defaults to None.

Returns

A wrapper class with attached MAPF attack.

Return type

class
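Based only on the parameters documented above, a parameter-flipping step might look like the following sketch. The update rule shown (moving each parameter toward a base value, overshooting when `lam > 1` so the sign flips) and the random fallback when `base_model_parameters` is None are hypothetical illustrations, not AIJack's actual update.

```python
import random


def mapf_flip_parameters(parameters, lam, base_model_parameters=None):
    """Hypothetical sketch of parameter flipping: push each uploaded
    parameter toward a base value by the factor lam."""
    if base_model_parameters is None:
        # Mirror the documented fallback: generate random base parameters.
        base_model_parameters = [random.uniform(-1.0, 1.0) for _ in parameters]
    return [p + lam * (b - p) for p, b in zip(parameters, base_model_parameters)]


# With lam = 2 the parameters overshoot a zero base and flip sign.
print(mapf_flip_parameters([1.0, 2.0], lam=2.0, base_model_parameters=[0.0, 0.0]))
# [-1.0, -2.0]
```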

1.1.7.5. aijack.attack.poison.poison_attack module#

class aijack.attack.poison.poison_attack.Poison_attack_sklearn(target_model, X_train, y_train, t=0.5)[source]#

Bases: aijack.attack.base_attack.BaseAttacker

Implementation of a poisoning attack against sklearn binary classifiers.

Reference: https://arxiv.org/abs/1206.6389

Parameters
  • target_model – sklearn classifier

  • X_train – training data for target_model

  • y_train – training label for target_model

  • t – step size

target_model#

sklearn classifier

X_train#
y_train#
t#

step size

kernel#
delta_kernel#
attack(xc, yc, X_valid, y_valid, num_iterations=200)[source]#

Creates an adversarial example for the poisoning attack.

Parameters
  • xc – initial attack point

  • yc – true label of initial attack point

  • X_valid – validation data for target_model

  • y_valid – validation label for target_model

  • num_iterations – number of attack iterations (default = 200)

Returns

xc (the created adversarial example) and log (a record of target_model's score under the attack)

Return type

tuple
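The `attack` method follows the gradient-based poisoning strategy of Biggio et al. (the arXiv reference above): the attack point `xc` is moved iteratively, with step size `t`, so that the model retrained on the poisoned set scores worse on the validation data. The toy below re-creates that idea on a 1-D least-squares model with numeric gradients; it is a sketch of the algorithm under those simplifying assumptions, not AIJack's sklearn-based implementation.

```python
def fit(xs, ys):
    # Closed-form slope of y = w * x (least squares through the origin).
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)


def valid_loss(w, X_valid, y_valid):
    # Mean squared error of the fitted model on the validation set.
    return sum((w * x - y) ** 2 for x, y in zip(X_valid, y_valid)) / len(X_valid)


def attack(xc, yc, X_train, y_train, X_valid, y_valid, t=0.5, num_iterations=200):
    """Move the poison point xc by gradient ascent (step size t) so that the
    model trained on X_train + [xc] scores worse on the validation set."""
    log = []
    for _ in range(num_iterations):
        def objective(x):
            w = fit(X_train + [x], y_train + [yc])
            return valid_loss(w, X_valid, y_valid)

        eps = 1e-5
        grad = (objective(xc + eps) - objective(xc - eps)) / (2 * eps)
        xc += t * grad  # ascend: make the validation loss larger
        log.append(objective(xc))
    return xc, log


# Clean data lies on y = x; the mislabeled poison point drags the slope away.
X_train, y_train = [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]
X_valid, y_valid = [1.5, 2.5], [1.5, 2.5]
xc, log = attack(0.5, -0.5, X_train, y_train, X_valid, y_valid)
print(log[0] < log[-1])  # the validation loss grows as the poison point moves
```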

1.1.7.6. Module contents#

Subpackage for poisoning attacks, which insert malicious data into the training dataset so that the performance of the trained machine learning model degrades.

class aijack.attack.poison.HistoryAttackClientWrapper(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
class aijack.attack.poison.LabelFlipAttackClientManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
class aijack.attack.poison.MAPFClientWrapper(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
class aijack.attack.poison.Poison_attack_sklearn(target_model, X_train, y_train, t=0.5)[source]#

Bases: aijack.attack.base_attack.BaseAttacker

Implementation of a poisoning attack against sklearn binary classifiers.

Reference: https://arxiv.org/abs/1206.6389

Parameters
  • target_model – sklearn classifier

  • X_train – training data for target_model

  • y_train – training label for target_model

  • t – step size

target_model#

sklearn classifier

X_train#
y_train#
t#

step size

kernel#
delta_kernel#
attack(xc, yc, X_valid, y_valid, num_iterations=200)[source]#

Creates an adversarial example for the poisoning attack.

Parameters
  • xc – initial attack point

  • yc – true label of initial attack point

  • X_valid – validation data for target_model

  • y_valid – validation label for target_model

  • num_iterations – number of attack iterations (default = 200)

Returns

xc (the created adversarial example) and log (a record of target_model's score under the attack)

Return type

tuple