3.1.7. aijack.collaborative.fedmd package#

3.1.7.1. Submodules#

3.1.7.2. aijack.collaborative.fedmd.api module#

class aijack.collaborative.fedmd.api.FedMDAPI(server, clients, public_dataloader, local_dataloaders, criterion, client_optimizers, validation_dataloader=None, server_optimizer=None, num_communication=1, device='cpu', consensus_epoch=1, revisit_epoch=1, transfer_epoch_public=1, transfer_epoch_private=1, server_training_epoch=1, custom_action=<function FedMDAPI.<lambda>>)[source]#

Bases: aijack.collaborative.core.api.BaseFLKnowledgeDistillationAPI

Implementation of FedMD: Heterogeneous Federated Learning via Model Distillation

digest_phase(i, logging)[source]#
evaluation(i, logging)[source]#
revisit_phase(logging)[source]#
run()[source]#
server_side_training(logging)[source]#
train_server()[source]#
transfer_phase(logging)[source]#
class aijack.collaborative.fedmd.api.MPIFedMDAPI(comm, party, is_server, criterion, local_optimizer=None, local_dataloader=None, public_dataloader=None, num_communication=1, local_epoch=1, consensus_epoch=1, revisit_epoch=1, transfer_epoch_public=1, transfer_epoch_private=1, custom_action=<function MPIFedMDAPI.<lambda>>, device='cpu')[source]#

Bases: aijack.collaborative.core.api.BaseFedAPI

digest_phase()[source]#
local_train(epoch=1, public=True)[source]#
revisit_phase()[source]#
run()[source]#
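The run() loop alternates the transfer, digest (consensus), and revisit phases. The dependency-free toy below sketches that pattern with scalar "models" so the data flow is visible; every name in it is illustrative and none of it is the aijack API:

```python
# Toy sketch of one FedMD communication round: clients share predictions
# on a public set, the server averages them into a consensus, clients
# digest the consensus, then revisit their private data.

def predict(model, xs):
    # a "model" here is just a scalar weight; logit = weight * feature
    return [model * x for x in xs]

public_data = [1.0, 2.0, 3.0]            # shared public set
private_data = {0: ([1.0], [2.0]),       # user_id -> (features, labels)
                1: ([2.0], [6.0])}
models = {0: 1.0, 1: 2.0}                # heterogeneous client "models"

def fedmd_round(models, lr=0.1):
    # transfer phase: each client shares its logits on the public set
    logits = {uid: predict(m, public_data) for uid, m in models.items()}
    # server update: element-wise average into consensus logits
    consensus = [sum(ls[i] for ls in logits.values()) / len(logits)
                 for i in range(len(public_data))]
    # digest phase: each client nudges its model toward the consensus
    for uid in models:
        grad = sum((models[uid] * x - c) * x
                   for x, c in zip(public_data, consensus)) / len(public_data)
        models[uid] -= lr * grad
    # revisit phase: each client trains on its own private data
    for uid, (xs, ys) in private_data.items():
        grad = sum((models[uid] * x - y) * x
                   for x, y in zip(xs, ys)) / len(xs)
        models[uid] -= lr * grad
    return consensus

consensus = fedmd_round(models)
```

After the digest phase the client models move toward each other on the public set, which is exactly the knowledge-distillation effect FedMD relies on, while the revisit phase preserves each client's fit to its private data.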

3.1.7.3. aijack.collaborative.fedmd.client module#

class aijack.collaborative.fedmd.client.FedMDClient(model, public_dataloader, output_dim=1, batch_size=8, user_id=0, base_loss_func=CrossEntropyLoss(), consensus_loss_func=L1Loss(), round_decimal=None, device='cpu')[source]#

Bases: aijack.collaborative.core.client.BaseClient

approach_consensus(consensus_optimizer)[source]#
download(predicted_values_of_server)[source]#

Download the server's predicted values (consensus logits) from the server.

local_train(local_epoch, criterion, trainloader, optimizer)[source]#
upload()[source]#

Upload the locally learned information to the server.

class aijack.collaborative.fedmd.client.MPIFedMDClientManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
aijack.collaborative.fedmd.client.attach_mpi_to_fedmdclient(cls)[source]#
aijack.collaborative.fedmd.client.initialize_global_logit(len_public_dataloader, output_dim, device)[source]#

3.1.7.4. aijack.collaborative.fedmd.nfdp module#

aijack.collaborative.fedmd.nfdp.get_delta_of_fedmd_nfdp(n, k, replacement=True)[source]#

Return delta of FedMD-NFDP

Parameters
  • n (int) – training set size

  • k (int) – sampling size

  • replacement (bool, optional) – if True, sample with replacement; otherwise sample without replacement. Defaults to True.

Returns

delta of FedMD-NFDP

Return type

float

aijack.collaborative.fedmd.nfdp.get_epsilon_of_fedmd_nfdp(n, k, replacement=True)[source]#

Return epsilon of FedMD-NFDP

Parameters
  • n (int) – training set size

  • k (int) – sampling size

  • replacement (bool, optional) – if True, sample with replacement; otherwise sample without replacement. Defaults to True.

Returns

epsilon of FedMD-NFDP

Return type

float

aijack.collaborative.fedmd.nfdp.get_k_of_fedmd_nfdp(epsilon, n, replacement=True)[source]#

Return k of FedMD-NFDP

Parameters
  • epsilon (float) – target privacy budget epsilon

  • n (int) – training set size

  • replacement (bool, optional) – if True, sample with replacement; otherwise sample without replacement. Defaults to True.

Returns

sampling size k that satisfies the given epsilon

Return type

int
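FedMD-NFDP treats random sampling of the local training set as the only noise source, so epsilon and delta have closed forms in n and k. The pure-Python sketch below mirrors the documented signatures under the assumption that aijack implements the sampling bounds from the FedMD-NFDP paper; the function names here (`epsilon_nfdp`, etc.) are illustrative, not the aijack API:

```python
import math

# Hedged sketch of the FedMD-NFDP accountant (sampling-only DP bounds).
# Assumption: with replacement, eps = k*ln((n+1)/n), delta = 1-((n-1)/n)^k;
# without replacement, eps = ln((n+1)/(n+1-k)), delta = k/n.

def epsilon_nfdp(n, k, replacement=True):
    """Privacy budget of sampling k records from a set of size n."""
    if replacement:
        return k * math.log((n + 1) / n)
    return math.log((n + 1) / (n + 1 - k))

def delta_nfdp(n, k, replacement=True):
    """Failure probability of the same sampling scheme."""
    if replacement:
        return 1 - ((n - 1) / n) ** k
    return k / n

def k_nfdp(epsilon, n, replacement=True):
    """Largest sampling size whose epsilon stays within the budget."""
    if replacement:
        return int(epsilon / math.log((n + 1) / n))
    return int((n + 1) * (1 - math.exp(-epsilon)))
```

Note that `k_nfdp` truncates toward zero, so it is conservative: the returned k never exceeds the requested budget.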

3.1.7.5. aijack.collaborative.fedmd.server module#

class aijack.collaborative.fedmd.server.FedMDServer(clients, server_model=None, server_id=0, device='cpu')[source]#

Bases: aijack.collaborative.core.server.BaseServer

action()[source]#

Execute the routine of each communication round.

distribute()[source]#

Distribute the logits of the public dataset to each client.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

receive()[source]#
update()[source]#

Update the global model.
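Concretely, the server's receive / update / distribute cycle amounts to collecting each client's logits on the public dataset, averaging them into consensus logits, and broadcasting the result. A minimal toy sketch (illustrative names, plain lists standing in for tensors, not the aijack API):

```python
# Toy server/client pair showing the FedMD server cycle.

class ToyFedMDServer:
    def __init__(self, clients):
        self.clients = clients
        self.consensus = None

    def receive(self):
        # collect each client's logits on the public dataset
        self.uploaded = [c.upload() for c in self.clients]

    def update(self):
        # "global model" update = element-wise mean of uploaded logits
        m = len(self.uploaded)
        self.consensus = [sum(col) / m for col in zip(*self.uploaded)]

    def distribute(self):
        # broadcast the consensus logits back to every client
        for c in self.clients:
            c.download(self.consensus)

class ToyClient:
    def __init__(self, logits):
        self.logits = logits
        self.received = None

    def upload(self):
        return self.logits

    def download(self, consensus):
        self.received = consensus

clients = [ToyClient([0.0, 2.0]), ToyClient([2.0, 4.0])]
server = ToyFedMDServer(clients)
server.receive()
server.update()
server.distribute()
```

When server_model is given, the real FedMDServer additionally trains that model; the cycle above covers only the logit-aggregation path.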

class aijack.collaborative.fedmd.server.MPIFedMDServerManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
aijack.collaborative.fedmd.server.attach_mpi_to_fedmdserver(cls)[source]#

3.1.7.6. Module contents#

class aijack.collaborative.fedmd.FedMDAPI(server, clients, public_dataloader, local_dataloaders, criterion, client_optimizers, validation_dataloader=None, server_optimizer=None, num_communication=1, device='cpu', consensus_epoch=1, revisit_epoch=1, transfer_epoch_public=1, transfer_epoch_private=1, server_training_epoch=1, custom_action=<function FedMDAPI.<lambda>>)[source]#

Bases: aijack.collaborative.core.api.BaseFLKnowledgeDistillationAPI

Implementation of FedMD: Heterogeneous Federated Learning via Model Distillation

digest_phase(i, logging)[source]#
evaluation(i, logging)[source]#
revisit_phase(logging)[source]#
run()[source]#
server_side_training(logging)[source]#
train_server()[source]#
transfer_phase(logging)[source]#
class aijack.collaborative.fedmd.FedMDClient(model, public_dataloader, output_dim=1, batch_size=8, user_id=0, base_loss_func=CrossEntropyLoss(), consensus_loss_func=L1Loss(), round_decimal=None, device='cpu')[source]#

Bases: aijack.collaborative.core.client.BaseClient

approach_consensus(consensus_optimizer)[source]#
download(predicted_values_of_server)[source]#

Download the server's predicted values (consensus logits) from the server.

local_train(local_epoch, criterion, trainloader, optimizer)[source]#
upload()[source]#

Upload the locally learned information to the server.

class aijack.collaborative.fedmd.FedMDServer(clients, server_model=None, server_id=0, device='cpu')[source]#

Bases: aijack.collaborative.core.server.BaseServer

action()[source]#

Execute the routine of each communication round.

distribute()[source]#

Distribute the logits of the public dataset to each client.

forward(x)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

receive()[source]#
update()[source]#

Update the global model.

class aijack.collaborative.fedmd.MPIFedMDAPI(comm, party, is_server, criterion, local_optimizer=None, local_dataloader=None, public_dataloader=None, num_communication=1, local_epoch=1, consensus_epoch=1, revisit_epoch=1, transfer_epoch_public=1, transfer_epoch_private=1, custom_action=<function MPIFedMDAPI.<lambda>>, device='cpu')[source]#

Bases: aijack.collaborative.core.api.BaseFedAPI

digest_phase()[source]#
local_train(epoch=1, public=True)[source]#
revisit_phase()[source]#
run()[source]#
class aijack.collaborative.fedmd.MPIFedMDClientManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
class aijack.collaborative.fedmd.MPIFedMDServerManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
aijack.collaborative.fedmd.get_delta_of_fedmd_nfdp(n, k, replacement=True)[source]#

Return delta of FedMD-NFDP

Parameters
  • n (int) – training set size

  • k (int) – sampling size

  • replacement (bool, optional) – if True, sample with replacement; otherwise sample without replacement. Defaults to True.

Returns

delta of FedMD-NFDP

Return type

float

aijack.collaborative.fedmd.get_epsilon_of_fedmd_nfdp(n, k, replacement=True)[source]#

Return epsilon of FedMD-NFDP

Parameters
  • n (int) – training set size

  • k (int) – sampling size

  • replacement (bool, optional) – if True, sample with replacement; otherwise sample without replacement. Defaults to True.

Returns

epsilon of FedMD-NFDP

Return type

float

aijack.collaborative.fedmd.get_k_of_fedmd_nfdp(epsilon, n, replacement=True)[source]#

Return k of FedMD-NFDP

Parameters
  • epsilon (float) – target privacy budget epsilon

  • n (int) – training set size

  • replacement (bool, optional) – if True, sample with replacement; otherwise sample without replacement. Defaults to True.

Returns

sampling size k that satisfies the given epsilon

Return type

int