3.1.3. aijack.collaborative.fedavg package#

3.1.3.1. Submodules#

3.1.3.2. aijack.collaborative.fedavg.api module#

class aijack.collaborative.fedavg.api.FedAVGAPI(server, clients, criterion, local_optimizers, local_dataloaders, num_communication=1, local_epoch=1, use_gradients=True, custom_action=<function FedAVGAPI.<lambda>>, device='cpu')[source]#

Bases: aijack.collaborative.core.api.BaseFedAPI

Implementation of FedAVG (McMahan, Brendan, et al. ‘Communication-efficient learning of deep networks from decentralized data.’ Artificial intelligence and statistics. PMLR, 2017.)

Parameters
  • server (FedAvgServer) – FedAVG server.

  • clients ([FedAvgClient]) – a list of FedAVG clients.

  • criterion (function) – loss function.

  • local_optimizers ([torch.optimizer]) – a list of local optimizers for the clients.

  • local_dataloaders ([torch.dataloader]) – a list of local dataloaders for the clients.

  • num_communication (int, optional) – number of communication rounds. Defaults to 1.

  • local_epoch (int, optional) – number of epochs of local training within each communication round. Defaults to 1.

  • use_gradients (bool, optional) – communicate gradients if True. Otherwise communicate parameters. Defaults to True.

  • custom_action (function, optional) – arbitrary function that takes this instance itself. Defaults to lambda x: x.

  • device (str, optional) – device type. Defaults to “cpu”.

local_train(i)[source]#
run()[source]#
class aijack.collaborative.fedavg.api.MPIFedAVGAPI(comm, party, is_server, criterion, local_optimizer=None, local_dataloader=None, num_communication=1, local_epoch=1, custom_action=<function MPIFedAVGAPI.<lambda>>, device='cpu')[source]#

Bases: aijack.collaborative.core.api.BaseFedAPI

local_train(com_cnt)[source]#
run()[source]#
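
The round structure that FedAVGAPI.run() orchestrates can be illustrated with a minimal, library-free sketch. This is not AIJack code: models are represented as flat lists of floats, and the aggregation is the sample-size-weighted average from McMahan et al. (2017).

```python
# Conceptual sketch of one FedAVG communication round in plain Python.
# A "model" is a flat list of floats; client_sizes holds the number of
# local samples per client, used as aggregation weights.

def fedavg_aggregate(client_params, client_sizes):
    """Weighted average of equal-length client parameter vectors."""
    total = sum(client_sizes)
    num_params = len(client_params[0])
    global_params = []
    for j in range(num_params):
        weighted = sum(p[j] * n for p, n in zip(client_params, client_sizes))
        global_params.append(weighted / total)
    return global_params

def run_round(global_params, clients, client_sizes, local_update):
    """One round: distribute the global model, train locally, aggregate."""
    updated = [local_update(list(global_params), c) for c in clients]
    return fedavg_aggregate(updated, client_sizes)
```

With equal weights this reduces to a plain mean; with unequal client_sizes, clients holding more data pull the global model further toward their local solution.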

3.1.3.3. aijack.collaborative.fedavg.client module#

class aijack.collaborative.fedavg.client.FedAVGClient(model, user_id=0, lr=0.1, send_gradient=True, optimizer_type_for_global_grad='sgd', server_side_update=True, optimizer_kwargs_for_global_grad={}, device='cpu')[source]#

Bases: aijack.collaborative.core.client.BaseClient

Client of FedAVG for single process simulation

Parameters
  • model (torch.nn.Module) – local model

  • user_id (int, optional) – id of this client. Defaults to 0.

  • lr (float, optional) – learning rate. Defaults to 0.1.

  • send_gradient (bool, optional) – if True, communicates gradients to the server; otherwise, communicates model parameters. Defaults to True.

  • optimizer_type_for_global_grad (str, optional) – type of optimizer for model update with global gradient. sgd|adam. Defaults to “sgd”.

  • server_side_update (bool, optional) – if True, the global model update is conducted on the server side. Defaults to True.

  • optimizer_kwargs_for_global_grad (dict, optional) – kwargs for the optimizer for global gradients. Defaults to {}.

  • device (str, optional) – device type. Defaults to “cpu”.

download(new_global_model)[source]#

Download the new global model

local_train(local_epoch, criterion, trainloader, optimizer, communication_id=0)[source]#
revert()[source]#

Revert the local model state to the previous global model

upload()[source]#

Upload the current local model state

upload_gradients()[source]#

Upload the local gradients

upload_parameters()[source]#

Upload the model parameters

class aijack.collaborative.fedavg.client.MPIFedAVGClientManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
aijack.collaborative.fedavg.client.attach_mpi_to_fedavgclient(cls)[source]#
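
The difference between the two upload modes (send_gradient=True vs. False) can be sketched without the library. This is not AIJack code: a "model" is a list of floats, "training" is a fixed parameter shift, and the gradient is approximated as the difference between the downloaded global model and the locally trained model.

```python
# Sketch of a FedAVG-style client with two upload modes.
# Hypothetical SketchClient class for illustration only.

class SketchClient:
    def __init__(self, send_gradient=True):
        self.send_gradient = send_gradient
        self.prev_params = None  # global model as of last download
        self.params = None       # current local model

    def download(self, new_global_model):
        # Keep a copy of the global model so the pseudo-gradient
        # (and a revert) can be computed relative to it.
        self.prev_params = list(new_global_model)
        self.params = list(new_global_model)

    def local_train(self, delta):
        # Stand-in for real training: shift every parameter by delta.
        self.params = [p + delta for p in self.params]

    def upload(self):
        if self.send_gradient:
            # Pseudo-gradient: previous global model minus local model.
            return [g - p for g, p in zip(self.prev_params, self.params)]
        return list(self.params)
```

Sending gradients lets the server apply its own optimizer step (see server_side_update), while sending parameters lets the server simply average the received models.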

3.1.3.4. aijack.collaborative.fedavg.server module#

class aijack.collaborative.fedavg.server.FedAVGServer(clients, global_model, server_id=0, lr=0.1, optimizer_type='sgd', server_side_update=True, optimizer_kwargs={}, device='cpu')[source]#

Bases: aijack.collaborative.core.server.BaseServer

Server of FedAVG for single process simulation

Parameters
  • clients ([FedAvgClient]) – a list of FedAVG clients.

  • global_model (torch.nn.Module) – global model.

  • server_id (int, optional) – id of this server. Defaults to 0.

  • lr (float, optional) – learning rate. Defaults to 0.1.

  • optimizer_type (str, optional) – type of optimizer for the update of the global model. Defaults to “sgd”.

  • server_side_update (bool, optional) – If True, update the global model at the server-side. Defaults to True.

  • optimizer_kwargs (dict, optional) – kwargs for the global optimizer. Defaults to {}.

action(use_gradients=True)[source]#

Execute the routine of each communication round.

distribute()[source]#

Distribute the current global model to each client.

Parameters

force_send_model_state_dict (bool, optional) – If True, send the global model as the dictionary of model state regardless of other parameters. Defaults to False.

receive(use_gradients=True)[source]#

Receive the local models

Parameters

use_gradients (bool, optional) – If True, receive the local gradients. Otherwise, receive the local parameters. Defaults to True.

receive_local_gradients()[source]#

Receive local gradients

receive_local_parameters()[source]#

Receive local parameters

update(use_gradients=True)[source]#

Update the global model

Parameters

use_gradients (bool, optional) – If True, update the global model with aggregated local gradients. Defaults to True.

update_from_gradients()[source]#

Update the global model with the local gradients.

update_from_parameters()[source]#

Update the global model with the local model parameters.

class aijack.collaborative.fedavg.server.MPIFedAVGServerManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
aijack.collaborative.fedavg.server.attach_mpi_to_fedavgserver(cls)[source]#
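
The server's two update paths, update_from_parameters and update_from_gradients, can be sketched as follows. This is not AIJack code: models are flat lists of floats, all clients are weighted equally, and the gradient path applies a single plain-SGD step.

```python
# Sketch of the two FedAVG server update paths (illustrative names,
# mirroring but not calling the AIJack methods).

def update_from_parameters(local_params):
    """Replace the global model with the mean of the local models."""
    n = len(local_params)
    return [sum(vals) / n for vals in zip(*local_params)]

def update_from_gradients(global_params, local_grads, lr=0.1):
    """Apply one SGD step using the averaged local gradients."""
    n = len(local_grads)
    avg = [sum(vals) / n for vals in zip(*local_grads)]
    return [p - lr * g for p, g in zip(global_params, avg)]
```

The parameter path corresponds to use_gradients=False (average and overwrite), while the gradient path corresponds to use_gradients=True, where the server's optimizer (selected by optimizer_type) consumes the aggregated gradients.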

3.1.3.5. Module contents#

class aijack.collaborative.fedavg.FedAVGAPI(server, clients, criterion, local_optimizers, local_dataloaders, num_communication=1, local_epoch=1, use_gradients=True, custom_action=<function FedAVGAPI.<lambda>>, device='cpu')[source]#

Bases: aijack.collaborative.core.api.BaseFedAPI

Implementation of FedAVG (McMahan, Brendan, et al. ‘Communication-efficient learning of deep networks from decentralized data.’ Artificial intelligence and statistics. PMLR, 2017.)

Parameters
  • server (FedAvgServer) – FedAVG server.

  • clients ([FedAvgClient]) – a list of FedAVG clients.

  • criterion (function) – loss function.

  • local_optimizers ([torch.optimizer]) – a list of local optimizers for the clients.

  • local_dataloaders ([torch.dataloader]) – a list of local dataloaders for the clients.

  • num_communication (int, optional) – number of communication rounds. Defaults to 1.

  • local_epoch (int, optional) – number of epochs of local training within each communication round. Defaults to 1.

  • use_gradients (bool, optional) – communicate gradients if True. Otherwise communicate parameters. Defaults to True.

  • custom_action (function, optional) – arbitrary function that takes this instance itself. Defaults to lambda x: x.

  • device (str, optional) – device type. Defaults to “cpu”.

local_train(i)[source]#
run()[source]#
class aijack.collaborative.fedavg.FedAVGClient(model, user_id=0, lr=0.1, send_gradient=True, optimizer_type_for_global_grad='sgd', server_side_update=True, optimizer_kwargs_for_global_grad={}, device='cpu')[source]#

Bases: aijack.collaborative.core.client.BaseClient

Client of FedAVG for single process simulation

Parameters
  • model (torch.nn.Module) – local model

  • user_id (int, optional) – id of this client. Defaults to 0.

  • lr (float, optional) – learning rate. Defaults to 0.1.

  • send_gradient (bool, optional) – if True, communicates gradients to the server; otherwise, communicates model parameters. Defaults to True.

  • optimizer_type_for_global_grad (str, optional) – type of optimizer for model update with global gradient. sgd|adam. Defaults to “sgd”.

  • server_side_update (bool, optional) – if True, the global model update is conducted on the server side. Defaults to True.

  • optimizer_kwargs_for_global_grad (dict, optional) – kwargs for the optimizer for global gradients. Defaults to {}.

  • device (str, optional) – device type. Defaults to “cpu”.

download(new_global_model)[source]#

Download the new global model

local_train(local_epoch, criterion, trainloader, optimizer, communication_id=0)[source]#
revert()[source]#

Revert the local model state to the previous global model

upload()[source]#

Upload the current local model state

upload_gradients()[source]#

Upload the local gradients

upload_parameters()[source]#

Upload the model parameters

class aijack.collaborative.fedavg.FedAVGServer(clients, global_model, server_id=0, lr=0.1, optimizer_type='sgd', server_side_update=True, optimizer_kwargs={}, device='cpu')[source]#

Bases: aijack.collaborative.core.server.BaseServer

Server of FedAVG for single process simulation

Parameters
  • clients ([FedAvgClient]) – a list of FedAVG clients.

  • global_model (torch.nn.Module) – global model.

  • server_id (int, optional) – id of this server. Defaults to 0.

  • lr (float, optional) – learning rate. Defaults to 0.1.

  • optimizer_type (str, optional) – type of optimizer for the update of the global model. Defaults to “sgd”.

  • server_side_update (bool, optional) – If True, update the global model at the server-side. Defaults to True.

  • optimizer_kwargs (dict, optional) – kwargs for the global optimizer. Defaults to {}.

action(use_gradients=True)[source]#

Execute the routine of each communication round.

distribute()[source]#

Distribute the current global model to each client.

Parameters

force_send_model_state_dict (bool, optional) – If True, send the global model as the dictionary of model state regardless of other parameters. Defaults to False.

receive(use_gradients=True)[source]#

Receive the local models

Parameters

use_gradients (bool, optional) – If True, receive the local gradients. Otherwise, receive the local parameters. Defaults to True.

receive_local_gradients()[source]#

Receive local gradients

receive_local_parameters()[source]#

Receive local parameters

update(use_gradients=True)[source]#

Update the global model

Parameters

use_gradients (bool, optional) – If True, update the global model with aggregated local gradients. Defaults to True.

update_from_gradients()[source]#

Update the global model with the local gradients.

update_from_parameters()[source]#

Update the global model with the local model parameters.

class aijack.collaborative.fedavg.MPIFedAVGAPI(comm, party, is_server, criterion, local_optimizer=None, local_dataloader=None, num_communication=1, local_epoch=1, custom_action=<function MPIFedAVGAPI.<lambda>>, device='cpu')[source]#

Bases: aijack.collaborative.core.api.BaseFedAPI

local_train(com_cnt)[source]#
run()[source]#
class aijack.collaborative.fedavg.MPIFedAVGClientManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#
class aijack.collaborative.fedavg.MPIFedAVGServerManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

attach(cls)[source]#