3.1.3. aijack.collaborative.fedavg package#
3.1.3.1. Submodules#
3.1.3.2. aijack.collaborative.fedavg.api module#
- class aijack.collaborative.fedavg.api.FedAVGAPI(server, clients, criterion, local_optimizers, local_dataloaders, num_communication=1, local_epoch=1, use_gradients=True, custom_action=<function FedAVGAPI.<lambda>>, device='cpu')[source]#
Bases:
aijack.collaborative.core.api.BaseFedAPI
Implementation of FedAVG (McMahan, Brendan, et al. “Communication-Efficient Learning of Deep Networks from Decentralized Data.” Artificial Intelligence and Statistics. PMLR, 2017.)
- Parameters
server (FedAvgServer) – FedAVG server.
clients ([FedAvgClient]) – a list of FedAVG clients.
criterion (function) – loss function.
local_optimizers ([torch.optim.Optimizer]) – a list of local optimizers for the clients.
local_dataloaders ([torch.utils.data.DataLoader]) – a list of local dataloaders for the clients.
num_communication (int, optional) – number of communication rounds. Defaults to 1.
local_epoch (int, optional) – number of local training epochs within each communication round. Defaults to 1.
use_gradients (bool, optional) – If True, clients communicate gradients; otherwise they communicate model parameters. Defaults to True.
custom_action (function, optional) – an arbitrary function that receives this instance as its argument. Defaults to lambda x: x.
device (str, optional) – device type. Defaults to “cpu”.
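A minimal single-process sketch of how the arguments above fit together. The toy model, the random data, and the api.run() call are illustrative assumptions rather than part of this reference.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

from aijack.collaborative.fedavg import FedAVGAPI, FedAVGClient, FedAVGServer


def make_model():
    # Toy model used for illustration only (assumption).
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))


# Two clients with their own local models, optimizers, and dataloaders.
local_models = [make_model() for _ in range(2)]
clients = [FedAVGClient(m, user_id=i) for i, m in enumerate(local_models)]
server = FedAVGServer(clients, make_model())

criterion = nn.CrossEntropyLoss()
local_optimizers = [torch.optim.SGD(m.parameters(), lr=0.1) for m in local_models]
local_dataloaders = [
    DataLoader(
        TensorDataset(torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))),
        batch_size=8,
    )
    for _ in range(2)
]

api = FedAVGAPI(
    server,
    clients,
    criterion,
    local_optimizers,
    local_dataloaders,
    num_communication=3,
    local_epoch=1,
    use_gradients=True,
)
api.run()  # training entry point assumed to be inherited from BaseFedAPI
```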
3.1.3.3. aijack.collaborative.fedavg.client module#
- class aijack.collaborative.fedavg.client.FedAVGClient(model, user_id=0, lr=0.1, send_gradient=True, optimizer_type_for_global_grad='sgd', server_side_update=True, optimizer_kwargs_for_global_grad={}, device='cpu')[source]#
Bases:
aijack.collaborative.core.client.BaseClient
Client of FedAVG for single-process simulation.
- Parameters
model (torch.nn.Module) – local model.
user_id (int, optional) – id of this client. Defaults to 0.
lr (float, optional) – learning rate. Defaults to 0.1.
send_gradient (bool, optional) – If True, communicates gradients to the server; otherwise, communicates model parameters. Defaults to True.
optimizer_type_for_global_grad (str, optional) – type of optimizer for the model update with the global gradient. sgd|adam. Defaults to “sgd”.
server_side_update (bool, optional) – If True, the global model update is conducted on the server side. Defaults to True.
optimizer_kwargs_for_global_grad (dict, optional) – kwargs for the optimizer for global gradients. Defaults to {}.
device (str, optional) – device type. Defaults to “cpu”.
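A brief sketch of constructing a client from the documented arguments; the placeholder model is an assumption, and any torch.nn.Module could stand in for it.

```python
import torch.nn as nn

from aijack.collaborative.fedavg import FedAVGClient

# Placeholder local model (assumption); any torch.nn.Module works here.
local_model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

client = FedAVGClient(
    local_model,
    user_id=0,
    lr=0.1,
    send_gradient=True,                    # upload gradients rather than model parameters
    optimizer_type_for_global_grad="sgd",  # optimizer used when applying the global gradient to the local model
    server_side_update=True,               # let the server apply the global update
)
```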
3.1.3.4. aijack.collaborative.fedavg.server module#
- class aijack.collaborative.fedavg.server.FedAVGServer(clients, global_model, server_id=0, lr=0.1, optimizer_type='sgd', server_side_update=True, optimizer_kwargs={}, device='cpu')[source]#
Bases:
aijack.collaborative.core.server.BaseServer
Server of FedAVG for single-process simulation.
- Parameters
clients ([FedAvgClient]) – a list of FedAVG clients.
global_model (torch.nn.Module) – global model.
server_id (int, optional) – id of this server. Defaults to 0.
lr (float, optional) – learning rate. Defaults to 0.1.
optimizer_type (str, optional) – type of optimizer for the update of the global model. Defaults to “sgd”.
server_side_update (bool, optional) – If True, update the global model on the server side. Defaults to True.
optimizer_kwargs (dict, optional) – kwargs for the global optimizer. Defaults to {}.
- distribute(force_send_model_state_dict=False)[source]#
Distribute the current global model to each client.
- Parameters
force_send_model_state_dict (bool, optional) – If True, send the global model as a state dictionary regardless of the other settings. Defaults to False.
- receive(use_gradients=True)[source]#
Receive the local models or gradients from the clients.
- Parameters
use_gradients (bool, optional) – If True, receive the local gradients. Otherwise, receive the local parameters. Defaults to True.
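A sketch of one manual communication step using only the methods documented above; the local-training step and the server-side aggregation between receive() and distribute() are omitted, since in practice FedAVGAPI drives this loop. The toy model is an assumption.

```python
import torch.nn as nn

from aijack.collaborative.fedavg import FedAVGClient, FedAVGServer


def make_model():
    # Toy model used for illustration only (assumption).
    return nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))


clients = [FedAVGClient(make_model(), user_id=i) for i in range(2)]
server = FedAVGServer(clients, make_model(), lr=0.1, optimizer_type="sgd")

# ... local training on each client happens here (omitted) ...

server.receive(use_gradients=True)  # collect the local gradients from every client
# ... the server aggregates the received updates into the global model (omitted) ...
server.distribute()                 # push the current global model back to the clients
```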
3.1.3.5. Module contents#
- class aijack.collaborative.fedavg.FedAVGAPI(server, clients, criterion, local_optimizers, local_dataloaders, num_communication=1, local_epoch=1, use_gradients=True, custom_action=<function FedAVGAPI.<lambda>>, device='cpu')[source]#
Bases:
aijack.collaborative.core.api.BaseFedAPI
Implementation of FedAVG (McMahan, Brendan, et al. “Communication-Efficient Learning of Deep Networks from Decentralized Data.” Artificial Intelligence and Statistics. PMLR, 2017.)
- Parameters
server (FedAvgServer) – FedAVG server.
clients ([FedAvgClient]) – a list of FedAVG clients.
criterion (function) – loss function.
local_optimizers ([torch.optim.Optimizer]) – a list of local optimizers for the clients.
local_dataloaders ([torch.utils.data.DataLoader]) – a list of local dataloaders for the clients.
num_communication (int, optional) – number of communication rounds. Defaults to 1.
local_epoch (int, optional) – number of local training epochs within each communication round. Defaults to 1.
use_gradients (bool, optional) – If True, clients communicate gradients; otherwise they communicate model parameters. Defaults to True.
custom_action (function, optional) – an arbitrary function that receives this instance as its argument. Defaults to lambda x: x.
device (str, optional) – device type. Defaults to “cpu”.
- class aijack.collaborative.fedavg.FedAVGClient(model, user_id=0, lr=0.1, send_gradient=True, optimizer_type_for_global_grad='sgd', server_side_update=True, optimizer_kwargs_for_global_grad={}, device='cpu')[source]#
Bases:
aijack.collaborative.core.client.BaseClient
Client of FedAVG for single-process simulation.
- Parameters
model (torch.nn.Module) – local model.
user_id (int, optional) – id of this client. Defaults to 0.
lr (float, optional) – learning rate. Defaults to 0.1.
send_gradient (bool, optional) – If True, communicates gradients to the server; otherwise, communicates model parameters. Defaults to True.
optimizer_type_for_global_grad (str, optional) – type of optimizer for the model update with the global gradient. sgd|adam. Defaults to “sgd”.
server_side_update (bool, optional) – If True, the global model update is conducted on the server side. Defaults to True.
optimizer_kwargs_for_global_grad (dict, optional) – kwargs for the optimizer for global gradients. Defaults to {}.
device (str, optional) – device type. Defaults to “cpu”.
- class aijack.collaborative.fedavg.FedAVGServer(clients, global_model, server_id=0, lr=0.1, optimizer_type='sgd', server_side_update=True, optimizer_kwargs={}, device='cpu')[source]#
Bases:
aijack.collaborative.core.server.BaseServer
Server of FedAVG for single-process simulation.
- Parameters
clients ([FedAvgClient]) – a list of FedAVG clients.
global_model (torch.nn.Module) – global model.
server_id (int, optional) – id of this server. Defaults to 0.
lr (float, optional) – learning rate. Defaults to 0.1.
optimizer_type (str, optional) – type of optimizer for the update of the global model. Defaults to “sgd”.
server_side_update (bool, optional) – If True, update the global model on the server side. Defaults to True.
optimizer_kwargs (dict, optional) – kwargs for the global optimizer. Defaults to {}.
- distribute(force_send_model_state_dict=False)[source]#
Distribute the current global model to each client.
- Parameters
force_send_model_state_dict (bool, optional) – If True, send the global model as a state dictionary regardless of the other settings. Defaults to False.
- receive(use_gradients=True)[source]#
Receive the local models or gradients from the clients.
- Parameters
use_gradients (bool, optional) – If True, receive the local gradients. Otherwise, receive the local parameters. Defaults to True.
- class aijack.collaborative.fedavg.MPIFedAVGAPI(comm, party, is_server, criterion, local_optimizer=None, local_dataloader=None, num_communication=1, local_epoch=1, custom_action=<function MPIFedAVGAPI.<lambda>>, device='cpu')[source]#