3.1.10. aijack.collaborative.optimizer package#
3.1.10.1. Submodules#
3.1.10.2. aijack.collaborative.optimizer.adam module#
- class aijack.collaborative.optimizer.adam.AdamFLOptimizer(parameters, lr=0.01, weight_decay=0.0001, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]#
Bases: aijack.collaborative.optimizer.base.BaseFLOptimizer
Implementation of Adam to update the global model in Federated Learning (a usage sketch follows the parameter list).
- Parameters
parameters (List[torch.nn.Parameter]) – parameters of the model
lr (float, optional) – learning rate. Defaults to 0.01.
weight_decay (float, optional) – coefficient of weight decay. Defaults to 0.0001.
beta1 (float, optional) – 1st-order exponential decay. Defaults to 0.9.
beta2 (float, optional) – 2nd-order exponential decay. Defaults to 0.999.
epsilon (float, optional) – a small value to prevent division by zero. Defaults to 1e-8.
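A minimal usage sketch for AdamFLOptimizer. The `step` call and the shape of the aggregated gradients (one tensor per parameter, already averaged over the clients) are assumptions made for illustration and are not specified above; the model is a placeholder.

```python
import torch
import torch.nn as nn

from aijack.collaborative.optimizer import AdamFLOptimizer

# Placeholder for the global model kept on the server.
global_model = nn.Linear(10, 2)

optimizer = AdamFLOptimizer(
    list(global_model.parameters()),
    lr=0.01,
    weight_decay=0.0001,
    beta1=0.9,
    beta2=0.999,
    epsilon=1e-8,
)

# Assumption: each client reports gradients shaped like the global parameters,
# and the server averages them before the update.
client_grads = [
    [torch.randn_like(p) for p in global_model.parameters()] for _ in range(3)
]
aggregated_grads = [torch.stack(gs).mean(dim=0) for gs in zip(*client_grads)]

# Assumption: `step` applies one Adam update with the aggregated gradients.
optimizer.step(aggregated_grads)
```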
3.1.10.3. aijack.collaborative.optimizer.base module#
- class aijack.collaborative.optimizer.base.BaseFLOptimizer(parameters, lr=0.01, weight_decay=0.0001)[source]#
Bases: object
Base class for server-side optimizers in Federated Learning (a subclassing sketch follows the parameter list).
- Parameters
parameters (List[torch.nn.Parameter]) – parameters of the model
lr (float, optional) – learning rate. Defaults to 0.01.
weight_decay (float, optional) – coefficient of weight decay. Defaults to 0.0001.
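A sketch of subclassing BaseFLOptimizer to define a custom server-side update rule. Only the documented constructor signature is relied on; the `step(grads)` interface and the sign-based rule itself are hypothetical and chosen purely for illustration.

```python
import torch

from aijack.collaborative.optimizer.base import BaseFLOptimizer


class SignSGDFLOptimizer(BaseFLOptimizer):
    """Hypothetical server-side optimizer that updates with the sign of the gradient."""

    def __init__(self, parameters, lr=0.01, weight_decay=0.0):
        parameters = list(parameters)
        super().__init__(parameters, lr=lr, weight_decay=weight_decay)
        # Keep local copies so this sketch does not depend on undocumented
        # attributes of the base class.
        self._params = parameters
        self._lr = lr
        self._wd = weight_decay

    def step(self, grads):
        # Assumption: `grads` is a list of tensors aligned with the parameters,
        # already aggregated over the clients.
        with torch.no_grad():
            for param, grad in zip(self._params, grads):
                grad = grad + self._wd * param
                param.add_(-self._lr * torch.sign(grad))
```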
3.1.10.4. aijack.collaborative.optimizer.sgd module#
- class aijack.collaborative.optimizer.sgd.SGDFLOptimizer(parameters, lr=0.01, weight_decay=0.0)[source]#
Bases: aijack.collaborative.optimizer.base.BaseFLOptimizer
Implementation of SGD to update the global model in Federated Learning (a usage sketch follows the parameter list).
- Parameters
parameters (List[torch.nn.Parameter]) – parameters of the model
lr (float, optional) – learning rate. Defaults to 0.01.
weight_decay (float, optional) – coefficient of weight decay. Defaults to 0.0.
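A minimal usage sketch for SGDFLOptimizer, analogous to the Adam example above. The `step` interface and the update rule stated in the comment are assumptions for illustration, not documented behavior.

```python
import torch
import torch.nn as nn

from aijack.collaborative.optimizer import SGDFLOptimizer

# Placeholder for the global model kept on the server.
global_model = nn.Linear(4, 1)
optimizer = SGDFLOptimizer(list(global_model.parameters()), lr=0.01, weight_decay=0.0)

# Assumption: the clients' gradients have already been aggregated by the server.
aggregated_grads = [torch.randn_like(p) for p in global_model.parameters()]

# Assumption: `step` performs p <- p - lr * (g + weight_decay * p) per parameter.
optimizer.step(aggregated_grads)
```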
3.1.10.5. Module contents#
Implementations of basic collaborative optimizers for neural networks (a selection sketch follows the class listings below).
- class aijack.collaborative.optimizer.AdamFLOptimizer(parameters, lr=0.01, weight_decay=0.0001, beta1=0.9, beta2=0.999, epsilon=1e-08)[source]#
Bases: aijack.collaborative.optimizer.base.BaseFLOptimizer
Implementation of Adam to update the global model in Federated Learning.
- Parameters
parameters (List[torch.nn.Parameter]) – parameters of the model
lr (float, optional) – learning rate. Defaults to 0.01.
weight_decay (float, optional) – coefficient of weight decay. Defaults to 0.0001.
beta1 (float, optional) – 1st-order exponential decay. Defaults to 0.9.
beta2 (float, optional) – 2nd-order exponential decay. Defaults to 0.999.
epsilon (float, optional) – a small value to prevent division by zero. Defaults to 1e-8.
- class aijack.collaborative.optimizer.SGDFLOptimizer(parameters, lr=0.01, weight_decay=0.0)[source]#
Bases: aijack.collaborative.optimizer.base.BaseFLOptimizer
Implementation of SGD to update the global model in Federated Learning.
- Parameters
parameters (List[torch.nn.Parameter]) – parameters of the model
lr (float, optional) – learning rate. Defaults to 0.01.
weight_decay (float, optional) – coefficient of weight decay. Defaults to 0.0.
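Both classes are documented at the package level above, so they can be imported directly from aijack.collaborative.optimizer. The helper below is a hypothetical convenience, using only the documented constructor signatures, that picks the server-side optimizer by name; the model is a placeholder.

```python
import torch.nn as nn

from aijack.collaborative.optimizer import AdamFLOptimizer, SGDFLOptimizer


def build_server_optimizer(model, name="sgd", lr=0.01):
    """Hypothetical helper that selects a server-side FL optimizer by name."""
    params = list(model.parameters())
    if name == "adam":
        return AdamFLOptimizer(params, lr=lr, weight_decay=0.0001)
    return SGDFLOptimizer(params, lr=lr, weight_decay=0.0)


optimizer = build_server_optimizer(nn.Linear(8, 2), name="adam")
```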