4. aijack.utils package#

4.1. Submodules#

4.2. aijack.utils.dataloader module#

aijack.utils.dataloader.prepareFederatedMNISTDataloaders(client_num=2, local_label_num=2, local_data_num=20, batch_size=1, test_batch_size=16, path='MNIST/.', download=True, transform=Compose([ToTensor(), Normalize(mean=(0.5,), std=(0.5,))]), seed=0, return_idx=False)[source]#
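
A minimal usage sketch is shown below; the unpacking of the return value into per-client train dataloaders and a test dataloader is an assumption, since the return structure is not documented in this signature.

from aijack.utils.dataloader import prepareFederatedMNISTDataloaders

# Assumed return structure: per-client train loaders and a test loader
trainloaders, testloader = prepareFederatedMNISTDataloaders(
    client_num=2, local_label_num=2, local_data_num=20, batch_size=1
)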

4.3. aijack.utils.metrics module#

aijack.utils.metrics.accuracy_torch_dataloader(model, dataloader, device='cpu', xpos=1, ypos=2)[source]#

Calculates the accuracy of the model on the given dataloader

Parameters
  • model (torch.nn.Module) – model to be evaluated

  • dataloader (torch.utils.data.DataLoader) – dataloader to be evaluated

  • device (str, optional) – device type. Defaults to “cpu”.

  • xpos (int, optional) – the positional index of the input within each batch. Defaults to 1.

  • ypos (int, optional) – the positional index of the label within each batch. Defaults to 2.

Returns

accuracy

Return type

float
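
A self-contained usage sketch, assuming that (per the xpos=1 and ypos=2 defaults) each batch exposes the input at index 1 and the label at index 2:

import torch
from torch.utils.data import DataLoader, TensorDataset
from aijack.utils.metrics import accuracy_torch_dataloader

# Batches take the form (idx, x, y), matching the default xpos/ypos
idx = torch.arange(8)
x = torch.randn(8, 1, 28, 28)
y = torch.randint(0, 10, (8,))
dataloader = DataLoader(TensorDataset(idx, x, y), batch_size=4)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
acc = accuracy_torch_dataloader(model, dataloader, device="cpu")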

aijack.utils.metrics.crossentropyloss_between_logits(y_pred_logit, y_true_labels, reduction='mean')[source]#

Cross entropy loss for soft labels. Based on https://discuss.pytorch.org/t/soft-cross-entropy-loss-tf-has-it-does-pytorch-have-it/69501/2

Parameters
  • y_pred_logit (torch.Tensor) – predicted logits

  • y_true_labels (torch.Tensor) – ground-truth soft labels

Returns

average cross entropy between y_pred_logit and y_true_labels

Return type

torch.Tensor
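
A short usage sketch with hand-made soft labels (each row sums to 1):

import torch
from aijack.utils.metrics import crossentropyloss_between_logits

y_pred_logit = torch.tensor([[2.0, 0.5, -1.0], [0.1, 1.2, 0.3]])
y_true_soft = torch.tensor([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])
loss = crossentropyloss_between_logits(y_pred_logit, y_true_soft)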

aijack.utils.metrics.total_variance(x)[source]#

Returns the total variance of the given data

Parameters

x (torch.Tensor) – input data

Returns

total variance of the given data

Return type

float
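
A short usage sketch; the tensor shape below is illustrative only:

import torch
from aijack.utils.metrics import total_variance

x = torch.randn(1, 1, 28, 28)  # e.g., a batch of image tensors
tv = total_variance(x)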

4.4. aijack.utils.utils module#

class aijack.utils.utils.NumpyDataset(x, y=None, transform=None, return_idx=False)[source]#

Bases: torch.utils.data.dataset.Dataset

This class allows you to convert a numpy.array to a torch.utils.data.Dataset

Parameters
  • x (np.array) – input data

  • y (np.array, optional) – target labels

  • transform (callable, optional) – transform applied to each sample

Attributes
  • x (np.array)

  • y (np.array)

  • transform (callable)
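
A short usage sketch wrapping numpy arrays into a dataloader:

import numpy as np
from torch.utils.data import DataLoader
from aijack.utils.utils import NumpyDataset

x = np.random.rand(10, 1, 28, 28).astype(np.float32)
y = np.random.randint(0, 10, size=10)
dataset = NumpyDataset(x, y)
loader = DataLoader(dataset, batch_size=2)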

class aijack.utils.utils.RoundDecimal(*args, **kwargs)[source]#

Bases: torch.autograd.function.Function

static backward(ctx, grad_output)[source]#

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs gradient computed w.r.t. the output.

static forward(ctx, input, n_digits)[source]#

This function is to be overridden by all subclasses. There are two ways to define forward:

Usage 1 (Combined forward and ctx):

@staticmethod
def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:
    pass
  • It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

  • See the PyTorch documentation on combining forward and ctx for more details

Usage 2 (Separate forward and ctx):

@staticmethod
def forward(*args: Any, **kwargs: Any) -> Any:
    pass

@staticmethod
def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:
    pass
  • The forward no longer accepts a ctx argument.

  • Instead, you must also override the torch.autograd.Function.setup_context() staticmethod to handle setting up the ctx object. output is the output of the forward, inputs are a Tuple of inputs to the forward.

  • See the PyTorch documentation on extending autograd for more details

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.
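
A usage sketch for RoundDecimal; that forward() rounds its input to n_digits decimal places is an assumption inferred from the signature:

import torch
from aijack.utils.utils import RoundDecimal

x = torch.tensor([0.12345, 2.71828], requires_grad=True)
y = RoundDecimal.apply(x, 3)  # assumed: values rounded to 3 decimal digits
y.sum().backward()            # gradients still flow through the op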

class aijack.utils.utils.TorchClassifier(model, criterion, optimizer, epoch=1, device='cpu', batch_size=1, shuffle=True, num_workers=2)[source]#

Bases: sklearn.base.BaseEstimator, sklearn.base.ClassifierMixin

fit(X, y)[source]#
predict(X)[source]#
predict_proba(X)[source]#
score(X, y)[source]#

Return the mean accuracy on the given test data and labels.

In multi-label classification, this is the subset accuracy, which is a harsh metric since it requires that each label set be correctly predicted for each sample.

Parameters
  • X (array-like of shape (n_samples, n_features)) – Test samples.

  • y (array-like of shape (n_samples,) or (n_samples, n_outputs)) – True labels for X.

  • sample_weight (array-like of shape (n_samples,), default=None) – Sample weights.

Returns

score – Mean accuracy of self.predict(X) w.r.t. y.

Return type

float
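
A self-contained usage sketch with random data; the estimator follows the scikit-learn fit/predict/score interface:

import numpy as np
import torch
from aijack.utils.utils import TorchClassifier

model = torch.nn.Linear(4, 3)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
clf = TorchClassifier(model, criterion, optimizer, epoch=2, batch_size=4)

X = np.random.rand(20, 4).astype(np.float32)
y = np.random.randint(0, 3, size=20)
clf.fit(X, y)
print(clf.score(X, y))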

aijack.utils.utils.default_local_train_for_client(self, local_epoch, criterion, trainloader, optimizer)[source]#
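
No docstring is provided; based on the signature, a plausible reading is a standard supervised local training loop, sketched below. This is an assumption, not the actual implementation:

# Assumed behaviour: train the client's model (attribute name `model`
# is also an assumption) for local_epoch epochs over trainloader
def local_train_sketch(self, local_epoch, criterion, trainloader, optimizer):
    for _ in range(local_epoch):
        for x, y in trainloader:
            optimizer.zero_grad()
            loss = criterion(self.model(x), y)
            loss.backward()
            optimizer.step()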
aijack.utils.utils.try_gpu(e)[source]#

Sends the given tensor to GPU if one is available

Parameters

e (torch.Tensor) – tensor to move

Returns

the given tensor, moved to GPU if one is available

Return type

torch.Tensor
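
A one-line usage sketch:

import torch
from aijack.utils.utils import try_gpu

x = try_gpu(torch.randn(2, 2))  # moved to GPU only when one is available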

aijack.utils.utils.worker_init_fn(worker_id)[source]#
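
No docstring is provided; helpers with this name are conventionally passed to DataLoader to seed each worker, as sketched below (the seeding behaviour itself is an assumption):

from torch.utils.data import DataLoader
from aijack.utils.utils import worker_init_fn

# `dataset` as in the NumpyDataset example above
loader = DataLoader(dataset, batch_size=2, num_workers=2,
                    worker_init_fn=worker_init_fn)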

4.5. Module contents#