2.1.3.1.1. aijack.defense.dp.manager package#

2.1.3.1.1.1. Submodules#

2.1.3.1.1.2. aijack.defense.dp.manager.accountant module#

class aijack.defense.dp.manager.accountant.BaseMomentAccountant(search='ternary', order_min=2, order_max=64, precision=0.5, orders=[], max_iterations=10000)[source]#

Bases: object

Base class for computing the privacy budget using the Moments Accountant technique.

add_step_info(noise_params, sampling_rate, num_steps)[source]#

Record step information (noise parameters, sampling rate, and number of steps) for privacy accounting.

Parameters
  • noise_params (dict) – Parameters of the noise distribution.

  • sampling_rate (float) – Sampling rate.

  • num_steps (int) – Number of steps.

calc_upperbound_of_rdp_onestep(alpha, noise_params, sampling_rate)[source]#

Calculate the upper bound of Renyi Differential Privacy (RDP) for one step.

Parameters
  • alpha (float) – Order alpha of the Renyi divergence.

  • noise_params (dict) – Parameters of the noise distribution.

  • sampling_rate (float) – Sampling rate.

Returns

Upper bound of RDP for one step.

Return type

float

get_delta(epsilon)[source]#

Compute the delta corresponding to the given epsilon for the recorded steps.

Parameters

epsilon (float) – Epsilon value.

Returns

Delta value.

Return type

float

get_epsilon(delta)[source]#

Compute the epsilon corresponding to the given delta for the recorded steps.

Parameters

delta (float) – Delta value.

Returns

Epsilon value.

Return type

float

get_noise_multiplier(noise_multiplier_key, target_epsilon, target_delta, sampling_rate, num_iterations, noise_multiplier_min=0, noise_multiplier_max=10, noise_multiplier_precision=0.01)[source]#

Search for a noise multiplier that achieves the target (epsilon, delta) under the given sampling rate and number of iterations.

Parameters
  • noise_multiplier_key (str) – Key of the noise multiplier within the noise-parameter dict.

  • target_epsilon (float) – Target epsilon.

  • target_delta (float) – Target delta.

  • sampling_rate (float) – Sampling rate.

  • num_iterations (int) – Number of iterations.

  • noise_multiplier_min (float, optional) – Minimum noise multiplier. Defaults to 0.

  • noise_multiplier_max (float, optional) – Maximum noise multiplier. Defaults to 10.

  • noise_multiplier_precision (float, optional) – Precision of noise multiplier. Defaults to 0.01.

Returns

Noise multiplier.

Return type

float
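
For example, a noise multiplier matching a target privacy budget can be searched for as follows. This is a minimal sketch using the concrete GeneralMomentAccountant subclass documented below; the key “sigma” (for Gaussian noise) and all numeric values are illustrative assumptions.

    # Sketch: calibrate the noise multiplier to a target (epsilon, delta).
    # Assumes the noise-parameter dict stores the multiplier under the key "sigma".
    accountant = GeneralMomentAccountant(noise_type="Gaussian")
    sigma = accountant.get_noise_multiplier(
        noise_multiplier_key="sigma",
        target_epsilon=3.0,
        target_delta=1e-5,
        sampling_rate=0.01,
        num_iterations=1000,
    )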

reset_step_info()[source]#

Reset step information.

step(noise_params, sampling_rate, num_steps)[source]#

Decorator to add step information to a function.

Parameters
  • noise_params (dict) – Parameters of the noise distribution.

  • sampling_rate (float) – Sampling rate.

  • num_steps (int) – Number of steps.

Returns

Decorated function.

Return type

function
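
If step() returns a decorator, as the description above suggests, it can fold accounting into a training-step function. The sketch below rests on that assumption; train_one_lot and all values are hypothetical placeholders.

    accountant = GeneralMomentAccountant(noise_type="Gaussian")

    # Assumption: step(...) returns a decorator that calls add_step_info with
    # these arguments each time the decorated function runs.
    @accountant.step({"sigma": 1.0}, sampling_rate=0.01, num_steps=1)
    def train_one_lot():
        ...  # hypothetical placeholder for one lot of DP-SGD training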

class aijack.defense.dp.manager.accountant.GeneralMomentAccountant(name='SGM', search='ternary', order_min=2, order_max=64, precision=0.5, orders=[], noise_type='Gaussian', bound_type='rdp_upperbound_closedformula', max_iterations=10000, backend='cpp')[source]#

Bases: aijack.defense.dp.manager.accountant.BaseMomentAccountant

Generalized class for computing the privacy budget using the Moments Accountant technique.
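
A minimal accounting sketch with the default Sampled Gaussian Mechanism settings; the {"sigma": ...} format of the noise-parameter dict and the concrete values are assumptions for illustration.

    from aijack.defense.dp.manager.accountant import GeneralMomentAccountant

    accountant = GeneralMomentAccountant(noise_type="Gaussian", backend="cpp")

    # Record 1000 steps of the Sampled Gaussian Mechanism with noise scale 1.0
    # and sampling rate 0.01, then query the resulting privacy budget.
    accountant.add_step_info({"sigma": 1.0}, sampling_rate=0.01, num_steps=1000)
    epsilon = accountant.get_epsilon(delta=1e-5)
    delta = accountant.get_delta(epsilon=epsilon)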

2.1.3.1.1.3. aijack.defense.dp.manager.adadps module#

aijack.defense.dp.manager.adadps.attach_adadps(cls, accountant, l2_norm_clip, noise_multiplier, lot_size, batch_size, dataset_size, mode='rmsprop', beta=0.9, eps_to_avoid_nan=1e-08)[source]#

Attach the AdaDPS optimizer to the given class.

Parameters
  • cls – Optimizer class to which the AdaDPS optimizer will be attached.

  • accountant – Privacy accountant.

  • l2_norm_clip (float) – L2 norm clip value.

  • noise_multiplier (float) – Noise multiplier value.

  • lot_size (int) – Lot size.

  • batch_size (int) – Batch size.

  • dataset_size (int) – Size of the dataset.

  • mode (str, optional) – Mode of optimization. Defaults to “rmsprop”.

  • beta (float, optional) – Beta value. Defaults to 0.9.

  • eps_to_avoid_nan (float, optional) – Epsilon value to avoid NaN. Defaults to 1e-8.

Returns

Class with AdaDPS optimizer attached.

Return type

class
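
A hedged sketch of attaching AdaDPS to a PyTorch optimizer class; the choice of torch.optim.SGD as the base optimizer, the hyperparameter values, and the existence of a model object are assumptions.

    import torch.optim as optim

    from aijack.defense.dp.manager.accountant import GeneralMomentAccountant
    from aijack.defense.dp.manager.adadps import attach_adadps

    accountant = GeneralMomentAccountant(noise_type="Gaussian")

    # Wrap torch.optim.SGD with AdaDPS behaviour (rmsprop-style preconditioning).
    AdaDPSSGD = attach_adadps(
        optim.SGD,
        accountant,
        l2_norm_clip=1.0,
        noise_multiplier=1.0,
        lot_size=64,
        batch_size=16,
        dataset_size=60000,
        mode="rmsprop",
    )
    optimizer = AdaDPSSGD(model.parameters(), lr=0.05)  # `model` defined elsewhere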

2.1.3.1.1.4. aijack.defense.dp.manager.client module#

class aijack.defense.dp.manager.client.DPSGDClientManager(*args, **kwargs)[source]#

Bases: aijack.manager.base.BaseManager

Manager class for attaching DPSGD to clients.

attach(cls)[source]#

Attaches DPSGD to the client class.

Parameters

cls – Client class.

Returns

Wrapped client class with DPSGD functionality.

Return type

DPSGDClientWrapper
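
A hedged sketch of the manager pattern. It assumes the constructor arguments are forwarded to attach_dpsgd_to_client (documented below), uses FedAVGClient from aijack.collaborative only as an illustrative client class, and expects a DPSGDManager instance (see the dp_manager module below) as privacy_manager.

    from aijack.collaborative.fedavg import FedAVGClient
    from aijack.defense.dp.manager.client import DPSGDClientManager

    # `privacy_manager` is a DPSGDManager built elsewhere.
    manager = DPSGDClientManager(privacy_manager, sigma=1.0)
    DPSGDFedAVGClient = manager.attach(FedAVGClient)  # wrapped client class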

aijack.defense.dp.manager.client.attach_dpsgd_to_client(cls, privacy_manager, sigma)[source]#

Attaches DPSGD (Differentially Private Stochastic Gradient Descent) functionality to the client class.

Parameters
  • cls – Client class to which DPSGD functionality will be attached.

  • privacy_manager – Privacy manager object providing DPSGD functionality.

  • sigma (float) – Noise multiplier for privacy.

Returns

Tuple containing the DPSGDClientWrapper class and the privacy optimizer wrapper.

Return type

tuple
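
Equivalently, the functional form can be called directly; a sketch under the same assumptions as above (FedAVGClient as an illustrative client class, privacy_manager built elsewhere).

    from aijack.defense.dp.manager.client import attach_dpsgd_to_client

    DPSGDFedAVGClient, priv_optimizer_wrapper = attach_dpsgd_to_client(
        FedAVGClient, privacy_manager, sigma=1.0
    )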

2.1.3.1.1.5. aijack.defense.dp.manager.dataloader module#

class aijack.defense.dp.manager.dataloader.DPWrapperLotDataIterator(original_iterator, dp_optimizer)[source]#

Bases: object

class aijack.defense.dp.manager.dataloader.LotDataLoader(dp_optimizer, *args, **kwargs)[source]#

Bases: torch.utils.data.dataloader.DataLoader

class aijack.defense.dp.manager.dataloader.PoissonSampler(dataset, lot_size, iterations)[source]#

Bases: object

2.1.3.1.1.6. aijack.defense.dp.manager.dp_manager module#

class aijack.defense.dp.manager.dp_manager.AdaDPSManager(accountant, optimizer_cls, l2_norm_clip, dataset, lot_size, batch_size, iterations, mode='rmsprop', beta=0.9, eps_to_avoid_nan=1e-08)[source]#

Bases: object

Manager class for privatizing AdaDPS (Adaptive Differentially Private Stochastic Gradient Descent) optimization.

Parameters
  • accountant – Privacy accountant providing privacy guarantees.

  • optimizer_cls – Class of the optimizer to be privatized.

  • l2_norm_clip (float) – L2 norm clip parameter for gradient clipping.

  • dataset – Dataset used for training.

  • lot_size (int) – Size of the lot (local update).

  • batch_size (int) – Size of the batch used for training.

  • iterations (int) – Number of iterations.

  • mode (str, optional) – Mode of optimization (rmsprop or adam). Defaults to “rmsprop”.

  • beta (float, optional) – Beta parameter for optimization. Defaults to 0.9.

  • eps_to_avoid_nan (float, optional) – Epsilon parameter to avoid NaN during optimization. Defaults to 1e-8.

privatize(noise_multiplier)[source]#

Privatizes the optimizer.

Parameters

noise_multiplier (float) – Noise multiplier for privacy.

Returns

Tuple containing the privatized optimizer class, lot loader function, and batch loader function.

Return type

tuple
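
A minimal sketch of building the manager and privatizing the optimizer; trainset, the choice of torch.optim.SGD, and all hyperparameter values are illustrative assumptions.

    import torch.optim as optim

    from aijack.defense.dp.manager.accountant import GeneralMomentAccountant
    from aijack.defense.dp.manager.dp_manager import AdaDPSManager

    accountant = GeneralMomentAccountant(noise_type="Gaussian")
    manager = AdaDPSManager(
        accountant,
        optim.SGD,
        l2_norm_clip=1.0,
        dataset=trainset,  # a torch Dataset defined elsewhere
        lot_size=64,
        batch_size=16,
        iterations=100,
    )

    # Returns the privatized optimizer class, a lot loader, and a batch loader.
    AdaDPSOptimizer, lot_loader, batch_loader = manager.privatize(noise_multiplier=1.0)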

class aijack.defense.dp.manager.dp_manager.DPSGDManager(accountant, optimizer_cls, l2_norm_clip, dataset, lot_size, batch_size, iterations, smoothing=False, smoothing_radius=10.0)[source]#

Bases: object

Manager class for privatizing DPSGD (Differentially Private Stochastic Gradient Descent) optimization.

Parameters
  • accountant – Privacy accountant providing privacy guarantees.

  • optimizer_cls – Class of the optimizer to be privatized.

  • l2_norm_clip (float) – L2 norm clip parameter for gradient clipping.

  • dataset – Dataset used for training.

  • lot_size (int) – Size of the lot (local update).

  • batch_size (int) – Size of the batch used for training.

  • iterations (int) – Number of iterations.

  • smoothing (bool, optional) – Whether to enable smoothing. Defaults to False.

  • smoothing_radius (float, optional) – Smoothing radius. Defaults to 10.0.

privatize(noise_multiplier)[source]#

Privatizes the optimizer.

Parameters

noise_multiplier (float) – Noise multiplier for privacy.

Returns

Tuple containing the privatized optimizer class, lot loader function, and batch loader function.

Return type

tuple
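
A hedged end-to-end sketch: the loop structure (lots sampled by lot_loader, batches drawn from each lot by batch_loader) and the argument conventions of the returned loaders are assumptions; net, criterion, trainset, and all hyperparameters are placeholders.

    import torch.optim as optim
    from torch.utils.data import TensorDataset

    from aijack.defense.dp.manager.accountant import GeneralMomentAccountant
    from aijack.defense.dp.manager.dp_manager import DPSGDManager

    accountant = GeneralMomentAccountant(noise_type="Gaussian")
    privacy_manager = DPSGDManager(
        accountant,
        optim.SGD,
        l2_norm_clip=1.0,
        dataset=trainset,  # a torch Dataset defined elsewhere
        lot_size=64,
        batch_size=16,
        iterations=100,
    )

    DPOptimizer, lot_loader, batch_loader = privacy_manager.privatize(noise_multiplier=1.0)
    optimizer = DPOptimizer(net.parameters(), lr=0.05)  # `net`, `criterion` defined elsewhere

    # Assumed structure: iterate over sampled lots, then over batches within each lot.
    for X_lot, y_lot in lot_loader(optimizer):
        for X_batch, y_batch in batch_loader(TensorDataset(X_lot, y_lot)):
            optimizer.zero_grad()
            loss = criterion(net(X_batch), y_batch)
            loss.backward()
            optimizer.step()

    print(accountant.get_epsilon(delta=1e-5))  # spent privacy budget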

2.1.3.1.1.7. aijack.defense.dp.manager.dpoptimizer module#

aijack.defense.dp.manager.dpoptimizer.attach_dpoptimizer(cls, accountant, l2_norm_clip, noise_multiplier, lot_size, batch_size, dataset_size, smoothing=False, smoothing_radius=10.0)[source]#

Wraps the given optimizer class in DPOptimizerWrapper.

Parameters
  • cls – optimizer class to be wrapped in DPOptimizerWrapper

  • accountant (BaseMomentAccountant) – moment accountant

  • l2_norm_clip (float) – upper bound of l2-norm

  • noise_multiplier (float) – scale for added noise

  • lot_size (int) – sampled lot size

  • batch_size (int) – batch size

  • dataset_size (int) – total number of samples in the dataset

  • smoothing (bool) – if True, apply the smoothing proposed in Wang, Wenxiao, et al. “DPlis: Boosting utility of differentially private deep learning via randomized smoothing.” arXiv preprint arXiv:2103.01496 (2021). (default=False)

  • smoothing_radius (float) – radius of smoothing (default=10.0)

Raises
  • ValueError – if noise_multiplier < 0.0

  • ValueError – if l2_norm_clip < 0

Returns

the given optimizer class wrapped in DPOptimizerWrapper

Return type

cls
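
A minimal sketch of wrapping torch.optim.SGD; the base optimizer, the hyperparameter values, and the existence of net are illustrative assumptions.

    import torch.optim as optim

    from aijack.defense.dp.manager.accountant import GeneralMomentAccountant
    from aijack.defense.dp.manager.dpoptimizer import attach_dpoptimizer

    accountant = GeneralMomentAccountant(noise_type="Gaussian")

    # Wrap torch.optim.SGD so that each lot is clipped, noised, and accounted for.
    DPSGD = attach_dpoptimizer(
        optim.SGD,
        accountant,
        l2_norm_clip=1.0,
        noise_multiplier=1.0,
        lot_size=64,
        batch_size=16,
        dataset_size=60000,
    )
    optimizer = DPSGD(net.parameters(), lr=0.05)  # `net` defined elsewhere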

2.1.3.1.1.8. aijack.defense.dp.manager.rdp module#

aijack.defense.dp.manager.rdp.calc_first_term_of_general_upper_bound_of_rdp(alpha, sampling_rate)[source]#
aijack.defense.dp.manager.rdp.calc_general_upperbound_of_rdp_with_theorem5_of_zhu_2019(alpha, params, sampling_rate, _eps)[source]#
aijack.defense.dp.manager.rdp.calc_tightupperbound_lowerbound_of_rdp_with_theorem6and8_of_zhu_2019(alpha, params, sampling_rate, _eps)[source]#
aijack.defense.dp.manager.rdp.calc_tightupperbound_lowerbound_of_rdp_with_theorem6and8_of_zhu_2019_with_tau_estimation(alpha, params, sampling_rate, _eps, tau=10)[source]#
aijack.defense.dp.manager.rdp.calc_upperbound_of_rdp_with_Sampled_Gaussian_Mechanism(alpha, params, sampling_rate, _eps)[source]#

Compute log(A_alpha) for any positive finite alpha.

aijack.defense.dp.manager.rdp.calc_upperbound_of_rdp_with_Sampled_Gaussian_Mechanism_float(alpha, params, sampling_rate)[source]#

Compute log(A_alpha) for fractional alpha, valid for sampling rates 0 < q < 1.

aijack.defense.dp.manager.rdp.calc_upperbound_of_rdp_with_Sampled_Gaussian_Mechanism_int(alpha, params, sampling_rate)[source]#

Compute log(A_alpha) for integer alpha, following the numerically stable computation of Section 3.3 in “Renyi Differential Privacy of the Sampled Gaussian Mechanism”.

aijack.defense.dp.manager.rdp.calc_upperbound_of_rdp_with_theorem27_of_wang_2019(alpha, params, sampling_rate, _eps)[source]#
aijack.defense.dp.manager.rdp.eps_gaussian(alpha, params)[source]#
aijack.defense.dp.manager.rdp.eps_laplace(alpha, params)[source]#
aijack.defense.dp.manager.rdp.eps_randresp(alpha, params)[source]#
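
The eps_* helpers presumably return the RDP guarantee epsilon(alpha) of the corresponding base mechanisms; for the Gaussian mechanism with sensitivity 1 and noise scale sigma, the standard closed form is alpha / (2 * sigma**2). A hedged check, assuming the params dict stores the noise scale under the key “sigma”:

    from aijack.defense.dp.manager.rdp import eps_gaussian

    alpha, sigma = 8, 2.0
    # Standard closed form for the Gaussian mechanism: alpha / (2 * sigma ** 2)
    print(eps_gaussian(alpha, {"sigma": sigma}), alpha / (2 * sigma ** 2))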

2.1.3.1.1.9. aijack.defense.dp.manager.utils module#

2.1.3.1.1.10. Module contents#