layers package#

Submodules#

layers.cbn module#

class layers.cbn.ConditionalBatchNorm(num_features: int, num_classes: int)[source]#

Bases: Module

__init__(num_features: int, num_classes: int) None[source]#

1D conditional batch normalization (CBN) layer (Dumoulin et al., 2016; De Vries et al., 2017).

Parameters:
  • num_features (int) – Number of input features.

  • num_classes (int) – Number of classes (i.e., distinct labels).
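
The cited papers realize CBN as standard batch normalization whose affine parameters gamma and beta are selected per class label, typically via an embedding table. A minimal conceptual sketch of that idea (an illustration of the technique, not this module's actual source):

    import torch
    from torch import nn

    class CBNSketch(nn.Module):
        """Batch normalization whose gamma/beta are looked up per class."""

        def __init__(self, num_features: int, num_classes: int) -> None:
            super().__init__()
            # Normalize without learnable affine parameters ...
            self.bn = nn.BatchNorm1d(num_features, affine=False)
            # ... and learn one (gamma, beta) pair per class instead.
            self.embed = nn.Embedding(num_classes, num_features * 2)
            self.embed.weight.data[:, :num_features].fill_(1.0)  # gamma starts at 1
            self.embed.weight.data[:, num_features:].zero_()     # beta starts at 0

        def forward(self, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
            gamma, beta = self.embed(y).chunk(2, dim=1)
            return gamma * self.bn(x) + beta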

forward(x: Tensor, y: Tensor) Tensor[source]#

Perform CBN given a batch of labels.

Parameters:
  • x (torch.Tensor) – Tensor on which to perform CBN.

  • y (torch.Tensor) – A batch of labels.

Returns:

Conditionally batch normalized input.

Return type:

torch.Tensor
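
A minimal usage sketch, assuming labels are integer-encoded; the sizes below are illustrative:

    import torch
    from layers.cbn import ConditionalBatchNorm

    cbn = ConditionalBatchNorm(num_features=64, num_classes=5)

    x = torch.randn(128, 64)         # batch of 128 cells with 64 features
    y = torch.randint(0, 5, (128,))  # one class label per cell

    out = cbn(x, y)                  # same shape as x, normalized per class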

layers.lsn module#

class layers.lsn.LSN(library_size: int, device: str | None = 'cpu')[source]#

Bases: Module

__init__(library_size: int, device: str | None = 'cpu') None[source]#

Library size normalization (LSN) layer.

Parameters:
  • library_size (int) – Total number of counts per generated cell.

  • device (Optional[str], optional) – Specifies whether to run on ‘cpu’ or ‘cuda’. Only ‘cuda’ is supported for training the GAN, but ‘cpu’ can be used for inference. By default “cuda” if torch.cuda.is_available() else “cpu”.

forward(in_: Tensor, reuse_scale: bool | None = False) Tensor[source]#

Perform a forward pass of the LSN layer.

Parameters:
  • in_ (torch.Tensor) – Tensor containing the gene expression of cells.

  • reuse_scale (Optional[bool], optional) – If set to True, the LSN layer scales the cells by the same factors as the previous batch, which is useful for performing perturbation studies. By default False.

Returns:

Gene expression of cells after library size normalization.

Return type:

torch.Tensor
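
A minimal usage sketch of the forward pass, including reuse_scale for a perturbation study; the shapes, values, and the knockout below are illustrative:

    import torch
    from layers.lsn import LSN

    lsn = LSN(library_size=20000, device="cpu")

    expr = torch.rand(128, 1000)  # 128 cells by 1000 genes, non-negative values
    normalized = lsn(expr)        # each cell rescaled toward the target library size

    # Perturb the batch (e.g., knock out gene 0) and reuse the previous batch's
    # scaling factors so perturbed and unperturbed cells are directly comparable.
    perturbed = expr.clone()
    perturbed[:, 0] = 0.0
    perturbed_normalized = lsn(perturbed, reuse_scale=True)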

layers.masked_linear module#

class layers.masked_linear.MaskedLinearFunction(*args, **kwargs)[source]#

Bases: Function

Autograd function that element-wise masks its weights with ‘mask’.

static forward(ctx: Tensor, input: Tensor, weight: Tensor, bias: Tensor | None = None, mask: Tensor | None = None) Tensor[source]#

Performs the operation.

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by any number of arguments (tensors or other types).

The context can be used to store arbitrary data that can then be retrieved during the backward pass. Tensors should not be stored directly on ctx (though this is not currently enforced for backward compatibility). Instead, tensors should be saved either with ctx.save_for_backward() if they are intended to be used in backward (equivalently, vjp) or ctx.save_for_forward() if they are intended to be used in jvp.

static backward(ctx: Tensor, grad_output: Tensor)[source]#

Defines a formula for differentiating the operation with backward mode automatic differentiation (alias to the vjp function).

This function is to be overridden by all subclasses.

It must accept a context ctx as the first argument, followed by as many outputs as the forward() returned (None will be passed in for non-tensor outputs of the forward function), and it should return as many tensors as there were inputs to forward(). Each argument is the gradient w.r.t. the given output, and each returned value should be the gradient w.r.t. the corresponding input. If an input is not a Tensor or is a Tensor not requiring grads, you can just pass None as a gradient for that input.

The context can be used to retrieve tensors saved during the forward pass. It also has an attribute ctx.needs_input_grad as a tuple of booleans representing whether each input needs gradient. E.g., backward() will have ctx.needs_input_grad[0] = True if the first input to forward() needs its gradient computed w.r.t. the output.

_backward_cls#

alias of MaskedLinearFunctionBackward
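
Taken together, forward() and backward() implement a linear transformation whose weights, and weight gradients, are element-wise multiplied by a fixed 0/1 mask, so severed connections stay severed during training. A minimal sketch of that pattern (an illustration, not this class's exact source; the mask is assumed here to share the weight's (out_features, in_features) shape):

    import torch
    from torch.autograd import Function

    class MaskedLinearSketch(Function):
        @staticmethod
        def forward(ctx, input, weight, bias=None, mask=None):
            if mask is not None:
                weight = weight * mask  # zero out severed connections
            output = input.mm(weight.t())
            if bias is not None:
                output = output + bias
            ctx.save_for_backward(input, weight, bias, mask)
            return output

        @staticmethod
        def backward(ctx, grad_output):
            input, weight, bias, mask = ctx.saved_tensors
            grad_input = grad_weight = grad_bias = None
            if ctx.needs_input_grad[0]:
                grad_input = grad_output.mm(weight)
            if ctx.needs_input_grad[1]:
                grad_weight = grad_output.t().mm(input)
                if mask is not None:
                    grad_weight = grad_weight * mask  # masked weights stay at zero
            if bias is not None and ctx.needs_input_grad[2]:
                grad_bias = grad_output.sum(0)
            # One return value per forward() input; the mask needs no gradient.
            return grad_input, grad_weight, grad_bias, None

    # Usage: out = MaskedLinearSketch.apply(x, weight, bias, mask)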

class layers.masked_linear.MaskedLinear(mask: Tensor, bias: bool = True, device: str | None = 'cpu')[source]#

Bases: Module

__init__(mask: Tensor, bias: bool = True, device: str | None = 'cpu')[source]#

An extension of PyTorch’s linear module based on the following thread: https://discuss.pytorch.org/t/custom-connections-in-neural-network-layers/3027/13

Parameters:
  • mask (torch.Tensor) –

    Mask tensor with shape (n_input_feature, n_output_feature). Each element is 0 or 1, declaring the corresponding connection absent or present.

    Example: the following mask declares a 4-dim from-layer and a 3-dim to-layer. Neurons 0, 2, and 3 of the from-layer are connected to neurons 0 and 2 of the to-layer; neuron 1 of the from-layer is connected to neuron 1 of the to-layer.

    mask = torch.tensor([[1, 0, 1],
                         [0, 1, 0],
                         [1, 0, 1],
                         [1, 0, 1]])

  • bias (bool, optional) – Whether to include a learnable additive bias, by default True.

  • device (Optional[str], optional) – Specifies whether to run on ‘cpu’ or ‘cuda’. Only ‘cuda’ is supported for training the GAN, but ‘cpu’ can be used for inference. By default “cuda” if torch.cuda.is_available() else “cpu”.

reapply_mask()[source]#

Function to be called after the weights have been initialized (e.g., using torch.nn.init) to reapply the mask to the weights.
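
A usage sketch combining construction, re-initialization, and reapply_mask(), using the connectivity from the example above. The weight attribute name and the mask dtype are assumptions, standard for linear modules but not documented here:

    import torch
    from torch import nn
    from layers.masked_linear import MaskedLinear

    # 4 input neurons -> 3 output neurons, as in the example above.
    mask = torch.tensor([[1., 0., 1.],
                         [0., 1., 0.],
                         [1., 0., 1.],
                         [1., 0., 1.]])

    layer = MaskedLinear(mask, bias=True, device="cpu")

    # Re-initialize the weights, then zero out the severed connections again.
    nn.init.xavier_uniform_(layer.weight)  # `weight` attribute assumed
    layer.reapply_mask()

    out = layer(torch.randn(8, 4))  # shape (8, 3)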

reset_parameters()[source]#

forward(input: Tensor)[source]#

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

extra_repr()[source]#

Set the extra representation of the module.

To print customized extra information, you should re-implement this method in your own modules. Both single-line and multi-line strings are acceptable.

Module contents#