networks package#
Submodules#
networks.critic module#
- class networks.critic.Critic(x_input: int, critic_layers: List[int])[source]#
Bases:
Module
- __init__(x_input: int, critic_layers: List[int]) None [source]#
Non-conditional Critic’s constructor.
- Parameters:
x_input (int) – The dimension of the input tensor.
critic_layers (List[int]) – List of integers corresponding to the number of neurons at each hidden layer of the critic.
- forward(data: Tensor, *args, **kwargs) Tensor [source]#
Function for completing a forward pass of the critic.
- Parameters:
data (torch.Tensor) – Tensor containing gene expression of (fake/real) cells.
*args – Variable length argument list.
**kwargs – Arbitrary keyword arguments.
- Returns:
1-dimensional tensor containing the critic score for each (fake/real) cell.
- Return type:
torch.Tensor
- static _create_critic_block(input_dim: int, output_dim: int, final_layer: bool | None = False) Sequential [source]#
Function for creating a sequence of operations corresponding to a Critic block: a linear layer followed by a ReLU (the ReLU is omitted in the final block).
- Parameters:
input_dim (int) – The block’s input dimensions.
output_dim (int) – The block’s output dimensions.
final_layer (Optional[bool], optional) – Indicates if the block contains the final layer, by default False.
- Returns:
Sequential container containing the modules.
- Return type:
nn.Sequential
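A minimal usage sketch of the non-conditional critic (the gene count, layer widths, and batch size below are illustrative choices, not package defaults):

```python
import torch

from networks.critic import Critic

# Illustrative sizes: 1,000 genes per cell, two hidden layers of 256 units.
critic = Critic(x_input=1000, critic_layers=[256, 256])

# A batch of 64 (real or generated) expression vectors.
cells = torch.randn(64, 1000)

# Forward pass: a 1-dimensional tensor of critic scores, one per cell.
scores = critic(cells)
```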
- class networks.critic.ConditionalCritic(x_input: int, critic_layers: List[int], num_classes: int)[source]#
Bases:
Critic
- __init__(x_input: int, critic_layers: List[int], num_classes: int) None [source]#
Conditional Critic’s constructor, following the Projection Discriminator (Miyato et al., 2018).
- Parameters:
x_input (int) – The dimension of the input tensor.
critic_layers (List[int]) – List of integers corresponding to the number of neurons at each hidden layer of the critic.
num_classes (int) – Number of clusters.
- forward(data: Tensor, labels: Tensor | None = None, *args, **kwargs) Tensor [source]#
Function for completing a forward pass of the conditional critic.
- Parameters:
data (torch.Tensor) – Tensor containing gene expression of (fake/real) cells.
labels (torch.Tensor) – Tensor containing the labels corresponding to the cells in the data parameter.
*args – Variable length argument list.
**kwargs – Arbitrary keyword arguments.
- Returns:
1-dimensional tensor containing the critic score for each (fake/real) cell.
- Return type:
torch.Tensor
- class networks.critic.ConditionalCriticProj(x_input: int, critic_layers: List[int], num_classes: int)[source]#
Bases:
Critic
- __init__(x_input: int, critic_layers: List[int], num_classes: int) None [source]#
Conditional Critic’s constructor using a modified implementation of the Projection Discriminator (Marouf et al., 2020).
- Parameters:
x_input (int) – The dimension of the input tensor.
critic_layers (List[int]) – List of integers corresponding to the number of neurons at each hidden layer of the critic.
num_classes (int) – Number of clusters.
- forward(data: Tensor, labels: Tensor | None = None, *args, **kwargs) Tensor [source]#
Function for completing a forward pass of the conditional critic.
- Parameters:
data (torch.Tensor) – Tensor containing gene expression of (fake/real) cells.
labels (torch.Tensor) – Tensor containing the labels corresponding to the cells in the data parameter.
*args – Variable length argument list.
**kwargs – Arbitrary keyword arguments.
- Returns:
1-dimensional tensor containing the critic score for each (fake/real) cell.
- Return type:
torch.Tensor
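Both conditional critic variants share the same call signature. A minimal sketch, assuming integer cluster indices as labels (the label encoding and all sizes below are assumptions, not package defaults):

```python
import torch

from networks.critic import ConditionalCritic, ConditionalCriticProj

num_genes, num_clusters = 1000, 8  # illustrative sizes

critic = ConditionalCritic(
    x_input=num_genes, critic_layers=[256, 256], num_classes=num_clusters
)

cells = torch.randn(64, num_genes)              # batch of (fake/real) cells
labels = torch.randint(0, num_clusters, (64,))  # one cluster label per cell

scores = critic(cells, labels)                  # critic score per cell

# ConditionalCriticProj is constructed and called the same way.
critic_proj = ConditionalCriticProj(num_genes, [256, 256], num_clusters)
scores_proj = critic_proj(cells, labels)
```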
networks.generator module#
- class networks.generator.Generator(z_input: int, output_cells_dim: int, gen_layers: List[int], library_size: int | None = None)[source]#
Bases:
Module
- __init__(z_input: int, output_cells_dim: int, gen_layers: List[int], library_size: int | None = None) None [source]#
Non-conditional Generator’s constructor.
- Parameters:
z_input (int) – The dimension of the noise tensor.
output_cells_dim (int) – The dimension of the output cells (number of genes).
gen_layers (List[int]) – List of integers corresponding to the number of neurons at each hidden layer of the generator.
library_size (Optional[int], optional) – Total number of counts per generated cell, by default None.
- forward(noise: Tensor, *args, **kwargs) Tensor [source]#
Function for completing a forward pass of the generator.
- Parameters:
noise (torch.Tensor) – The noise used as input by the generator.
*args – Variable length argument list.
**kwargs – Arbitrary keyword arguments.
- Returns:
The output of the generator (genes of the generated cell).
- Return type:
torch.Tensor
- static _create_generator_block(input_dim: int, output_dim: int, library_size: int | None = None, final_layer: bool | None = False, *args, **kwargs) Sequential [source]#
Function for creating a sequence of operations corresponding to a Generator block: a linear layer, a batchnorm (omitted in the final block), a ReLU, and a library-size normalization (LSN) layer in the final block.
- Parameters:
input_dim (int) – The block’s input dimensions.
output_dim (int) – The block’s output dimensions.
library_size (Optional[int], optional) – Total number of counts per generated cell, by default None.
final_layer (Optional[bool], optional) – Indicates if the block contains the final layer, by default False.
*args – Variable length argument list.
**kwargs – Arbitrary keyword arguments.
- Returns:
Sequential container containing the modules.
- Return type:
nn.Sequential
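A minimal usage sketch of the non-conditional generator (the noise dimension, layer widths, and library size below are illustrative, not package defaults):

```python
import torch

from networks.generator import Generator

num_genes = 1000  # illustrative

generator = Generator(
    z_input=128,
    output_cells_dim=num_genes,
    gen_layers=[256, 512],
    library_size=20000,  # enforced by the LSN layer in the final block
)

noise = torch.randn(64, 128)   # batch of 64 noise vectors
fake_cells = generator(noise)  # generated expression matrix, one row per cell
```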
- class networks.generator.ConditionalGenerator(z_input: int, output_cells_dim: int, num_classes: int, gen_layers: List[int], library_size: int | None = None)[source]#
Bases:
Generator
- __init__(z_input: int, output_cells_dim: int, num_classes: int, gen_layers: List[int], library_size: int | None = None) None [source]#
Conditional Generator’s constructor.
- Parameters:
z_input (int) – The dimension of the noise tensor.
output_cells_dim (int) – The dimension of the output cells (number of genes).
num_classes (int) – Number of clusters.
gen_layers (List[int]) – List of integers corresponding to the number of neurons at each hidden layer of the generator.
library_size (Optional[int], optional) – Total number of counts per generated cell, by default None.
- forward(noise: Tensor, labels: Tensor | None = None, *args, **kwargs) Tensor [source]#
Function for completing a forward pass of the generator.
- Parameters:
noise (torch.Tensor) – The noise used as input by the generator.
labels (torch.Tensor) – Tensor containing labels corresponding to cells to generate.
*args – Variable length argument list.
**kwargs – Arbitrary keyword arguments.
- Returns:
The output of the generator (genes of the generated cell).
- Return type:
torch.Tensor
- static _create_generator_block(input_dim: int, output_dim: int, library_size: int | None = None, final_layer: bool | None = False, num_classes: int | None = None, *args, **kwargs) Sequential | tuple [source]#
Function for creating a sequence of operations corresponding to a Conditional Generator block: a linear layer, a conditional batchnorm (omitted in the final block), a ReLU, and a library-size normalization (LSN) layer in the final block.
- Parameters:
input_dim (int) – The block’s input dimensions.
output_dim (int) – The block’s output dimensions.
library_size (Optional[int], optional) – Total number of counts per generated cell, by default None.
final_layer (Optional[bool], optional) – Indicates if the block contains the final layer, by default False.
num_classes (Optional[int], optional) – Number of clusters, by default None.
*args – Variable length argument list.
**kwargs – Arbitrary keyword arguments.
- Returns:
Sequential container or tuple containing modules.
- Return type:
Union[nn.Sequential, tuple]
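A minimal sketch of the conditional generator, assuming integer cluster indices as labels (sizes and the label encoding are assumptions, not package defaults):

```python
import torch

from networks.generator import ConditionalGenerator

num_genes, num_clusters = 1000, 8  # illustrative

generator = ConditionalGenerator(
    z_input=128,
    output_cells_dim=num_genes,
    num_classes=num_clusters,
    gen_layers=[256, 512],
    library_size=20000,
)

noise = torch.randn(64, 128)
labels = torch.randint(0, num_clusters, (64,))  # cluster to generate for each cell

fake_cells = generator(noise, labels)           # generated expression matrix
```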
networks.labeler module#
- class networks.labeler.Labeler(num_genes: int, num_tfs: int, labeler_layers: List[int])[source]#
Bases:
Module
- __init__(num_genes: int, num_tfs: int, labeler_layers: List[int]) None [source]#
Labeler network’s constructor.
- Parameters:
num_genes (int) – Number of target genes (all genes excluding TFs) in the dataset.
num_tfs (int) – Number of transcription factors in the dataset.
labeler_layers (List[int]) – List of integers corresponding to the number of neurons at each hidden layer of the labeler.
- forward(target_genes: Tensor) Tensor [source]#
Function for completing a forward pass of the labeler. This network performs a regression by predicting TF expression from target gene expression.
- Parameters:
target_genes (torch.Tensor) – Tensor containing target gene expression of (fake/real) cells.
- Returns:
Tensor containing the predicted expression of the regulatory TFs.
- Return type:
torch.Tensor
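A minimal usage sketch of the labeler (the TF/target-gene split and layer widths below are illustrative, not package defaults):

```python
import torch

from networks.labeler import Labeler

num_tfs, num_targets = 100, 900  # illustrative split of a 1,000-gene dataset

labeler = Labeler(num_genes=num_targets, num_tfs=num_tfs, labeler_layers=[256, 256])

target_expr = torch.randn(64, num_targets)  # target-gene expression of 64 cells
predicted_tfs = labeler(target_expr)        # predicted TF expression, one row per cell
```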
networks.masked_causal_generator module#
- class networks.masked_causal_generator.CausalGenerator(z_input: int, noise_per_gene: int, depth_per_gene: int, width_scale_per_gene: int, causal_controller: Module, causal_graph: Dict[int, Set[int]], library_size: int | None = None, device: str | None = 'cpu')[source]#
Bases:
Module
- __init__(z_input: int, noise_per_gene: int, depth_per_gene: int, width_scale_per_gene: int, causal_controller: Module, causal_graph: Dict[int, Set[int]], library_size: int | None = None, device: str | None = 'cpu') None [source]#
Causal Generator’s constructor.
- Parameters:
z_input (int) – The dimension of the noise tensor.
noise_per_gene (int) – Dimension of the latent space from which the noise vectors used by the target generators are sampled.
depth_per_gene (int) – Depth of the target generator networks.
width_scale_per_gene (int) – The width scale used for the target generator networks. For example, if width_scale_per_gene = 2 and a gene is regulated by 10 TFs and 1 noise vector, the width of that gene’s generator will be 2 * (10 + 1) = 22. Assuming 1,000 target genes, each regulated by 10 TFs and 1 noise vector, the total width of the sparse target generator will be 22,000.
causal_controller (nn.Module) – Causal controller module (retrieved from a checkpoint if pretrained). It is a GAN trained on both genes and TFs, with the LSN layer removed after training. It cannot be trained on TFs alone, since the library size has to be enforced; however, during causal generator training, only the TFs are used.
causal_graph (Dict[int, Set[int]]) – The causal graph is a dictionary representing the TRN to impose, with the format {target gene index: {TF1 index, TF2 index, …}}. The causal graph has to be acyclic and bipartite: a TF cannot be regulated by another TF (a minimal validity check is sketched after this parameter list).
Invalid: {1: {2, 3, {4, 6}}, …} – a regulator (TF) is regulated by another regulator (TF)
Invalid: {1: {2, 3, 4}, 2: {4, 3, 5}, …} – a regulator (TF) is also regulated
Invalid: {4: {2, 3}, 2: {4, 3}} – contains a cycle
Valid causal graph example: {1: {2, 3, 4}, 6: {5, 4, 2}, …}
library_size (Optional[int], optional) – Total number of counts per generated cell, by default None.
device (Optional[str], optional) – Specifies whether to run on ‘cpu’ or ‘cuda’. Only ‘cuda’ is supported for training the GAN, but ‘cpu’ can be used for inference; by default “cuda” if torch.cuda.is_available() else “cpu”.
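The bipartite/acyclic constraint on the causal graph can be checked programmatically. A minimal sketch; the helper below is illustrative and not part of the package:

```python
from typing import Dict, Set

def is_valid_causal_graph(graph: Dict[int, Set[int]]) -> bool:
    """Return True if no gene appears both as a target and as a regulator (TF)."""
    targets = set(graph.keys())
    regulators = set().union(*graph.values()) if graph else set()
    # Disjointness rules out TF-regulating-TF edges and, consequently, cycles.
    return targets.isdisjoint(regulators)

print(is_valid_causal_graph({1: {2, 3, 4}, 6: {5, 4, 2}}))  # True  (valid example above)
print(is_valid_causal_graph({1: {2, 3, 4}, 2: {4, 3, 5}}))  # False (2 is also regulated)
print(is_valid_causal_graph({4: {2, 3}, 2: {4, 3}}))        # False (contains a cycle)
```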
- forward(noise: Tensor, *args, **kwargs) Tensor [source]#
Function for completing a forward pass of the generator. This includes a forward pass of the causal controller to generate TFs. TFs and generated noise are then used to complete a forward pass of the causal generator.
- Parameters:
noise (torch.Tensor) – The noise used as input by the causal controller.
*args – Variable length argument list.
**kwargs – Arbitrary keyword arguments.
- Returns:
The output of the causal generator (gene expression matrix).
- Return type:
torch.Tensor
- _create_generator() None [source]#
Method for creating the Causal Generator’s network. An independent generator could be created for each gene, but a forward pass would then require looping over the per-gene generators individually (since every gene’s expression is needed before the LSN layer), which is very inefficient.
Instead, we create a single large Causal Generator with sparse connections that logically form an independent generator for each gene. This is done by creating three masks:
- input mask: connections between genes and their regulating TFs/noise.
- hidden mask: connections between hidden layers, such that there is no connection between the hidden layers of two different genes’ generators.
- output mask: connections between the hidden layers of each gene’s generator and its expression (before LSN).
The MaskedLinear module is used to mask weights and gradients in linear layers.
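As an illustration of how such masks could be derived from the causal graph, here is a simplified sketch of an input mask. The indexing conventions, the one-output-unit-per-gene simplification, and the noise wiring are assumptions for brevity, not the package’s actual construction:

```python
import torch

def build_input_mask(causal_graph, num_tfs, noise_per_gene):
    """Illustrative input mask with shape (n_input_feature, n_output_feature).

    Inputs are the num_tfs TF expressions followed by one private noise block
    per target gene; each output column stands in for one gene's generator
    (in practice each gene's block has several hidden units, not one).
    """
    targets = sorted(causal_graph)                      # target gene indices
    num_inputs = num_tfs + len(targets) * noise_per_gene
    mask = torch.zeros(num_inputs, len(targets))

    for col, gene in enumerate(targets):
        for tf in causal_graph[gene]:                   # connect regulating TFs only
            mask[tf, col] = 1.0
        start = num_tfs + col * noise_per_gene          # connect the gene's own noise
        mask[start : start + noise_per_gene, col] = 1.0
    return mask

# Two target genes (10 and 11), three TFs (0, 1, 2), one noise dimension per gene.
print(build_input_mask({10: {0, 2}, 11: {1}}, num_tfs=3, noise_per_gene=1))
```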
- _create_generator_block(mask: Tensor, library_size: int | None = None, final_layer: bool | None = False) Sequential [source]#
Method for creating a sequence of operations corresponding to a masked causal generator block: a masked linear layer, a batchnorm (omitted in the final block), and a ReLU.
- Parameters:
mask (torch.Tensor) – Mask Tensor with shape (n_input_feature, n_output_feature).
library_size (Optional[int], optional) – Total number of counts per generated cell, by default None.
final_layer (Optional[bool], optional) – Indicates if the block contains the final layer, by default False.
- Returns:
Sequential container containing the modules.
- Return type:
nn.Sequential
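A minimal sketch of how a masked linear layer of this kind can be implemented; this is an illustration of the idea, not the package’s MaskedLinear code:

```python
import torch
import torch.nn as nn

class MaskedLinearSketch(nn.Linear):
    """Linear layer whose masked weights contribute nothing to the output and,
    because the mask is re-applied in forward, receive zero gradient."""

    def __init__(self, in_features: int, out_features: int, mask: torch.Tensor):
        super().__init__(in_features, out_features)
        # mask has shape (in_features, out_features); store transposed to match weight.
        self.register_buffer("mask", mask.t().contiguous())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.weight * self.mask, self.bias)

# Allow only some of the 3 -> 2 connections.
mask = torch.tensor([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
layer = MaskedLinearSketch(3, 2, mask)
out = layer(torch.randn(5, 3))  # masked connections never influence `out`
```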