Some useful utilities that extend PyTorch functionality.

class InfiniteDl[source]

InfiniteDl(dl)
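
No docstring is exported; judging by the name, InfiniteDl wraps a finite dataloader so that batches can be drawn indefinitely, restarting the underlying iterator whenever it is exhausted. A minimal sketch of the idea (InfiniteDlSketch and its get_batch method are illustrative names, not necessarily the real API):

import torch
from torch.utils.data import DataLoader, TensorDataset

class InfiniteDlSketch:
    def __init__(self, dl):
        self.dl = dl
        self.it = iter(dl)
    def get_batch(self):
        try:
            return next(self.it)
        except StopIteration:
            self.it = iter(self.dl)  # epoch ended; start over
            return next(self.it)

dl = DataLoader(TensorDataset(torch.arange(6.)), batch_size=4)
inf_dl = InfiniteDlSketch(dl)
batches = [inf_dl.get_batch() for _ in range(5)]  # more draws than one epoch holds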

isin[source]

isin(t, ids)

Returns a boolean mask tensor in which True marks the positions of t whose values appear in ids.

import torch
from fastcore.test import test_eq, test_close

t = torch.tensor([[12, 11, 0, 0],
                  [9, 1, 5, 0]])
mask = isin(t, [0, 1])
test_eq(mask, torch.tensor([[0, 0, 1, 1],
                            [0, 1, 0, 1]]).bool())
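
For reference, PyTorch 1.10+ ships a built-in torch.isin with the same semantics (this is standard PyTorch, not part of these utils):

mask2 = torch.isin(t, torch.tensor([0, 1]))
test_eq(mask2, mask)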

get_src_mask[source]

get_src_mask(cap_len, max_seq_len, device='cpu')

cap_len is a tensor of caption lengths with shape (bs,); max_seq_len is an int. Returns a (bs, max_seq_len) BoolTensor where True marks the positions at or beyond each caption's length, i.e. the padding positions to be masked out.

cap_len = torch.tensor([2, 1, 3])
max_seq_len = 5
src_mask = get_src_mask(cap_len, max_seq_len)
test_eq(src_mask, torch.tensor([[False, False,  True,  True,  True],
                                [False,  True,  True,  True,  True],
                                [False, False, False,  True,  True]]))
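
Such a mask can be built with a single broadcast comparison. A plausible implementation sketch (not necessarily the library's exact code):

def src_mask_sketch(cap_len, max_seq_len, device='cpu'):
    pos = torch.arange(max_seq_len, device=device)       # (max_seq_len,)
    return pos[None, :] >= cap_len.to(device)[:, None]   # (bs, max_seq_len), True = padding

test_eq(src_mask_sketch(cap_len, max_seq_len), src_mask)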

class Normalizer[source]

Normalizer(device='cpu')

Normalizes an input image from the [0, 255] range to [-1, 1]; decode reverses the mapping back to [0, 255].

normalizer = Normalizer()
img = torch.randint(0, 255, (2, 3, 16, 16))
img_encoded = normalizer.encode(img)
img_decoded = normalizer.decode(img_encoded)
test_close(img, img_decoded, eps=2)

# test that the encoded img is in range [-1, 1]
test_eq(((img_encoded >= -1) & (img_encoded <= 1)).all().item(), True)
# test that the decoded img is in range [0, 255]
test_eq(((img_decoded >= 0) & (img_decoded <= 255)).all().item(), True)
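
The tests above are consistent with a simple affine rescaling. A sketch of such an encode/decode pair (an assumption; the real Normalizer may differ, e.g. in dtype or device handling):

class NormalizerSketch:
    def encode(self, img): return img.float() / 127.5 - 1.0  # [0, 255] -> [-1, 1]
    def decode(self, x):   return (x + 1.0) * 127.5          # [-1, 1] -> [0, 255]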

to_device[source]

to_device(tensors, device='cpu')
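
No docstring is exported; presumably this moves a tensor, or each tensor in a collection, to the given device. A usage sketch under that assumption:

x, y = torch.zeros(3), torch.ones(3)
x, y = to_device((x, y), device='cpu')  # assumption: collections are handled element-wise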

detach[source]

detach(tensors, is_to_cpu=False)
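
Presumably mirrors Tensor.detach across a tensor or collection of tensors, with is_to_cpu additionally moving the results to the CPU. A usage sketch under that assumption:

x = torch.randn(3, requires_grad=True) * 2  # a non-leaf tensor in the autograd graph
x_det = detach(x, is_to_cpu=True)           # assumption: detached from the graph and moved to CPU
assert not x_det.requires_grad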

is_models_equal[source]

is_models_equal(model_1, model_2)
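
Presumably reports whether two models carry identical parameters (e.g. by comparing state dicts). A usage sketch under that assumption:

import torch.nn as nn

m1, m2 = nn.Linear(4, 2), nn.Linear(4, 2)  # independently initialized, so unequal
m2.load_state_dict(m1.state_dict())        # copy weights across
assert is_models_equal(m1, m2)             # assumption: True once parameters match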

class MultiWrapper[source]

MultiWrapper(layer, n_returns=1) :: Module

Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing you to nest them in a tree structure. You can assign the submodules as regular attributes:

import torch.nn as nn
import torch.nn.functional as F

class Model(nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.conv1 = nn.Conv2d(1, 20, 5)
        self.conv2 = nn.Conv2d(20, 20, 5)

    def forward(self, x):
        x = F.relu(self.conv1(x))
        return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call .to(), etc.

class MultiSequential[source]

MultiSequential(*args:Any) :: Sequential

A sequential container. Modules will be added to it in the order they are passed in the constructor. Alternatively, an ordered dict of modules can also be passed in.

To make it easier to understand, here is a small example:

import torch.nn as nn
from collections import OrderedDict

# Example of using Sequential
model = nn.Sequential(
          nn.Conv2d(1,20,5),
          nn.ReLU(),
          nn.Conv2d(20,64,5),
          nn.ReLU()
        )

# Example of using Sequential with OrderedDict
model = nn.Sequential(OrderedDict([
          ('conv1', nn.Conv2d(1,20,5)),
          ('relu1', nn.ReLU()),
          ('conv2', nn.Conv2d(20,64,5)),
          ('relu2', nn.ReLU())
        ]))

class IdentityModule[source]

IdentityModule() :: Module

A pass-through module: its forward returns the input unchanged.

# noise_gen is defined elsewhere in the notebook; a standard normal
# distribution, for example, reproduces the shape test below:
noise_gen = torch.distributions.Normal(0., 1.)
noise = noise_gen.sample((2, 100))
test_eq(noise.shape, (2, 100))