bob.ip.binseg.models.make_layers

Functions

  • conv_with_kaiming_uniform(in_channels, …)

  • convtrans_with_kaiming_uniform(in_channels, …)

  • icnr(x[, scale, init]) – ICNR init of x (https://docs.fast.ai/layers.html#PixelShuffle_ICNR)

  • ifnone(a, b) – a if a is not None, otherwise b.

Classes

  • PixelShuffle_ICNR(ni, nf, scale) – see https://docs.fast.ai/layers.html#PixelShuffle_ICNR

  • UnetBlock(up_in_c, x_in_c[, pixel_shuffle, …])

  • UpsampleCropBlock(in_channels, out_channels, …) – Combines Conv2d, ConvTranspose2d and cropping.

bob.ip.binseg.models.make_layers.conv_with_kaiming_uniform(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1)[source]
bob.ip.binseg.models.make_layers.convtrans_with_kaiming_uniform(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1)[source]
class bob.ip.binseg.models.make_layers.UpsampleCropBlock(in_channels, out_channels, up_kernel_size, up_stride, up_padding, pixelshuffle=False)[source]

Bases: torch.nn.modules.module.Module

Combines Conv2d, ConvTranspose2d and cropping. Simulates the caffe2 crop layer in the forward function.

Used for DRIU and HED.

Parameters
  • in_channels (int) – number of channels of intermediate layer

  • out_channels (int) – number of output channels

  • up_kernel_size (int) – kernel size for transposed convolution

  • up_stride (int) – stride for transposed convolution

  • up_padding (int) – padding for transposed convolution

forward(x, input_res)[source]

Forward pass of UpsampleCropBlock.

Upsampled feature maps are cropped to the resolution of the input image.

Parameters
  • x (torch.Tensor) – input feature map

  • input_res (tuple) – resolution of the input image, as (height, width)

training: bool
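The cropping step can be illustrated without the full module. Below is a minimal numpy sketch of the caffe2-style crop the forward pass simulates: the transposed convolution typically overshoots the target resolution, and the result is cropped back to the input image size. The function name and the assumption of a centered crop are illustrative; the exact offsets in bob.ip.binseg may differ.

```python
import numpy as np

def center_crop(x, output_hw):
    """Center-crop a (C, H, W) feature map to (C, h, w).

    Sketch of the cropping that UpsampleCropBlock.forward applies
    after the transposed convolution (offsets are an assumption).
    """
    _, H, W = x.shape
    h, w = output_hw
    top = (H - h) // 2
    left = (W - w) // 2
    return x[:, top:top + h, left:left + w]

# a transposed convolution usually produces a slightly larger map
upsampled = np.zeros((16, 38, 38))
cropped = center_crop(upsampled, (32, 32))
print(cropped.shape)  # (16, 32, 32)
```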
bob.ip.binseg.models.make_layers.ifnone(a, b)[source]

a if a is not None, otherwise b.
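The behaviour (borrowed from fastai) can be reproduced in one line; note that falsy values other than None are kept:

```python
def ifnone(a, b):
    """Return a if a is not None, otherwise b (fastai-style helper)."""
    return b if a is None else a

print(ifnone(None, 3))  # 3
print(ifnone(0, 3))     # 0 -- only None triggers the fallback
```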

bob.ip.binseg.models.make_layers.icnr(x, scale=2, init=torch.nn.init.kaiming_normal_)[source]

https://docs.fast.ai/layers.html#PixelShuffle_ICNR

ICNR init of x, with scale and init function.
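The idea behind ICNR is to initialize a sub-pixel convolution so that, before training, PixelShuffle behaves like nearest-neighbour upsampling: only out_channels / scale² independent filters are drawn, and each is repeated scale² times. The numpy sketch below captures that idea; the exact channel ordering and the in-place convention are assumptions, not the bob.ip.binseg implementation.

```python
import numpy as np

def icnr_(weight, scale=2, init=lambda w: np.random.randn(*w.shape)):
    """ICNR-style fill of `weight` with shape (out_ch, in_ch, k, k).

    Sketch only: draws out_ch // scale**2 filters with `init`, then
    repeats each one scale**2 times so a following PixelShuffle
    initially acts like nearest-neighbour upsampling.
    """
    out_ch, in_ch, kh, kw = weight.shape
    sub = init(np.empty((out_ch // scale**2, in_ch, kh, kw)))
    weight[:] = np.repeat(sub, scale**2, axis=0)
    return weight

w = np.empty((16, 8, 3, 3))   # 16 output channels = 4 filters * scale**2
icnr_(w, scale=2)
# each group of scale**2 consecutive filters is identical
print(np.allclose(w[0], w[3]))  # True
```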

class bob.ip.binseg.models.make_layers.PixelShuffle_ICNR(ni: int, nf: int = None, scale: int = 2)[source]

Bases: torch.nn.modules.module.Module

https://docs.fast.ai/layers.html#PixelShuffle_ICNR

Upsamples by scale from ni filters to nf (default: ni), using torch.nn.PixelShuffle, ICNR initialization, and weight_norm.

forward(x)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.

training: bool
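The sub-pixel rearrangement this class relies on can be written out directly. The sketch below reproduces what torch.nn.PixelShuffle does for a single (C·r², H, W) sample, turning channel groups into an r-times larger spatial grid; it is an illustration, not the class's own code path.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Rearrange (C*r*r, H, W) into (C, H*r, W*r).

    Numpy sketch of the sub-pixel step inside PixelShuffle_ICNR:
    output[c, h*r + i, w*r + j] = input[c*r*r + i*r + j, h, w].
    """
    c_r2, H, W = x.shape
    C = c_r2 // (r * r)
    x = x.reshape(C, r, r, H, W)
    x = x.transpose(0, 3, 1, 4, 2)   # -> (C, H, r, W, r)
    return x.reshape(C, H * r, W * r)

x = np.arange(8 * 3 * 3, dtype=float).reshape(8, 3, 3)
y = pixel_shuffle(x, 2)
print(y.shape)  # (2, 6, 6)
```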
class bob.ip.binseg.models.make_layers.UnetBlock(up_in_c, x_in_c, pixel_shuffle=False, middle_block=False)[source]

Bases: torch.nn.modules.module.Module

training: bool
forward(up_in, x_in)[source]

Defines the computation performed at every call.

Should be overridden by all subclasses.

Note

Although the recipe for forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this since the former takes care of running the registered hooks while the latter silently ignores them.
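At the shape level, a U-Net decoder block of this kind upsamples the deeper feature map, concatenates it with the encoder skip connection, and then convolves the merged channels. The numpy sketch below shows only that shape arithmetic; the nearest-neighbour upsampling and the function name are stand-ins (the real block can use PixelShuffle_ICNR or a transposed convolution, per the pixel_shuffle argument).

```python
import numpy as np

def unet_merge(up_in, x_in):
    """Shape-level sketch of a UnetBlock-style forward step.

    Upsamples the deeper map 2x (nearest-neighbour here, as a
    stand-in) and concatenates the encoder skip along channels;
    a convolution would then mix the merged channels.
    """
    up = up_in.repeat(2, axis=1).repeat(2, axis=2)   # (C1, 2H, 2W)
    return np.concatenate([up, x_in], axis=0)        # (C1 + C2, 2H, 2W)

up_in = np.zeros((64, 16, 16))   # deeper decoder features
x_in = np.zeros((32, 32, 32))    # encoder skip connection
merged = unet_merge(up_in, x_in)
print(merged.shape)  # (96, 32, 32)
```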