Building Blocks
trans_conv_norm_relu
[source]
trans_conv_norm_relu(n_in, n_out, norm_layer, bias, ks=3, stride=2)
Transpose convolutional layer
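A minimal sketch of what a transpose-conv block like this looks like, assuming the `padding=1, output_padding=1` choices visible in the generator's printed structure below (the exact internals of the library's version may differ):

```python
import torch
import torch.nn as nn

def trans_conv_norm_relu(n_in, n_out, norm_layer, bias, ks=3, stride=2):
    # Transpose convolution followed by normalization and ReLU.
    # padding=1 and output_padding=1 give exactly 2x upsampling at stride=2.
    return nn.Sequential(
        nn.ConvTranspose2d(n_in, n_out, ks, stride=stride,
                           padding=1, output_padding=1, bias=bias),
        norm_layer(n_out),
        nn.ReLU())

up = trans_conv_norm_relu(8, 4, nn.InstanceNorm2d, bias=False)
print(up(torch.randn(2, 8, 16, 16)).shape)  # torch.Size([2, 4, 32, 32])
```

At `stride=2` the spatial size doubles, which is what the generator relies on to undo its two downsampling convolutions.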
pad_conv_norm_relu
[source]
pad_conv_norm_relu(n_in, n_out, norm_layer, padding_mode='zeros', pad=1, ks=3, stride=1, activ=True, bias=True)
Convolutional layer with the ability to specify different padding modes
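A sketch of the idea, assuming that `'reflection'` padding is applied as an explicit `ReflectionPad2d` layer before the convolution (consistent with the `ReflectionPad2d` layers in the printed generator below), while other modes are handled by `Conv2d` itself:

```python
import torch
import torch.nn as nn

def pad_conv_norm_relu(n_in, n_out, norm_layer, padding_mode='zeros',
                       pad=1, ks=3, stride=1, activ=True, bias=True):
    layers = []
    if padding_mode == 'reflection':
        # Pad explicitly, then let the conv run with no internal padding
        layers.append(nn.ReflectionPad2d(pad))
        pad = 0
    layers.append(nn.Conv2d(n_in, n_out, ks, stride=stride,
                            padding=pad, bias=bias))
    layers.append(norm_layer(n_out))
    if activ:
        layers.append(nn.ReLU(inplace=True))
    return nn.Sequential(*layers)

block = pad_conv_norm_relu(3, 16, nn.InstanceNorm2d, padding_mode='reflection')
print(block(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 16, 32, 32])
```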
conv_norm_relu
[source]
conv_norm_relu(n_in, n_out, norm_layer=None, ks=3, bias:bool=True, pad=1, stride=1, activ=True, a=0.2)
Convolutional layer
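A minimal sketch of this block, under the assumption that `a` is the `LeakyReLU` negative slope (the 0.2 default matches the `LeakyReLU(negative_slope=0.2)` layers in the discriminator output below):

```python
import torch
import torch.nn as nn

def conv_norm_relu(n_in, n_out, norm_layer=None, ks=3, bias=True,
                   pad=1, stride=1, activ=True, a=0.2):
    # Conv2d -> optional normalization -> optional LeakyReLU(a)
    layers = [nn.Conv2d(n_in, n_out, ks, stride=stride,
                        padding=pad, bias=bias)]
    if norm_layer is not None:
        layers.append(norm_layer(n_out))
    if activ:
        layers.append(nn.LeakyReLU(a, inplace=True))
    return nn.Sequential(*layers)

down = conv_norm_relu(3, 64, nn.InstanceNorm2d, stride=2)
print(down(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 64, 32, 32])
```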
class ResBlock
[source]
ResBlock(dim, padding_mode, bias, dropout, norm_layer='InstanceNorm2d') :: Module
Base class for all neural network modules.

Your models should also subclass this class.

Modules can also contain other Modules, allowing them to be nested in a tree structure. You can assign the submodules as regular attributes:

    import torch.nn as nn
    import torch.nn.functional as F

    class Model(nn.Module):
        def __init__(self):
            super(Model, self).__init__()
            self.conv1 = nn.Conv2d(1, 20, 5)
            self.conv2 = nn.Conv2d(20, 20, 5)

        def forward(self, x):
            x = F.relu(self.conv1(x))
            return F.relu(self.conv2(x))

Submodules assigned in this way will be registered, and will have their parameters converted too when you call :meth:`to`, etc.
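A simplified version of the residual block, reconstructed from the `ResBlock` entries in the printed generator below (two reflection-padded 3x3 convs with instance norm) with the padding mode and dropout options omitted for brevity:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    # Two pad-conv-norm stages plus an identity skip connection.
    def __init__(self, dim, norm_layer=nn.InstanceNorm2d):
        super().__init__()
        self.xb = nn.Sequential(
            nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3),
            norm_layer(dim), nn.ReLU(inplace=True),
            nn.ReflectionPad2d(1), nn.Conv2d(dim, dim, 3),
            norm_layer(dim))

    def forward(self, x):
        # The block preserves shape, so the skip connection is a plain add
        return x + self.xb(x)

blk = ResBlock(256)
print(blk(torch.randn(1, 256, 16, 16)).shape)  # torch.Size([1, 256, 16, 16])
```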
Generator
generator
[source]
generator(n_in, n_out, n_f=64, norm_layer=None, dropout=0.0, n_blocks=6, pad_mode='reflection')
Generator that maps an input from one domain to the other
generator(3,10)
Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), padding_mode=reflection) ) (2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): AutoConv( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) ) (5): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (6): ReLU(inplace=True) (7): AutoConv( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) ) (8): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (9): ReLU(inplace=True) (10): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (11): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (12): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): 
InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (13): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (14): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (15): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, 
track_running_stats=False) ) ) (16): AutoTransConv( (conv): ConvTranspose2d(256, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1)) ) (17): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (18): ReLU() (19): AutoTransConv( (conv): ConvTranspose2d(128, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1)) ) (20): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (21): ReLU() (22): ReflectionPad2d((3, 3, 3, 3)) (23): Conv2d(64, 10, kernel_size=(7, 7), stride=(1, 1)) (24): Tanh() )
Discriminator
discriminator
[source]
discriminator(c_in, n_f, n_layers, norm_layer=None, sigmoid=False)
Discriminator to classify an input as belonging to one class or the other
discriminator(3, 6, 3)
Sequential( (0): Conv2d(3, 6, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (1): LeakyReLU(negative_slope=0.2, inplace=True) (2): Conv2d(6, 12, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (3): InstanceNorm2d(12, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (4): LeakyReLU(negative_slope=0.2, inplace=True) (5): Conv2d(12, 24, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (6): InstanceNorm2d(24, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (7): LeakyReLU(negative_slope=0.2, inplace=True) (8): Conv2d(24, 48, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1)) (9): InstanceNorm2d(48, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (10): LeakyReLU(negative_slope=0.2, inplace=True) (11): Conv2d(48, 1, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1)) )
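The printed structure above can be reproduced with a short PatchGAN-style sketch: filters double at each stride-2 layer, the last two convolutions run at stride 1, and the final 1-channel map scores overlapping patches rather than the whole image. This is a reconstruction from the output above, not the library's exact code:

```python
import torch
import torch.nn as nn

def discriminator(c_in, n_f, n_layers, norm_layer=nn.InstanceNorm2d,
                  sigmoid=False):
    # First layer: no normalization, as in the printed structure
    layers = [nn.Conv2d(c_in, n_f, 4, stride=2, padding=1),
              nn.LeakyReLU(0.2, inplace=True)]
    for _ in range(1, n_layers):
        # Stride-2 downsampling blocks that double the channel count
        layers += [nn.Conv2d(n_f, n_f * 2, 4, stride=2, padding=1),
                   norm_layer(n_f * 2),
                   nn.LeakyReLU(0.2, inplace=True)]
        n_f *= 2
    # Two final stride-1 layers; output is a 1-channel patch score map
    layers += [nn.Conv2d(n_f, n_f * 2, 4, stride=1, padding=1),
               norm_layer(n_f * 2),
               nn.LeakyReLU(0.2, inplace=True),
               nn.Conv2d(n_f * 2, 1, 4, stride=1, padding=1)]
    if sigmoid:
        layers.append(nn.Sigmoid())
    return nn.Sequential(*layers)

d = discriminator(3, 6, 3)
print(d(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 1, 6, 6])
```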
Loss and Trainer
class cycleGAN
[source]
cycleGAN(c_in, c_out, n_f=64, disc_layers=3, gen_blocks=6, drop=0.0, norm_layer=None, sigmoid=False) :: Module
class DynamicLoss
[source]
DynamicLoss(loss_fn) :: Module
class CycleGANLoss
[source]
CycleGANLoss(model, loss_fn='mse_loss', la=10.0, lb=10, lid=0.5) :: Module
CycleGAN loss is composed of 3 parts:
- Identity: an image passed through the generator for its own domain should remain unchanged
- Generator: the output images should fool the discriminator into thinking they belong to the target domain
- Cycle consistency: an image mapped to the other domain and then mapped back should resemble the original input
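The three terms for one direction (A to B) can be sketched as follows. The weight names mirror the `la` and `lid` parameters of `CycleGANLoss` above, but how exactly the library combines them is an assumption; `cycle_loss_terms` is a hypothetical helper, not part of the API:

```python
import torch
import torch.nn.functional as F

def cycle_loss_terms(real_a, real_b, gen_a, gen_b, disc_b,
                     la=10.0, lid=0.5):
    fake_b = gen_b(real_a)                         # A -> B
    idt_b = gen_b(real_b)                          # B through its own generator
    cyc_a = gen_a(fake_b)                          # A -> B -> A
    pred = disc_b(fake_b)
    # Identity: B mapped by gen_b should come back unchanged
    identity = lid * la * F.l1_loss(idt_b, real_b)
    # Adversarial: the discriminator should score fake_b as real (1)
    adversarial = F.mse_loss(pred, torch.ones_like(pred))
    # Cycle consistency: the round trip should recover the input
    cycle = la * F.l1_loss(cyc_a, real_a)
    return identity + adversarial + cycle

# Sanity check: with identity generators and a fully fooled discriminator,
# every term vanishes.
ident = lambda t: t
disc = lambda t: torch.ones(1)
a, b = torch.randn(1, 3, 8, 8), torch.randn(1, 3, 8, 8)
print(cycle_loss_terms(a, b, ident, ident, disc).item())  # 0.0
```

The symmetric B-to-A direction contributes the same three terms with the roles of the two generators and discriminators swapped.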
Training
cgan = cycleGAN(3, 3, gen_blocks=9)
learn = get_learner(cgan, loss=CycleGANLoss(cgan))
run = get_runner(learn, [cycleGANTrainer()])
cgan
cycleGAN( (a_discriminator): Sequential( (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (1): LeakyReLU(negative_slope=0.2, inplace=True) (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (4): LeakyReLU(negative_slope=0.2, inplace=True) (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (7): LeakyReLU(negative_slope=0.2, inplace=True) (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1)) (9): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (10): LeakyReLU(negative_slope=0.2, inplace=True) (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1)) ) (b_discriminator): Sequential( (0): Conv2d(3, 64, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (1): LeakyReLU(negative_slope=0.2, inplace=True) (2): Conv2d(64, 128, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (3): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (4): LeakyReLU(negative_slope=0.2, inplace=True) (5): Conv2d(128, 256, kernel_size=(4, 4), stride=(2, 2), padding=(1, 1)) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (7): LeakyReLU(negative_slope=0.2, inplace=True) (8): Conv2d(256, 512, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1)) (9): InstanceNorm2d(512, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (10): LeakyReLU(negative_slope=0.2, inplace=True) (11): Conv2d(512, 1, kernel_size=(4, 4), stride=(1, 1), padding=(1, 1)) ) (generate_a): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), padding_mode=reflection) ) (2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, 
track_running_stats=False) (3): ReLU(inplace=True) (4): AutoConv( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) ) (5): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (6): ReLU(inplace=True) (7): AutoConv( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) ) (8): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (9): ReLU(inplace=True) (10): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (11): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (12): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), 
padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (13): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (14): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (15): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (16): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, 
momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (17): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (18): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (19): AutoTransConv( (conv): ConvTranspose2d(256, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1)) ) (20): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (21): ReLU() (22): AutoTransConv( (conv): ConvTranspose2d(128, 64, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1)) ) (23): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (24): ReLU() (25): ReflectionPad2d((3, 3, 3, 3)) (26): Conv2d(64, 3, kernel_size=(7, 7), stride=(1, 1)) (27): 
Tanh() ) (generate_b): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(1, 1), padding=(3, 3), padding_mode=reflection) ) (2): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): AutoConv( (conv): Conv2d(64, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) ) (5): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (6): ReLU(inplace=True) (7): AutoConv( (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1)) ) (8): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (9): ReLU(inplace=True) (10): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (11): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (12): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) 
) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (13): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (14): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (15): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, 
track_running_stats=False) ) ) (16): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (17): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (18): ResBlock( (xb): Sequential( (0): ReflectionPad2d((0, 0, 0, 0)) (1): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (2): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (3): ReLU(inplace=True) (4): ReflectionPad2d((0, 0, 0, 0)) (5): AutoConv( (conv): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), padding_mode=reflection) ) (6): InstanceNorm2d(256, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) ) ) (19): AutoTransConv( (conv): ConvTranspose2d(256, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), output_padding=(1, 1)) ) (20): InstanceNorm2d(128, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (21): ReLU() (22): AutoTransConv( (conv): ConvTranspose2d(128, 64, kernel_size=(3, 3), 
stride=(2, 2), padding=(1, 1), output_padding=(1, 1)) ) (23): InstanceNorm2d(64, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False) (24): ReLU() (25): ReflectionPad2d((3, 3, 3, 3)) (26): Conv2d(64, 3, kernel_size=(7, 7), stride=(1, 1)) (27): Tanh() ) )