ResNet Series Notes
1. Background & Motivation
The Degradation Problem in Deep Networks
In principle, adding depth should never hurt accuracy: a deeper network can always reproduce a shallower one by letting the extra layers perform an identity mapping. In practice, however, experiments show that once the layer count grows past a certain point, both training and test accuracy drop rapidly.
- This is not overfitting, because the training error also increases.
- It indicates that deeper networks are harder to optimize.
2. Residual Learning
To address the degradation problem, ResNet introduces a residual learning framework. Let the desired underlying mapping be $H(x)$. Instead of having the stacked nonlinear layers fit $H(x)$ directly, we let them fit the residual mapping $F(x) := H(x) - x$.
- The original mapping then becomes $F(x) + x$.
- Hypothesis: optimizing the residual mapping $F(x)$ is easier than optimizing the original, unreferenced mapping $H(x)$. In the extreme case where the identity mapping is optimal, pushing the residual toward zero is easier than fitting an identity mapping with a stack of nonlinear layers.
- Implementation: via "shortcut connections" (also called "skip connections"), which add the input $x$ directly to the output of the stacked layers.
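The $F(x) + x$ idea above can be sketched in a few lines of PyTorch. This is a minimal illustration assuming the input and output of $F$ have the same shape (so the shortcut is a plain addition); the class name `ResidualSketch` and the channel width are hypothetical choices for this example:

```python
import torch
import torch.nn as nn

class ResidualSketch(nn.Module):
    """Minimal residual unit: output = ReLU(F(x) + x)."""
    def __init__(self, channels):
        super().__init__()
        # F(x): two 3x3 convs with BN, preserving the input shape
        self.f = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        # Shortcut connection: add the input directly to the residual branch
        return torch.relu(self.f(x) + x)
```

If the residual branch learns $F(x) \approx 0$, the unit collapses to (roughly) an identity mapping, which is exactly the easy-to-learn fallback the hypothesis relies on.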
3. Network Architectures
ResNet uses two main block structures:
3.1 BasicBlock
- Used in: ResNet-18, ResNet-34
- Structure: two $3 \times 3$ convolutional layers.
- Flow: Input -> Conv(3x3) -> BN -> ReLU -> Conv(3x3) -> BN -> Add Input -> ReLU
3.2 Bottleneck Block
- Used in: ResNet-50, ResNet-101, ResNet-152
- Structure: $1 \times 1$ convolutions reduce and then restore the channel dimension, with a $3 \times 3$ convolution in between.
- Purpose: reduce the parameter count and computation so the network can be made deeper.
- Flow:
  - $1 \times 1$ Conv: reduce dimensions
  - $3 \times 3$ Conv: feature extraction
  - $1 \times 1$ Conv: restore dimensions
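To see why the bottleneck saves parameters, compare the weight count of two stacked $3 \times 3$ convolutions at width 256 against a 256 -> 64 -> 256 bottleneck. The channel numbers here are illustrative (64 follows from the 4x expansion ratio); BN parameters and biases are ignored:

```python
def conv_params(k, c_in, c_out):
    """Number of weights in a k x k convolution (no bias)."""
    return k * k * c_in * c_out

# Two stacked 3x3 convs at width 256 (BasicBlock-style):
basic = 2 * conv_params(3, 256, 256)        # 1,179,648

# Bottleneck: 1x1 reduce to 64, 3x3 at 64, 1x1 expand back to 256:
bottleneck = (conv_params(1, 256, 64)
              + conv_params(3, 64, 64)
              + conv_params(1, 64, 256))    # 69,632

print(basic, bottleneck, round(basic / bottleneck, 1))  # prints: 1179648 69632 16.9
```

Roughly a 17x reduction for the same input/output width, which is what makes 50+ layer networks affordable.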
4. PyTorch Implementation
Below is a standard PyTorch reference implementation of BasicBlock and Bottleneck.
```python
import torch
import torch.nn as nn


def conv3x3(in_planes, out_planes, stride=1):
    """3x3 convolution with padding"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride,
                     padding=1, bias=False)


def conv1x1(in_planes, out_planes, stride=1):
    """1x1 convolution"""
    return nn.Conv2d(in_planes, out_planes, kernel_size=1, stride=stride,
                     bias=False)


class BasicBlock(nn.Module):
    """
    BasicBlock for ResNet-18 and ResNet-34
    Structure: Conv3x3 -> BN -> ReLU -> Conv3x3 -> BN -> Residual Add -> ReLU
    """
    expansion = 1

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(BasicBlock, self).__init__()
        # conv1 handles stride (downsampling)
        self.conv1 = conv3x3(inplanes, planes, stride)
        self.bn1 = nn.BatchNorm2d(planes)
        self.relu = nn.ReLU(inplace=True)
        self.conv2 = conv3x3(planes, planes)
        self.bn2 = nn.BatchNorm2d(planes)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)

        # If input shape doesn't match output shape (stride != 1 or channels
        # change), apply downsample to the identity path
        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)
        return out


class Bottleneck(nn.Module):
    """
    Bottleneck Block for ResNet-50, 101, 152
    Structure: 1x1 (reduce) -> 3x3 -> 1x1 (expand)
    """
    expansion = 4  # Output channels are 4x input planes

    def __init__(self, inplanes, planes, stride=1, downsample=None):
        super(Bottleneck, self).__init__()
        # 1x1 conv to reduce dimensions
        self.conv1 = conv1x1(inplanes, planes)
        self.bn1 = nn.BatchNorm2d(planes)
        # 3x3 conv (carries the stride)
        self.conv2 = conv3x3(planes, planes, stride)
        self.bn2 = nn.BatchNorm2d(planes)
        # 1x1 conv to expand dimensions
        self.conv3 = conv1x1(planes, planes * self.expansion)
        self.bn3 = nn.BatchNorm2d(planes * self.expansion)
        self.relu = nn.ReLU(inplace=True)
        self.downsample = downsample
        self.stride = stride

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)

        if self.downsample is not None:
            identity = self.downsample(x)

        out += identity
        out = self.relu(out)
        return out
```