In convolutional neural networks, the design of convolution filters restricts information flow to local regions, which limits the network's ability to understand complex scenes. PSANet addresses this local-region restriction with a point-wise spatial attention (PSA) module, through which every position on the feature map can build a connection with every other position.

paper: PSANet: Point-wise Spatial Attention Network for Scene Parsing
github: https://github.com/hszhao/semseg
Reproduction repo: https://github.com/justld/PSANet_paddle
The reproduction target is PSANet-ResNet50 at 512x1024 input resolution with mIoU 77.24%; this reproduction reaches an mIoU of 79.94%.
Example prediction results from the reproduced network are shown below:
PSA has three modes: collect, distribute, and bi-direction. Collect and distribute are one-way information flows (collect: information from other positions is aggregated to the current position; distribute: information from the current position is propagated to other positions), while bi-direction is two-way (essentially collect + distribute).
The structure of PSA (bi-direction) is shown in the figure below: the upper branch is the collect branch and the lower branch is the distribute branch. Through the PSA module, every pixel can build connections with all other positions, which enriches the contextual information.
The figure below illustrates how the PSA module works (taking collect as an example; distribute is the reverse):
1. The input feature map [c, h, w] is passed through convolution layers to obtain a feature map of shape [mask_h * mask_w, h, w] (note that the channel count is not necessarily (2h-1)(2w-1); it is configurable, but in what follows treat mask_h as 2h-1 and mask_w as 2w-1);
2. Each embedding in [mask_h * mask_w, h, w] (i.e., the vector of length mask_h * mask_w at each position) is reshaped to [mask_h, mask_w], giving a feature map of shape [h * w, mask_h, mask_w];
3. Suppose an embedding comes from row i, column j of the original feature map. In the new map, build an [h, w] window such that its row i, column j aligns with the center of [mask_h, mask_w], then extract that window; the output feature map has shape [h * w, h, w]. (This step can be hard to follow; reading the source code alongside it helps, and a small sketch is given below.)
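To make step 3 concrete, here is a minimal NumPy sketch of the collect-style cropping, under the assumption mask_h = 2h - 1, mask_w = 2w - 1 and ignoring shrink_factor; the repo implements this as a custom C++/CUDA operator, so the loop below is for illustration only.

# NumPy sketch of the "collect" cropping in step 3 (illustration only)
import numpy as np

h, w = 4, 5
mask_h, mask_w = 2 * h - 1, 2 * w - 1

# stand-in for the conv output of shape [mask_h * mask_w, h, w]
attn = np.random.rand(mask_h * mask_w, h, w).astype('float32')

out = np.zeros((h * w, h, w), dtype='float32')
for i in range(h):
    for j in range(w):
        # the embedding at position (i, j), reshaped to the over-complete attention map
        m = attn[:, i, j].reshape(mask_h, mask_w)
        # crop an [h, w] window so that its row i, column j lands on the
        # center (h - 1, w - 1) of the [mask_h, mask_w] map
        top, left = (h - 1) - i, (w - 1) - j
        out[i * w + j] = m[top:top + h, left:left + w]

print(out.shape)  # (h * w, h, w): one attention map over all positions, per position

Each slice out[i * w + j] is then used as attention weights to aggregate features from every position into position (i, j); distribute works the other way around, spreading the feature at (i, j) to every position.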
The official target for this reproduction appears to be based on the mmsegmentation result: PSANet-ResNet50, input resolution 512x1024, mIoU = 77.24%.
mmsegmentation PSANet reference: https://github.com/open-mmlab/mmsegmentation/tree/master/configs/psanet
# step 1: clone (can be skipped)
# %cd ~/
# !git clone https://gitee.com/dudulang001/PSANet_paddle.git
# %cd PSANet_paddle
# !git pull
# step 2: uninstall paddleseg, to prevent runtime errors later caused by the custom external operator not being registered
## be sure to uninstall paddleseg
!pip uninstall paddleseg
# step 3: unzip data
%cd ~/PSANet_paddle/
!mkdir data
!tar -xf ~/data/data64550/cityscapes.tar -C data/
%cd ~/
# step 4: train
%cd ~/PSANet_paddle
!python train.py --config configs/psanet/psanet_resnet50_os8_cityscapes_1024x512_80k.yml \
--use_vdl --log_iter 10 --save_interval 100 --save_dir output # --do_eval

# step 5: val
%cd ~/PSANet_paddle/
!python val.py \
--config configs/psanet/psanet_resnet50_os8_cityscapes_1024x512_80k.yml \
--model_path ~/model.pdparams
/home/aistudio/PSANet_paddle
Compiling user custom op, it will cost a few seconds.....
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/fluid/layers/utils.py:77: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working
return (isinstance(seq, collections.Sequence) and
2022-04-25 09:04:34 [INFO]
---------------Config Information---------------
batch_size: 8
iters: 80000
loss:
  coef:
  - 1
  - 0.4
  types:
  - type: CrossEntropyLoss
  - type: CrossEntropyLoss
lr_scheduler:
  end_lr: 1.0e-05
  learning_rate: 0.01
  power: 0.9
  type: PolynomialDecay
model:
  align_corners: false
  backbone:
    output_stride: 8
    pretrained: https://bj.bcebos.com/paddleseg/dygraph/resnet50_vd_ssld_v2.tar.gz
    type: ResNet50_vd
  enable_auxiliary_loss: true
  mask_h: 59
  mask_w: 59
  normalization_factor: 1.0
  psa_softmax: true
  psa_type: 2
  shrink_factor: 2
  type: PSANet
  use_psa: true
optimizer:
  momentum: 0.9
  type: sgd
  weight_decay: 4.0e-05
train_dataset:
  dataset_root: data/cityscapes
  mode: train
  transforms:
  - max_scale_factor: 2.0
    min_scale_factor: 0.5
    scale_step_size: 0.25
    type: ResizeStepScaling
  - crop_size:
    - 1024
    - 512
    type: RandomPaddingCrop
  - type: RandomHorizontalFlip
  - brightness_range: 0.4
    contrast_range: 0.4
    saturation_range: 0.4
    type: RandomDistort
  - type: Normalize
  type: Cityscapes
val_dataset:
  dataset_root: data/cityscapes
  mode: val
  transforms:
  - type: Normalize
  type: Cityscapes
------------------------------------------------
W0425 09:04:34.999719 952 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1
W0425 09:04:34.999774 952 device_context.cc:465] device: 0, cuDNN Version: 7.6.
2022-04-25 09:04:39 [INFO] Loading pretrained model from https://bj.bcebos.com/paddleseg/dygraph/resnet50_vd_ssld_v2.tar.gz
2022-04-25 09:04:40 [INFO] There are 275/275 variables loaded into ResNet_vd.
2022-04-25 09:04:40 [INFO] Loading pretrained model from /home/aistudio/model.pdparams
2022-04-25 09:04:40 [INFO] There are 316/316 variables loaded into PSANet.
2022-04-25 09:04:40 [INFO] Loaded trained params of model successfully
2022-04-25 09:04:40 [INFO] Start evaluating (total_samples: 500, total_iters: 500)...
500/500 [==============================] - 143s 287ms/step - batch_cost: 0.2866 - reader cost: 8.4048e-04
2022-04-25 09:07:04 [INFO] [EVAL] #Images: 500 mIoU: 0.7994 Acc: 0.9637 Kappa: 0.9528 Dice: 0.8825
2022-04-25 09:07:04 [INFO] [EVAL] Class IoU:
[0.9839 0.8721 0.9272 0.5406 0.6225 0.6643 0.7219 0.8053 0.9271 0.654
0.9481 0.8321 0.6427 0.9562 0.8628 0.9078 0.863 0.6689 0.7886]
2022-04-25 09:07:04 [INFO] [EVAL] Class Precision:
[0.9934 0.9274 0.9562 0.8691 0.8382 0.8159 0.8432 0.9091 0.9552 0.8596
0.9646 0.8919 0.8184 0.9741 0.9425 0.9614 0.9633 0.8229 0.8825]
2022-04-25 09:07:04 [INFO] [EVAL] Class Recall:
[0.9904 0.936 0.9683 0.5885 0.7075 0.7814 0.8339 0.8758 0.9693 0.7322
0.9823 0.9255 0.7496 0.9811 0.9107 0.9421 0.8923 0.7814 0.881 ]

# step 6: val flip
%cd ~/PSANet_paddle/
!python val.py \
--config configs/psanet/psanet_resnet50_os8_cityscapes_1024x512_80k.yml \
--model_path ~/model.pdparams \
--aug_eval \
--flip_horizontal

# step 7: val ms flip
%cd ~/PSANet_paddle/
!python val.py \
--config configs/psanet/psanet_resnet50_os8_cityscapes_1024x512_80k.yml \
--model_path ~/model.pdparams \
--aug_eval \
--scales 0.75 1.0 1.25 \
--flip_horizontal

# step 8: predict; results are saved under ~/PSANet_paddle/output/result
%cd ~/PSANet_paddle/
!python predict.py \
--config configs/psanet/psanet_resnet50_os8_cityscapes_1024x512_80k.yml \
--model_path ~/model.pdparams \
--image_path data/cityscapes/leftImg8bit/val/frankfurt/frankfurt_000000_000294_leftImg8bit.png \
--save_dir output/result

# view the prediction result
import cv2
import matplotlib.pyplot as plt

image_path = "/home/aistudio/PSANet_paddle/output/result/added_prediction/frankfurt_000000_000294_leftImg8bit.png"
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
plt.show()
<Figure size 432x288 with 1 Axes>
# step 9: export
# better not to export: the custom external operator currently has a bug in static-graph inference,
# issue: https://github.com/PaddlePaddle/Paddle/issues/42068
%cd ~/PSANet_paddle
!python export.py \
--config configs/psanet/psanet_resnet50_os8_cityscapes_1024x512_80k.yml \
--model_path ~/model.pdparams \
--save_dir output --input_shape 1 3 512 1024

# step 10: infer
# static-graph inference, currently buggy; see the issue in the previous step
%cd ~/PSANet_paddle
!python deploy/python/infer.py \
--config output/deploy.yaml \
--image_path ~/test.png \
--save_dir output/infer/

## static-graph prediction is abnormal
import cv2
import matplotlib.pyplot as plt

image_path = "/home/aistudio/PSANet_paddle/output/infer/test.png"
image = cv2.imread(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
plt.imshow(image)
plt.show()
# step 11: test tipc, prepare data
# test tipc 1: prepare data
%cd ~/PSANet_paddle/
!bash test_tipc/prepare.sh ./test_tipc/configs/psanet/train_infer_python.txt 'lite_train_lite_infer'
# step 12: test tipc
# test tipc 2: pip install requirements
%cd ~/PSANet_paddle/test_tipc/
!pip install -r requirements.txt
# step 13: test tipc
# test tipc 3: install auto_log
%cd ~/
# !git clone https://github.com/LDOUBLEV/AutoLog
%cd AutoLog/
!pip3 install -r requirements.txt
!python3 setup.py bdist_wheel
!pip3 install ./dist/auto_log-1.2.0-py3-none-any.whl
# step 14: test tipc
# note: the custom external operator must be given an explicit input shape when exporting,
# otherwise the shape information is lost; see train_infer_python.txt
# test tipc 4: test train inference
%cd ~/PSANet_paddle/
!bash test_tipc/test_train_inference_python.sh ./test_tipc/configs/psanet/train_infer_python.txt 'lite_train_lite_infer'
The main difficulty in this reproduction was the custom C++ external operator for PSA. The debugging went roughly as follows:
1. Following the PyTorch reference code in the repo, I wrote the external operator and tested its forward pass on CPU; the output matched torch, so I went straight to training, and after a few iterations the network outputs were all NaN;
2. After removing the custom external operator, training went back to normal, confirming that the problem was inside the custom operator;
3. Testing on CPU and printing the backpropagated gradients showed they did not match torch; careful checking confirmed that the backward gradients were the problem.
First, the cause: in the official ReLU operator example, the output feature map has the same shape as the input, so the gradient tensor does not need to be initialized, because every gradient value gets overwritten (see the code below).
But the PSA operator's input is [mask_h * mask_w, h, w] while its output is [h * w, h, w]; the shapes differ! If the gradient buffer is not initialized, the unassigned gradient values are whatever garbage happens to be in memory, and training collapses. Initializing the gradient to zero fixed the problem. (I don't know how much hair I lost before finding this.)
std::vector<paddle::Tensor> ReluCPUBackward(const paddle::Tensor& x,
                                            const paddle::Tensor& out,
                                            const paddle::Tensor& grad_out) {
  CHECK_INPUT(x);
  CHECK_INPUT(out);
  CHECK_INPUT(grad_out);

  // look here: grad_x is allocated but never zero-initialized -- fine for ReLU,
  // because the loop below writes every element of grad_x exactly once
  auto grad_x = paddle::Tensor(paddle::PlaceType::kCPU, x.shape());

  auto out_numel = out.size();
  auto* out_data = out.data<float>();
  auto* grad_out_data = grad_out.data<float>();
  auto* grad_x_data = grad_x.mutable_data<float>(x.place());

  for (int i = 0; i < out_numel; ++i) {
    grad_x_data[i] =
        grad_out_data[i] * (out_data[i] > static_cast<float>(0) ? 1. : 0.);
  }
  return {grad_x};
}

4. Once the CPU operator was working, the CUDA operator was much easier to write, but be careful to avoid small mistakes, because they are very hard to spot. (I got the gradient initialization wrong again at first, with part of the buffer left un-zeroed, and had to print the gradients one by one to debug it; a sketch of the kind of gradient check used is given below. More hair lost.)
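For reference, here is a minimal sketch of a CPU gradient check using central finite differences, illustrative only: op() is a placeholder (a built-in ReLU), not the repo's compiled PSA operator, and the tensor shapes are arbitrary; in practice you would import the custom op and feed it realistically shaped inputs.

# A minimal sketch of a CPU finite-difference gradient check (illustration only):
# op() is a placeholder -- substitute the compiled custom operator to verify its backward.
import numpy as np
import paddle

paddle.set_device('cpu')

def op(x):
    # placeholder for the custom PSA operator
    return paddle.nn.functional.relu(x)

x_np = np.random.randn(2, 3).astype('float32')
x = paddle.to_tensor(x_np, stop_gradient=False)

# analytic gradient from the operator's registered backward
analytic = paddle.grad(outputs=op(x).sum(), inputs=x)[0].numpy()

# numeric gradient from central finite differences
eps = 1e-3
numeric = np.zeros_like(x_np)
for idx in np.ndindex(*x_np.shape):
    xp, xm = x_np.copy(), x_np.copy()
    xp[idx] += eps
    xm[idx] -= eps
    fp = op(paddle.to_tensor(xp)).sum().numpy().item()
    fm = op(paddle.to_tensor(xm)).sum().numpy().item()
    numeric[idx] = (fp - fm) / (2 * eps)

# the two should agree closely (barring elements that sit right at ReLU's kink)
print('max abs diff:', np.abs(analytic - numeric).max())

Switching paddle.set_device('cpu') to 'gpu' lets the same check be repeated against the CUDA kernel once the CPU version passes.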
A few takeaways from this reproduction:
1. Using the PaddleSeg toolkit to reproduce a paper puts you ahead right from the start;
2. The repo that comes with a paper is not necessarily bug-free (keep this in mind: the official repo's model has a compact parameter that crashes whenever it is set; at first I thought my own code was wrong, but it turned out the official code itself was broken, it just never used that option);
3. When writing a custom operator, check it very carefully; ideally align the forward and backward passes parameter by parameter, and only use it after both the CPU and GPU versions are verified, otherwise problems are very hard to debug.