【AI达人特训营】Reproducing the NAM Attention Mechanism on ResNet50




【AI达人特训营】NAM: Normalization-based Attention Module — Paper Reproduction

This project comes from the Baidu AI达人特训营 (AI training camp). It combines a guided close reading of the paper with a code walkthrough, pairing careful study of the paper with a PaddlePaddle reproduction.

I. Paper Walkthrough

Abstract

This paper proposes a Normalization-based Attention Module (NAM), which suppresses the weights of less salient features. It applies a sparsity penalty to the attention weights, making them more computationally efficient while preserving comparable performance. We compare NAM against other attention mechanisms on ResNet and MobileNet; our method achieves higher accuracy.

1. Introduction

Attention mechanisms have become very popular in recent years. They help a neural network suppress less salient features along the channel or spatial dimensions. Much prior work has focused on capturing salient features through attention operators, and these methods successfully exploit the mutual information across different feature dimensions. However, they neglect the contributing factor of the weights themselves, which can further suppress insignificant features. We therefore aim to exploit this weight contribution factor to improve attention. We use the scaling factors of Batch Normalization to represent the importance of the weights, which avoids adding fully connected and convolutional layers as in SE, BAM, and CBAM. Thus we propose a new attention mechanism: Normalization-based Attention (NAM).

2. Method

Our proposed NAM is a lightweight, efficient attention mechanism. We adopt CBAM's module-integration scheme but redesign the channel and spatial attention submodules, so NAM can be embedded at the end of each network block; for residual networks, it is embedded at the end of the residual structure. For the channel attention submodule, we use the scaling factors from Batch Normalization, as in Eq. (1). A scaling factor reflects how much its channel varies, and hence how important the channel is: a channel whose activations vary more carries richer information and is more important, while a channel that barely varies carries uniform information and matters less.
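Eq. (1) was embedded as an image in the original post; it is the standard Batch Normalization transform, reconstructed here from the NAM paper:

$B_{out} = \mathrm{BN}(B_{in}) = \gamma \frac{B_{in} - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}} + \beta$   (1)

where $\mu_B$ and $\sigma_B$ are the mean and standard deviation of the mini-batch $B$, and $\gamma$ and $\beta$ are the trainable scale and shift parameters.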


The channel attention submodule is therefore as shown in Figure 1 and Eq. (2). Let $M_c$ denote the output features and $\gamma$ the scaling factor of each channel; the weight of each channel is then $W_\gamma = \gamma_i / \sum_j \gamma_j$. If the same normalization is applied to every pixel in the spatial dimension, we obtain the spatial attention weights of Eq. (3), which we call pixel normalization. The pixel attention submodule is shown in Figure 2.
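Eqs. (2) and (3) were likewise embedded as images; reconstructed from the NAM paper (and consistent with the weight_bn computation in the code of Section 4.1), they are:

$M_c = \mathrm{sigmoid}\big(W_\gamma \cdot \mathrm{BN}(F_1)\big), \quad W_\gamma = \gamma_i / \textstyle\sum_j \gamma_j$   (2)

$M_s = \mathrm{sigmoid}\big(W_\lambda \cdot \mathrm{BN}_s(F_2)\big), \quad W_\lambda = \lambda_i / \textstyle\sum_j \lambda_j$   (3)

where $F_1$ and $F_2$ are the input features and $\lambda$ are the scaling factors of the pixel-wise Batch Normalization.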

[Figures 1 and 2: the channel attention and pixel attention submodules]

To suppress unimportant features, we add a regularization term to the loss function, as in Eq. (4):
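Eq. (4), also originally an image, is reconstructed from the NAM paper as:

$\mathrm{Loss} = \sum_{(x,y)} l\big(f(x, W), y\big) + p \sum g(\gamma) + p \sum g(\lambda)$   (4)

where $l(\cdot)$ is the task loss, $g(\cdot)$ is the $\ell_1$-norm penalty, and $p$ is the factor balancing the two regularization terms.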


3. Experiments


We compare NAM with SE, BAM, CBAM, and TAM on ResNet and MobileNet, on both the CIFAR-100 and ImageNet datasets, using identical preprocessing and training settings for every attention mechanism. The results show that on CIFAR-100, NAM's channel attention or spatial attention alone already outperforms the other methods; on ImageNet, using NAM's channel and spatial attention together surpasses them as well.

[Table: comparison of NAM with SE, BAM, CBAM, and TAM on CIFAR-100 and ImageNet]

4. Conclusion

We propose NAM, a module that improves efficiency by suppressing less salient features. Our experiments show that NAM provides efficiency gains on both ResNet and MobileNet. We are conducting a detailed analysis of NAM's integration variants and hyperparameter tuning, and plan to optimize NAM with different model-compression techniques to further improve its efficiency. In the future we will study its impact on other deep-learning architectures and applications.

II. Dataset Introduction

The CIFAR-100 dataset has 100 classes. Each class contains 600 color images of size 32×32, of which 500 are used for training and 100 for testing. Every image carries two labels, fine_labels and coarse_labels, giving its fine-grained class and coarse-grained superclass. In other words, CIFAR-100 is organized hierarchically.
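The split sizes and labels can be checked directly with Paddle's built-in wrapper (a minimal sketch; as used in this project, the wrapper yields the 100 fine-grained labels):

import paddle

train = paddle.vision.datasets.Cifar100(mode='train')
test = paddle.vision.datasets.Cifar100(mode='test')
print(len(train), len(test))   # 50000 10000
img, label = train[0]
print(int(label))              # fine-grained label in [0, 100)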

III. CIFAR-100 Experiment with ResNet50

3.1 Loading and splitting the dataset

In [8]
import paddle
import paddle.vision.transforms as t

def data_process():
    # Data augmentation strategy
    transform_strategy = t.Compose([
        t.ColorJitter(),            # randomly adjust brightness, contrast, etc.
        t.RandomHorizontalFlip(),   # random horizontal flip
        t.RandomVerticalFlip(),     # random vertical flip
        t.ToTensor()                # convert to tensor
    ])
    # Load the training set
    train_dataset = paddle.vision.datasets.Cifar100(
        mode='train',
        transform=transform_strategy
    )
    # The test set uses the same augmentation strategy as the training set
    # to probe generalization (note: evaluation would normally use only
    # ToTensor(), without random augmentation)
    eval_dataset = paddle.vision.datasets.Cifar100(
        mode='test',
        transform=transform_strategy
    )
    print('Training samples:', str(len(train_dataset)), '| Test samples:', str(len(eval_dataset)))
    return train_dataset, eval_dataset

train_dataset, eval_dataset = data_process()  # load the data
item    52/41261 [..............................] - ETA: 1:25 - 2ms/item
Cache file /home/aistudio/.cache/paddle/dataset/cifar/cifar-100-python.tar.gz not found, downloading https://dataset.bj.bcebos.com/cifar/cifar-100-python.tar.gz 
Begin to download
item 40921/41261 [============================>.] - ETA: 0s - 674us/item
Download finished
Training samples: 50000 | Test samples: 10000

3.2 Building ResNet50 with the Paddle API

In [9]
model = paddle.Model(paddle.vision.models.resnet50(pretrained=False))
# visualize the model structure
model.summary((-1, 3, 32, 32))
W0623 11:42:45.144419   167 gpu_context.cc:278] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 11.2, Runtime API Version: 10.1
W0623 11:42:45.149612   167 gpu_context.cc:306] device: 0, cuDNN Version: 7.6.
-------------------------------------------------------------------------------
   Layer (type)         Input Shape          Output Shape         Param #    
===============================================================================
     Conv2D-1         [[1, 3, 32, 32]]     [1, 64, 16, 16]         9,408     
   BatchNorm2D-1     [[1, 64, 16, 16]]     [1, 64, 16, 16]          256      
      ReLU-1         [[1, 64, 16, 16]]     [1, 64, 16, 16]           0       
    MaxPool2D-1      [[1, 64, 16, 16]]      [1, 64, 8, 8]            0       
     Conv2D-3         [[1, 64, 8, 8]]       [1, 64, 8, 8]          4,096     
   BatchNorm2D-3      [[1, 64, 8, 8]]       [1, 64, 8, 8]           256      
      ReLU-2          [[1, 256, 8, 8]]      [1, 256, 8, 8]           0       
     Conv2D-4         [[1, 64, 8, 8]]       [1, 64, 8, 8]         36,864     
   BatchNorm2D-4      [[1, 64, 8, 8]]       [1, 64, 8, 8]           256      
     Conv2D-5         [[1, 64, 8, 8]]       [1, 256, 8, 8]        16,384     
   BatchNorm2D-5      [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
     Conv2D-2         [[1, 64, 8, 8]]       [1, 256, 8, 8]        16,384     
   BatchNorm2D-2      [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
 BottleneckBlock-1    [[1, 64, 8, 8]]       [1, 256, 8, 8]           0       
     Conv2D-6         [[1, 256, 8, 8]]      [1, 64, 8, 8]         16,384     
   BatchNorm2D-6      [[1, 64, 8, 8]]       [1, 64, 8, 8]           256      
      ReLU-3          [[1, 256, 8, 8]]      [1, 256, 8, 8]           0       
     Conv2D-7         [[1, 64, 8, 8]]       [1, 64, 8, 8]         36,864     
   BatchNorm2D-7      [[1, 64, 8, 8]]       [1, 64, 8, 8]           256      
     Conv2D-8         [[1, 64, 8, 8]]       [1, 256, 8, 8]        16,384     
   BatchNorm2D-8      [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
 BottleneckBlock-2    [[1, 256, 8, 8]]      [1, 256, 8, 8]           0       
     Conv2D-9         [[1, 256, 8, 8]]      [1, 64, 8, 8]         16,384     
   BatchNorm2D-9      [[1, 64, 8, 8]]       [1, 64, 8, 8]           256      
      ReLU-4          [[1, 256, 8, 8]]      [1, 256, 8, 8]           0       
     Conv2D-10        [[1, 64, 8, 8]]       [1, 64, 8, 8]         36,864     
  BatchNorm2D-10      [[1, 64, 8, 8]]       [1, 64, 8, 8]           256      
     Conv2D-11        [[1, 64, 8, 8]]       [1, 256, 8, 8]        16,384     
  BatchNorm2D-11      [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
 BottleneckBlock-3    [[1, 256, 8, 8]]      [1, 256, 8, 8]           0       
     Conv2D-13        [[1, 256, 8, 8]]      [1, 128, 8, 8]        32,768     
  BatchNorm2D-13      [[1, 128, 8, 8]]      [1, 128, 8, 8]          512      
      ReLU-5          [[1, 512, 4, 4]]      [1, 512, 4, 4]           0       
     Conv2D-14        [[1, 128, 8, 8]]      [1, 128, 4, 4]        147,456    
  BatchNorm2D-14      [[1, 128, 4, 4]]      [1, 128, 4, 4]          512      
     Conv2D-15        [[1, 128, 4, 4]]      [1, 512, 4, 4]        65,536     
  BatchNorm2D-15      [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
     Conv2D-12        [[1, 256, 8, 8]]      [1, 512, 4, 4]        131,072    
  BatchNorm2D-12      [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
 BottleneckBlock-4    [[1, 256, 8, 8]]      [1, 512, 4, 4]           0       
     Conv2D-16        [[1, 512, 4, 4]]      [1, 128, 4, 4]        65,536     
  BatchNorm2D-16      [[1, 128, 4, 4]]      [1, 128, 4, 4]          512      
      ReLU-6          [[1, 512, 4, 4]]      [1, 512, 4, 4]           0       
     Conv2D-17        [[1, 128, 4, 4]]      [1, 128, 4, 4]        147,456    
  BatchNorm2D-17      [[1, 128, 4, 4]]      [1, 128, 4, 4]          512      
     Conv2D-18        [[1, 128, 4, 4]]      [1, 512, 4, 4]        65,536     
  BatchNorm2D-18      [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
 BottleneckBlock-5    [[1, 512, 4, 4]]      [1, 512, 4, 4]           0       
     Conv2D-19        [[1, 512, 4, 4]]      [1, 128, 4, 4]        65,536     
  BatchNorm2D-19      [[1, 128, 4, 4]]      [1, 128, 4, 4]          512      
      ReLU-7          [[1, 512, 4, 4]]      [1, 512, 4, 4]           0       
     Conv2D-20        [[1, 128, 4, 4]]      [1, 128, 4, 4]        147,456    
  BatchNorm2D-20      [[1, 128, 4, 4]]      [1, 128, 4, 4]          512      
     Conv2D-21        [[1, 128, 4, 4]]      [1, 512, 4, 4]        65,536     
  BatchNorm2D-21      [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
 BottleneckBlock-6    [[1, 512, 4, 4]]      [1, 512, 4, 4]           0       
     Conv2D-22        [[1, 512, 4, 4]]      [1, 128, 4, 4]        65,536     
  BatchNorm2D-22      [[1, 128, 4, 4]]      [1, 128, 4, 4]          512      
      ReLU-8          [[1, 512, 4, 4]]      [1, 512, 4, 4]           0       
     Conv2D-23        [[1, 128, 4, 4]]      [1, 128, 4, 4]        147,456    
  BatchNorm2D-23      [[1, 128, 4, 4]]      [1, 128, 4, 4]          512      
     Conv2D-24        [[1, 128, 4, 4]]      [1, 512, 4, 4]        65,536     
  BatchNorm2D-24      [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
 BottleneckBlock-7    [[1, 512, 4, 4]]      [1, 512, 4, 4]           0       
     Conv2D-26        [[1, 512, 4, 4]]      [1, 256, 4, 4]        131,072    
  BatchNorm2D-26      [[1, 256, 4, 4]]      [1, 256, 4, 4]         1,024     
      ReLU-9         [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-27        [[1, 256, 4, 4]]      [1, 256, 2, 2]        589,824    
  BatchNorm2D-27      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
     Conv2D-28        [[1, 256, 2, 2]]     [1, 1024, 2, 2]        262,144    
  BatchNorm2D-28     [[1, 1024, 2, 2]]     [1, 1024, 2, 2]         4,096     
     Conv2D-25        [[1, 512, 4, 4]]     [1, 1024, 2, 2]        524,288    
  BatchNorm2D-25     [[1, 1024, 2, 2]]     [1, 1024, 2, 2]         4,096     
 BottleneckBlock-8    [[1, 512, 4, 4]]     [1, 1024, 2, 2]           0       
     Conv2D-29       [[1, 1024, 2, 2]]      [1, 256, 2, 2]        262,144    
  BatchNorm2D-29      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
      ReLU-10        [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-30        [[1, 256, 2, 2]]      [1, 256, 2, 2]        589,824    
  BatchNorm2D-30      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
     Conv2D-31        [[1, 256, 2, 2]]     [1, 1024, 2, 2]        262,144    
  BatchNorm2D-31     [[1, 1024, 2, 2]]     [1, 1024, 2, 2]         4,096     
 BottleneckBlock-9   [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-32       [[1, 1024, 2, 2]]      [1, 256, 2, 2]        262,144    
  BatchNorm2D-32      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
      ReLU-11        [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-33        [[1, 256, 2, 2]]      [1, 256, 2, 2]        589,824    
  BatchNorm2D-33      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
     Conv2D-34        [[1, 256, 2, 2]]     [1, 1024, 2, 2]        262,144    
  BatchNorm2D-34     [[1, 1024, 2, 2]]     [1, 1024, 2, 2]         4,096     
BottleneckBlock-10   [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-35       [[1, 1024, 2, 2]]      [1, 256, 2, 2]        262,144    
  BatchNorm2D-35      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
      ReLU-12        [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-36        [[1, 256, 2, 2]]      [1, 256, 2, 2]        589,824    
  BatchNorm2D-36      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
     Conv2D-37        [[1, 256, 2, 2]]     [1, 1024, 2, 2]        262,144    
  BatchNorm2D-37     [[1, 1024, 2, 2]]     [1, 1024, 2, 2]         4,096     
BottleneckBlock-11   [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-38       [[1, 1024, 2, 2]]      [1, 256, 2, 2]        262,144    
  BatchNorm2D-38      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
      ReLU-13        [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-39        [[1, 256, 2, 2]]      [1, 256, 2, 2]        589,824    
  BatchNorm2D-39      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
     Conv2D-40        [[1, 256, 2, 2]]     [1, 1024, 2, 2]        262,144    
  BatchNorm2D-40     [[1, 1024, 2, 2]]     [1, 1024, 2, 2]         4,096     
BottleneckBlock-12   [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-41       [[1, 1024, 2, 2]]      [1, 256, 2, 2]        262,144    
  BatchNorm2D-41      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
      ReLU-14        [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-42        [[1, 256, 2, 2]]      [1, 256, 2, 2]        589,824    
  BatchNorm2D-42      [[1, 256, 2, 2]]      [1, 256, 2, 2]         1,024     
     Conv2D-43        [[1, 256, 2, 2]]     [1, 1024, 2, 2]        262,144    
  BatchNorm2D-43     [[1, 1024, 2, 2]]     [1, 1024, 2, 2]         4,096     
BottleneckBlock-13   [[1, 1024, 2, 2]]     [1, 1024, 2, 2]           0       
     Conv2D-45       [[1, 1024, 2, 2]]      [1, 512, 2, 2]        524,288    
  BatchNorm2D-45      [[1, 512, 2, 2]]      [1, 512, 2, 2]         2,048     
      ReLU-15        [[1, 2048, 1, 1]]     [1, 2048, 1, 1]           0       
     Conv2D-46        [[1, 512, 2, 2]]      [1, 512, 1, 1]       2,359,296   
  BatchNorm2D-46      [[1, 512, 1, 1]]      [1, 512, 1, 1]         2,048     
     Conv2D-47        [[1, 512, 1, 1]]     [1, 2048, 1, 1]       1,048,576   
  BatchNorm2D-47     [[1, 2048, 1, 1]]     [1, 2048, 1, 1]         8,192     
     Conv2D-44       [[1, 1024, 2, 2]]     [1, 2048, 1, 1]       2,097,152   
  BatchNorm2D-44     [[1, 2048, 1, 1]]     [1, 2048, 1, 1]         8,192     
BottleneckBlock-14   [[1, 1024, 2, 2]]     [1, 2048, 1, 1]           0       
     Conv2D-48       [[1, 2048, 1, 1]]      [1, 512, 1, 1]       1,048,576   
  BatchNorm2D-48      [[1, 512, 1, 1]]      [1, 512, 1, 1]         2,048     
      ReLU-16        [[1, 2048, 1, 1]]     [1, 2048, 1, 1]           0       
     Conv2D-49        [[1, 512, 1, 1]]      [1, 512, 1, 1]       2,359,296   
  BatchNorm2D-49      [[1, 512, 1, 1]]      [1, 512, 1, 1]         2,048     
     Conv2D-50        [[1, 512, 1, 1]]     [1, 2048, 1, 1]       1,048,576   
  BatchNorm2D-50     [[1, 2048, 1, 1]]     [1, 2048, 1, 1]         8,192     
BottleneckBlock-15   [[1, 2048, 1, 1]]     [1, 2048, 1, 1]           0       
     Conv2D-51       [[1, 2048, 1, 1]]      [1, 512, 1, 1]       1,048,576   
  BatchNorm2D-51      [[1, 512, 1, 1]]      [1, 512, 1, 1]         2,048     
      ReLU-17        [[1, 2048, 1, 1]]     [1, 2048, 1, 1]           0       
     Conv2D-52        [[1, 512, 1, 1]]      [1, 512, 1, 1]       2,359,296   
  BatchNorm2D-52      [[1, 512, 1, 1]]      [1, 512, 1, 1]         2,048     
     Conv2D-53        [[1, 512, 1, 1]]     [1, 2048, 1, 1]       1,048,576   
  BatchNorm2D-53     [[1, 2048, 1, 1]]     [1, 2048, 1, 1]         8,192     
BottleneckBlock-16   [[1, 2048, 1, 1]]     [1, 2048, 1, 1]           0       
AdaptiveAvgPool2D-1  [[1, 2048, 1, 1]]     [1, 2048, 1, 1]           0       
     Linear-1           [[1, 2048]]           [1, 1000]          2,049,000   
===============================================================================
Total params: 25,610,152
Trainable params: 25,503,912
Non-trainable params: 106,240
-------------------------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 5.36
Params size (MB): 97.69
Estimated Total Size (MB): 103.07
-------------------------------------------------------------------------------
{'total_params': 25610152, 'trainable_params': 25503912}

3.3 Model training

In [12]
from paddle.optimizer.lr import CosineAnnealingDecay, MultiStepDecay, LinearWarmup

# Earlier alternative: plain SGD with a fixed learning rate
# model.prepare(paddle.optimizer.SGD(learning_rate=0.001, parameters=model.parameters()),
#               paddle.nn.CrossEntropyLoss(),   # cross-entropy loss
#               paddle.metric.Accuracy())       # accuracy reported as top-1 / top-5
model.prepare(
    paddle.optimizer.Momentum(
        learning_rate=LinearWarmup(CosineAnnealingDecay(0.001, 100), 2000, 0., 0.001),
        momentum=0.9,
        parameters=model.parameters(),
        weight_decay=5e-4),
    paddle.nn.CrossEntropyLoss(),
    paddle.metric.Accuracy(topk=(1, 5)))


callback_visualdl = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir')

# start training
model.fit(train_dataset,
          eval_dataset,
          epochs=100,      # number of training epochs
          batch_size=128,  # samples per batch
          verbose=1,       # logging verbosity
          shuffle=True,    # shuffle the data each epoch
          num_workers=4,
          callbacks=callback_visualdl,
          )
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/100
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/nn/layer/norm.py:654: UserWarning: When training, we now always track global mean and variance.
  "When training, we now always track global mean and variance.")
step 391/391 [==============================] - loss: 5.8216 - acc_top1: 0.0083 - acc_top5: 0.0365 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 5.1818 - acc_top1: 0.0139 - acc_top5: 0.0630 - 30ms/step          
Eval samples: 10000
Epoch 2/100
step 391/391 [==============================] - loss: 4.9121 - acc_top1: 0.0193 - acc_top5: 0.0816 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.3763 - acc_top1: 0.0310 - acc_top5: 0.1156 - 29ms/step          
Eval samples: 10000
Epoch 3/100
step 391/391 [==============================] - loss: 4.5889 - acc_top1: 0.0426 - acc_top5: 0.1608 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.3340 - acc_top1: 0.0639 - acc_top5: 0.2092 - 29ms/step          
Eval samples: 10000
Epoch 4/100
step 391/391 [==============================] - loss: 4.0951 - acc_top1: 0.0792 - acc_top5: 0.2558 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.4047 - acc_top1: 0.0998 - acc_top5: 0.2908 - 29ms/step          
Eval samples: 10000
Epoch 5/100
step 391/391 [==============================] - loss: 4.0470 - acc_top1: 0.1093 - acc_top5: 0.3159 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.1264 - acc_top1: 0.1306 - acc_top5: 0.3494 - 29ms/step          
Eval samples: 10000
Epoch 6/100
step 391/391 [==============================] - loss: 3.3129 - acc_top1: 0.1412 - acc_top5: 0.3718 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.7290 - acc_top1: 0.1552 - acc_top5: 0.3856 - 30ms/step          
Eval samples: 10000
Epoch 7/100
step 391/391 [==============================] - loss: 3.6535 - acc_top1: 0.1597 - acc_top5: 0.4010 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.6085 - acc_top1: 0.1640 - acc_top5: 0.4079 - 29ms/step          
Eval samples: 10000
Epoch 8/100
step 391/391 [==============================] - loss: 3.2932 - acc_top1: 0.1744 - acc_top5: 0.4265 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4222 - acc_top1: 0.1731 - acc_top5: 0.4195 - 31ms/step          
Eval samples: 10000
Epoch 9/100
step 391/391 [==============================] - loss: 3.4369 - acc_top1: 0.1843 - acc_top5: 0.4422 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2243 - acc_top1: 0.1849 - acc_top5: 0.4386 - 29ms/step          
Eval samples: 10000
Epoch 10/100
step 391/391 [==============================] - loss: 3.5167 - acc_top1: 0.1973 - acc_top5: 0.4603 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.5050 - acc_top1: 0.1892 - acc_top5: 0.4476 - 29ms/step          
Eval samples: 10000
Epoch 11/100
step 391/391 [==============================] - loss: 3.1014 - acc_top1: 0.2053 - acc_top5: 0.4726 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4008 - acc_top1: 0.1954 - acc_top5: 0.4587 - 29ms/step          
Eval samples: 10000
Epoch 12/100
step 391/391 [==============================] - loss: 3.7199 - acc_top1: 0.2150 - acc_top5: 0.4872 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4300 - acc_top1: 0.2004 - acc_top5: 0.4636 - 29ms/step          
Eval samples: 10000
Epoch 13/100
step 391/391 [==============================] - loss: 3.4431 - acc_top1: 0.2238 - acc_top5: 0.4971 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.5822 - acc_top1: 0.2085 - acc_top5: 0.4721 - 38ms/step          
Eval samples: 10000
Epoch 14/100
step 391/391 [==============================] - loss: 2.7427 - acc_top1: 0.2316 - acc_top5: 0.5125 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4022 - acc_top1: 0.2151 - acc_top5: 0.4802 - 29ms/step          
Eval samples: 10000
Epoch 15/100
step 391/391 [==============================] - loss: 3.3946 - acc_top1: 0.2384 - acc_top5: 0.5218 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.3966 - acc_top1: 0.2138 - acc_top5: 0.4899 - 29ms/step          
Eval samples: 10000
Epoch 16/100
step 391/391 [==============================] - loss: 3.0503 - acc_top1: 0.2481 - acc_top5: 0.5355 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.5413 - acc_top1: 0.2195 - acc_top5: 0.4901 - 32ms/step          
Eval samples: 10000
Epoch 17/100
step 391/391 [==============================] - loss: 3.1893 - acc_top1: 0.2571 - acc_top5: 0.5454 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.5470 - acc_top1: 0.2124 - acc_top5: 0.4812 - 29ms/step          
Eval samples: 10000
Epoch 18/100
step 391/391 [==============================] - loss: 2.9919 - acc_top1: 0.2657 - acc_top5: 0.5556 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.6241 - acc_top1: 0.2206 - acc_top5: 0.4934 - 29ms/step          
Eval samples: 10000
Epoch 19/100
step 391/391 [==============================] - loss: 2.9012 - acc_top1: 0.2698 - acc_top5: 0.5641 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2880 - acc_top1: 0.2222 - acc_top5: 0.4976 - 29ms/step          
Eval samples: 10000
Epoch 20/100
step 391/391 [==============================] - loss: 2.8051 - acc_top1: 0.2770 - acc_top5: 0.5716 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.3445 - acc_top1: 0.2281 - acc_top5: 0.5019 - 29ms/step          
Eval samples: 10000
Epoch 21/100
step 391/391 [==============================] - loss: 2.8484 - acc_top1: 0.2824 - acc_top5: 0.5814 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1611 - acc_top1: 0.2339 - acc_top5: 0.5085 - 29ms/step          
Eval samples: 10000
Epoch 22/100
step 391/391 [==============================] - loss: 2.6754 - acc_top1: 0.2907 - acc_top5: 0.5896 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0240 - acc_top1: 0.2327 - acc_top5: 0.5114 - 30ms/step          
Eval samples: 10000
Epoch 23/100
step 391/391 [==============================] - loss: 3.2871 - acc_top1: 0.2964 - acc_top5: 0.5974 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1273 - acc_top1: 0.2442 - acc_top5: 0.5140 - 30ms/step          
Eval samples: 10000
Epoch 24/100
step 391/391 [==============================] - loss: 3.0351 - acc_top1: 0.3056 - acc_top5: 0.6029 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.3547 - acc_top1: 0.2479 - acc_top5: 0.5299 - 29ms/step          
Eval samples: 10000
Epoch 25/100
step 391/391 [==============================] - loss: 2.7127 - acc_top1: 0.3093 - acc_top5: 0.6148 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2127 - acc_top1: 0.2506 - acc_top5: 0.5291 - 30ms/step          
Eval samples: 10000
Epoch 26/100
step 391/391 [==============================] - loss: 2.8931 - acc_top1: 0.3145 - acc_top5: 0.6212 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9948 - acc_top1: 0.2512 - acc_top5: 0.5316 - 30ms/step          
Eval samples: 10000
Epoch 27/100
step 391/391 [==============================] - loss: 3.0113 - acc_top1: 0.3210 - acc_top5: 0.6290 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1367 - acc_top1: 0.2537 - acc_top5: 0.5331 - 37ms/step          
Eval samples: 10000
Epoch 28/100
step 391/391 [==============================] - loss: 2.8563 - acc_top1: 0.3280 - acc_top5: 0.6368 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9553 - acc_top1: 0.2667 - acc_top5: 0.5432 - 29ms/step          
Eval samples: 10000
Epoch 29/100
step 391/391 [==============================] - loss: 2.3831 - acc_top1: 0.3327 - acc_top5: 0.6425 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8827 - acc_top1: 0.2671 - acc_top5: 0.5509 - 30ms/step          
Eval samples: 10000
Epoch 30/100
step 391/391 [==============================] - loss: 2.8241 - acc_top1: 0.3421 - acc_top5: 0.6498 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1073 - acc_top1: 0.2667 - acc_top5: 0.5525 - 30ms/step          
Eval samples: 10000
Epoch 31/100
step 391/391 [==============================] - loss: 2.5978 - acc_top1: 0.3466 - acc_top5: 0.6570 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1769 - acc_top1: 0.2712 - acc_top5: 0.5555 - 30ms/step          
Eval samples: 10000
Epoch 32/100
step 391/391 [==============================] - loss: 2.6507 - acc_top1: 0.3562 - acc_top5: 0.6653 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0157 - acc_top1: 0.2752 - acc_top5: 0.5587 - 30ms/step          
Eval samples: 10000
Epoch 33/100
step 391/391 [==============================] - loss: 3.0660 - acc_top1: 0.3567 - acc_top5: 0.6712 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0034 - acc_top1: 0.2762 - acc_top5: 0.5559 - 30ms/step          
Eval samples: 10000
Epoch 34/100
step 391/391 [==============================] - loss: 2.4485 - acc_top1: 0.3656 - acc_top5: 0.6777 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9226 - acc_top1: 0.2772 - acc_top5: 0.5563 - 29ms/step          
Eval samples: 10000
Epoch 35/100
step 391/391 [==============================] - loss: 2.7170 - acc_top1: 0.3770 - acc_top5: 0.6829 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1584 - acc_top1: 0.2801 - acc_top5: 0.5603 - 29ms/step          
Eval samples: 10000
Epoch 36/100
step 391/391 [==============================] - loss: 2.3328 - acc_top1: 0.3813 - acc_top5: 0.6928 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8032 - acc_top1: 0.2761 - acc_top5: 0.5617 - 30ms/step          
Eval samples: 10000
Epoch 37/100
step 391/391 [==============================] - loss: 2.3197 - acc_top1: 0.3866 - acc_top5: 0.6992 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9066 - acc_top1: 0.2725 - acc_top5: 0.5597 - 30ms/step          
Eval samples: 10000
Epoch 38/100
step 391/391 [==============================] - loss: 2.4766 - acc_top1: 0.3977 - acc_top5: 0.7053 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7028 - acc_top1: 0.2792 - acc_top5: 0.5652 - 29ms/step          
Eval samples: 10000
Epoch 39/100
step 391/391 [==============================] - loss: 2.4647 - acc_top1: 0.4030 - acc_top5: 0.7120 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8679 - acc_top1: 0.2783 - acc_top5: 0.5623 - 30ms/step          
Eval samples: 10000
Epoch 40/100
step 391/391 [==============================] - loss: 2.3726 - acc_top1: 0.4081 - acc_top5: 0.7183 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6993 - acc_top1: 0.2803 - acc_top5: 0.5579 - 30ms/step          
Eval samples: 10000
Epoch 41/100
step 391/391 [==============================] - loss: 2.3839 - acc_top1: 0.4168 - acc_top5: 0.7266 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0042 - acc_top1: 0.2769 - acc_top5: 0.5667 - 33ms/step          
Eval samples: 10000
Epoch 42/100
step 391/391 [==============================] - loss: 2.2273 - acc_top1: 0.4213 - acc_top5: 0.7319 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9708 - acc_top1: 0.2747 - acc_top5: 0.5610 - 29ms/step          
Eval samples: 10000
Epoch 43/100
step 391/391 [==============================] - loss: 2.3523 - acc_top1: 0.4268 - acc_top5: 0.7353 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5551 - acc_top1: 0.2726 - acc_top5: 0.5633 - 29ms/step          
Eval samples: 10000
Epoch 44/100
step 391/391 [==============================] - loss: 2.4685 - acc_top1: 0.4367 - acc_top5: 0.7454 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4999 - acc_top1: 0.2706 - acc_top5: 0.5608 - 29ms/step          
Eval samples: 10000
Epoch 45/100
step 391/391 [==============================] - loss: 2.3961 - acc_top1: 0.4377 - acc_top5: 0.7449 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.3164 - acc_top1: 0.2735 - acc_top5: 0.5627 - 32ms/step          
Eval samples: 10000
Epoch 46/100
step 391/391 [==============================] - loss: 2.4653 - acc_top1: 0.4486 - acc_top5: 0.7549 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9869 - acc_top1: 0.2840 - acc_top5: 0.5694 - 29ms/step          
Eval samples: 10000
Epoch 47/100
step 391/391 [==============================] - loss: 2.2756 - acc_top1: 0.4484 - acc_top5: 0.7588 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1067 - acc_top1: 0.2838 - acc_top5: 0.5716 - 29ms/step          
Eval samples: 10000
Epoch 48/100
step 391/391 [==============================] - loss: 2.4643 - acc_top1: 0.4553 - acc_top5: 0.7622 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9304 - acc_top1: 0.2933 - acc_top5: 0.5712 - 30ms/step          
Eval samples: 10000
Epoch 49/100
step 391/391 [==============================] - loss: 2.4657 - acc_top1: 0.4611 - acc_top5: 0.7706 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7275 - acc_top1: 0.2974 - acc_top5: 0.5844 - 30ms/step          
Eval samples: 10000
Epoch 50/100
step 391/391 [==============================] - loss: 2.1496 - acc_top1: 0.4677 - acc_top5: 0.7758 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8040 - acc_top1: 0.2993 - acc_top5: 0.5862 - 29ms/step          
Eval samples: 10000
Epoch 51/100
step 391/391 [==============================] - loss: 2.0272 - acc_top1: 0.4741 - acc_top5: 0.7796 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4976 - acc_top1: 0.2983 - acc_top5: 0.5870 - 29ms/step          
Eval samples: 10000
Epoch 52/100
step 391/391 [==============================] - loss: 1.9062 - acc_top1: 0.4817 - acc_top5: 0.7860 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.1902 - acc_top1: 0.3037 - acc_top5: 0.5881 - 29ms/step          
Eval samples: 10000
Epoch 53/100
step 391/391 [==============================] - loss: 2.2504 - acc_top1: 0.4873 - acc_top5: 0.7908 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2911 - acc_top1: 0.3053 - acc_top5: 0.5894 - 29ms/step          
Eval samples: 10000
Epoch 54/100
step 391/391 [==============================] - loss: 2.3077 - acc_top1: 0.4942 - acc_top5: 0.7972 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8821 - acc_top1: 0.3028 - acc_top5: 0.5855 - 29ms/step          
Eval samples: 10000
Epoch 55/100
step 391/391 [==============================] - loss: 1.6972 - acc_top1: 0.5009 - acc_top5: 0.7999 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8060 - acc_top1: 0.2986 - acc_top5: 0.5821 - 37ms/step          
Eval samples: 10000
Epoch 56/100
step 391/391 [==============================] - loss: 1.7615 - acc_top1: 0.5104 - acc_top5: 0.8059 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9240 - acc_top1: 0.3039 - acc_top5: 0.5941 - 29ms/step          
Eval samples: 10000
Epoch 57/100
step 391/391 [==============================] - loss: 1.8560 - acc_top1: 0.5171 - acc_top5: 0.8115 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8036 - acc_top1: 0.3018 - acc_top5: 0.5895 - 29ms/step          
Eval samples: 10000
Epoch 58/100
step 391/391 [==============================] - loss: 1.7631 - acc_top1: 0.5235 - acc_top5: 0.8165 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8598 - acc_top1: 0.2962 - acc_top5: 0.5827 - 29ms/step          
Eval samples: 10000
Epoch 59/100
step 391/391 [==============================] - loss: 1.7417 - acc_top1: 0.5279 - acc_top5: 0.8220 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2199 - acc_top1: 0.2978 - acc_top5: 0.5896 - 30ms/step          
Eval samples: 10000
Epoch 60/100
step 391/391 [==============================] - loss: 1.7674 - acc_top1: 0.5393 - acc_top5: 0.8285 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8630 - acc_top1: 0.2959 - acc_top5: 0.5795 - 30ms/step          
Eval samples: 10000
Epoch 61/100
step 391/391 [==============================] - loss: 1.3035 - acc_top1: 0.5450 - acc_top5: 0.8302 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9358 - acc_top1: 0.2996 - acc_top5: 0.5866 - 29ms/step          
Eval samples: 10000
Epoch 62/100
step 391/391 [==============================] - loss: 1.7947 - acc_top1: 0.5515 - acc_top5: 0.8387 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7642 - acc_top1: 0.2944 - acc_top5: 0.5785 - 30ms/step          
Eval samples: 10000
Epoch 63/100
step 391/391 [==============================] - loss: 1.7934 - acc_top1: 0.5578 - acc_top5: 0.8427 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2590 - acc_top1: 0.2904 - acc_top5: 0.5701 - 32ms/step          
Eval samples: 10000
Epoch 64/100
step 391/391 [==============================] - loss: 1.9037 - acc_top1: 0.5635 - acc_top5: 0.8449 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9783 - acc_top1: 0.2995 - acc_top5: 0.5813 - 30ms/step          
Eval samples: 10000
Epoch 65/100
step 391/391 [==============================] - loss: 1.8681 - acc_top1: 0.5721 - acc_top5: 0.8486 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9527 - acc_top1: 0.2814 - acc_top5: 0.5709 - 30ms/step          
Eval samples: 10000
Epoch 66/100
step 391/391 [==============================] - loss: 1.8873 - acc_top1: 0.5746 - acc_top5: 0.8537 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7710 - acc_top1: 0.2848 - acc_top5: 0.5751 - 30ms/step          
Eval samples: 10000
Epoch 67/100
step 391/391 [==============================] - loss: 1.6490 - acc_top1: 0.5793 - acc_top5: 0.8578 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9304 - acc_top1: 0.2981 - acc_top5: 0.5861 - 30ms/step          
Eval samples: 10000
Epoch 68/100
step 391/391 [==============================] - loss: 1.5646 - acc_top1: 0.5872 - acc_top5: 0.8634 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7347 - acc_top1: 0.2951 - acc_top5: 0.5789 - 30ms/step          
Eval samples: 10000
Epoch 69/100
step 391/391 [==============================] - loss: 1.3484 - acc_top1: 0.5881 - acc_top5: 0.8630 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0181 - acc_top1: 0.3066 - acc_top5: 0.5908 - 32ms/step          
Eval samples: 10000
Epoch 70/100
step 391/391 [==============================] - loss: 1.4429 - acc_top1: 0.5969 - acc_top5: 0.8682 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9097 - acc_top1: 0.3112 - acc_top5: 0.5959 - 29ms/step          
Eval samples: 10000
Epoch 71/100
step 391/391 [==============================] - loss: 1.4620 - acc_top1: 0.6028 - acc_top5: 0.8715 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6693 - acc_top1: 0.3119 - acc_top5: 0.5972 - 29ms/step          
Eval samples: 10000
Epoch 72/100
step 391/391 [==============================] - loss: 1.4809 - acc_top1: 0.6091 - acc_top5: 0.8749 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6038 - acc_top1: 0.3128 - acc_top5: 0.5888 - 30ms/step          
Eval samples: 10000
Epoch 73/100
step 391/391 [==============================] - loss: 1.4798 - acc_top1: 0.6139 - acc_top5: 0.8800 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7522 - acc_top1: 0.3162 - acc_top5: 0.5978 - 29ms/step          
Eval samples: 10000
Epoch 74/100
step 391/391 [==============================] - loss: 1.2607 - acc_top1: 0.6219 - acc_top5: 0.8848 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.3127 - acc_top1: 0.3183 - acc_top5: 0.5982 - 31ms/step          
Eval samples: 10000
Epoch 75/100
step 391/391 [==============================] - loss: 1.1673 - acc_top1: 0.6318 - acc_top5: 0.8888 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8425 - acc_top1: 0.3115 - acc_top5: 0.6030 - 29ms/step          
Eval samples: 10000
Epoch 76/100
step 391/391 [==============================] - loss: 1.4403 - acc_top1: 0.6363 - acc_top5: 0.8925 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9370 - acc_top1: 0.3217 - acc_top5: 0.5940 - 30ms/step          
Eval samples: 10000
Epoch 77/100
step 391/391 [==============================] - loss: 1.3236 - acc_top1: 0.6443 - acc_top5: 0.8987 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0327 - acc_top1: 0.3128 - acc_top5: 0.5929 - 29ms/step          
Eval samples: 10000
Epoch 78/100
step 391/391 [==============================] - loss: 1.7846 - acc_top1: 0.6460 - acc_top5: 0.9004 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9746 - acc_top1: 0.3133 - acc_top5: 0.6020 - 30ms/step          
Eval samples: 10000
Epoch 79/100
step 391/391 [==============================] - loss: 1.1664 - acc_top1: 0.6522 - acc_top5: 0.9030 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9526 - acc_top1: 0.3175 - acc_top5: 0.5957 - 29ms/step          
Eval samples: 10000
Epoch 80/100
step 391/391 [==============================] - loss: 1.3212 - acc_top1: 0.6623 - acc_top5: 0.9088 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6803 - acc_top1: 0.3171 - acc_top5: 0.5908 - 30ms/step          
Eval samples: 10000
Epoch 81/100
step 391/391 [==============================] - loss: 0.9561 - acc_top1: 0.6657 - acc_top5: 0.9101 - 52ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2766 - acc_top1: 0.3122 - acc_top5: 0.5888 - 29ms/step          
Eval samples: 10000
Epoch 82/100
step 391/391 [==============================] - loss: 1.2105 - acc_top1: 0.6770 - acc_top5: 0.9150 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8955 - acc_top1: 0.3038 - acc_top5: 0.5915 - 29ms/step          
Eval samples: 10000
Epoch 83/100
step 391/391 [==============================] - loss: 1.2193 - acc_top1: 0.6803 - acc_top5: 0.9179 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8806 - acc_top1: 0.3011 - acc_top5: 0.5742 - 32ms/step          
Eval samples: 10000
Epoch 84/100
step 391/391 [==============================] - loss: 1.3346 - acc_top1: 0.6901 - acc_top5: 0.9241 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2308 - acc_top1: 0.3176 - acc_top5: 0.5958 - 29ms/step          
Eval samples: 10000
Epoch 85/100
step 391/391 [==============================] - loss: 1.2272 - acc_top1: 0.6933 - acc_top5: 0.9232 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.7281 - acc_top1: 0.2976 - acc_top5: 0.5796 - 29ms/step          
Eval samples: 10000
Epoch 86/100
step 391/391 [==============================] - loss: 1.4050 - acc_top1: 0.6962 - acc_top5: 0.9265 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7497 - acc_top1: 0.3092 - acc_top5: 0.5891 - 30ms/step          
Eval samples: 10000
Epoch 87/100
step 391/391 [==============================] - loss: 1.1955 - acc_top1: 0.7055 - acc_top5: 0.9302 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9081 - acc_top1: 0.2900 - acc_top5: 0.5724 - 30ms/step          
Eval samples: 10000
Epoch 88/100
step 391/391 [==============================] - loss: 1.1573 - acc_top1: 0.7063 - acc_top5: 0.9303 - 50ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2307 - acc_top1: 0.3017 - acc_top5: 0.5833 - 29ms/step          
Eval samples: 10000
Epoch 89/100
step 391/391 [==============================] - loss: 1.0924 - acc_top1: 0.7142 - acc_top5: 0.9348 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2138 - acc_top1: 0.3054 - acc_top5: 0.5843 - 30ms/step          
Eval samples: 10000
Epoch 90/100
step 391/391 [==============================] - loss: 1.0687 - acc_top1: 0.7176 - acc_top5: 0.9373 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9641 - acc_top1: 0.3103 - acc_top5: 0.5898 - 29ms/step          
Eval samples: 10000
Epoch 91/100
step 391/391 [==============================] - loss: 1.4099 - acc_top1: 0.7216 - acc_top5: 0.9381 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.3445 - acc_top1: 0.3144 - acc_top5: 0.5849 - 29ms/step          
Eval samples: 10000
Epoch 92/100
step 391/391 [==============================] - loss: 0.9097 - acc_top1: 0.7285 - acc_top5: 0.9405 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7660 - acc_top1: 0.3144 - acc_top5: 0.5947 - 30ms/step          
Eval samples: 10000
Epoch 93/100
step 391/391 [==============================] - loss: 0.7941 - acc_top1: 0.7298 - acc_top5: 0.9421 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.3507 - acc_top1: 0.3180 - acc_top5: 0.5912 - 30ms/step          
Eval samples: 10000
Epoch 94/100
step 391/391 [==============================] - loss: 1.2478 - acc_top1: 0.7398 - acc_top5: 0.9460 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2983 - acc_top1: 0.3197 - acc_top5: 0.5926 - 30ms/step          
Eval samples: 10000
Epoch 95/100
step 391/391 [==============================] - loss: 0.9032 - acc_top1: 0.7401 - acc_top5: 0.9485 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.3697 - acc_top1: 0.3224 - acc_top5: 0.5991 - 29ms/step          
Eval samples: 10000
Epoch 96/100
step 391/391 [==============================] - loss: 0.8351 - acc_top1: 0.7491 - acc_top5: 0.9488 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9680 - acc_top1: 0.3210 - acc_top5: 0.5995 - 29ms/step          
Eval samples: 10000
Epoch 97/100
step 391/391 [==============================] - loss: 1.0612 - acc_top1: 0.7601 - acc_top5: 0.9523 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2625 - acc_top1: 0.3166 - acc_top5: 0.5961 - 37ms/step          
Eval samples: 10000
Epoch 98/100
step 391/391 [==============================] - loss: 0.8611 - acc_top1: 0.7618 - acc_top5: 0.9553 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8326 - acc_top1: 0.3220 - acc_top5: 0.6042 - 30ms/step          
Eval samples: 10000
Epoch 99/100
step 391/391 [==============================] - loss: 1.0832 - acc_top1: 0.7693 - acc_top5: 0.9560 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1979 - acc_top1: 0.3192 - acc_top5: 0.6012 - 32ms/step          
Eval samples: 10000
Epoch 100/100
step 391/391 [==============================] - loss: 0.9606 - acc_top1: 0.7716 - acc_top5: 0.9584 - 51ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2596 - acc_top1: 0.3246 - acc_top5: 0.6043 - 29ms/step          
Eval samples: 10000
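The warmup-plus-cosine schedule configured above can be sanity-checked in isolation. A minimal sketch (the 4000-step horizon is arbitrary; last_lr holds the scheduler's current learning rate in the Paddle API):

from paddle.optimizer.lr import CosineAnnealingDecay, LinearWarmup

sched = LinearWarmup(CosineAnnealingDecay(0.001, 100), 2000, 0., 0.001)
lrs = []
for _ in range(4000):
    lrs.append(sched.last_lr)
    sched.step()
# the learning rate rises linearly from 0 to 0.001 over the first 2000 steps,
# then follows the wrapped cosine decay
print(lrs[0], lrs[1000], lrs[2000])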

IV. CIFAR-100 Experiment with ResNet50 + NAM Attention

4.1 Implementing the NAM attention module

In [17]
import paddle
import paddle.nn as nn
from paddle.nn import functional as F

class Channel_Att(nn.Layer):
    def __init__(self, channels=3, t=16):
        super(Channel_Att, self).__init__()
        self.channels = channels
        self.bn2 = nn.BatchNorm2D(self.channels)

    def forward(self, x):
        residual = x
        x = self.bn2(x)
        # per-channel weights: normalized absolute BN scaling factors (Eq. (2))
        weight_bn = self.bn2.weight.abs() / paddle.sum(self.bn2.weight.abs())
        # move channels last so the per-channel weights broadcast correctly
        x = x.transpose([0, 2, 3, 1])
        x = paddle.multiply(weight_bn, x)
        x = x.transpose([0, 3, 1, 2])
        x = F.sigmoid(x) * residual
        return x

class Att(nn.Layer):
    # only the channel submodule is used here; no_spatial is kept from the
    # reference implementation
    def __init__(self, channels=3, out_channels=None, no_spatial=True):
        super(Att, self).__init__()
        self.Channel_Att = Channel_Att(channels)

    def forward(self, x):
        x_out1 = self.Channel_Att(x)
        return x_out1
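A quick sanity check that the module is a drop-in, shape-preserving layer (a minimal sketch, assuming the cell above has been run; the channel count and tensor shape are arbitrary):

att = Att(channels=64)
x = paddle.randn([2, 64, 8, 8])
y = att(x)
print(y.shape)  # [2, 64, 8, 8] -- same shape as the input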

4.2 Building the ResNet-NAM network

This network definition works for ResNet18, ResNet34, ResNet50, ResNet101, and ResNet152 alike.

In [18]
# needed by _resnet below to download pretrained weights
from paddle.utils.download import get_weights_path_from_url

__all__ = []

model_urls = {
    'resnet18': ('https://paddle-hapi.bj.bcebos.com/models/resnet18.pdparams',
                 'cf548f46534aa3560945be4b95cd11c4'),
    'resnet34': ('https://paddle-hapi.bj.bcebos.com/models/resnet34.pdparams',
                 '8d2275cf8706028345f78ac0e1d31969'),
    'resnet50': ('https://paddle-hapi.bj.bcebos.com/models/resnet50.pdparams',
                 'ca6f485ee1ab0492d38f323885b0ad80'),
    'resnet101': ('https://paddle-hapi.bj.bcebos.com/models/resnet101.pdparams',
                  '02f35f034ca3858e1e54d4036443c92d'),
    'resnet152': ('https://paddle-hapi.bj.bcebos.com/models/resnet152.pdparams',
                  '7ad16a2f1e7333859ff986138630fd7a'),
}


class BasicBlock(nn.Layer):
    expansion = 1

    def __init__(self,
                 inplanes,
                 planes,
                 stride=1,
                 downsample=None,
                 groups=1,
                 base_width=64,
                 dilation=1,
                 norm_layer=None):
        super(BasicBlock, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm2D
        if dilation > 1:
            raise NotImplementedError(
                "Dilation > 1 not supported in BasicBlock")

        self.conv1 = nn.Conv2D(
            inplanes, planes, 3, padding=1, stride=stride, bias_attr=False)
        self.bn1 = norm_layer(planes)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2D(planes, planes, 3, padding=1, bias_attr=False)
        self.bn2 = norm_layer(planes)
        self.downsample = downsample
        self.stride = stride
        self.nam = Att(planes)  # NAM at the end of the block

    def forward(self, x):
        identity = x
        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)
        out = self.conv2(out)
        out = self.bn2(out)
        if self.downsample is not None:
            identity = self.downsample(x)
        out = self.nam(out)
        out += identity
        out = self.relu(out)
        return out


class BottleneckBlock(nn.Layer):

    expansion = 4

    def __init__(self,
                 inplanes,
                 planes,
                 stride=1,
                 downsample=None,
                 groups=1,
                 base_width=64,
                 dilation=1,
                 norm_layer=None):
        super(BottleneckBlock, self).__init__()
        if norm_layer is None:
            norm_layer = nn.BatchNorm2D
        width = int(planes * (base_width / 64.)) * groups
        self.conv1 = nn.Conv2D(inplanes, width, 1, bias_attr=False)
        self.bn1 = norm_layer(width)
        self.conv2 = nn.Conv2D(
            width,
            width,
            3,
            padding=dilation,
            stride=stride,
            groups=groups,
            dilation=dilation,
            bias_attr=False)
        self.bn2 = norm_layer(width)
        self.conv3 = nn.Conv2D(
            width, planes * self.expansion, 1, bias_attr=False)
        self.bn3 = norm_layer(planes * self.expansion)
        self.relu = nn.ReLU()
        self.downsample = downsample
        self.stride = stride
        self.nam = Att(planes * 4)  # NAM on the expanded output channels

    def forward(self, x):
        identity = x

        out = self.conv1(x)
        out = self.bn1(out)
        out = self.relu(out)

        out = self.conv2(out)
        out = self.bn2(out)
        out = self.relu(out)

        out = self.conv3(out)
        out = self.bn3(out)
        if self.downsample is not None:
            identity = self.downsample(x)
        out = self.nam(out)
        out += identity
        out = self.relu(out)
        return out


class ResNet(nn.Layer):
    """ResNet model from
    `"Deep Residual Learning for Image Recognition" <https://arxiv.org/pdf/1512.03385.pdf>`_

    Args:
        Block (BasicBlock|BottleneckBlock): block module of model.
        depth (int): layers of resnet, default: 50.
        num_classes (int): output dim of the last fc layer. If num_classes <= 0,
                            the last fc layer will not be defined.
                            Default: 100 (adapted here for CIFAR-100).
        with_pool (bool): use pool before the last fc layer or not. Default: True.

    Examples:
        .. code-block:: python

            from paddle.vision.models import ResNet
            from paddle.vision.models.resnet import BottleneckBlock, BasicBlock

            resnet50 = ResNet(BottleneckBlock, 50)

            resnet18 = ResNet(BasicBlock, 18)

    """

    def __init__(self, block, depth, num_classes=100, with_pool=True):
        super(ResNet, self).__init__()
        layer_cfg = {
            18: [2, 2, 2, 2],
            34: [3, 4, 6, 3],
            50: [3, 4, 6, 3],
            101: [3, 4, 23, 3],
            152: [3, 8, 36, 3]
        }
        layers = layer_cfg[depth]
        self.num_classes = num_classes
        self.with_pool = with_pool
        self._norm_layer = nn.BatchNorm2D

        self.inplanes = 64
        self.dilation = 1

        # Replace the original 7x7 stem convolution with a 3x3 one,
        # better suited to 32x32 CIFAR inputs
        self.conv1 = nn.Conv2D(
            3,
            self.inplanes,
            kernel_size=3,
            stride=1,
            padding=1,
            bias_attr=False)
        self.bn1 = self._norm_layer(self.inplanes)
        self.relu = nn.ReLU()
        self.layer1 = self._make_layer(block, 64, layers[0])
        self.layer2 = self._make_layer(block, 128, layers[1], stride=2)
        self.layer3 = self._make_layer(block, 256, layers[2], stride=2)
        self.layer4 = self._make_layer(block, 512, layers[3], stride=2)
        if with_pool:
            self.avgpool = nn.AdaptiveAvgPool2D((1, 1))
        if num_classes > 0:
            self.fc = nn.Linear(512 * block.expansion, num_classes)

    def _make_layer(self, block, planes, blocks, stride=1, dilate=False):
        norm_layer = self._norm_layer
        downsample = None
        previous_dilation = self.dilation
        if dilate:
            self.dilation *= stride
            stride = 1
        if stride != 1 or self.inplanes != planes * block.expansion:
            downsample = nn.Sequential(
                nn.Conv2D(
                    self.inplanes,
                    planes * block.expansion,
                    1,
                    stride=stride,
                    bias_attr=False),
                norm_layer(planes * block.expansion), )

        layers = []
        layers.append(
            block(self.inplanes, planes, stride, downsample, 1, 64,
                  previous_dilation, norm_layer))
        self.inplanes = planes * block.expansion
        for _ in range(1, blocks):
            layers.append(block(self.inplanes, planes, norm_layer=norm_layer))
        return nn.Sequential(*layers)

    def forward(self, x):
        x = self.conv1(x)
        x = self.bn1(x)
        x = self.relu(x)

        # Drop the stem max-pooling to preserve spatial resolution on CIFAR
        # x = self.maxpool(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        if self.with_pool:
            x = self.avgpool(x)
        if self.num_classes > 0:
            x = paddle.flatten(x, 1)
            x = self.fc(x)
        return x


def _resnet(arch, Block, depth, pretrained, **kwargs):
    model = ResNet(Block, depth, **kwargs)
    if pretrained:
        assert arch in model_urls, \
            "{} does not have a pretrained model now, you should set pretrained=False".format(arch)
        weight_path = get_weights_path_from_url(model_urls[arch][0],
                                                model_urls[arch][1])
        param = paddle.load(weight_path)
        model.set_dict(param)
    return model


def resnet50(pretrained=False, **kwargs):
    """ResNet 50-layer model

    Args:
        pretrained (bool): If True, returns a model pre-trained on ImageNet

    Examples:
        .. code-block:: python

            from paddle.vision.models import resnet50

            # build model
            model = resnet50()

            # build model and load imagenet pretrained weight
            # model = resnet50(pretrained=True)
    """
    return _resnet('resnet50', BottleneckBlock, 50, pretrained, **kwargs)


def resnet18(pretrained=False, **kwargs):
    return _resnet('resnet18', BasicBlock, 18, pretrained, **kwargs)
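Since each NAM module contributes only a single BatchNorm2D layer per block, the parameter overhead over a plain ResNet50 is small. A rough way to verify this (a minimal sketch, assuming the cell above has been run; '.nam.' matches the attribute name used in the blocks):

import numpy as np

net = resnet50()
nam_params = sum(
    int(np.prod(p.shape))
    for name, p in net.named_parameters()
    if '.nam.' in name)  # BN scale/shift parameters of the NAM modules
print(nam_params)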

4.3 Instantiating the network

Use summary to visualize the model's parameters.

In [19]
resnet = resnet50()
model = paddle.Model(resnet)
# visualize the model structure
model.summary((-1, 3, 32, 32))
-------------------------------------------------------------------------------
   Layer (type)         Input Shape          Output Shape         Param #    
===============================================================================
    Conv2D-165        [[1, 3, 32, 32]]     [1, 64, 32, 32]         1,728     
  BatchNorm2D-165    [[1, 64, 32, 32]]     [1, 64, 32, 32]          256      
      ReLU-54        [[1, 64, 32, 32]]     [1, 64, 32, 32]           0       
    Conv2D-167       [[1, 64, 32, 32]]     [1, 64, 32, 32]         4,096     
  BatchNorm2D-167    [[1, 64, 32, 32]]     [1, 64, 32, 32]          256      
      ReLU-55        [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
    Conv2D-168       [[1, 64, 32, 32]]     [1, 64, 32, 32]        36,864     
  BatchNorm2D-168    [[1, 64, 32, 32]]     [1, 64, 32, 32]          256      
    Conv2D-169       [[1, 64, 32, 32]]     [1, 256, 32, 32]       16,384     
  BatchNorm2D-169    [[1, 256, 32, 32]]    [1, 256, 32, 32]        1,024     
    Conv2D-166       [[1, 64, 32, 32]]     [1, 256, 32, 32]       16,384     
  BatchNorm2D-166    [[1, 256, 32, 32]]    [1, 256, 32, 32]        1,024     
  BatchNorm2D-170    [[1, 256, 32, 32]]    [1, 256, 32, 32]        1,024     
   Channel_Att-1     [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
       Att-1         [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
BottleneckBlock-50   [[1, 64, 32, 32]]     [1, 256, 32, 32]          0       
    Conv2D-170       [[1, 256, 32, 32]]    [1, 64, 32, 32]        16,384     
  BatchNorm2D-171    [[1, 64, 32, 32]]     [1, 64, 32, 32]          256      
      ReLU-56        [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
    Conv2D-171       [[1, 64, 32, 32]]     [1, 64, 32, 32]        36,864     
  BatchNorm2D-172    [[1, 64, 32, 32]]     [1, 64, 32, 32]          256      
    Conv2D-172       [[1, 64, 32, 32]]     [1, 256, 32, 32]       16,384     
  BatchNorm2D-173    [[1, 256, 32, 32]]    [1, 256, 32, 32]        1,024     
  BatchNorm2D-174    [[1, 256, 32, 32]]    [1, 256, 32, 32]        1,024     
   Channel_Att-2     [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
       Att-2         [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
BottleneckBlock-51   [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
    Conv2D-173       [[1, 256, 32, 32]]    [1, 64, 32, 32]        16,384     
  BatchNorm2D-175    [[1, 64, 32, 32]]     [1, 64, 32, 32]          256      
      ReLU-57        [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
    Conv2D-174       [[1, 64, 32, 32]]     [1, 64, 32, 32]        36,864     
  BatchNorm2D-176    [[1, 64, 32, 32]]     [1, 64, 32, 32]          256      
    Conv2D-175       [[1, 64, 32, 32]]     [1, 256, 32, 32]       16,384     
  BatchNorm2D-177    [[1, 256, 32, 32]]    [1, 256, 32, 32]        1,024     
  BatchNorm2D-178    [[1, 256, 32, 32]]    [1, 256, 32, 32]        1,024     
   Channel_Att-3     [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
       Att-3         [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
BottleneckBlock-52   [[1, 256, 32, 32]]    [1, 256, 32, 32]          0       
    Conv2D-177       [[1, 256, 32, 32]]    [1, 128, 32, 32]       32,768     
  BatchNorm2D-180    [[1, 128, 32, 32]]    [1, 128, 32, 32]         512      
      ReLU-58        [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
    Conv2D-178       [[1, 128, 32, 32]]    [1, 128, 16, 16]       147,456    
  BatchNorm2D-181    [[1, 128, 16, 16]]    [1, 128, 16, 16]         512      
    Conv2D-179       [[1, 128, 16, 16]]    [1, 512, 16, 16]       65,536     
  BatchNorm2D-182    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
    Conv2D-176       [[1, 256, 32, 32]]    [1, 512, 16, 16]       131,072    
  BatchNorm2D-179    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
  BatchNorm2D-183    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
   Channel_Att-4     [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
       Att-4         [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
BottleneckBlock-53   [[1, 256, 32, 32]]    [1, 512, 16, 16]          0       
    Conv2D-180       [[1, 512, 16, 16]]    [1, 128, 16, 16]       65,536     
  BatchNorm2D-184    [[1, 128, 16, 16]]    [1, 128, 16, 16]         512      
      ReLU-59        [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
    Conv2D-181       [[1, 128, 16, 16]]    [1, 128, 16, 16]       147,456    
  BatchNorm2D-185    [[1, 128, 16, 16]]    [1, 128, 16, 16]         512      
    Conv2D-182       [[1, 128, 16, 16]]    [1, 512, 16, 16]       65,536     
  BatchNorm2D-186    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
  BatchNorm2D-187    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
   Channel_Att-5     [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
       Att-5         [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
BottleneckBlock-54   [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
    Conv2D-183       [[1, 512, 16, 16]]    [1, 128, 16, 16]       65,536     
  BatchNorm2D-188    [[1, 128, 16, 16]]    [1, 128, 16, 16]         512      
      ReLU-60        [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
    Conv2D-184       [[1, 128, 16, 16]]    [1, 128, 16, 16]       147,456    
  BatchNorm2D-189    [[1, 128, 16, 16]]    [1, 128, 16, 16]         512      
    Conv2D-185       [[1, 128, 16, 16]]    [1, 512, 16, 16]       65,536     
  BatchNorm2D-190    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
  BatchNorm2D-191    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
   Channel_Att-6     [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
       Att-6         [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
BottleneckBlock-55   [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
    Conv2D-186       [[1, 512, 16, 16]]    [1, 128, 16, 16]       65,536     
  BatchNorm2D-192    [[1, 128, 16, 16]]    [1, 128, 16, 16]         512      
      ReLU-61        [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
    Conv2D-187       [[1, 128, 16, 16]]    [1, 128, 16, 16]       147,456    
  BatchNorm2D-193    [[1, 128, 16, 16]]    [1, 128, 16, 16]         512      
    Conv2D-188       [[1, 128, 16, 16]]    [1, 512, 16, 16]       65,536     
  BatchNorm2D-194    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
  BatchNorm2D-195    [[1, 512, 16, 16]]    [1, 512, 16, 16]        2,048     
   Channel_Att-7     [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
       Att-7         [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
BottleneckBlock-56   [[1, 512, 16, 16]]    [1, 512, 16, 16]          0       
    Conv2D-190       [[1, 512, 16, 16]]    [1, 256, 16, 16]       131,072    
  BatchNorm2D-197    [[1, 256, 16, 16]]    [1, 256, 16, 16]        1,024     
      ReLU-62        [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-191       [[1, 256, 16, 16]]     [1, 256, 8, 8]        589,824    
  BatchNorm2D-198     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
    Conv2D-192        [[1, 256, 8, 8]]     [1, 1024, 8, 8]        262,144    
  BatchNorm2D-199    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
    Conv2D-189       [[1, 512, 16, 16]]    [1, 1024, 8, 8]        524,288    
  BatchNorm2D-196    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  BatchNorm2D-200    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
   Channel_Att-8     [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
       Att-8         [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
BottleneckBlock-57   [[1, 512, 16, 16]]    [1, 1024, 8, 8]           0       
    Conv2D-193       [[1, 1024, 8, 8]]      [1, 256, 8, 8]        262,144    
  BatchNorm2D-201     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
      ReLU-63        [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-194        [[1, 256, 8, 8]]      [1, 256, 8, 8]        589,824    
  BatchNorm2D-202     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
    Conv2D-195        [[1, 256, 8, 8]]     [1, 1024, 8, 8]        262,144    
  BatchNorm2D-203    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  BatchNorm2D-204    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
   Channel_Att-9     [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
       Att-9         [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
BottleneckBlock-58   [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-196       [[1, 1024, 8, 8]]      [1, 256, 8, 8]        262,144    
  BatchNorm2D-205     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
      ReLU-64        [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-197        [[1, 256, 8, 8]]      [1, 256, 8, 8]        589,824    
  BatchNorm2D-206     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
    Conv2D-198        [[1, 256, 8, 8]]     [1, 1024, 8, 8]        262,144    
  BatchNorm2D-207    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  BatchNorm2D-208    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  Channel_Att-10     [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
      Att-10         [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
BottleneckBlock-59   [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-199       [[1, 1024, 8, 8]]      [1, 256, 8, 8]        262,144    
  BatchNorm2D-209     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
      ReLU-65        [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-200        [[1, 256, 8, 8]]      [1, 256, 8, 8]        589,824    
  BatchNorm2D-210     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
    Conv2D-201        [[1, 256, 8, 8]]     [1, 1024, 8, 8]        262,144    
  BatchNorm2D-211    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  BatchNorm2D-212    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  Channel_Att-11     [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
      Att-11         [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
BottleneckBlock-60   [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-202       [[1, 1024, 8, 8]]      [1, 256, 8, 8]        262,144    
  BatchNorm2D-213     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
      ReLU-66        [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-203        [[1, 256, 8, 8]]      [1, 256, 8, 8]        589,824    
  BatchNorm2D-214     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
    Conv2D-204        [[1, 256, 8, 8]]     [1, 1024, 8, 8]        262,144    
  BatchNorm2D-215    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  BatchNorm2D-216    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  Channel_Att-12     [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
      Att-12         [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
BottleneckBlock-61   [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-205       [[1, 1024, 8, 8]]      [1, 256, 8, 8]        262,144    
  BatchNorm2D-217     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
      ReLU-67        [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-206        [[1, 256, 8, 8]]      [1, 256, 8, 8]        589,824    
  BatchNorm2D-218     [[1, 256, 8, 8]]      [1, 256, 8, 8]         1,024     
    Conv2D-207        [[1, 256, 8, 8]]     [1, 1024, 8, 8]        262,144    
  BatchNorm2D-219    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  BatchNorm2D-220    [[1, 1024, 8, 8]]     [1, 1024, 8, 8]         4,096     
  Channel_Att-13     [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
      Att-13         [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
BottleneckBlock-62   [[1, 1024, 8, 8]]     [1, 1024, 8, 8]           0       
    Conv2D-209       [[1, 1024, 8, 8]]      [1, 512, 8, 8]        524,288    
  BatchNorm2D-222     [[1, 512, 8, 8]]      [1, 512, 8, 8]         2,048     
      ReLU-68        [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
    Conv2D-210        [[1, 512, 8, 8]]      [1, 512, 4, 4]       2,359,296   
  BatchNorm2D-223     [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
    Conv2D-211        [[1, 512, 4, 4]]     [1, 2048, 4, 4]       1,048,576   
  BatchNorm2D-224    [[1, 2048, 4, 4]]     [1, 2048, 4, 4]         8,192     
    Conv2D-208       [[1, 1024, 8, 8]]     [1, 2048, 4, 4]       2,097,152   
  BatchNorm2D-221    [[1, 2048, 4, 4]]     [1, 2048, 4, 4]         8,192     
  BatchNorm2D-225    [[1, 2048, 4, 4]]     [1, 2048, 4, 4]         8,192     
  Channel_Att-14     [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
      Att-14         [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
BottleneckBlock-63   [[1, 1024, 8, 8]]     [1, 2048, 4, 4]           0       
    Conv2D-212       [[1, 2048, 4, 4]]      [1, 512, 4, 4]       1,048,576   
  BatchNorm2D-226     [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
      ReLU-69        [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
    Conv2D-213        [[1, 512, 4, 4]]      [1, 512, 4, 4]       2,359,296   
  BatchNorm2D-227     [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
    Conv2D-214        [[1, 512, 4, 4]]     [1, 2048, 4, 4]       1,048,576   
  BatchNorm2D-228    [[1, 2048, 4, 4]]     [1, 2048, 4, 4]         8,192     
  BatchNorm2D-229    [[1, 2048, 4, 4]]     [1, 2048, 4, 4]         8,192     
  Channel_Att-15     [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
      Att-15         [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
BottleneckBlock-64   [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
    Conv2D-215       [[1, 2048, 4, 4]]      [1, 512, 4, 4]       1,048,576   
  BatchNorm2D-230     [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
      ReLU-70        [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
    Conv2D-216        [[1, 512, 4, 4]]      [1, 512, 4, 4]       2,359,296   
  BatchNorm2D-231     [[1, 512, 4, 4]]      [1, 512, 4, 4]         2,048     
    Conv2D-217        [[1, 512, 4, 4]]     [1, 2048, 4, 4]       1,048,576   
  BatchNorm2D-232    [[1, 2048, 4, 4]]     [1, 2048, 4, 4]         8,192     
  BatchNorm2D-233    [[1, 2048, 4, 4]]     [1, 2048, 4, 4]         8,192     
  Channel_Att-16     [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
      Att-16         [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
BottleneckBlock-65   [[1, 2048, 4, 4]]     [1, 2048, 4, 4]           0       
AdaptiveAvgPool2D-4  [[1, 2048, 4, 4]]     [1, 2048, 1, 1]           0       
     Linear-4           [[1, 2048]]            [1, 100]           204,900    
===============================================================================
Total params: 23,818,788
Trainable params: 23,652,132
Non-trainable params: 166,656
-------------------------------------------------------------------------------
Input size (MB): 0.01
Forward/backward pass size (MB): 121.64
Params size (MB): 90.86
Estimated Total Size (MB): 212.51
-------------------------------------------------------------------------------
       
{'total_params': 23818788, 'trainable_params': 23652132}
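Note that every Channel_Att / Att layer in the summary reports zero parameters: the per-channel scale factors it uses come from the extra BatchNorm2D layer listed immediately above it. Below is a minimal sketch of that channel-attention idea, assuming Paddle's standard BatchNorm2D API; the class and variable names are illustrative, not the exact module defined earlier in this project.

import paddle
import paddle.nn as nn
import paddle.nn.functional as F

class ChannelAttSketch(nn.Layer):
    # Hypothetical restatement of NAM channel attention: the BN scale
    # factors (gamma) serve as per-channel importance weights.
    def __init__(self, channels):
        super().__init__()
        self.bn = nn.BatchNorm2D(channels)

    def forward(self, x):
        residual = x
        x = self.bn(x)
        gamma = self.bn.weight.abs()          # |gamma| per channel
        w = gamma / gamma.sum()               # normalize to channel weights
        x = x * w.reshape([1, -1, 1, 1])      # reweight each channel
        return F.sigmoid(x) * residual        # gate the original features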
               

4.4 Model training

We train for 100 epochs with the Momentum optimizer (linear warmup followed by cosine-annealing learning-rate decay) and the cross-entropy loss.

In [24]
from paddle.optimizer.lr import CosineAnnealingDecay, LinearWarmup

# The VisualDL callback must exist before it is passed to model.fit
callback_visualdl = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir-NAM')

model.prepare(
    paddle.optimizer.Momentum(
        learning_rate=LinearWarmup(CosineAnnealingDecay(0.001, 100), 2000, 0., 0.001),
        momentum=0.9,
        parameters=model.parameters(),
        weight_decay=5e-4),
    paddle.nn.CrossEntropyLoss(),
    paddle.metric.Accuracy(topk=(1, 5)))

# start training
model.fit(train_dataset,
          eval_dataset,
          epochs=100,       # number of training epochs
          batch_size=128,   # samples per batch
          verbose=1,        # logging mode
          shuffle=True,     # shuffle the training data each epoch
          num_workers=4,
          callbacks=callback_visualdl,
          )
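To see the shape of this schedule, the scheduler can be stepped by hand. The probe below is not from the original notebook, and how often model.fit actually advances the scheduler depends on the callback configuration; it only illustrates the warmup-then-cosine curve.

from paddle.optimizer.lr import CosineAnnealingDecay, LinearWarmup

# Same hyper-parameters as above: warm up linearly from 0 to 1e-3 over
# the first 2000 steps, then follow a cosine curve with T_max=100.
sched = LinearWarmup(CosineAnnealingDecay(0.001, 100), 2000, 0., 0.001)

for step in range(2200):
    if step in (0, 1000, 2000, 2100):
        print(step, sched.get_lr())   # inspect the current learning rate
    sched.step()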
       
The loss value printed in the log is the current step, and the metric is the average value of previous steps.
Epoch 1/100
step 391/391 [==============================] - loss: 4.8497 - acc_top1: 0.0111 - acc_top5: 0.0573 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.8184 - acc_top1: 0.0137 - acc_top5: 0.0692 - 46ms/step          
Eval samples: 10000
Epoch 2/100
step 391/391 [==============================] - loss: 4.7389 - acc_top1: 0.0179 - acc_top5: 0.0762 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.5421 - acc_top1: 0.0212 - acc_top5: 0.0901 - 48ms/step          
Eval samples: 10000
Epoch 3/100
step 391/391 [==============================] - loss: 4.5462 - acc_top1: 0.0318 - acc_top5: 0.1244 - 91ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.5771 - acc_top1: 0.0473 - acc_top5: 0.1673 - 45ms/step          
Eval samples: 10000
Epoch 4/100
step 391/391 [==============================] - loss: 4.0376 - acc_top1: 0.0583 - acc_top5: 0.2006 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.2838 - acc_top1: 0.0760 - acc_top5: 0.2449 - 47ms/step          
Eval samples: 10000
Epoch 5/100
step 391/391 [==============================] - loss: 3.9339 - acc_top1: 0.0872 - acc_top5: 0.2677 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 4.0610 - acc_top1: 0.1020 - acc_top5: 0.2914 - 46ms/step          
Eval samples: 10000
Epoch 6/100
step 391/391 [==============================] - loss: 3.5850 - acc_top1: 0.1119 - acc_top5: 0.3213 - 94ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.9599 - acc_top1: 0.1227 - acc_top5: 0.3317 - 51ms/step          
Eval samples: 10000
Epoch 7/100
step 391/391 [==============================] - loss: 3.7031 - acc_top1: 0.1271 - acc_top5: 0.3470 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.9104 - acc_top1: 0.1420 - acc_top5: 0.3642 - 46ms/step          
Eval samples: 10000
Epoch 8/100
step 391/391 [==============================] - loss: 3.6172 - acc_top1: 0.1409 - acc_top5: 0.3728 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.7252 - acc_top1: 0.1502 - acc_top5: 0.3806 - 45ms/step          
Eval samples: 10000
Epoch 9/100
step 391/391 [==============================] - loss: 3.5688 - acc_top1: 0.1514 - acc_top5: 0.3920 - 100ms/step         
Eval begin...
step 79/79 [==============================] - loss: 3.6717 - acc_top1: 0.1567 - acc_top5: 0.3929 - 46ms/step          
Eval samples: 10000
Epoch 10/100
step 391/391 [==============================] - loss: 3.6321 - acc_top1: 0.1633 - acc_top5: 0.4103 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.6300 - acc_top1: 0.1653 - acc_top5: 0.4026 - 48ms/step          
Eval samples: 10000
Epoch 11/100
step 391/391 [==============================] - loss: 3.5487 - acc_top1: 0.1704 - acc_top5: 0.4237 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.7029 - acc_top1: 0.1760 - acc_top5: 0.4197 - 46ms/step          
Eval samples: 10000
Epoch 12/100
step 391/391 [==============================] - loss: 3.8424 - acc_top1: 0.1797 - acc_top5: 0.4357 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4210 - acc_top1: 0.1766 - acc_top5: 0.4333 - 48ms/step          
Eval samples: 10000
Epoch 13/100
step 391/391 [==============================] - loss: 3.6314 - acc_top1: 0.1887 - acc_top5: 0.4541 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4551 - acc_top1: 0.1874 - acc_top5: 0.4394 - 49ms/step          
Eval samples: 10000
Epoch 14/100
step 391/391 [==============================] - loss: 2.9399 - acc_top1: 0.1986 - acc_top5: 0.4647 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4774 - acc_top1: 0.1918 - acc_top5: 0.4490 - 47ms/step          
Eval samples: 10000
Epoch 15/100
step 391/391 [==============================] - loss: 3.5771 - acc_top1: 0.2082 - acc_top5: 0.4766 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2711 - acc_top1: 0.1947 - acc_top5: 0.4522 - 46ms/step          
Eval samples: 10000
Epoch 16/100
step 391/391 [==============================] - loss: 3.3304 - acc_top1: 0.2145 - acc_top5: 0.4872 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1900 - acc_top1: 0.2015 - acc_top5: 0.4576 - 45ms/step          
Eval samples: 10000
Epoch 17/100
step 391/391 [==============================] - loss: 3.5043 - acc_top1: 0.2237 - acc_top5: 0.4994 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.4247 - acc_top1: 0.1992 - acc_top5: 0.4625 - 45ms/step          
Eval samples: 10000
Epoch 18/100
step 391/391 [==============================] - loss: 3.3019 - acc_top1: 0.2264 - acc_top5: 0.5060 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2735 - acc_top1: 0.2071 - acc_top5: 0.4673 - 45ms/step          
Eval samples: 10000
Epoch 19/100
step 391/391 [==============================] - loss: 3.1922 - acc_top1: 0.2345 - acc_top5: 0.5142 - 91ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2611 - acc_top1: 0.2136 - acc_top5: 0.4784 - 46ms/step          
Eval samples: 10000
Epoch 20/100
step 391/391 [==============================] - loss: 2.9049 - acc_top1: 0.2388 - acc_top5: 0.5234 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2051 - acc_top1: 0.2146 - acc_top5: 0.4867 - 45ms/step          
Eval samples: 10000
Epoch 21/100
step 391/391 [==============================] - loss: 2.9065 - acc_top1: 0.2464 - acc_top5: 0.5364 - 91ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.3232 - acc_top1: 0.2204 - acc_top5: 0.4860 - 45ms/step          
Eval samples: 10000
Epoch 22/100
step 391/391 [==============================] - loss: 2.8180 - acc_top1: 0.2546 - acc_top5: 0.5443 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0776 - acc_top1: 0.2184 - acc_top5: 0.4927 - 46ms/step          
Eval samples: 10000
Epoch 23/100
step 391/391 [==============================] - loss: 3.3415 - acc_top1: 0.2574 - acc_top5: 0.5520 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1972 - acc_top1: 0.2295 - acc_top5: 0.4959 - 45ms/step          
Eval samples: 10000
Epoch 24/100
step 391/391 [==============================] - loss: 3.1298 - acc_top1: 0.2646 - acc_top5: 0.5576 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0983 - acc_top1: 0.2341 - acc_top5: 0.5087 - 45ms/step          
Eval samples: 10000
Epoch 25/100
step 391/391 [==============================] - loss: 2.9851 - acc_top1: 0.2744 - acc_top5: 0.5661 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9562 - acc_top1: 0.2353 - acc_top5: 0.5048 - 45ms/step          
Eval samples: 10000
Epoch 26/100
step 391/391 [==============================] - loss: 2.9751 - acc_top1: 0.2788 - acc_top5: 0.5743 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1786 - acc_top1: 0.2448 - acc_top5: 0.5172 - 46ms/step          
Eval samples: 10000
Epoch 27/100
step 391/391 [==============================] - loss: 3.0608 - acc_top1: 0.2819 - acc_top5: 0.5805 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9408 - acc_top1: 0.2470 - acc_top5: 0.5202 - 45ms/step          
Eval samples: 10000
Epoch 28/100
step 391/391 [==============================] - loss: 3.0520 - acc_top1: 0.2899 - acc_top5: 0.5886 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1723 - acc_top1: 0.2506 - acc_top5: 0.5260 - 46ms/step          
Eval samples: 10000
Epoch 29/100
step 391/391 [==============================] - loss: 2.7072 - acc_top1: 0.2943 - acc_top5: 0.5966 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9665 - acc_top1: 0.2507 - acc_top5: 0.5346 - 45ms/step          
Eval samples: 10000
Epoch 30/100
step 391/391 [==============================] - loss: 2.9159 - acc_top1: 0.3018 - acc_top5: 0.6030 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9954 - acc_top1: 0.2521 - acc_top5: 0.5332 - 45ms/step          
Eval samples: 10000
Epoch 31/100
step 391/391 [==============================] - loss: 2.6270 - acc_top1: 0.3087 - acc_top5: 0.6109 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8555 - acc_top1: 0.2601 - acc_top5: 0.5434 - 46ms/step          
Eval samples: 10000
Epoch 32/100
step 391/391 [==============================] - loss: 3.0665 - acc_top1: 0.3113 - acc_top5: 0.6158 - 94ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0362 - acc_top1: 0.2614 - acc_top5: 0.5450 - 45ms/step          
Eval samples: 10000
Epoch 33/100
step 391/391 [==============================] - loss: 3.3042 - acc_top1: 0.3189 - acc_top5: 0.6238 - 94ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9307 - acc_top1: 0.2604 - acc_top5: 0.5492 - 45ms/step          
Eval samples: 10000
Epoch 34/100
step 391/391 [==============================] - loss: 2.6142 - acc_top1: 0.3244 - acc_top5: 0.6288 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9296 - acc_top1: 0.2680 - acc_top5: 0.5458 - 45ms/step          
Eval samples: 10000
Epoch 35/100
step 391/391 [==============================] - loss: 2.8182 - acc_top1: 0.3319 - acc_top5: 0.6381 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8653 - acc_top1: 0.2727 - acc_top5: 0.5506 - 45ms/step          
Eval samples: 10000
Epoch 36/100
step 391/391 [==============================] - loss: 2.6769 - acc_top1: 0.3386 - acc_top5: 0.6423 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1202 - acc_top1: 0.2668 - acc_top5: 0.5518 - 46ms/step          
Eval samples: 10000
Epoch 37/100
step 391/391 [==============================] - loss: 2.3541 - acc_top1: 0.3409 - acc_top5: 0.6498 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7944 - acc_top1: 0.2655 - acc_top5: 0.5587 - 46ms/step          
Eval samples: 10000
Epoch 38/100
step 391/391 [==============================] - loss: 2.6344 - acc_top1: 0.3489 - acc_top5: 0.6554 - 93ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.1045 - acc_top1: 0.2676 - acc_top5: 0.5529 - 46ms/step          
Eval samples: 10000
Epoch 39/100
step 391/391 [==============================] - loss: 2.4571 - acc_top1: 0.3537 - acc_top5: 0.6611 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6990 - acc_top1: 0.2730 - acc_top5: 0.5585 - 46ms/step          
Eval samples: 10000
Epoch 40/100
step 391/391 [==============================] - loss: 2.5621 - acc_top1: 0.3624 - acc_top5: 0.6693 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7441 - acc_top1: 0.2717 - acc_top5: 0.5591 - 46ms/step          
Eval samples: 10000
Epoch 41/100
step 391/391 [==============================] - loss: 2.6501 - acc_top1: 0.3660 - acc_top5: 0.6726 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9928 - acc_top1: 0.2753 - acc_top5: 0.5627 - 49ms/step          
Eval samples: 10000
Epoch 42/100
step 391/391 [==============================] - loss: 2.3623 - acc_top1: 0.3721 - acc_top5: 0.6811 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7311 - acc_top1: 0.2707 - acc_top5: 0.5699 - 45ms/step          
Eval samples: 10000
Epoch 43/100
step 391/391 [==============================] - loss: 2.4758 - acc_top1: 0.3746 - acc_top5: 0.6850 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7034 - acc_top1: 0.2764 - acc_top5: 0.5635 - 46ms/step          
Eval samples: 10000
Epoch 44/100
step 391/391 [==============================] - loss: 2.5367 - acc_top1: 0.3816 - acc_top5: 0.6903 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9082 - acc_top1: 0.2750 - acc_top5: 0.5644 - 46ms/step          
Eval samples: 10000
Epoch 45/100
step 391/391 [==============================] - loss: 2.5759 - acc_top1: 0.3872 - acc_top5: 0.6998 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7683 - acc_top1: 0.2749 - acc_top5: 0.5676 - 46ms/step          
Eval samples: 10000
Epoch 46/100
step 391/391 [==============================] - loss: 2.5877 - acc_top1: 0.3934 - acc_top5: 0.7032 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7031 - acc_top1: 0.2860 - acc_top5: 0.5820 - 46ms/step          
Eval samples: 10000
Epoch 47/100
step 391/391 [==============================] - loss: 2.4155 - acc_top1: 0.3974 - acc_top5: 0.7081 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7535 - acc_top1: 0.2755 - acc_top5: 0.5733 - 51ms/step          
Eval samples: 10000
Epoch 48/100
step 391/391 [==============================] - loss: 2.5510 - acc_top1: 0.4023 - acc_top5: 0.7103 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0216 - acc_top1: 0.2804 - acc_top5: 0.5679 - 46ms/step          
Eval samples: 10000
Epoch 49/100
step 391/391 [==============================] - loss: 2.5306 - acc_top1: 0.4058 - acc_top5: 0.7172 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7141 - acc_top1: 0.2944 - acc_top5: 0.5844 - 45ms/step          
Eval samples: 10000
Epoch 50/100
step 391/391 [==============================] - loss: 2.2112 - acc_top1: 0.4125 - acc_top5: 0.7234 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7876 - acc_top1: 0.2884 - acc_top5: 0.5834 - 46ms/step          
Eval samples: 10000
Epoch 51/100
step 391/391 [==============================] - loss: 2.3451 - acc_top1: 0.4178 - acc_top5: 0.7263 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8956 - acc_top1: 0.3044 - acc_top5: 0.5890 - 46ms/step          
Eval samples: 10000
Epoch 52/100
step 391/391 [==============================] - loss: 2.2558 - acc_top1: 0.4235 - acc_top5: 0.7319 - 95ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9233 - acc_top1: 0.2966 - acc_top5: 0.5875 - 47ms/step          
Eval samples: 10000
Epoch 53/100
step 391/391 [==============================] - loss: 2.5057 - acc_top1: 0.4301 - acc_top5: 0.7376 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5221 - acc_top1: 0.3001 - acc_top5: 0.5928 - 45ms/step          
Eval samples: 10000
Epoch 54/100
step 391/391 [==============================] - loss: 2.8381 - acc_top1: 0.4361 - acc_top5: 0.7439 - 90ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6398 - acc_top1: 0.2976 - acc_top5: 0.5876 - 47ms/step          
Eval samples: 10000
Epoch 55/100
step 391/391 [==============================] - loss: 1.9737 - acc_top1: 0.4441 - acc_top5: 0.7489 - 91ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7225 - acc_top1: 0.2969 - acc_top5: 0.5878 - 45ms/step          
Eval samples: 10000
Epoch 56/100
step 391/391 [==============================] - loss: 2.1371 - acc_top1: 0.4475 - acc_top5: 0.7523 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5232 - acc_top1: 0.3012 - acc_top5: 0.5955 - 44ms/step          
Eval samples: 10000
Epoch 57/100
step 391/391 [==============================] - loss: 2.0962 - acc_top1: 0.4511 - acc_top5: 0.7565 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5724 - acc_top1: 0.3050 - acc_top5: 0.5974 - 45ms/step          
Eval samples: 10000
Epoch 58/100
step 391/391 [==============================] - loss: 1.9895 - acc_top1: 0.4586 - acc_top5: 0.7648 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6998 - acc_top1: 0.3015 - acc_top5: 0.5952 - 45ms/step          
Eval samples: 10000
Epoch 59/100
step 391/391 [==============================] - loss: 2.0773 - acc_top1: 0.4653 - acc_top5: 0.7685 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.4346 - acc_top1: 0.3036 - acc_top5: 0.5966 - 45ms/step          
Eval samples: 10000
Epoch 60/100
step 391/391 [==============================] - loss: 2.3247 - acc_top1: 0.4735 - acc_top5: 0.7756 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5396 - acc_top1: 0.3053 - acc_top5: 0.5955 - 45ms/step          
Eval samples: 10000
Epoch 61/100
step 391/391 [==============================] - loss: 1.6406 - acc_top1: 0.4773 - acc_top5: 0.7760 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.2420 - acc_top1: 0.2973 - acc_top5: 0.5894 - 45ms/step          
Eval samples: 10000
Epoch 62/100
step 391/391 [==============================] - loss: 1.6518 - acc_top1: 0.4863 - acc_top5: 0.7847 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6234 - acc_top1: 0.2973 - acc_top5: 0.5957 - 46ms/step          
Eval samples: 10000
Epoch 63/100
step 391/391 [==============================] - loss: 2.3919 - acc_top1: 0.4912 - acc_top5: 0.7874 - 90ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6038 - acc_top1: 0.3006 - acc_top5: 0.5924 - 45ms/step          
Eval samples: 10000
Epoch 64/100
step 391/391 [==============================] - loss: 2.2112 - acc_top1: 0.4944 - acc_top5: 0.7904 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7318 - acc_top1: 0.2889 - acc_top5: 0.5861 - 45ms/step          
Eval samples: 10000
Epoch 65/100
step 391/391 [==============================] - loss: 2.1017 - acc_top1: 0.5031 - acc_top5: 0.7968 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7789 - acc_top1: 0.2967 - acc_top5: 0.5918 - 44ms/step          
Eval samples: 10000
Epoch 66/100
step 391/391 [==============================] - loss: 2.0587 - acc_top1: 0.5076 - acc_top5: 0.8031 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9333 - acc_top1: 0.2832 - acc_top5: 0.5723 - 46ms/step          
Eval samples: 10000
Epoch 67/100
step 391/391 [==============================] - loss: 1.9153 - acc_top1: 0.5104 - acc_top5: 0.8038 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 3.0879 - acc_top1: 0.3011 - acc_top5: 0.5962 - 44ms/step          
Eval samples: 10000
Epoch 68/100
step 391/391 [==============================] - loss: 1.8406 - acc_top1: 0.5135 - acc_top5: 0.8097 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6704 - acc_top1: 0.3029 - acc_top5: 0.5970 - 46ms/step          
Eval samples: 10000
Epoch 69/100
step 391/391 [==============================] - loss: 1.7613 - acc_top1: 0.5165 - acc_top5: 0.8114 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.3267 - acc_top1: 0.3056 - acc_top5: 0.6006 - 44ms/step          
Eval samples: 10000
Epoch 70/100
step 391/391 [==============================] - loss: 1.7512 - acc_top1: 0.5238 - acc_top5: 0.8148 - 93ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.1417 - acc_top1: 0.3056 - acc_top5: 0.5988 - 47ms/step          
Eval samples: 10000
Epoch 71/100
step 391/391 [==============================] - loss: 1.9838 - acc_top1: 0.5281 - acc_top5: 0.8174 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5933 - acc_top1: 0.3185 - acc_top5: 0.6076 - 53ms/step          
Eval samples: 10000
Epoch 72/100
step 391/391 [==============================] - loss: 1.6904 - acc_top1: 0.5340 - acc_top5: 0.8231 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6118 - acc_top1: 0.3185 - acc_top5: 0.6119 - 45ms/step          
Eval samples: 10000
Epoch 73/100
step 391/391 [==============================] - loss: 1.7876 - acc_top1: 0.5394 - acc_top5: 0.8273 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6305 - acc_top1: 0.3168 - acc_top5: 0.6092 - 45ms/step          
Eval samples: 10000
Epoch 74/100
step 391/391 [==============================] - loss: 1.8091 - acc_top1: 0.5470 - acc_top5: 0.8321 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7842 - acc_top1: 0.3246 - acc_top5: 0.6078 - 46ms/step          
Eval samples: 10000
Epoch 75/100
step 391/391 [==============================] - loss: 1.5497 - acc_top1: 0.5534 - acc_top5: 0.8370 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8507 - acc_top1: 0.3188 - acc_top5: 0.6101 - 45ms/step          
Eval samples: 10000
Epoch 76/100
step 391/391 [==============================] - loss: 2.1434 - acc_top1: 0.5572 - acc_top5: 0.8404 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8459 - acc_top1: 0.3149 - acc_top5: 0.6078 - 46ms/step          
Eval samples: 10000
Epoch 77/100
step 391/391 [==============================] - loss: 1.5916 - acc_top1: 0.5660 - acc_top5: 0.8453 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.2425 - acc_top1: 0.3205 - acc_top5: 0.6156 - 47ms/step          
Eval samples: 10000
Epoch 78/100
step 391/391 [==============================] - loss: 1.9925 - acc_top1: 0.5728 - acc_top5: 0.8486 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.2278 - acc_top1: 0.3160 - acc_top5: 0.6146 - 45ms/step          
Eval samples: 10000
Epoch 79/100
step 391/391 [==============================] - loss: 1.7550 - acc_top1: 0.5779 - acc_top5: 0.8541 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.3391 - acc_top1: 0.3269 - acc_top5: 0.6111 - 47ms/step          
Eval samples: 10000
Epoch 80/100
step 391/391 [==============================] - loss: 1.7625 - acc_top1: 0.5830 - acc_top5: 0.8574 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5607 - acc_top1: 0.3062 - acc_top5: 0.6047 - 45ms/step          
Eval samples: 10000
Epoch 81/100
step 391/391 [==============================] - loss: 1.2804 - acc_top1: 0.5859 - acc_top5: 0.8603 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6926 - acc_top1: 0.3219 - acc_top5: 0.6136 - 46ms/step          
Eval samples: 10000
Epoch 82/100
step 391/391 [==============================] - loss: 1.5730 - acc_top1: 0.5913 - acc_top5: 0.8651 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.4792 - acc_top1: 0.3106 - acc_top5: 0.6033 - 45ms/step          
Eval samples: 10000
Epoch 83/100
step 391/391 [==============================] - loss: 1.7747 - acc_top1: 0.5996 - acc_top5: 0.8669 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.4888 - acc_top1: 0.3114 - acc_top5: 0.5989 - 44ms/step          
Eval samples: 10000
Epoch 84/100
step 391/391 [==============================] - loss: 1.3741 - acc_top1: 0.6046 - acc_top5: 0.8708 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5207 - acc_top1: 0.3173 - acc_top5: 0.6019 - 45ms/step          
Eval samples: 10000
Epoch 85/100
step 391/391 [==============================] - loss: 1.6394 - acc_top1: 0.6142 - acc_top5: 0.8751 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.9659 - acc_top1: 0.3062 - acc_top5: 0.5897 - 45ms/step          
Eval samples: 10000
Epoch 86/100
step 391/391 [==============================] - loss: 1.8119 - acc_top1: 0.6169 - acc_top5: 0.8782 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5617 - acc_top1: 0.3052 - acc_top5: 0.6028 - 45ms/step          
Eval samples: 10000
Epoch 87/100
step 391/391 [==============================] - loss: 1.1349 - acc_top1: 0.6194 - acc_top5: 0.8820 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7186 - acc_top1: 0.3088 - acc_top5: 0.5979 - 49ms/step          
Eval samples: 10000
Epoch 88/100
step 391/391 [==============================] - loss: 1.4958 - acc_top1: 0.6297 - acc_top5: 0.8854 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.3525 - acc_top1: 0.3133 - acc_top5: 0.6022 - 45ms/step          
Eval samples: 10000
Epoch 89/100
step 391/391 [==============================] - loss: 1.5459 - acc_top1: 0.6318 - acc_top5: 0.8883 - 86ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6850 - acc_top1: 0.3091 - acc_top5: 0.6042 - 46ms/step          
Eval samples: 10000
Epoch 90/100
step 391/391 [==============================] - loss: 1.4070 - acc_top1: 0.6391 - acc_top5: 0.8911 - 90ms/step          
Eval begin...
step 79/79 [==============================] - loss: 1.8997 - acc_top1: 0.3032 - acc_top5: 0.5996 - 45ms/step          
Eval samples: 10000
Epoch 91/100
step 391/391 [==============================] - loss: 1.7658 - acc_top1: 0.6371 - acc_top5: 0.8925 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.6297 - acc_top1: 0.3061 - acc_top5: 0.5952 - 46ms/step          
Eval samples: 10000
Epoch 92/100
step 391/391 [==============================] - loss: 1.1509 - acc_top1: 0.6494 - acc_top5: 0.8982 - 91ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.4249 - acc_top1: 0.3208 - acc_top5: 0.6074 - 46ms/step          
Eval samples: 10000
Epoch 93/100
step 391/391 [==============================] - loss: 1.1956 - acc_top1: 0.6519 - acc_top5: 0.8989 - 89ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7847 - acc_top1: 0.3181 - acc_top5: 0.6033 - 48ms/step          
Eval samples: 10000
Epoch 94/100
step 391/391 [==============================] - loss: 1.6424 - acc_top1: 0.6524 - acc_top5: 0.9005 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5296 - acc_top1: 0.3200 - acc_top5: 0.6046 - 46ms/step          
Eval samples: 10000
Epoch 95/100
step 391/391 [==============================] - loss: 1.2563 - acc_top1: 0.6582 - acc_top5: 0.9028 - 85ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5143 - acc_top1: 0.3215 - acc_top5: 0.6111 - 47ms/step          
Eval samples: 10000
Epoch 96/100
step 391/391 [==============================] - loss: 1.4293 - acc_top1: 0.6614 - acc_top5: 0.9047 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.1475 - acc_top1: 0.3281 - acc_top5: 0.6183 - 45ms/step          
Eval samples: 10000
Epoch 97/100
step 391/391 [==============================] - loss: 1.2284 - acc_top1: 0.6714 - acc_top5: 0.9101 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.7041 - acc_top1: 0.3215 - acc_top5: 0.6118 - 46ms/step          
Eval samples: 10000
Epoch 98/100
step 391/391 [==============================] - loss: 1.2298 - acc_top1: 0.6817 - acc_top5: 0.9140 - 87ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.5701 - acc_top1: 0.3266 - acc_top5: 0.6145 - 47ms/step          
Eval samples: 10000
Epoch 99/100
step 391/391 [==============================] - loss: 1.2673 - acc_top1: 0.6866 - acc_top5: 0.9168 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.4477 - acc_top1: 0.3184 - acc_top5: 0.6114 - 45ms/step          
Eval samples: 10000
Epoch 100/100
step 391/391 [==============================] - loss: 1.2454 - acc_top1: 0.6918 - acc_top5: 0.9195 - 88ms/step          
Eval begin...
step 79/79 [==============================] - loss: 2.8776 - acc_top1: 0.3278 - acc_top5: 0.6141 - 45ms/step          
Eval samples: 10000
       

五、Experimental comparison

As the curves below show, the model with the NAM attention module reaches higher accuracy than the plain baseline; with a larger epoch budget the gap should become even more pronounced.

5.1 Visualization of results

Figure 1: ResNet50 Top-1/Top-5 accuracy

Figure 2: ResNet50-NAM Top-1/Top-5 accuracy

5.2 Ablation study

Comparing ResNet50 with ResNet50-NAM shows that ResNet50-NAM performs better; however, training here ran for only 100 epochs, and a longer schedule should make the improvement more pronounced.

