Pedestrian Re-Identification (ReID) Model Training

P粉084495128
Published: 2025-07-22 13:43:18 · Original · 384 views
This project covers training the pedestrian re-identification (ReID) model for the Baidu pedestrian-tracking task of the Software Cup, in three steps: dataset preparation, model construction, and training. First download and process the Market-1501 or MARS dataset and organize the data according to the rules described below; then build the ShuffleNet, GhostNet, and ResNet networks; finally train each of these models with the relevant hyperparameters to perform pedestrian re-identification.


Preface:

This project is one of the things I worked on while preparing for this year's Software Cup Baidu pedestrian-tracking challenge: training a pedestrian re-identification model. There are only three steps in total: dataset preparation, model construction, and finally model training.

Below is the final tracking result:

[tracking result image]

I. Dataset Preparation

1. Download the Market-1501 dataset or the MARS dataset

I have already downloaded both and open-sourced them on AI Studio; anyone who needs them can find them in my profile.

In [ ]
# Unzip the Market-1501 dataset and the MARS dataset
#!unzip -oq /home/aistudio/data/data9240/Market-1501-v15.09.15.zip
!unzip -oq /home/aistudio/data/data76843/archive.zip

2. Process the data into the form required for training

The following script processes the Market-1501 dataset into the form we need for training. Before processing, you must first understand the structure of the dataset; only with an accurate picture of that structure can the data be handled effectively. The overall logic is to put all images of the same person into the same folder, so viewed from this angle, training a ReID model is essentially an image classification task.

Market-1501 dataset structure:
  bounding_box_test
  bounding_box_train
  gt_bbox
  gt_query
  query
  readme.txt

Image naming convention, taking 0001_C1S1_000151_01.jpg as an example (a small parsing sketch follows this list):
1) 0001 is the person ID, ranging from 0001 to 1501;
2) C1 means the first camera (Camera 1); there are 6 cameras in total;
3) S1 means the first recorded sequence (Sequence 1);
4) 000151 means this is frame 000151 of C1S1 (the video frame rate is 25 fps);
5) 01 is the index of the detection box within frame C1S1_000151.
The above is only needed as background knowledge.
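To make the naming rule concrete, here is a minimal parsing sketch (the file name is the example above; the variable names are only illustrative), showing how the processing script below recovers the person ID that becomes the folder, i.e. class, name:

# Minimal sketch: split a Market-1501 style file name into its fields.
name = '0001_C1S1_000151_01.jpg'
person_id, cam_seq, frame, box = name[:-4].split('_')
print(person_id)   # '0001'   -> person ID, used as the folder (class) name by the script below
print(cam_seq)     # 'C1S1'   -> camera 1, sequence 1
print(frame, box)  # '000151' '01' -> frame index and detection-box index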
In [ ]
import os
from shutil import copyfile

download_path = 'Market-1501-v15.09.15'  # root directory of the extracted dataset
if not os.path.isdir(download_path):
    print('path wrong, please change the download_path')

save_path_root = 'data'
save_path = save_path_root + '/paddle'
if not os.path.isdir(save_path):
    os.mkdir(save_path)

#-----------------------------------------
# query
query_path = download_path + '/query'
query_save_path = save_path + '/query'
if not os.path.isdir(query_save_path):
    os.mkdir(query_save_path)
for root, dirs, files in os.walk(query_path, topdown=True):
    for name in files:
        if not name[-3:] == 'jpg':
            continue
        ID = name.split('_')
        src_path = query_path + '/' + name
        dst_path = query_save_path + '/' + ID[0]
        if not os.path.isdir(dst_path):
            os.mkdir(dst_path)
        copyfile(src_path, dst_path + '/' + name)

#-----------------------------------------
# multi-query
query_path = download_path + '/gt_bbox'
# for dukemtmc-reid, we do not need multi-query
if os.path.isdir(query_path):
    query_save_path = save_path + '/multi-query'
    if not os.path.isdir(query_save_path):
        os.mkdir(query_save_path)
    for root, dirs, files in os.walk(query_path, topdown=True):
        for name in files:
            if not name[-3:] == 'jpg':
                continue
            ID = name.split('_')
            src_path = query_path + '/' + name
            dst_path = query_save_path + '/' + ID[0]
            if not os.path.isdir(dst_path):
                os.mkdir(dst_path)
            copyfile(src_path, dst_path + '/' + name)

#-----------------------------------------
# gallery
gallery_path = download_path + '/bounding_box_test'
gallery_save_path = save_path + '/gallery'
if not os.path.isdir(gallery_save_path):
    os.mkdir(gallery_save_path)
for root, dirs, files in os.walk(gallery_path, topdown=True):
    for name in files:
        if not name[-3:] == 'jpg':
            continue
        ID = name.split('_')
        src_path = gallery_path + '/' + name
        dst_path = gallery_save_path + '/' + ID[0]
        if not os.path.isdir(dst_path):
            os.mkdir(dst_path)
        copyfile(src_path, dst_path + '/' + name)

#---------------------------------------
# train_all
train_path = download_path + '/bounding_box_train'
train_save_path = save_path + '/train_all'
if not os.path.isdir(train_save_path):
    os.mkdir(train_save_path)
for root, dirs, files in os.walk(train_path, topdown=True):
    for name in files:
        if not name[-3:] == 'jpg':
            continue
        ID = name.split('_')
        src_path = train_path + '/' + name
        dst_path = train_save_path + '/' + ID[0]
        if not os.path.isdir(dst_path):
            os.mkdir(dst_path)
        copyfile(src_path, dst_path + '/' + name)

#---------------------------------------
# train_val
train_path = download_path + '/bounding_box_train'
train_save_path = save_path + '/train'
val_save_path = save_path + '/val'
if not os.path.isdir(train_save_path):
    os.mkdir(train_save_path)
    os.mkdir(val_save_path)
for root, dirs, files in os.walk(train_path, topdown=True):
    for name in files:
        if not name[-3:] == 'jpg':
            continue
        ID = name.split('_')
        src_path = train_path + '/' + name
        dst_path = train_save_path + '/' + ID[0]
        if not os.path.isdir(dst_path):
            os.mkdir(dst_path)
            dst_path = val_save_path + '/' + ID[0]  # first image is used as val image
            os.mkdir(dst_path)
        copyfile(src_path, dst_path + '/' + name)
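As a quick sanity check (a sketch, assuming the script above has been run against the extracted Market-1501 folder), you can count the identity folders each split ends up with; for Market-1501 there are 751 training identities:

import os
# Count the identity folders produced by the processing script above.
for split in ['train_all', 'train', 'val', 'gallery', 'query']:
    split_dir = os.path.join('data/paddle', split)
    if os.path.isdir(split_dir):
        print(split, len(os.listdir(split_dir)))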
In [ ]
"""
用来获得类别数目。
"""root_dir = 'bbox_train/bbox_train'print(len(os.listdir(root_dir))) #所以说有751类,加上背景图片,有752类。

3. Load the dataset needed for training:

Explanation of a small detail:

In the __init__ method of the Market_dataset class there are two sets of transforms, transforms_train and transforms_test, and you will notice they differ. In transforms_train we additionally use RandomCrop, RandomHorizontalFlip, and ColorJitter, which are absent from the test transforms. The idea is to use these augmentations on the training set to "simulate" conditions seen in pedestrian tracking, such as occlusion or dim lighting at night. At test time, i.e. when the model is used for forward inference, this kind of augmentation is not needed.
In [ ]
import paddle
import os
from PIL import Image


class Market_dataset(paddle.io.Dataset):
    def __init__(self, path, train=False):
        self.img_list = []
        self.label_data = []
        self.train = train
        self.transforms_train = paddle.vision.transforms.Compose([
            paddle.vision.transforms.Resize((128, 64)),
            paddle.vision.transforms.RandomCrop((128, 64)),
            paddle.vision.transforms.RandomHorizontalFlip(),
            paddle.vision.transforms.ColorJitter(0.4, 0.4, 0.4, 0.4),
            paddle.vision.transforms.ToTensor(),
            paddle.vision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
        self.transforms_test = paddle.vision.transforms.Compose([
            paddle.vision.transforms.Resize((128, 64)),
            paddle.vision.transforms.ToTensor(),
            paddle.vision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
        ])
        count = 0
        for dir_ in os.listdir(path):
            for img_path in os.listdir(os.path.join(path, dir_)):
                self.img_list.append(os.path.join(path, dir_, img_path))
                self.label_data.append(count)
            count += 1

    def __len__(self):
        return len(self.img_list)

    def __getitem__(self, index):
        img_path = self.img_list[index]
        img_data = Image.open(img_path)
        if img_data.mode != 'RGB':
            img_data = img_data.convert('RGB')
        label = self.label_data[index]
        if self.train:
            img_data = self.transforms_train(img_data)
        else:
            img_data = self.transforms_test(img_data)
        return img_data, label


# Market-1501
"""
train_data_path = 'data/paddle/train_all'
val_data_path = 'data/paddle/val'

train_data = Market_dataset(train_data_path, train=True)
val_data = Market_dataset(val_data_path, train=False)
"""
# print(train_data[123][1])  # [1] prints the label of the sample, [0] prints the image data.

# MARS (bbox_train / bbox_test)
train_data_path = 'bbox_train/bbox_train'
val_data_path = 'bbox_test/bbox_test'
train_data = Market_dataset(train_data_path, train=True)
val_data = Market_dataset(val_data_path, train=False)
print(train_data[123][1])
0
/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddle/tensor/creation.py:143: DeprecationWarning: `np.object` is a deprecated alias for the builtin `object`. To silence this warning, use `object` by itself. Doing this will not modify any behavior and is safe. 
Deprecated in NumPy 1.20; for more details and guidance: https://numpy.org/devdocs/release/1.20.0-notes.html#deprecations
  if data.dtype == np.object:
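The cell above indexes the dataset directly. For training, paddle.Model.fit can consume the dataset as-is (as in the training cells below), but if you prefer explicit batching, a minimal sketch wrapping it in paddle.io.DataLoader could look like this (batch size and worker count are arbitrary illustrative values):

# Minimal sketch: explicit DataLoader around the dataset (optional; model.fit below
# also accepts the dataset directly and handles batching itself).
train_loader = paddle.io.DataLoader(train_data, batch_size=64, shuffle=True, num_workers=2)
val_loader = paddle.io.DataLoader(val_data, batch_size=64, shuffle=False)

for imgs, labels in train_loader:
    print(imgs.shape, labels.shape)  # e.g. [64, 3, 128, 64] and [64]
    break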

II. Building the Feature-Extraction Networks (ShuffleNet, GhostNet, ResNet):

1. Building the ShuffleNet network

In [ ]
"""
ShuffleNet网络模型的搭建
"""import paddleimport paddle.nn as nndef channel_shuffle(x, groups):

    g = groups

    x = paddle.reshape(x, (x.shape[0], g, x.shape[1] // g, x.shape[2], x.shape[3]))
    x = paddle.transpose(x, (0, 2, 1, 3, 4))
    x = paddle.reshape(x, (x.shape[0], -1, x.shape[3], x.shape[4]))    
    return  xclass InvertedResidual(nn.Layer):
    def __init__(self, inp, oup, stride):
        super(InvertedResidual, self).__init__()        if not (1 <= stride <= 3):            raise ValueError('illegal stride value')
        self.stride = stride

        branch_features = oup // 2
        assert (self.stride != 1) or (inp == branch_features << 1)        if self.stride > 1:
            self.branch2 = nn.Sequential(
                self.depthwise_conv(inp, inp, kernel_size=3, stride=self.stride, padding=1),
                nn.BatchNorm2D(inp),
                nn.Conv2D(inp, branch_features, kernel_size=1, stride=1, padding=0, bias_attr=False),
                nn.BatchNorm2D(branch_features),
                nn.ReLU(),
            )

        self.branch2 = nn.Sequential(
            nn.Conv2D(inp if (self.stride > 1) else branch_features,
                      branch_features, kernel_size=1, stride=1, padding=0, bias_attr=False),
            nn.BatchNorm2D(branch_features),
            nn.ReLU(),
            self.depthwise_conv(branch_features, branch_features, kernel_size=3, stride=self.stride, padding=1),
            nn.BatchNorm2D(branch_features),
            nn.Conv2D(branch_features, branch_features, kernel_size=1, stride=1, padding=0, bias_attr=False),
            nn.BatchNorm2D(branch_features),
            nn.ReLU(),
        )    @staticmethod
    def depthwise_conv(i, o, kernel_size, stride=1, padding=0, bias_attr=False):
        return nn.Conv2D(i, o, kernel_size, stride, padding, groups=i, bias_attr=False)    def forward(self, x):
        if self.stride == 1:
            x1, x2 = paddle.chunk(x, 2, axis=1)
            out = paddle.concat((x1, self.branch2(x2)), axis=1)        else:
            out = paddle.concat((self.branch2(x), self.branch2(x)), axis=1)

        out = channel_shuffle(out, 2)        return outclass ShuffleNetV2(nn.Layer):
    def __init__(self, stages_repeats, stages_out_channels, num_classes=1000, reid = False):
        super(ShuffleNetV2, self).__init__()        if len(stages_repeats) != 3:            raise ValueError('expected stages_repeats as list of 3 positive ints')        if len(stages_out_channels) != 5:            raise ValueError('expected stages_out_channels as list of 5 positive ints')
        self._stage_out_channels = stages_out_channels
        self.reid = reid

        input_channels = 3
        output_channels = self._stage_out_channels[0]
        self.conv1 = nn.Sequential(
            nn.Conv2D(input_channels, output_channels, 3, 2, 1, bias_attr=False),
            nn.BatchNorm2D(output_channels),
            nn.ReLU(),
        )
        input_channels = output_channels

        self.maxpool = nn.MaxPool2D(kernel_size=3, stride=(1,2), padding=1)

        stage_names = ['stage{}'.format(i) for i in [2, 3, 4]]        for name, repeats, output_channels in zip(
                stage_names, stages_repeats, self._stage_out_channels[1:]):
            seq = [InvertedResidual(input_channels, output_channels, 2)]            for i in range(repeats - 1):
                seq.append(InvertedResidual(output_channels, output_channels, 1))            setattr(self, name, nn.Sequential(*seq))
            input_channels = output_channels

        output_channels = self._stage_out_channels[-1]
        self.conv5 = nn.Sequential(
            nn.Conv2D(input_channels, output_channels, 1, 1, 0, bias_attr=False),
            nn.BatchNorm2D(output_channels),
            nn.ReLU(),
        )

        self.class_out = nn.Linear(output_channels, num_classes)    def forward(self, x):
        x = self.conv1(x)
        x = self.maxpool(x)
        x = self.stage2(x)
        x = self.stage3(x)
        x = self.stage4(x)
        x = self.conv5(x)
        x = x.mean([2, 3])  # globalpool
        if self.reid:
            x = paddle.divide(x, paddle.norm(x, p=2,dim=1,keepdim=True))            return x
        x = self.class_out(x)        return x
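As a quick shape check, the network can be instantiated with the same configuration used in the training section below and run on a dummy 128x64 crop; with reid=True the forward pass returns an L2-normalized feature vector instead of class logits. A minimal sketch, assuming the cell above has been executed:

# Minimal sketch: verify output shapes with a dummy input (config matches the training cell below).
net = ShuffleNetV2([4, 8, 4], [24, 48, 96, 192, 512], num_classes=625, reid=False)
dummy = paddle.randn([2, 3, 128, 64])
print(net(dummy).shape)        # [2, 625] class logits

reid_net = ShuffleNetV2([4, 8, 4], [24, 48, 96, 192, 512], num_classes=625, reid=True)
print(reid_net(dummy).shape)   # [2, 512] L2-normalized appearance features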

2. Building the GhostNet network

In [ ]
"""
GhostNet网络模型的搭建
"""import paddleimport paddle.nn as nnimport mathdef _make_divisible(v, divisor, min_value=None):
   
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)    # Make sure that round down does not go down by more than 10%.
    if new_v < 0.9 * v:
        new_v += divisor    return new_vclass SELayer(nn.Layer):
    def __init__(self, channel, reduction=4):
        super(SELayer, self).__init__()
        self.avg_pool = nn.AdaptiveAvgPool2D(1)
        self.fc = nn.Sequential(
                nn.Linear(channel, channel // reduction),
                nn.ReLU(),
                nn.Linear(channel // reduction, channel),        
            )    def forward(self, x):
        b, c, _, _ = x.shape
        y = self.avg_pool(x)
        y = paddle.reshape(y, [b, c])
        y = self.fc(y)
        y = paddle.reshape(y, [b, c, 1, 1])
        y = paddle.clip(y, 0, 1) #maybe problem
        return x * ydef depthwise_conv(inp, oup, kernel_size=3, stride=1, relu=False):
    return nn.Sequential(
        nn.Conv2D(inp, oup, kernel_size, stride, kernel_size//2, groups=inp, bias_attr=False),
        nn.BatchNorm2D(oup),
        nn.ReLU() if relu else nn.Sequential(),
    )class GhostModule(nn.Layer):
    def __init__(self, inp, oup, kernel_size=1, ratio=2, dw_size=3, stride=1, relu=True):
        super(GhostModule, self).__init__()
        self.oup = oup
        init_channels = math.ceil(oup / ratio)
        new_channels = init_channels*(ratio-1)

        self.primary_conv = nn.Sequential(
            nn.Conv2D(inp, init_channels, kernel_size, stride, kernel_size//2, bias_attr=False),
            nn.BatchNorm2D(init_channels),
            nn.ReLU() if relu else nn.Sequential(),
        )

        self.cheap_operation = nn.Sequential(
            nn.Conv2D(init_channels, new_channels, dw_size, 1, dw_size//2, groups=init_channels, bias_attr=False),
            nn.BatchNorm2D(new_channels),
            nn.ReLU() if relu else nn.Sequential(),
        )    def forward(self, x):
        x1 = self.primary_conv(x)
        x2 = self.cheap_operation(x1)
        out = paddle.concat([x1,x2], axis=1)        return out[:,:self.oup,:,:]class GhostBottleneck(nn.Layer):
    def __init__(self, inp, hidden_dim, oup, kernel_size, stride, use_se):
        super(GhostBottleneck, self).__init__()        assert stride in [1, 2]

        self.conv = nn.Sequential(            # pw
            GhostModule(inp, hidden_dim, kernel_size=1, relu=True),            # dw
            depthwise_conv(hidden_dim, hidden_dim, kernel_size, stride, relu=False) if stride==2 else nn.Sequential(),            # Squeeze-and-Excite
            SELayer(hidden_dim) if use_se else nn.Sequential(),            # pw-linear
            GhostModule(hidden_dim, oup, kernel_size=1, relu=False),
        )        if stride == 1 and inp == oup:
            self.shortcut = nn.Sequential()        else:
            self.shortcut = nn.Sequential(
                depthwise_conv(inp, inp, kernel_size, stride, relu=False),
                nn.Conv2D(inp, oup, 1, 1, 0, bias_attr=False),
                nn.BatchNorm2D(oup),
            )    def forward(self, x):
        return self.conv(x) + self.shortcut(x)class GhostNet(nn.Layer):
    def __init__(self, cfgs, num_classes=1000, width_mult=1.,reid=False):
        super(GhostNet, self).__init__()        # setting of inverted residual blocks
        self.cfgs = cfgs
        self.reid = reid        # building first layer
        output_channel = _make_divisible(16 * width_mult, 4)
        layers = [nn.Sequential(
            nn.Conv2D(3, output_channel, 3, 2, 1, bias_attr=False),
            nn.BatchNorm2D(output_channel),
            nn.ReLU()
        )]
        input_channel = output_channel        # building inverted residual blocks
        block = GhostBottleneck        for k, exp_size, c, use_se, s in self.cfgs:
            output_channel = _make_divisible(c * width_mult, 4)
            hidden_channel = _make_divisible(exp_size * width_mult, 4)
            layers.append(block(input_channel, hidden_channel, output_channel, k, s, use_se))
            input_channel = output_channel
        self.features = nn.Sequential(*layers)        # building last several layers
        output_channel = _make_divisible(exp_size * width_mult, 4)
        self.squeeze = nn.Sequential(
            nn.Conv2D(input_channel, output_channel, 1, 1, 0, bias_attr=False),
            nn.BatchNorm2D(output_channel),
            nn.ReLU(),
            nn.AdaptiveAvgPool2D((1, 1)),
        )
        input_channel = output_channel

        output_channel = 1280
        self.classifier = nn.Sequential(
            nn.Linear(input_channel, output_channel, bias_attr=False),
            nn.BatchNorm1D(output_channel),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(output_channel, num_classes),
        )        """self._initialize_weights()"""

    def forward(self, x):
        x = self.features(x)
        x = self.squeeze(x)
        x = paddle.reshape(x, [x.shape[0], -1])        if self.reid:
            x = paddle.divide(x, paddle.norm(x, p=2,axis=1,keepdim=True))            return x
        x = self.classifier(x)        return x

3. Building the ResNet network

In [ ]
"""
ResNet网络模型的搭建
"""import paddleimport paddle.nn as nnimport paddle.nn.functional as Fclass BasicBlock(nn.Layer):
    def __init__(self, c_in, c_out,is_downsample=False):
        super(BasicBlock,self).__init__()
        self.is_downsample = is_downsample        if is_downsample:
            self.conv1 = nn.Conv2D(c_in, c_out, 3, stride=2, padding=1, bias_attr=False)        else:
            self.conv1 = nn.Conv2D(c_in, c_out, 3, stride=1, padding=1, bias_attr=False)
        self.bn1 = nn.BatchNorm2D(c_out)
        self.relu = nn.ReLU()
        self.conv2 = nn.Conv2D(c_out,c_out,3,stride=1,padding=1, bias_attr=False)
        self.bn2 = nn.BatchNorm2D(c_out)        if is_downsample:
            self.downsample = nn.Sequential(
                nn.Conv2D(c_in, c_out, 1, stride=2, bias_attr=False),
                nn.BatchNorm2D(c_out)
            )        elif c_in != c_out:
            self.downsample = nn.Sequential(
                nn.Conv2D(c_in, c_out, 1, stride=1, bias_attr=False),
                nn.BatchNorm2D(c_out)
            )
            self.is_downsample = True

    def forward(self,x):
        y = self.conv1(x)
        y = self.bn1(y)
        y = self.relu(y)
        y = self.conv2(y)
        y = self.bn2(y)        if self.is_downsample:
            x = self.downsample(x)        return F.relu(x.add(y))def make_layers(c_in,c_out,repeat_times, is_downsample=False):
    blocks = []    for i in range(repeat_times):        if i ==0:
            blocks += [BasicBlock(c_in,c_out, is_downsample=is_downsample),]        else:
            blocks += [BasicBlock(c_out,c_out),]    return nn.Sequential(*blocks)class Net(nn.Layer):
    def __init__(self, num_classes=625 ,reid=False):
        super(Net,self).__init__()        # 3 128 64
        self.conv = nn.Sequential(
            nn.Conv2D(3,32,3,stride=1,padding=1),
            nn.BatchNorm2D(32),
            nn.ELU(),
            nn.Conv2D(32,32,3,stride=1,padding=1),
            nn.BatchNorm2D(32),
            nn.ELU(),
            nn.MaxPool2D(3,2,padding=1),
        )        # 32 64 32
        self.layer1 = make_layers(32,32,2,False)        # 32 64 32
        self.layer2 = make_layers(32,64,2,True)        # 64 32 16
        self.layer3 = make_layers(64,128,2,True)        # 128 16 8
        self.dense = nn.Sequential(
            nn.Dropout(p=0.6),
            nn.Linear(128*16*8, 128),
            nn.BatchNorm1D(128),
            nn.ELU()
        )        # 256 1 1 
        self.reid = reid
        self.batch_norm = nn.BatchNorm1D(128)
        self.classifier = nn.Sequential(
            nn.Linear(128, num_classes),
        )    
    def forward(self, x):
        x = self.conv(x)
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)

        x = paddle.reshape(x, [x.shape[0],-1])        if self.reid:
            x = self.dense[0](x)
            x = self.dense[1](x)
            x = paddle.divide(x, paddle.norm(x, p=2, axis=1,keepdim=True))            return x
        x = self.dense(x)        # B x 128
        # classifier
        x = self.classifier(x)        return x

III. Model Training:

In [ ]
input_define = paddle.static.InputSpec(shape=[-1, 3, 128, 64], dtype="float32", name="img")  # matches the (3, 128, 64) crops produced by the transforms above
label_define = paddle.static.InputSpec(shape=[-1, 1], dtype="int64", name="label")

1. Training the ShuffleNetV2 model:

In [ ]
"""
1、先对模型进行实例化,然后封装。
2、optimizerd的学习率(learning_rate)参数的设置。
tricks:稍微将以下关于learning_rate的一些小知识:如果learning_rate过大,会导致loss震荡无法收敛,这个时候可以适当调小一点学习率;
而如果learning_rate过小会使整个学习过程很长,收敛速度变慢;最后还有一点值得一提,“warm_up”,它的意思是在训练刚开始的时候学习率逐渐
一点一点的增加,最后到达设置的学习率,这个是适用于迁移学习的。
"""model = ShuffleNetV2([4, 8, 4], [24, 48, 96, 192, 512], num_classes=625, reid=False)
model = paddle.Model(model,inputs=input_define,labels=label_define) #用Paddle.Model()对模型进行封装optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())

model.prepare(optimizer=optimizer, #指定优化器
              loss=paddle.nn.CrossEntropyLoss(), #指定损失函数
              metrics=paddle.metric.Accuracy()) #指定评估方法model.fit(train_data=train_data,     #训练数据集
          eval_data=val_data,         #测试数据集
          batch_size=64,                  #一个批次的样本数量
          epochs=80,                      #迭代轮次
          save_dir="ShuffleNet_ReID", #把模型参数、优化器参数保存至自定义的文件夹
          save_freq=20,                    #设定每隔多少个epoch保存模型参数及优化器参数
          shuffle=True,            
)
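The docstring above mentions learning-rate warm-up. Paddle provides paddle.optimizer.lr.LinearWarmup for this; the following is only a sketch with arbitrary illustrative values, not something used in the training runs shown here:

# Sketch: ramp the learning rate linearly from 0 to 1e-3 over the first 500 steps,
# then keep the base rate (values are illustrative only).
warmup_lr = paddle.optimizer.lr.LinearWarmup(
    learning_rate=0.001,  # rate used after warm-up
    warmup_steps=500,
    start_lr=0.0,
    end_lr=0.001)
optimizer = paddle.optimizer.Adam(learning_rate=warmup_lr, parameters=model.parameters())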

2. Training the GhostNet model:

In [ ]
cfgs = [        # k (kernel size), t (expansion size), c (output channels), SE (use squeeze-excite), s (stride)
        [3,  16,  16, 0, 1],
        [3,  48,  24, 0, 2],
        [3,  72,  24, 0, 1],
        [5,  72,  40, 1, 2],
        [5, 120,  40, 1, 1],
        [3, 240,  80, 0, 2],
        [3, 200,  80, 0, 1],
        [3, 184,  80, 0, 1],
        [3, 184,  80, 0, 1],
        [3, 480, 112, 1, 1],
        [3, 672, 112, 1, 1],
        [5, 672, 160, 1, 2],
        [5, 960, 160, 0, 1],
        [5, 960, 160, 1, 1],
        [5, 960, 160, 0, 1],
        [5, 960, 160, 1, 1]
    ]

model = GhostNet(cfgs=cfgs, num_classes=625, width_mult=1., reid=False)
model = paddle.Model(model,inputs=input_define,labels=label_define) 
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())


model.prepare(optimizer=optimizer,
              loss=paddle.nn.CrossEntropyLoss(), 
              metrics=paddle.metric.Accuracy()) 

model.fit(train_data=train_data,     
          eval_data=val_data,         
          batch_size=64,                  
          epochs=80,                      
          save_dir="GhostNet_ReID", 
          save_freq=20,                   
          shuffle=True,        
)
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/80
step  10/203 - loss: 6.9693 - acc: 0.0000e+00 - 282ms/step
step  20/203 - loss: 7.4490 - acc: 0.0016 - 273ms/step
step  30/203 - loss: 6.8297 - acc: 0.0031 - 277ms/step
step  40/203 - loss: 7.4432 - acc: 0.0035 - 284ms/step

3. Training the ResNet model:

In [8]
model = Net(num_classes=625, reid=False)
model = paddle.Model(model,inputs=input_define,labels=label_define) 
optimizer = paddle.optimizer.Adam(learning_rate=0.0005, parameters=model.parameters())

model.prepare(optimizer=optimizer, 
              loss=paddle.nn.CrossEntropyLoss(), 
              metrics=paddle.metric.Accuracy()) 

model.fit(train_data=train_data,     
          eval_data=val_data,        
          batch_size=128,                  
          epochs=80,                      
          save_dir="ResNet50_ReID_Mars", 
          save_freq=20,                    
          shuffle=True,             
)
The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/80
step   10/3984 - loss: 6.5432 - acc: 0.0070 - 411ms/step
step   20/3984 - loss: 6.2993 - acc: 0.0215 - 403ms/step
step   30/3984 - loss: 6.0832 - acc: 0.0320 - 401ms/step
step   40/3984 - loss: 6.1084 - acc: 0.0398 - 400ms/step
step   50/3984 - loss: 5.6830 - acc: 0.0469 - 399ms/step
step   60/3984 - loss: 5.6382 - acc: 0.0568 - 399ms/step
step   70/3984 - loss: 5.8908 - acc: 0.0635 - 398ms/step
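After training, the same architecture can be rebuilt with reid=True and loaded with the saved weights so that it outputs L2-normalized appearance features for the tracker to match. The following is only a rough sketch under that assumption; the checkpoint file name depends on what your run saved under save_dir:

# Rough sketch: turn the trained classifier into a ReID feature extractor.
reid_net = Net(num_classes=625, reid=True)                     # same architecture, reid branch enabled
state_dict = paddle.load('ResNet50_ReID_Mars/final.pdparams')  # adjust to your checkpoint path
reid_net.set_state_dict(state_dict)
reid_net.eval()

img1, _ = val_data[0]
img2, _ = val_data[1]
feats = reid_net(paddle.stack([img1, img2]))                   # [2, 128], already L2-normalized
print(float(paddle.sum(feats[0] * feats[1])))                  # cosine similarity of the two crops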

That concludes the detailed walkthrough of pedestrian re-identification (ReID) model training.
