This project tackles illegal waste dumping by trucks using a semi-supervised PP-YOLOE model from PaddleDetection, reaching mAP@0.5 above 0.9. The VOC dataset is cleaned first (label names, file names, and width/height fields corrected), then split and converted to COCO format, so that both the partially annotated and the unannotated images can be used. A comparison of schemes shows that semi-supervision combined with data augmentation performs best (0.829), beating the other options.
With accelerating urbanization, the amount of urban waste keeps growing. Lacking an effective waste-processing and supervision system, some offenders use trucks to dump waste illegally. This not only damages environmental hygiene but also threatens public health. To address this problem, we developed this project, which uses intelligent technology to detect and deter illegal truck dumping. The model reaches mAP@0.5 above 0.9, and a semi-supervised approach lets it make full use of the dataset.

The project applies machine vision and deep learning to monitor and identify waste dumping by trucks in real time. When suspicious dumping is detected, the system raises an alarm automatically.
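To illustrate the alarm step, here is a minimal sketch (not part of the original project code) of how detections above a confidence threshold could trigger an alert. The `Detection` structure, the threshold value, and the `raise_alarm` hook are all hypothetical placeholders for a real deployment:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # predicted class name
    score: float    # confidence score in [0, 1]
    bbox: tuple     # (xmin, ymin, width, height)

ALARM_THRESHOLD = 0.5  # hypothetical confidence threshold

def raise_alarm(det: Detection) -> None:
    # Placeholder for a real notification channel (SMS, webhook, ...)
    print(f"ALARM: {det.label} detected with score {det.score:.2f} at {det.bbox}")

def check_frame(detections: list) -> None:
    # Scan one frame's detections and alarm on confident dumping events
    for det in detections:
        if det.label == 'Truck_dumping_construction_waste' and det.score >= ALARM_THRESHOLD:
            raise_alarm(det)

# Example usage with a fake detection
check_frame([Detection('Truck_dumping_construction_waste', 0.87, (120, 80, 400, 260))])
```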
The data is a truck waste-dumping dataset in VOC format. Annotations are provided for only part of the images, and they contain format and content errors. Download link

Dataset navigation
```python
# Unzip the dataset
!unzip data/data198415/archive.zip
```
```python
# Download PaddleDetection (cloning from Gitee is usually faster)
!git clone https://gitee.com/PaddlePaddle/PaddleDetection.git
```
```python
# Install PaddleDetection's dependencies
#!pip install motmetrics
#!pip install pycocotools
#!pip install -U scikit-image
%cd ~/PaddleDetection/
#!pip install -r requirements.txt
!python setup.py install
```
```python
# Import the required packages
import random
import os
import xml.dom.minidom
import cv2
from PIL import Image
import numpy as np
import pandas as pd
import shutil
import json
import glob
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import seaborn as sns
from matplotlib.font_manager import FontProperties
```
```python
# Rewrite the XML files: label names containing spaces cannot be converted to COCO
def change_one_xml(xml_path, xml_dw, update_content):
    # Parse the XML document
    dom = xml.dom.minidom.parse(xml_path)
    root = dom.documentElement
    # Locate the <name> nodes
    sub1 = root.getElementsByTagName('name')
    # Rewrite the label content
    for i in range(len(sub1)):
        sub1[i].firstChild.data = update_content
    # Save the changes
    with open(xml_path, 'w') as fh:
        dom.writexml(fh)

# Apply to every annotation file
for xml_path in os.listdir('/home/aistudio/truck_waste_dataset/xml_label'):
    if xml_path != '.ipynb_checkpoints':
        xml_dw = './/object/name'
        # The unified label name
        update_content = 'Truck_dumping_construction_waste'
        change_one_xml(os.path.join('/home/aistudio/truck_waste_dataset/xml_label', xml_path),
                       xml_dw, update_content)
```

```python
# Rename images and XML files: paths containing spaces cannot be read
for path in os.listdir('/home/aistudio/truck_waste_dataset/images'):
    img_path = os.path.join('/home/aistudio/truck_waste_dataset/images', path)
    if ' ' in img_path:
        os.rename(img_path, img_path.replace(' ', '_'))
for path in os.listdir('/home/aistudio/truck_waste_dataset/xml_label'):
    xml_path = os.path.join('/home/aistudio/truck_waste_dataset/xml_label', path)
    if ' ' in xml_path:
        os.rename(xml_path, xml_path.replace(' ', '_'))
```

```python
# Rewrite the XML files: correct the wrong width/height info in some annotations
def change_one_xml(xml_path):
    img_list = os.listdir('/home/aistudio/truck_waste_dataset/images')
    dom = xml.dom.minidom.parse(xml_path)
    root = dom.documentElement
    # Fix the stored filename (spaces were replaced with underscores above)
    sub1 = root.getElementsByTagName('filename')
    sub1[0].firstChild.data = sub1[0].firstChild.data.replace(' ', '_')
    filename = sub1[0].firstChild.data
    if filename in img_list:
        # Re-read the real image size and write it back
        H, W, C = cv2.imread('/home/aistudio/truck_waste_dataset/images/' + filename).shape
        root.getElementsByTagName('width')[0].firstChild.data = str(W)
        root.getElementsByTagName('height')[0].firstChild.data = str(H)
    # Save the changes
    with open(xml_path, 'w') as fh:
        dom.writexml(fh)

for xml_path in os.listdir('/home/aistudio/truck_waste_dataset/xml_label'):
    if xml_path != '.ipynb_checkpoints':
        change_one_xml(os.path.join('/home/aistudio/truck_waste_dataset/xml_label', xml_path))
```

```python
%cd /home/aistudio/truck_waste_dataset
```

```
/home/aistudio/truck_waste_dataset
```
```python
# Generate train.txt, val.txt, and extra.txt
random.seed(2020)
xml_dir = 'xml_label'
img_dir = 'images'
xml_list = os.listdir('xml_label')
path_list = list()
extra_list = list()
for img in os.listdir(img_dir):
    img_path = os.path.join(img_dir, img)
    xml_path = os.path.join(xml_dir, img.replace('jpg', 'xml'))
    if img.replace('jpg', 'xml') in xml_list:
        path_list.append((img_path, xml_path))   # annotated image
    else:
        extra_list.append(img_path)              # unannotated image
random.shuffle(path_list)
ratio = 0.8  # 80% train / 20% val
train_f = open('train.txt', 'w')
val_f = open('val.txt', 'w')
extra_f = open('extra.txt', 'w')
for i, content in enumerate(path_list):
    img, xml = content
    text = img + ' ' + xml + '\n'
    if i < len(path_list) * ratio:
        train_f.write(text)
    else:
        val_f.write(text)
for i, content in enumerate(extra_list):
    extra_f.write(content + '\n')
train_f.close()
val_f.close()
extra_f.close()

# Generate the label file for this dataset's single class
label = ['Truck_dumping_construction_waste']
with open('label_list.txt', 'w') as f:
    for text in label:
        f.write(text + '\n')
```

```python
%cd ~
```

```
/home/aistudio
```

```python
# 714 images in total, but only 309 XML files: the dataset is partially annotated
len(os.listdir('truck_waste_dataset/images')), len(os.listdir('truck_waste_dataset/xml_label'))
```

```
(714, 309)
```
```python
# Convert the dataset to COCO format: training annotations
!python PaddleDetection/tools/x2coco.py --dataset_type voc \
        --voc_anno_dir truck_waste_dataset \
        --voc_anno_list truck_waste_dataset/train.txt \
        --voc_label_list truck_waste_dataset/label_list.txt \
        --voc_out_name truck_waste_dataset/train.json
```

```
Start converting !
100%|██████████████████████████████████████| 148/148 [00:00<00:00, 19696.57it/s]
```

```python
# Convert the dataset to COCO format: validation annotations
!python PaddleDetection/tools/x2coco.py --dataset_type voc \
        --voc_anno_dir truck_waste_dataset \
        --voc_anno_list truck_waste_dataset/val.txt \
        --voc_label_list truck_waste_dataset/label_list.txt \
        --voc_out_name truck_waste_dataset/val.json
```

```
Start converting !
100%|████████████████████████████████████████| 37/37 [00:00<00:00, 16474.44it/s]
```
```python
# Write the unannotated images into extra.json (COCO format, images only)
import json

write_json_context = dict()  # Top-level dict dumped to the .json file
write_json_context['info'] = {'description': '', 'url': '', 'version': '', 'year': 2021,
                              'contributor': '', 'date_created': '2021-07-25'}
write_json_context['categories'] = []
write_json_context['images'] = []
img_pathDir = 'truck_waste_dataset'
with open('truck_waste_dataset/extra.txt', 'r') as fr:
    lines1 = fr.readlines()
for i, imageFile in enumerate(lines1):
    imagePath = os.path.join(img_pathDir, imageFile)  # Build the image path
    imagePath = imagePath.replace('\n', '')
    image = Image.open(imagePath)  # Open the image to read its width and height
    W, H = image.size
    img_context = {}  # Dict holding this image's metadata
    path = imageFile.split('\n')[0]
    path = path.split('/')[1]  # Keep only the file name
    img_context['file_name'] = path
    img_context['height'] = H
    img_context['width'] = W
    img_context['id'] = i
    write_json_context['images'].append(img_context)
cat_context = {}
cat_context['supercategory'] = 'none'
cat_context['id'] = 1
cat_context['name'] = 'Truck_dumping_construction_waste'
write_json_context['categories'].append(cat_context)
name = os.path.join('truck_waste_dataset', 'extra' + '.json')
with open(name, 'w') as fw:  # Dump the dict into the .json file
    json.dump(write_json_context, fw, indent=2)
```

```python
# Configure a CJK font for matplotlib (the original notebook used Chinese plot labels)
myfont = FontProperties(fname=r"NotoSansCJKsc-Medium.otf", size=12)
plt.rcParams['figure.figsize'] = (12, 12)
plt.rcParams['font.family'] = myfont.get_family()
plt.rcParams['font.sans-serif'] = myfont.get_name()
plt.rcParams['axes.unicode_minus'] = False
```
```python
# Paths of the training set
TRAIN_DIR = 'truck_waste_dataset/images/'
TRAIN_CSV_PATH = 'truck_waste_dataset/train.json'
# List the training images
train_fns = glob.glob(TRAIN_DIR + '*')
print('Number of images in the dataset: {}'.format(len(train_fns)))
```

```
Number of images in the dataset: 714
```
```python
def generate_anno_result(dataset_path, anno_file):
    with open(os.path.join(dataset_path, anno_file)) as f:
        anno = json.load(f)

    # Collect the unique image sizes
    total = []
    for img in anno['images']:
        hw = (img['height'], img['width'])
        total.append(hw)
    unique = set(total)

    ids = []
    images_id = []
    for i in anno['annotations']:
        ids.append(i['id'])
        images_id.append(i['image_id'])

    # Build the category-label dictionary and count boxes per class
    category_dic = dict([(i['id'], i['name']) for i in anno['categories']])
    counts_label = dict([(i['name'], 0) for i in anno['categories']])
    for i in anno['annotations']:
        counts_label[category_dic[i['category_id']]] += 1
    label_list = counts_label.keys()  # class names
    size = counts_label.values()      # boxes per class

    # Merge image and annotation tables, then derive box statistics
    train_fig = pd.DataFrame(anno['images'])
    train_anno = pd.DataFrame(anno['annotations'])
    df_train = pd.merge(left=train_fig, right=train_anno, how='inner',
                        left_on='id', right_on='image_id')
    df_train['bbox_xmin'] = df_train['bbox'].apply(lambda x: x[0])
    df_train['bbox_ymin'] = df_train['bbox'].apply(lambda x: x[1])
    df_train['bbox_w'] = df_train['bbox'].apply(lambda x: x[2])
    df_train['bbox_h'] = df_train['bbox'].apply(lambda x: x[3])
    df_train['bbox_xcenter'] = df_train['bbox'].apply(lambda x: (x[0] + 0.5 * x[2]))
    df_train['bbox_ycenter'] = df_train['bbox'].apply(lambda x: (x[1] + 0.5 * x[3]))
    print('Smallest object area (pixels):', min(df_train.area))

    balanced = ''
    small_object = ''
    densely = ''
    # Check whether the classes are balanced
    if max(size) > 5 * min(size):
        print('Classes are imbalanced')
        balanced = 'c11'
    else:
        print('Classes are balanced')
        balanced = 'c10'
    # Check whether small objects exist
    if min(df_train.area) < 900:
        print('Small objects present')
        small_object = 'c21'
    else:
        print('No small objects')
        small_object = 'c20'

    # Check whether objects are densely packed: compare the smallest
    # center-to-center distance against the mean box diagonal
    arr1 = []
    arr2 = []
    x = []
    y = []
    w = []
    h = []
    for index, row in df_train.iterrows():
        if index < 1000:
            # Record center coordinates and box sizes
            x.append(row['bbox_xcenter'])
            y.append(row['bbox_ycenter'])
            w.append(row['bbox_w'])
            h.append(row['bbox_h'])
    for i in range(len(x)):
        l = np.sqrt(w[i] ** 2 + h[i] ** 2)
        arr2.append(l)
        for j in range(len(x)):
            a = np.sqrt((x[i] - x[j]) ** 2 + (y[i] - y[j]) ** 2)
            if a != 0:
                arr1.append(a)
    arr1 = np.matrix(arr1)
    # This heuristic still needs refinement
    if arr1.min() < np.mean(arr2):
        print('Densely packed objects')
        densely = 'c31'
    else:
        print('Objects not densely packed')
        densely = 'c30'
    return balanced, small_object, densely

# Analyze the training set
generate_anno_result('truck_waste_dataset', 'train.json')
```

```
Smallest object area (pixels): 1054.0
Classes are balanced
No small objects
Densely packed objects
('c10', 'c20', 'c31')
```

```python
# Read the training annotation file and plot the image-size distribution
with open('truck_waste_dataset/train.json', 'r', encoding='utf-8') as f:
    train_data = json.load(f)
train_fig = pd.DataFrame(train_data['images'])
ps = np.zeros(len(train_fig))
for i in range(len(train_fig)):
    # Image area in megapixels
    ps[i] = train_fig['width'][i] * train_fig['height'][i] / 1e6
plt.title('Training-set image size distribution', fontproperties=myfont)
sns.distplot(ps, bins=21, kde=False)
```

(Figure: histogram of training-set image sizes.)
```python
!python box_distribution.py --json_path truck_waste_dataset/train.json
```

```
Median of ratio_w is 0.691192492447878
Median of ratio_h is 0.6858630952380953
all_img with box: 148
all_ann: 152
Distribution saved as box_distribution.jpg
```
        
```python
# Training-set object size statistics
train_anno = pd.DataFrame(train_data['annotations'])
df_train = pd.merge(left=train_fig, right=train_anno, how='inner',
                    left_on='id', right_on='image_id')
df_train['bbox_xmin'] = df_train['bbox'].apply(lambda x: x[0])
df_train['bbox_ymin'] = df_train['bbox'].apply(lambda x: x[1])
df_train['bbox_w'] = df_train['bbox'].apply(lambda x: x[2])
df_train['bbox_h'] = df_train['bbox'].apply(lambda x: x[3])
df_train['bbox_xcenter'] = df_train['bbox'].apply(lambda x: (x[0] + 0.5 * x[2]))
df_train['bbox_ycenter'] = df_train['bbox'].apply(lambda x: (x[1] + 0.5 * x[3]))
df_train.area.describe()
```

```
count       152.000000
mean      68855.855263
std      126345.633227
min        1054.000000
25%       15445.500000
50%       30271.000000
75%       43236.000000
max      747664.000000
Name: area, dtype: float64
```

```python
# Count the boxes per image and plot the distribution
df_train['bbox_count'] = df_train.apply(lambda row: 1 if any(row.bbox) else 0, axis=1)
train_images_count = df_train.groupby('file_name').sum().reset_index()
plt.title('Distribution of object counts in the training set', fontproperties=myfont)
sns.distplot(train_images_count['bbox_count'], bins=21, kde=True)
```

(Figure: histogram of the number of objects per image.)
The PaddleDetection team combined the recent Dense Teacher algorithm with PP-YOLOE+ to provide a semi-supervised learning scheme.

Semi-supervised learning combines labeled and unlabeled data: it greatly reduces the annotation effort while still reaching high model accuracy. In industrial practice, it is a common strategy for cold-starting a project. In PaddleDetection's reported results, with only 5% or 10% of the data labeled for supervised learning and the remaining 95% or 90% used as unlabeled data for semi-supervised learning, accuracy improves by 1.2 to 2.5 points. A toy sketch of the underlying teacher-student pattern follows.
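To make the idea concrete, here is a minimal runnable sketch (a toy 1-D "model", not the actual PaddleDetection implementation) of the teacher-student pattern Dense Teacher builds on: the teacher is an exponential moving average (EMA) of the student, produces pseudo-labels on unlabeled data, and the student trains on labeled data plus those pseudo-labels. All values here (the toy model, loss, and weights) are illustrative assumptions only:

```python
import numpy as np

EMA_DECAY = 0.9996   # teacher follows the student slowly
UNSUP_WEIGHT = 1.0   # weight of the pseudo-label loss

rng = np.random.default_rng(0)
student_w = rng.normal(size=4)   # student weights
teacher_w = student_w.copy()     # teacher starts as a copy of the student

def predict(w, x):
    return x @ w                 # toy linear "detector"

def grad(w, x, y):
    # gradient of the MSE loss for the toy model
    return 2 * x.T @ (x @ w - y) / len(y)

for step in range(100):
    x_l = rng.normal(size=(8, 4)); y_l = rng.normal(size=8)  # labeled batch
    x_u = rng.normal(size=(8, 4))                            # unlabeled batch
    # 1) Teacher produces pseudo-labels on the unlabeled batch
    pseudo = predict(teacher_w, x_u)
    # 2) Student trains on labeled data plus pseudo-labeled data
    g = grad(student_w, x_l, y_l) + UNSUP_WEIGHT * grad(student_w, x_u, pseudo)
    student_w -= 0.01 * g
    # 3) Teacher is updated as an EMA of the student
    teacher_w = EMA_DECAY * teacher_w + (1 - EMA_DECAY) * student_w
```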
        
```python
# Few-shot scheme
!python PaddleDetection/tools/train.py -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_contrast_pcb.yml --use_vdl=True --eval
```

```python
# Semi-supervised scheme (trained on a single V100 32G)
!python PaddleDetection/tools/train.py -c configs/semi_det/denseteacher/denseteacher_ppyoloe_plus_crn_l_coco_semi010.yml --use_vdl=True --eval --amp
```

```python
# Evaluate the semi-supervised model
!python PaddleDetection/tools/eval.py -c configs/semi_det/denseteacher/denseteacher_ppyoloe_plus_crn_l_coco_semi010.yml
```

```python
# Visualize predictions of the semi-supervised model
!python PaddleDetection/tools/infer.py -c configs/semi_det/denseteacher/denseteacher_ppyoloe_plus_crn_l_coco_semi010.yml \
        --infer_dir=truck_waste_dataset/images --output_dir=images_semi_res \
        --draw_threshold 0.1
```

```python
# Export the few-shot model for deployment
!python PaddleDetection/tools/export_model.py \
        -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_contrast_pcb.yml \
        -o weights=output/ppyoloe_plus_crn_s_80e_contrast_pcb/model_final.pdparams
```

```python
# Export the semi-supervised model
!python PaddleDetection/tools/export_model.py -c configs/semi_det/denseteacher/denseteacher_ppyoloe_plus_crn_l_coco_semi010.yml
```

```python
# Visualize predictions of the few-shot model
!python PaddleDetection/tools/infer.py -c configs/ppyoloe/ppyoloe_plus_crn_s_80e_contrast_pcb.yml \
        -o weights=output/ppyoloe_plus_crn_s_80e_contrast_pcb/model_final.pdparams \
        --infer_dir=truck_waste_dataset/images --output_dir=images_res
```
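Once exported, the model can be run without the training framework via PaddleDetection's Python deployment script. A sketch of such a call is below; the `model_dir` assumes `export_model.py`'s default output layout, and `example.jpg` is a hypothetical file name:

```python
# Run the exported few-shot model with the deploy tooling (assumed default export path)
!python PaddleDetection/deploy/python/infer.py \
        --model_dir=output_inference/ppyoloe_plus_crn_s_80e_contrast_pcb \
        --image_file=truck_waste_dataset/images/example.jpg \
        --device=GPU
```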
Directions for further improving accuracy, with the resulting AP of each scheme:

| Scheme | AP @[IoU=0.50:0.95] |
|---|---|
| Few-shot PP-YOLOE + data augmentation | 0.747 |
| PP-YOLOE+ algorithm + data augmentation | 0.804 |
| Semi-supervised PP-YOLOE+ algorithm + data augmentation | 0.829 |
![Truck waste-dumping detection with few-shot and semi-supervised PP-YOLOE](https://img.php.cn/upload/article/001/571/248/175384571257203.jpg)

![Truck waste-dumping detection with few-shot and semi-supervised PP-YOLOE](https://img.php.cn/upload/article/001/571/248/175384571355170.jpg)