The Complete Caffe Model Training Process (1): Scripts, Data Preparation and Creation — by dyingst... (CSDN blog)
From: CSDN technical community. Published: 2021-03-26

1. First, create the project folder. Its structure is as follows:

```
project
├── create_imagenet.sh             # script that builds the lmdb files
├── train_lmdb                     # output training-set lmdb
│   ├── data.mdb
│   └── lock.mdb
├── val_lmdb                       # output validation-set lmdb
│   ├── data.mdb
│   └── lock.mdb
├── models                         # output model snapshots
│   ├── solver_iter_2576.caffemodel
│   └── solver_iter_2576.solverstate
├── other                          # miscellaneous backup files
├── solver.prototxt                # solver configuration
├── train                          # training images
│   ├── positivite                 # class-1 images
│   └── negative_eg                # class-2 images
├── train_caffenet.sh              # run this script to start training
├── train.txt                      # training-set image paths and labels
├── train_val.prototxt             # caffe network definition
├── val                            # validation images
└── val.txt                        # validation-set image paths and labels
```
2. Create the LMDB data source. First generate train.txt and val.txt, two text files that list image paths together with their class labels, for example:

train.txt

```
positivite/IMG_000001.jpg 1
positivite/IMG_000002.jpg 1
positivite/IMG_000003.jpg 1
positivite/IMG_000008.jpg 1
positivite/IMG_000010.jpg 1
positivite/IMG_000014.jpg 1
positivite/IMG_000016.jpg 1
positivite/IMG_000017.jpg 1
positivite/IMG_000018.jpg 1
positivite/IMG_000020.jpg 1
positivite/IMG_000022.jpg 1
positivite/IMG_000023.jpg 1
positivite/IMG_000026.jpg 1
positivite/IMG_000028.jpg 1
positivite/IMG_000029.jpg 1
positivite/IMG_000031.jpg 1
positivite/IMG_000032.jpg 1
positivite/IMG_000037.jpg 1
positivite/IMG_000039.jpg 1
positivite/IMG_000040.jpg 1
positivite/IMG_000042.jpg 1
positivite/IMG_000044.jpg 1
...
```
val.txt

```
positivite/IMG_000162.jpg 1
positivite/IMG_000164.jpg 1
positivite/IMG_000165.jpg 1
positivite/IMG_000167.jpg 1
positivite/IMG_000168.jpg 1
positivite/IMG_000170.jpg 1
positivite/IMG_000171.jpg 1
positivite/IMG_000174.jpg 1
positivite/IMG_000177.jpg 1
positivite/IMG_000179.jpg 1
positivite/IMG_000180.jpg 1
positivite/IMG_000184.jpg 1
positivite/IMG_000186.jpg 1
positivite/IMG_000188.jpg 1
positivite/IMG_000189.jpg 1
positivite/IMG_000194.jpg 1
positivite/IMG_000196.jpg 1
positivite/IMG_000199.jpg 1
positivite/IMG_000201.jpg 1
positivite/IMG_000202.jpg 1
positivite/IMG_000203.jpg 1
negative_eg/IMG_000180_3.jpg 0
negative_eg/IMG_000184_0.jpg 0
negative_eg/IMG_000184_1.jpg 0
negative_eg/IMG_000184_2.jpg 0
negative_eg/IMG_000184_3.jpg 0
negative_eg/IMG_000186_0.jpg 0
negative_eg/IMG_000186_1.jpg 0
...
```
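Writing these label files by hand is tedious, so it is worth scripting. The helper below is not from the original post; it is a minimal stdlib-only sketch that assumes the train/ layout shown above (positivite → label 1, negative_eg → label 0) and a configurable train/val split.

```python
import os
import random


def make_label_lines(data_root, class_dirs):
    """Collect 'relative/path.jpg label' lines for every image under data_root.

    class_dirs maps a subfolder name to its integer label,
    e.g. {"positivite": 1, "negative_eg": 0}.
    """
    lines = []
    for sub, label in class_dirs.items():
        folder = os.path.join(data_root, sub)
        for fname in sorted(os.listdir(folder)):
            if fname.lower().endswith((".jpg", ".jpeg", ".png")):
                lines.append("%s/%s %d" % (sub, fname, label))
    return lines


def write_splits(data_root, class_dirs, val_ratio=0.2, seed=0):
    """Shuffle the label lines and write train.txt / val.txt in the current dir."""
    lines = make_label_lines(data_root, class_dirs)
    random.Random(seed).shuffle(lines)
    n_val = int(len(lines) * val_ratio)
    with open("val.txt", "w") as f:
        f.write("\n".join(lines[:n_val]) + "\n")
    with open("train.txt", "w") as f:
        f.write("\n".join(lines[n_val:]) + "\n")
```

For example, `write_splits("/home/ubuntu/hudie_detection_case/train", {"positivite": 1, "negative_eg": 0})` would produce both files; shuffling before splitting keeps the two classes mixed across train and val.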
3. Modify create_imagenet.sh

The lines to change are the commented path and resize settings:

```shell
#!/usr/bin/env sh
# Create the imagenet lmdb inputs
# N.B. set the path to the imagenet train + val data dirs
set -e

# project path
EXAMPLE=/home/ubuntu/hudie_detection_case
# data root (location of train.txt / val.txt)
DATA=/home/ubuntu/hudie_detection_case
# absolute path to caffe/build/tools
TOOLS=/home/ubuntu/caffe/caffe/build/tools
# roots of the training and validation images
TRAIN_DATA_ROOT=/home/ubuntu/hudie_detection_case/train/
VAL_DATA_ROOT=/home/ubuntu/hudie_detection_case/train/

# Set RESIZE=true to resize the images to a uniform size. Leave as false if
# the images have already been resized using another tool.
RESIZE=true
if $RESIZE; then
  RESIZE_HEIGHT=227
  RESIZE_WIDTH=227
else
  RESIZE_HEIGHT=0
  RESIZE_WIDTH=0
fi

if [ ! -d "$TRAIN_DATA_ROOT" ]; then
  echo "Error: TRAIN_DATA_ROOT is not a path to a directory: $TRAIN_DATA_ROOT"
  echo "Set the TRAIN_DATA_ROOT variable in create_imagenet.sh to the path" \
       "where the ImageNet training data is stored."
  exit 1
fi

if [ ! -d "$VAL_DATA_ROOT" ]; then
  echo "Error: VAL_DATA_ROOT is not a path to a directory: $VAL_DATA_ROOT"
  echo "Set the VAL_DATA_ROOT variable in create_imagenet.sh to the path" \
       "where the ImageNet validation data is stored."
  exit 1
fi

echo "Creating train lmdb..."

GLOG_logtostderr=1 $TOOLS/convert_imageset \
    --resize_height=$RESIZE_HEIGHT \
    --resize_width=$RESIZE_WIDTH \
    --shuffle \
    $TRAIN_DATA_ROOT \
    $DATA/train.txt \
    $EXAMPLE/ilsvrc12_train_lmdb

echo "Creating val lmdb..."

GLOG_logtostderr=1 $TOOLS/convert_imageset \
    --resize_height=$RESIZE_HEIGHT \
    --resize_width=$RESIZE_WIDTH \
    --shuffle \
    $VAL_DATA_ROOT \
    $DATA/val.txt \
    $EXAMPLE/ilsvrc12_val_lmdb

echo "Done."
```

Once everything is in place, run `sudo sh ./create_imagenet.sh`. If it finishes without errors, congratulations. If it fails, the Caffe dependencies may not be installed correctly, or one of the steps above needs to be redone.
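A common cause of convert_imageset failures is an entry in train.txt or val.txt whose image file does not actually exist under the data root. The stdlib-only sanity check below is my own addition, not part of the original post; run it before the shell script to catch bad entries early.

```python
import os


def check_label_file(label_file, data_root):
    """Return the relative paths in label_file that are missing under data_root.

    Each non-empty line is expected to look like 'relative/path.jpg label',
    the format convert_imageset consumes.
    """
    missing = []
    with open(label_file) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            rel_path, _, _label = line.rpartition(" ")
            if not os.path.isfile(os.path.join(data_root, rel_path)):
                missing.append(rel_path)
    return missing
```

For example, `check_label_file("train.txt", "/home/ubuntu/hudie_detection_case/train")` should return an empty list; anything it returns is a path that would make the conversion abort.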

4. Create the network definition train_val.prototxt

AlexNet is used here. The main edits are the input LMDB paths and the number of output classes (num_output of the final fc8 layer); these lines are marked with comments below.

```
name: "AlexNet"
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param {
    mirror: true
    crop_size: 227
    # mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "/home/ubuntu/hudie_detection_case/ilsvrc12_train_lmdb"  # training lmdb path
    batch_size: 2
    backend: LMDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param {
    mirror: false
    crop_size: 227
    # mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "/home/ubuntu/hudie_detection_case/ilsvrc12_val_lmdb"  # validation lmdb path
    batch_size: 2
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 96
    kernel_size: 11
    stride: 4
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer { name: "relu1" type: "ReLU" bottom: "conv1" top: "conv1" }
layer {
  name: "norm1"
  type: "LRN"
  bottom: "conv1"
  top: "norm1"
  lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "norm1"
  top: "pool1"
  pooling_param { pool: MAX kernel_size: 3 stride: 2 }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 256
    pad: 2
    kernel_size: 5
    group: 2
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0.1 }
  }
}
layer { name: "relu2" type: "ReLU" bottom: "conv2" top: "conv2" }
layer {
  name: "norm2"
  type: "LRN"
  bottom: "conv2"
  top: "norm2"
  lrn_param { local_size: 5 alpha: 0.0001 beta: 0.75 }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "norm2"
  top: "pool2"
  pooling_param { pool: MAX kernel_size: 3 stride: 2 }
}
layer {
  name: "conv3"
  type: "Convolution"
  bottom: "pool2"
  top: "conv3"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer { name: "relu3" type: "ReLU" bottom: "conv3" top: "conv3" }
layer {
  name: "conv4"
  type: "Convolution"
  bottom: "conv3"
  top: "conv4"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 384
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0.1 }
  }
}
layer { name: "relu4" type: "ReLU" bottom: "conv4" top: "conv4" }
layer {
  name: "conv5"
  type: "Convolution"
  bottom: "conv4"
  top: "conv5"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  convolution_param {
    num_output: 256
    pad: 1
    kernel_size: 3
    group: 2
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0.1 }
  }
}
layer { name: "relu5" type: "ReLU" bottom: "conv5" top: "conv5" }
layer {
  name: "pool5"
  type: "Pooling"
  bottom: "conv5"
  top: "pool5"
  pooling_param { pool: MAX kernel_size: 3 stride: 2 }
}
layer {
  name: "fc6"
  type: "InnerProduct"
  bottom: "pool5"
  top: "fc6"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  inner_product_param {
    num_output: 4096
    weight_filler { type: "gaussian" std: 0.005 }
    bias_filler { type: "constant" value: 0.1 }
  }
}
layer { name: "relu6" type: "ReLU" bottom: "fc6" top: "fc6" }
layer { name: "drop6" type: "Dropout" bottom: "fc6" top: "fc6" dropout_param { dropout_ratio: 0.5 } }
layer {
  name: "fc7"
  type: "InnerProduct"
  bottom: "fc6"
  top: "fc7"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  inner_product_param {
    num_output: 4096
    weight_filler { type: "gaussian" std: 0.005 }
    bias_filler { type: "constant" value: 0.1 }
  }
}
layer { name: "relu7" type: "ReLU" bottom: "fc7" top: "fc7" }
layer { name: "drop7" type: "Dropout" bottom: "fc7" top: "fc7" dropout_param { dropout_ratio: 0.5 } }
layer {
  name: "fc8"
  type: "InnerProduct"
  bottom: "fc7"
  top: "fc8"
  param { lr_mult: 1 decay_mult: 1 }
  param { lr_mult: 2 decay_mult: 0 }
  inner_product_param {
    num_output: 2  # number of classes
    weight_filler { type: "gaussian" std: 0.01 }
    bias_filler { type: "constant" value: 0 }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "fc8"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "fc8"
  bottom: "label"
  top: "loss"
}
```
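The crop_size of 227 is not arbitrary: each convolution or pooling layer shrinks the feature map by out = floor((in + 2*pad - kernel) / stride) + 1, and with a 227x227 input the stack above ends at the 6x6x256 map that fc6 expects. A quick check of that arithmetic (not part of the original post):

```python
def out_size(in_size, kernel, stride=1, pad=0):
    """Spatial output size of a conv/pool layer: floor((in + 2*pad - k)/stride) + 1."""
    return (in_size + 2 * pad - kernel) // stride + 1


size = 227
size = out_size(size, kernel=11, stride=4)        # conv1 -> 55
size = out_size(size, kernel=3, stride=2)         # pool1 -> 27
size = out_size(size, kernel=5, stride=1, pad=2)  # conv2 -> 27
size = out_size(size, kernel=3, stride=2)         # pool2 -> 13
size = out_size(size, kernel=3, stride=1, pad=1)  # conv3 -> 13
size = out_size(size, kernel=3, stride=1, pad=1)  # conv4 -> 13
size = out_size(size, kernel=3, stride=1, pad=1)  # conv5 -> 13
size = out_size(size, kernel=3, stride=2)         # pool5 -> 6
print(size)  # 6, so fc6 sees a 6*6*256 = 9216-dimensional input
```

If you change crop_size, recompute this chain; an input that does not reduce cleanly to pool5 will make the fc6 weights mismatch.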
5. Next, modify solver.prototxt

The key parameters are marked with comments:

```
# path to the network definition
net: "/home/ubuntu/hudie_detection_case/train_val.prototxt"
test_iter: 1000
test_interval: 1000
# base learning rate
base_lr: 0.001
lr_policy: "step"
gamma: 0.1
stepsize: 100000
# print progress every 20 iterations
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
# save a model snapshot every 10000 iterations, prefixed "models"
snapshot: 10000
snapshot_prefix: "models"
solver_mode: CPU
```
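With lr_policy "step", the learning rate drops by a factor of gamma every stepsize iterations: lr = base_lr * gamma ** floor(iter / stepsize). A small sketch (my own, using the solver values above) makes the schedule concrete:

```python
def step_lr(iteration, base_lr=0.001, gamma=0.1, stepsize=100000):
    """Caffe 'step' policy: lr = base_lr * gamma ** floor(iteration / stepsize)."""
    return base_lr * gamma ** (iteration // stepsize)


# With the solver values above, the rate is cut 10x at iterations
# 100000, 200000, 300000 and 400000 before max_iter (450000) is reached.
for it in (0, 99999, 100000, 250000, 449999):
    print(it, step_lr(it))
```

If your dataset is small, max_iter and stepsize this large are overkill; scale them down together so the schedule still decays a few times over the run.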

With all of this done, the long wait of training begins: it may take tens of minutes or tens of days, depending on the size of your dataset and the complexity of your model.

6. Run train_caffenet.sh

Its contents are as follows:

```shell
#!/usr/bin/env sh
set -e

# absolute path to the caffe binary
/home/ubuntu/caffe/caffe/build/tools/caffe train \
    --solver=/home/ubuntu/hudie_detection_case/solver.prototxt "$@"
```

Partial screenshots of the training console output are shown below; after that, it is just a matter of waiting.

[screenshots: caffe training log output]


Original article: http://slovergroup.immuno-online.com/view-780803.html
