
[C++ Caffe] MNIST training results on Ubuntu

Date: 2022-08-06 10:19:38


The training process is described in this post of mine:

/Feeryman_Lee/article/details/104523858

The training results are shown below.
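For reference, the four files fetched by get_mnist.sh use the IDX format: a big-endian magic number (whose low byte is the number of dimensions) followed by one 32-bit big-endian size per dimension. A minimal sketch of checking such a header; the helper `read_idx_header` is my own, not part of Caffe, and a synthetic header is used here instead of the real file:

```python
import struct

def read_idx_header(raw: bytes):
    """Parse the big-endian IDX header used by the MNIST files.

    The magic number's low byte gives the number of dimensions;
    each dimension size follows as a 32-bit big-endian integer.
    """
    magic, = struct.unpack(">I", raw[:4])
    ndim = magic & 0xFF
    dims = struct.unpack(">" + "I" * ndim, raw[4:4 + 4 * ndim])
    return magic, dims

# Synthetic header matching train-images-idx3-ubyte:
# magic 0x00000803, 60000 images of 28x28 pixels.
header = struct.pack(">IIII", 0x0803, 60000, 28, 28)
magic, dims = read_idx_header(header)
print(magic, dims)  # 2051 (60000, 28, 28)
```

Run against the first 16 bytes of the real (decompressed) train-images-idx3-ubyte, the same parse should report 60000 items of 28x28, which matches the "A total of 60000 items. Rows: 28 Cols: 28" lines in the conversion log below.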

lichunlin@ThinkPad-T420:~/caffe/data/mnist$ ./get_mnist.sh
Downloading...
---02-26 15:29:13--  /exdb/mnist/train-images-idx3-ubyte.gz
Resolving ()... 216.165.22.6
Connecting to ()|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 9912422 (9.5M) [application/x-gzip]
Saving to: ‘train-images-idx3-ubyte.gz’

train-images-idx3-u  12%[=>                  ]   1.15M  --.-KB/s    in 24m 17s

-02-26 15:53:33 (826 B/s) - Read error at byte 1203539/9912422 (Connection timed out). Retrying.

---02-26 15:53:34--  (try: 2)  /exdb/mnist/train-images-idx3-ubyte.gz
Connecting to ()|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 206 Partial Content
Length: 9912422 (9.5M), 8708883 (8.3M) remaining [application/x-gzip]
Saving to: ‘train-images-idx3-ubyte.gz’

train-images-idx3-u 100%[++=================>]   9.45M  2.80KB/s    in 51m 58s

-02-26 16:45:34 (2.73 KB/s) - ‘train-images-idx3-ubyte.gz’ saved [9912422/9912422]

---02-26 16:45:34--  /exdb/mnist/train-labels-idx1-ubyte.gz
Resolving ()... 216.165.22.6
Connecting to ()|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 28881 (28K) [application/x-gzip]
Saving to: ‘train-labels-idx1-ubyte.gz’

train-labels-idx1-u 100%[===================>]  28.20K  5.24KB/s    in 5.4s

-02-26 16:45:49 (5.24 KB/s) - ‘train-labels-idx1-ubyte.gz’ saved [28881/28881]

---02-26 16:45:49--  /exdb/mnist/t10k-images-idx3-ubyte.gz
Resolving ()... 216.165.22.6
Connecting to ()|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1648877 (1.6M) [application/x-gzip]
Saving to: ‘t10k-images-idx3-ubyte.gz’

t10k-images-idx3-ub 100%[===================>]   1.57M  3.61KB/s    in 7m 39s

-02-26 16:53:29 (3.51 KB/s) - ‘t10k-images-idx3-ubyte.gz’ saved [1648877/1648877]

---02-26 16:53:29--  /exdb/mnist/t10k-labels-idx1-ubyte.gz
Resolving ()... 216.165.22.6
Connecting to ()|216.165.22.6|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 4542 (4.4K) [application/x-gzip]
Saving to: ‘t10k-labels-idx1-ubyte.gz’

t10k-labels-idx1-ub 100%[===================>]   4.44K  6.88KB/s    in 0.6s

-02-26 16:53:31 (6.88 KB/s) - ‘t10k-labels-idx1-ubyte.gz’ saved [4542/4542]

lichunlin@ThinkPad-T420:~/caffe/data/mnist$ cd ..
lichunlin@ThinkPad-T420:~/caffe/data$ cd ..
lichunlin@ThinkPad-T420:~/caffe$ ./examples/mnist/create_mnist.sh
Creating lmdb...
I0226 18:49:42.627514 18492 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0226 18:49:42.627812 18492 convert_mnist_data.cpp:88] A total of 60000 items.
I0226 18:49:42.627828 18492 convert_mnist_data.cpp:89] Rows: 28 Cols: 28
I0226 18:49:43.350973 18492 convert_mnist_data.cpp:108] Processed 60000 files.
I0226 18:49:43.377131 18496 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0226 18:49:43.377393 18496 convert_mnist_data.cpp:88] A total of 10000 items.
I0226 18:49:43.377408 18496 convert_mnist_data.cpp:89] Rows: 28 Cols: 28
I0226 18:49:43.497009 18496 convert_mnist_data.cpp:108] Processed 10000 files.
Done.
lichunlin@ThinkPad-T420:~/caffe$ ./examples/mnist/train_lenet.sh
I0226 22:37:11.614959 2442 caffe.cpp:197] Use CPU.
I0226 22:37:11.615259 2442 solver.cpp:45] Initializing solver from parameters:
test_iter: 100
test_interval: 500
base_lr: 0.01
display: 100
max_iter: 10000
lr_policy: "inv"
gamma: 0.0001
power: 0.75
momentum: 0.9
weight_decay: 0.0005
snapshot: 5000
snapshot_prefix: "examples/mnist/lenet"
solver_mode: CPU
net: "examples/mnist/lenet_train_test.prototxt"
train_state {
  level: 0
  stage: ""
}
I0226 22:37:11.615427 2442 solver.cpp:102] Creating training net from net file: examples/mnist/lenet_train_test.prototxt
I0226 22:37:11.615628 2442 net.cpp:296] The NetState phase (0) differed from the phase (1) specified by a rule in layer mnist
I0226 22:37:11.615650 2442 net.cpp:296] The NetState phase (0) differed from the phase (1) specified by a rule in layer accuracy
I0226 22:37:11.615666 2442 net.cpp:53] Initializing net from parameters:
name: "LeNet"
state {
  phase: TRAIN
  level: 0
  stage: ""
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0226 22:37:11.615937 2442 layer_factory.hpp:77] Creating layer mnist
I0226 22:37:11.616030 2442 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_train_lmdb
I0226 22:37:11.616065 2442 net.cpp:86] Creating Layer mnist
I0226 22:37:11.616081 2442 net.cpp:382] mnist -> data
I0226 22:37:11.616111 2442 net.cpp:382] mnist -> label
I0226 22:37:11.619233 2442 data_layer.cpp:45] output data size: 64,1,28,28
I0226 22:37:11.619426 2442 net.cpp:124] Setting up mnist
I0226 22:37:11.619446 2442 net.cpp:131] Top shape: 64 1 28 28 (50176)
I0226 22:37:11.619469 2442 net.cpp:131] Top shape: 64 (64)
I0226 22:37:11.619483 2442 net.cpp:139] Memory required for data: 60
I0226 22:37:11.619516 2442 layer_factory.hpp:77] Creating layer conv1
I0226 22:37:11.619544 2442 net.cpp:86] Creating Layer conv1
I0226 22:37:11.619558 2442 net.cpp:408] conv1 <- data
I0226 22:37:11.619580 2442 net.cpp:382] conv1 -> conv1
I0226 22:37:11.619652 2442 net.cpp:124] Setting up conv1
I0226 22:37:11.619668 2442 net.cpp:131] Top shape: 64 20 24 24 (737280)
I0226 22:37:11.619689 2442 net.cpp:139] Memory required for data: 3150080
I0226 22:37:11.619994 2442 layer_factory.hpp:77] Creating layer pool1
I0226 22:37:11.61 2442 net.cpp:86] Creating Layer pool1
I0226 22:37:11.65 2442 net.cpp:408] pool1 <- conv1
I0226 22:37:11.63 2442 net.cpp:382] pool1 -> pool1
I0226 22:37:11.64 2442 net.cpp:124] Setting up pool1
I0226 22:37:11.68 2442 net.cpp:131] Top shape: 64 20 12 12 (184320)
I0226 22:37:11.63 2442 net.cpp:139] Memory required for data: 3887360
I0226 22:37:11.65 2442 layer_factory.hpp:77] Creating layer conv2
I0226 22:37:11.63 2442 net.cpp:86] Creating Layer conv2
I0226 22:37:11.60 2442 net.cpp:408] conv2 <- pool1
I0226 22:37:11.69 2442 net.cpp:382] conv2 -> conv2
I0226 22:37:11.620643 2442 net.cpp:124] Setting up conv2
I0226 22:37:11.620668 2442 net.cpp:131] Top shape: 64 50 8 8 (204800)
I0226 22:37:11.620684 2442 net.cpp:139] Memory required for data: 4706560
I0226 22:37:11.620703 2442 layer_factory.hpp:77] Creating layer pool2
I0226 22:37:11.620720 2442 net.cpp:86] Creating Layer pool2
I0226 22:37:11.620971 2442 net.cpp:408] pool2 <- conv2
I0226 22:37:11.621006 2442 net.cpp:382] pool2 -> pool2
I0226 22:37:11.621078 2442 net.cpp:124] Setting up pool2
I0226 22:37:11.621101 2442 net.cpp:131] Top shape: 64 50 4 4 (51200)
I0226 22:37:11.621125 2442 net.cpp:139] Memory required for data: 4911360
I0226 22:37:11.621209 2442 layer_factory.hpp:77] Creating layer ip1
I0226 22:37:11.621284 2442 net.cpp:86] Creating Layer ip1
I0226 22:37:11.621311 2442 net.cpp:408] ip1 <- pool2
I0226 22:37:11.621346 2442 net.cpp:382] ip1 -> ip1
I0226 22:37:11.625741 2442 net.cpp:124] Setting up ip1
I0226 22:37:11.625774 2442 net.cpp:131] Top shape: 64 500 (32000)
I0226 22:37:11.625785 2442 net.cpp:139] Memory required for data: 5039360
I0226 22:37:11.625804 2442 layer_factory.hpp:77] Creating layer relu1
I0226 22:37:11.625818 2442 net.cpp:86] Creating Layer relu1
I0226 22:37:11.625828 2442 net.cpp:408] relu1 <- ip1
I0226 22:37:11.625840 2442 net.cpp:369] relu1 -> ip1 (in-place)
I0226 22:37:11.625862 2442 net.cpp:124] Setting up relu1
I0226 22:37:11.625874 2442 net.cpp:131] Top shape: 64 500 (32000)
I0226 22:37:11.625885 2442 net.cpp:139] Memory required for data: 5167360
I0226 22:37:11.625896 2442 layer_factory.hpp:77] Creating layer ip2
I0226 22:37:11.625914 2442 net.cpp:86] Creating Layer ip2
I0226 22:37:11.625924 2442 net.cpp:408] ip2 <- ip1
I0226 22:37:11.625941 2442 net.cpp:382] ip2 -> ip2
I0226 22:37:11.626019 2442 net.cpp:124] Setting up ip2
I0226 22:37:11.626030 2442 net.cpp:131] Top shape: 64 10 (640)
I0226 22:37:11.626044 2442 net.cpp:139] Memory required for data: 5169920
I0226 22:37:11.626060 2442 layer_factory.hpp:77] Creating layer loss
I0226 22:37:11.626085 2442 net.cpp:86] Creating Layer loss
I0226 22:37:11.626096 2442 net.cpp:408] loss <- ip2
I0226 22:37:11.626109 2442 net.cpp:408] loss <- label
I0226 22:37:11.626124 2442 net.cpp:382] loss -> loss
I0226 22:37:11.626142 2442 layer_factory.hpp:77] Creating layer loss
I0226 22:37:11.626168 2442 net.cpp:124] Setting up loss
I0226 22:37:11.626178 2442 net.cpp:131] Top shape: (1)
I0226 22:37:11.626191 2442 net.cpp:134] with loss weight 1
I0226 22:37:11.626219 2442 net.cpp:139] Memory required for data: 5169924
I0226 22:37:11.626231 2442 net.cpp:200] loss needs backward computation.
I0226 22:37:11.626246 2442 net.cpp:200] ip2 needs backward computation.
I0226 22:37:11.626258 2442 net.cpp:200] relu1 needs backward computation.
I0226 22:37:11.626268 2442 net.cpp:200] ip1 needs backward computation.
I0226 22:37:11.626279 2442 net.cpp:200] pool2 needs backward computation.
I0226 22:37:11.626291 2442 net.cpp:200] conv2 needs backward computation.
I0226 22:37:11.626302 2442 net.cpp:200] pool1 needs backward computation.
I0226 22:37:11.626313 2442 net.cpp:200] conv1 needs backward computation.
I0226 22:37:11.626325 2442 net.cpp:202] mnist does not need backward computation.
I0226 22:37:11.626335 2442 net.cpp:244] This network produces output loss
I0226 22:37:11.626353 2442 net.cpp:257] Network initialization done.
I0226 22:37:11.626588 2442 solver.cpp:190] Creating test net (#0) specified by net file: examples/mnist/lenet_train_test.prototxt
I0226 22:37:11.626632 2442 net.cpp:296] The NetState phase (1) differed from the phase (0) specified by a rule in layer mnist
I0226 22:37:11.626688 2442 net.cpp:53] Initializing net from parameters:
name: "LeNet"
state {
  phase: TEST
}
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    scale: 0.00390625
  }
  data_param {
    source: "examples/mnist/mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 500
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param {
    lr_mult: 1
  }
  param {
    lr_mult: 2
  }
  inner_product_param {
    num_output: 10
    weight_filler {
      type: "xavier"
    }
    bias_filler {
      type: "constant"
    }
  }
}
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include {
    phase: TEST
  }
}
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
I0226 22:37:11.626976 2442 layer_factory.hpp:77] Creating layer mnist
I0226 22:37:11.627063 2442 db_lmdb.cpp:35] Opened lmdb examples/mnist/mnist_test_lmdb
I0226 22:37:11.627090 2442 net.cpp:86] Creating Layer mnist
I0226 22:37:11.627108 2442 net.cpp:382] mnist -> data
I0226 22:37:11.627128 2442 net.cpp:382] mnist -> label
I0226 22:37:11.627156 2442 data_layer.cpp:45] output data size: 100,1,28,28
I0226 22:37:11.627238 2442 net.cpp:124] Setting up mnist
I0226 22:37:11.627250 2442 net.cpp:131] Top shape: 100 1 28 28 (78400)
I0226 22:37:11.627265 2442 net.cpp:131] Top shape: 100 (100)
I0226 22:37:11.627276 2442 net.cpp:139] Memory required for data: 314000
I0226 22:37:11.627323 2442 layer_factory.hpp:77] Creating layer label_mnist_1_split
I0226 22:37:11.627349 2442 net.cpp:86] Creating Layer label_mnist_1_split
I0226 22:37:11.627360 2442 net.cpp:408] label_mnist_1_split <- label
I0226 22:37:11.627377 2442 net.cpp:382] label_mnist_1_split -> label_mnist_1_split_0
I0226 22:37:11.627393 2442 net.cpp:382] label_mnist_1_split -> label_mnist_1_split_1
I0226 22:37:11.627410 2442 net.cpp:124] Setting up label_mnist_1_split
I0226 22:37:11.627421 2442 net.cpp:131] Top shape: 100 (100)
I0226 22:37:11.627435 2442 net.cpp:131] Top shape: 100 (100)
I0226 22:37:11.627449 2442 net.cpp:139] Memory required for data: 314800
I0226 22:37:11.627460 2442 layer_factory.hpp:77] Creating layer conv1
I0226 22:37:11.627487 2442 net.cpp:86] Creating Layer conv1
I0226 22:37:11.627501 2442 net.cpp:408] conv1 <- data
I0226 22:37:11.627524 2442 net.cpp:382] conv1 -> conv1
I0226 22:37:11.627596 2442 net.cpp:124] Setting up conv1
I0226 22:37:11.627625 2442 net.cpp:131] Top shape: 100 20 24 24 (1152000)
I0226 22:37:11.627648 2442 net.cpp:139] Memory required for data: 4922800
I0226 22:37:11.627669 2442 layer_factory.hpp:77] Creating layer pool1
I0226 22:37:11.627686 2442 net.cpp:86] Creating Layer pool1
I0226 22:37:11.627724 2442 net.cpp:408] pool1 <- conv1
I0226 22:37:11.627744 2442 net.cpp:382] pool1 -> pool1
I0226 22:37:11.627763 2442 net.cpp:124] Setting up pool1
I0226 22:37:11.627779 2442 net.cpp:131] Top shape: 100 20 12 12 (288000)
I0226 22:37:11.627791 2442 net.cpp:139] Memory required for data: 6074800
I0226 22:37:11.627802 2442 layer_factory.hpp:77] Creating layer conv2
I0226 22:37:11.627822 2442 net.cpp:86] Creating Layer conv2
I0226 22:37:11.627837 2442 net.cpp:408] conv2 <- pool1
I0226 22:37:11.627851 2442 net.cpp:382] conv2 -> conv2
I0226 22:37:11.628115 2442 net.cpp:124] Setting up conv2
I0226 22:37:11.628129 2442 net.cpp:131] Top shape: 100 50 8 8 (320000)
I0226 22:37:11.628141 2442 net.cpp:139] Memory required for data: 7354800
I0226 22:37:11.628159 2442 layer_factory.hpp:77] Creating layer pool2
I0226 22:37:11.628172 2442 net.cpp:86] Creating Layer pool2
I0226 22:37:11.628182 2442 net.cpp:408] pool2 <- conv2
I0226 22:37:11.628197 2442 net.cpp:382] pool2 -> pool2
I0226 22:37:11.628216 2442 net.cpp:124] Setting up pool2
I0226 22:37:11.628226 2442 net.cpp:131] Top shape: 100 50 4 4 (80000)
I0226 22:37:11.628239 2442 net.cpp:139] Memory required for data: 7674800
I0226 22:37:11.628254 2442 layer_factory.hpp:77] Creating layer ip1
I0226 22:37:11.628270 2442 net.cpp:86] Creating Layer ip1
I0226 22:37:11.628281 2442 net.cpp:408] ip1 <- pool2
I0226 22:37:11.628295 2442 net.cpp:382] ip1 -> ip1
I0226 22:37:11.632292 2442 net.cpp:124] Setting up ip1
I0226 22:37:11.632319 2442 net.cpp:131] Top shape: 100 500 (50000)
I0226 22:37:11.632335 2442 net.cpp:139] Memory required for data: 7874800
I0226 22:37:11.632359 2442 layer_factory.hpp:77] Creating layer relu1
I0226 22:37:11.632377 2442 net.cpp:86] Creating Layer relu1
I0226 22:37:11.632388 2442 net.cpp:408] relu1 <- ip1
I0226 22:37:11.632402 2442 net.cpp:369] relu1 -> ip1 (in-place)
I0226 22:37:11.632417 2442 net.cpp:124] Setting up relu1
I0226 22:37:11.632426 2442 net.cpp:131] Top shape: 100 500 (50000)
I0226 22:37:11.632438 2442 net.cpp:139] Memory required for data: 8074800
I0226 22:37:11.632448 2442 layer_factory.hpp:77] Creating layer ip2
I0226 22:37:11.632467 2442 net.cpp:86] Creating Layer ip2
I0226 22:37:11.632478 2442 net.cpp:408] ip2 <- ip1
I0226 22:37:11.632493 2442 net.cpp:382] ip2 -> ip2
I0226 22:37:11.632560 2442 net.cpp:124] Setting up ip2
I0226 22:37:11.632571 2442 net.cpp:131] Top shape: 100 10 (1000)
I0226 22:37:11.632583 2442 net.cpp:139] Memory required for data: 8078800
I0226 22:37:11.632597 2442 layer_factory.hpp:77] Creating layer ip2_ip2_0_split
I0226 22:37:11.632611 2442 net.cpp:86] Creating Layer ip2_ip2_0_split
I0226 22:37:11.632622 2442 net.cpp:408] ip2_ip2_0_split <- ip2
I0226 22:37:11.632634 2442 net.cpp:382] ip2_ip2_0_split -> ip2_ip2_0_split_0
I0226 22:37:11.632648 2442 net.cpp:382] ip2_ip2_0_split -> ip2_ip2_0_split_1
I0226 22:37:11.632663 2442 net.cpp:124] Setting up ip2_ip2_0_split
I0226 22:37:11.632673 2442 net.cpp:131] Top shape: 100 10 (1000)
I0226 22:37:11.632684 2442 net.cpp:131] Top shape: 100 10 (1000)
I0226 22:37:11.632694 2442 net.cpp:139] Memory required for data: 8086800
I0226 22:37:11.632704 2442 layer_factory.hpp:77] Creating layer accuracy
I0226 22:37:11.632726 2442 net.cpp:86] Creating Layer accuracy
I0226 22:37:11.632737 2442 net.cpp:408] accuracy <- ip2_ip2_0_split_0
I0226 22:37:11.632750 2442 net.cpp:408] accuracy <- label_mnist_1_split_0
I0226 22:37:11.632762 2442 net.cpp:382] accuracy -> accuracy
I0226 22:37:11.632777 2442 net.cpp:124] Setting up accuracy
I0226 22:37:11.632786 2442 net.cpp:131] Top shape: (1)
I0226 22:37:11.632797 2442 net.cpp:139] Memory required for data: 8086804
I0226 22:37:11.632807 2442 layer_factory.hpp:77] Creating layer loss
I0226 22:37:11.632822 2442 net.cpp:86] Creating Layer loss
I0226 22:37:11.632830 2442 net.cpp:408] loss <- ip2_ip2_0_split_1
I0226 22:37:11.632843 2442 net.cpp:408] loss <- label_mnist_1_split_1
I0226 22:37:11.632855 2442 net.cpp:382] loss -> loss
I0226 22:37:11.632870 2442 layer_factory.hpp:77] Creating layer loss
I0226 22:37:11.632928 2442 net.cpp:124] Setting up loss
I0226 22:37:11.632939 2442 net.cpp:131] Top shape: (1)
I0226 22:37:11.632951 2442 net.cpp:134] with loss weight 1
I0226 22:37:11.632970 2442 net.cpp:139] Memory required for data: 8086808
I0226 22:37:11.632982 2442 net.cpp:200] loss needs backward computation.
I0226 22:37:11.632993 2442 net.cpp:202] accuracy does not need backward computation.
I0226 22:37:11.633005 2442 net.cpp:200] ip2_ip2_0_split needs backward computation.
I0226 22:37:11.633015 2442 net.cpp:200] ip2 needs backward computation.
I0226 22:37:11.633025 2442 net.cpp:200] relu1 needs backward computation.
I0226 22:37:11.633036 2442 net.cpp:200] ip1 needs backward computation.
I0226 22:37:11.633047 2442 net.cpp:200] pool2 needs backward computation.
I0226 22:37:11.633057 2442 net.cpp:200] conv2 needs backward computation.
I0226 22:37:11.633067 2442 net.cpp:200] pool1 needs backward computation.
I0226 22:37:11.633077 2442 net.cpp:200] conv1 needs backward computation.
I0226 22:37:11.633090 2442 net.cpp:202] label_mnist_1_split does not need backward computation.
I0226 22:37:11.633100 2442 net.cpp:202] mnist does not need backward computation.
I0226 22:37:11.633108 2442 net.cpp:244] This network produces output accuracy
I0226 22:37:11.633121 2442 net.cpp:244] This network produces output loss
I0226 22:37:11.633142 2442 net.cpp:257] Network initialization done.
I0226 22:37:11.633199 2442 solver.cpp:57] Solver scaffolding done.
I0226 22:37:11.633236 2442 caffe.cpp:239] Starting Optimization
I0226 22:37:11.633246 2442 solver.cpp:289] Solving LeNet
I0226 22:37:11.633256 2442 solver.cpp:290] Learning Rate Policy: inv
I0226 22:37:11.633992 2442 solver.cpp:347] Iteration 0, Testing net (#0)
I0226 22:37:17.371232 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:37:17.609834 2442 solver.cpp:414] Test net output #0: accuracy = 0.0899
I0226 22:37:17.609889 2442 solver.cpp:414] Test net output #1: loss = 2.40369 (* 1 = 2.40369 loss)
I0226 22:37:17.707661 2442 solver.cpp:239] Iteration 0 (0 iter/s, 6.074s/100 iters), loss = 2.46128
I0226 22:37:17.707710 2442 solver.cpp:258] Train net output #0: loss = 2.46128 (* 1 = 2.46128 loss)
I0226 22:37:17.707731 2442 sgd_solver.cpp:112] Iteration 0, lr = 0.01
I0226 22:37:27.450498 2442 solver.cpp:239] Iteration 100 (10.2648 iter/s, 9.742s/100 iters), loss = 0.203452
I0226 22:37:27.450556 2442 solver.cpp:258] Train net output #0: loss = 0.203452 (* 1 = 0.203452 loss)
I0226 22:37:27.450573 2442 sgd_solver.cpp:112] Iteration 100, lr = 0.00992565
I0226 22:37:37.300575 2442 solver.cpp:239] Iteration 200 (10.1523 iter/s, 9.85s/100 iters), loss = 0.127584
I0226 22:37:37.300628 2442 solver.cpp:258] Train net output #0: loss = 0.127584 (* 1 = 0.127584 loss)
I0226 22:37:37.300644 2442 sgd_solver.cpp:112] Iteration 200, lr = 0.00985258
I0226 22:37:47.227037 2442 solver.cpp:239] Iteration 300 (10.0746 iter/s, 9.926s/100 iters), loss = 0.152882
I0226 22:37:47.227156 2442 solver.cpp:258] Train net output #0: loss = 0.152882 (* 1 = 0.152882 loss)
I0226 22:37:47.227174 2442 sgd_solver.cpp:112] Iteration 300, lr = 0.00978075
I0226 22:37:57.398344 2442 solver.cpp:239] Iteration 400 (9.83188 iter/s, 10.171s/100 iters), loss = 0.0781792
I0226 22:37:57.398398 2442 solver.cpp:258] Train net output #0: loss = 0.078179 (* 1 = 0.078179 loss)
I0226 22:37:57.398416 2442 sgd_solver.cpp:112] Iteration 400, lr = 0.00971013
I0226 22:38:07.812530 2442 solver.cpp:347] Iteration 500, Testing net (#0)
I0226 22:38:14.445535 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:38:14.759711 2442 solver.cpp:414] Test net output #0: accuracy = 0.9719
I0226 22:38:14.759773 2442 solver.cpp:414] Test net output #1: loss = 0.0836109 (* 1 = 0.0836109 loss)
I0226 22:38:14.888417 2442 solver.cpp:239] Iteration 500 (5.71755 iter/s, 17.49s/100 iters), loss = 0.0671692
I0226 22:38:14.888476 2442 solver.cpp:258] Train net output #0: loss = 0.0671691 (* 1 = 0.0671691 loss)
I0226 22:38:14.888497 2442 sgd_solver.cpp:112] Iteration 500, lr = 0.00964069
I0226 22:38:25.550348 2442 solver.cpp:239] Iteration 600 (9.37998 iter/s, 10.661s/100 iters), loss = 0.0683537
I0226 22:38:25.550529 2442 solver.cpp:258] Train net output #0: loss = 0.0683536 (* 1 = 0.0683536 loss)
I0226 22:38:25.550545 2442 sgd_solver.cpp:112] Iteration 600, lr = 0.0095724
I0226 22:38:35.250106 2442 solver.cpp:239] Iteration 700 (10.3103 iter/s, 9.699s/100 iters), loss = 0.119176
I0226 22:38:35.250159 2442 solver.cpp:258] Train net output #0: loss = 0.119176 (* 1 = 0.119176 loss)
I0226 22:38:35.250174 2442 sgd_solver.cpp:112] Iteration 700, lr = 0.00950522
I0226 22:38:44.834887 2442 solver.cpp:239] Iteration 800 (10.4341 iter/s, 9.584s/100 iters), loss = 0.191517
I0226 22:38:44.834939 2442 solver.cpp:258] Train net output #0: loss = 0.191517 (* 1 = 0.191517 loss)
I0226 22:38:44.834955 2442 sgd_solver.cpp:112] Iteration 800, lr = 0.00943913
I0226 22:38:54.965384 2442 solver.cpp:239] Iteration 900 (9.87167 iter/s, 10.13s/100 iters), loss = 0.174257
I0226 22:38:54.965435 2442 solver.cpp:258] Train net output #0: loss = 0.174257 (* 1 = 0.174257 loss)
I0226 22:38:54.965449 2442 sgd_solver.cpp:112] Iteration 900, lr = 0.00937411
I0226 22:38:58.649479 2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:39:05.269567 2442 solver.cpp:347] Iteration 1000, Testing net (#0)
I0226 22:39:11.354693 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:39:11.597376 2442 solver.cpp:414] Test net output #0: accuracy = 0.9808
I0226 22:39:11.597430 2442 solver.cpp:414] Test net output #1: loss = 0.0600035 (* 1 = 0.0600035 loss)
I0226 22:39:11.690896 2442 solver.cpp:239] Iteration 1000 (5.97907 iter/s, 16.725s/100 iters), loss = 0.120871
I0226 22:39:11.690948 2442 solver.cpp:258] Train net output #0: loss = 0.120871 (* 1 = 0.120871 loss)
I0226 22:39:11.690968 2442 sgd_solver.cpp:112] Iteration 1000, lr = 0.00931012
I0226 22:39:21.372987 2442 solver.cpp:239] Iteration 1100 (10.3284 iter/s, 9.682s/100 iters), loss = 0.00811403
I0226 22:39:21.373046 2442 solver.cpp:258] Train net output #0: loss = 0.00811392 (* 1 = 0.00811392 loss)
I0226 22:39:21.373065 2442 sgd_solver.cpp:112] Iteration 1100, lr = 0.00924715
I0226 22:39:31.063477 2442 solver.cpp:239] Iteration 1200 (10.3199 iter/s, 9.69s/100 iters), loss = 0.0164351
I0226 22:39:31.063616 2442 solver.cpp:258] Train net output #0: loss = 0.0164349 (* 1 = 0.0164349 loss)
I0226 22:39:31.063635 2442 sgd_solver.cpp:112] Iteration 1200, lr = 0.00918515
I0226 22:39:40.919028 2442 solver.cpp:239] Iteration 1300 (10.1471 iter/s, 9.855s/100 iters), loss = 0.0228002
I0226 22:39:40.919081 2442 solver.cpp:258] Train net output #0: loss = 0.0228001 (* 1 = 0.0228001 loss)
I0226 22:39:40.919096 2442 sgd_solver.cpp:112] Iteration 1300, lr = 0.00912412
I0226 22:39:50.860584 2442 solver.cpp:239] Iteration 1400 (10.0594 iter/s, 9.941s/100 iters), loss = 0.00871326
I0226 22:39:50.860630 2442 solver.cpp:258] Train net output #0: loss = 0.00871314 (* 1 = 0.00871314 loss)
I0226 22:39:50.860648 2442 sgd_solver.cpp:112] Iteration 1400, lr = 0.00906403
I0226 22:40:00.505358 2442 solver.cpp:347] Iteration 1500, Testing net (#0)
I0226 22:40:06.325935 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:40:06.561522 2442 solver.cpp:414] Test net output #0: accuracy = 0.9833
I0226 22:40:06.561571 2442 solver.cpp:414] Test net output #1: loss = 0.0514972 (* 1 = 0.0514972 loss)
I0226 22:40:06.656510 2442 solver.cpp:239] Iteration 1500 (6.33112 iter/s, 15.795s/100 iters), loss = 0.0838645
I0226 22:40:06.656564 2442 solver.cpp:258] Train net output #0: loss = 0.0838644 (* 1 = 0.0838644 loss)
I0226 22:40:06.656580 2442 sgd_solver.cpp:112] Iteration 1500, lr = 0.00900485
I0226 22:40:16.378629 2442 solver.cpp:239] Iteration 1600 (10.2859 iter/s, 9.722s/100 iters), loss = 0.141069
I0226 22:40:16.378680 2442 solver.cpp:258] Train net output #0: loss = 0.141069 (* 1 = 0.141069 loss)
I0226 22:40:16.378697 2442 sgd_solver.cpp:112] Iteration 1600, lr = 0.00894657
I0226 22:40:26.211732 2442 solver.cpp:239] Iteration 1700 (10.1698 iter/s, 9.833s/100 iters), loss = 0.0447211
I0226 22:40:26.211783 2442 solver.cpp:258] Train net output #0: loss = 0.044721 (* 1 = 0.044721 loss)
I0226 22:40:26.211796 2442 sgd_solver.cpp:112] Iteration 1700, lr = 0.00888916
I0226 22:40:35.953567 2442 solver.cpp:239] Iteration 1800 (10.2659 iter/s, 9.741s/100 iters), loss = 0.0174017
I0226 22:40:35.953621 2442 solver.cpp:258] Train net output #0: loss = 0.0174016 (* 1 = 0.0174016 loss)
I0226 22:40:35.953637 2442 sgd_solver.cpp:112] Iteration 1800, lr = 0.0088326
I0226 22:40:42.954711 2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:40:45.889737 2442 solver.cpp:239] Iteration 1900 (10.0644 iter/s, 9.936s/100 iters), loss = 0.105484
I0226 22:40:45.889784 2442 solver.cpp:258] Train net output #0: loss = 0.105484 (* 1 = 0.105484 loss)
I0226 22:40:45.889801 2442 sgd_solver.cpp:112] Iteration 1900, lr = 0.00877687
I0226 22:40:55.702467 2442 solver.cpp:347] Iteration 2000, Testing net (#0)
I0226 22:41:01.541447 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:41:01.797310 2442 solver.cpp:414] Test net output #0: accuracy = 0.9847
I0226 22:41:01.797366 2442 solver.cpp:414] Test net output #1: loss = 0.0454274 (* 1 = 0.0454274 loss)
I0226 22:41:01.895714 2442 solver.cpp:239] Iteration 2000 (6.24805 iter/s, 16.005s/100 iters), loss = 0.00678412
I0226 22:41:01.895766 2442 solver.cpp:258] Train net output #0: loss = 0.00678401 (* 1 = 0.00678401 loss)
I0226 22:41:01.895781 2442 sgd_solver.cpp:112] Iteration 2000, lr = 0.00872196
I0226 22:41:12.291903 2442 solver.cpp:239] Iteration 2100 (9.61908 iter/s, 10.396s/100 iters), loss = 0.0128036
I0226 22:41:12.291952 2442 solver.cpp:258] Train net output #0: loss = 0.0128035 (* 1 = 0.0128035 loss)
I0226 22:41:12.291972 2442 sgd_solver.cpp:112] Iteration 2100, lr = 0.00866784
I0226 22:41:22.466570 2442 solver.cpp:239] Iteration 2200 (9.82898 iter/s, 10.174s/100 iters), loss = 0.0142575
I0226 22:41:22.466740 2442 solver.cpp:258] Train net output #0: loss = 0.0142573 (* 1 = 0.0142573 loss)
I0226 22:41:22.466753 2442 sgd_solver.cpp:112] Iteration 2200, lr = 0.0086145
I0226 22:41:32.224442 2442 solver.cpp:239] Iteration 2300 (10.2491 iter/s, 9.757s/100 iters), loss = 0.0805506
I0226 22:41:32.224493 2442 solver.cpp:258] Train net output #0: loss = 0.0805505 (* 1 = 0.0805505 loss)
I0226 22:41:32.224509 2442 sgd_solver.cpp:112] Iteration 2300, lr = 0.00856192
I0226 22:41:42.291199 2442 solver.cpp:239] Iteration 2400 (9.93443 iter/s, 10.066s/100 iters), loss = 0.00802351
I0226 22:41:42.291260 2442 solver.cpp:258] Train net output #0: loss = 0.00802335 (* 1 = 0.00802335 loss)
I0226 22:41:42.291280 2442 sgd_solver.cpp:112] Iteration 2400, lr = 0.00851008
I0226 22:41:51.945683 2442 solver.cpp:347] Iteration 2500, Testing net (#0)
I0226 22:41:57.714076 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:41:57.967310 2442 solver.cpp:414] Test net output #0: accuracy = 0.984
I0226 22:41:57.967372 2442 solver.cpp:414] Test net output #1: loss = 0.0476773 (* 1 = 0.0476773 loss)
I0226 22:41:58.071966 2442 solver.cpp:239] Iteration 2500 (6.33714 iter/s, 15.78s/100 iters), loss = 0.0273458
I0226 22:41:58.07 2442 solver.cpp:258] Train net output #0: loss = 0.0273456 (* 1 = 0.0273456 loss)
I0226 22:41:58.072034 2442 sgd_solver.cpp:112] Iteration 2500, lr = 0.00845897
I0226 22:42:07.869709 2442 solver.cpp:239] Iteration 2600 (10.2072 iter/s, 9.797s/100 iters), loss = 0.103635
I0226 22:42:07.869753 2442 solver.cpp:258] Train net output #0: loss = 0.103635 (* 1 = 0.103635 loss)
I0226 22:42:07.869774 2442 sgd_solver.cpp:112] Iteration 2600, lr = 0.00840857
I0226 22:42:17.519234 2442 solver.cpp:239] Iteration 2700 (10.3638 iter/s, 9.649s/100 iters), loss = 0.0917561
I0226 22:42:17.519301 2442 solver.cpp:258] Train net output #0: loss = 0.091756 (* 1 = 0.091756 loss)
I0226 22:42:17.519318 2442 sgd_solver.cpp:112] Iteration 2700, lr = 0.00835886
I0226 22:42:27.523949 2442 solver.cpp:239] Iteration 2800 (9.996 iter/s, 10.004s/100 iters), loss = 0.00693497
I0226 22:42:27.524004 2442 solver.cpp:258] Train net output #0: loss = 0.00693486 (* 1 = 0.00693486 loss)
I0226 22:42:27.524020 2442 sgd_solver.cpp:112] Iteration 2800, lr = 0.00830984
I0226 22:42:28.321317 2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:42:37.224385 2442 solver.cpp:239] Iteration 2900 (10.3093 iter/s, 9.7s/100 iters), loss = 0.0207115
I0226 22:42:37.224438 2442 solver.cpp:258] Train net output #0: loss = 0.0207114 (* 1 = 0.0207114 loss)
I0226 22:42:37.224454 2442 sgd_solver.cpp:112] Iteration 2900, lr = 0.00826148
I0226 22:42:46.819875 2442 solver.cpp:347] Iteration 3000, Testing net (#0)
I0226 22:42:52.522219 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:42:52.777567 2442 solver.cpp:414] Test net output #0: accuracy = 0.987
I0226 22:42:52.777616 2442 solver.cpp:414] Test net output #1: loss = 0.0408713 (* 1 = 0.0408713 loss)
I0226 22:42:52.876366 2442 solver.cpp:239] Iteration 3000 (6.38937 iter/s, 15.651s/100 iters), loss = 0.0119719
I0226 22:42:52.876420 2442 solver.cpp:258] Train net output #0: loss = 0.0119718 (* 1 = 0.0119718 loss)
I0226 22:42:52.876435 2442 sgd_solver.cpp:112] Iteration 3000, lr = 0.00821377
I0226 22:43:02.562208 2442 solver.cpp:239] Iteration 3100 (10.3252 iter/s, 9.685s/100 iters), loss = 0.00936197
I0226 22:43:02.562374 2442 solver.cpp:258] Train net output #0: loss = 0.00936181 (* 1 = 0.00936181 loss)
I0226 22:43:02.562392 2442 sgd_solver.cpp:112] Iteration 3100, lr = 0.0081667
I0226 22:43:12.228516 2442 solver.cpp:239] Iteration 3200 (10.3455 iter/s, 9.666s/100 iters), loss = 0.0061
I0226 22:43:12.228567 2442 solver.cpp:258] Train net output #0: loss = 0.00609983 (* 1 = 0.00609983 loss)
I0226 22:43:12.228585 2442 sgd_solver.cpp:112] Iteration 3200, lr = 0.00812025
I0226 22:43:21.852133 2442 solver.cpp:239] Iteration 3300 (10.3918 iter/s, 9.623s/100 iters), loss = 0.0449563
I0226 22:43:21.852195 2442 solver.cpp:258] Train net output #0: loss = 0.0449562 (* 1 = 0.0449562 loss)
I0226 22:43:21.852208 2442 sgd_solver.cpp:112] Iteration 3300, lr = 0.00807442
I0226 22:43:31.578043 2442 solver.cpp:239] Iteration 3400 (10.2828 iter/s, 9.725s/100 iters), loss = 0.0159143
I0226 22:43:31.578091 2442 solver.cpp:258] Train net output #0: loss = 0.0159141 (* 1 = 0.0159141 loss)
I0226 22:43:31.578109 2442 sgd_solver.cpp:112] Iteration 3400, lr = 0.00802918
I0226 22:43:41.093394 2442 solver.cpp:347] Iteration 3500, Testing net (#0)
I0226 22:43:46.909528 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:43:47.146631 2442 solver.cpp:414] Test net output #0: accuracy = 0.9827
I0226 22:43:47.146687 2442 solver.cpp:414] Test net output #1: loss = 0.0511475 (* 1 = 0.0511475 loss)
I0226 22:43:47.242857 2442 solver.cpp:239] Iteration 3500 (6.38407 iter/s, 15.664s/100 iters), loss = 0.00554056
I0226 22:43:47.242905 2442 solver.cpp:258] Train net output #0: loss = 0.0055404 (* 1 = 0.0055404 loss)
I0226 22:43:47.242923 2442 sgd_solver.cpp:112] Iteration 3500, lr = 0.00798454
I0226 22:43:56.946308 2442 solver.cpp:239] Iteration 3600 (10.3061 iter/s, 9.703s/100 iters), loss = 0.0293207
I0226 22:43:56.946359 2442 solver.cpp:258] Train net output #0: loss = 0.0293205 (* 1 = 0.0293205 loss)
I0226 22:43:56.946374 2442 sgd_solver.cpp:112] Iteration 3600, lr = 0.00794046
I0226 22:44:06.620342 2442 solver.cpp:239] Iteration 3700 (10.3381 iter/s, 9.673s/100 iters), loss = 0.0197716
I0226 22:44:06.620400 2442 solver.cpp:258] Train net output #0: loss = 0.0197715 (* 1 = 0.0197715 loss)
I0226 22:44:06.620417 2442 sgd_solver.cpp:112] Iteration 3700, lr = 0.00789695
I0226 22:44:10.976276 2443 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:44:16.277290 2442 solver.cpp:239] Iteration 3800 (10.3563 iter/s, 9.656s/100 iters), loss = 0.00992929
I0226 22:44:16.277463 2442 solver.cpp:258] Train net output #0: loss = 0.00992913 (* 1 = 0.00992913 loss)
I0226 22:44:16.277480 2442 sgd_solver.cpp:112] Iteration 3800, lr = 0.007854
I0226 22:44:25.952764 2442 solver.cpp:239] Iteration 3900 (10.3359 iter/s, 9.675s/100 iters), loss = 0.0400946
I0226 22:44:25.952816 2442 solver.cpp:258] Train net output #0: loss = 0.0400944 (* 1 = 0.0400944 loss)
I0226 22:44:25.952831 2442 sgd_solver.cpp:112] Iteration 3900, lr = 0.00781158
I0226 22:44:35.516759 2442 solver.cpp:347] Iteration 4000, Testing net (#0)
I0226 22:44:41.269843 2444 data_layer.cpp:73] Restarting data prefetching from start.
I0226 22:44:41.504411 2442 solver.cpp:414] Test net output #0: accuracy = 0.9894
I0226 22:44:41.504463 2442 solver.cpp:414] Test net output #1: loss = 0.0313221 (* 1 = 0.0313221 loss)
I0226 22:44:41.597961 2442 solver.cpp:239] Iteration 4000 (6.39182 iter/s, 15.645s/100 iters), loss = 0.0144327
I0226 22:44:41.598017 2442 solver.cpp:258] Train net output #0: loss = 0.0144325 (* 1 = 0.0144325 loss)
I0226 22:44:41.598042 2442 sgd_solver.cpp:112] Iteration 4000, lr = 0.00776969
I0226 22:44:51.331691 2442 solver.cpp:239] Iteration 4100 (10.2743 iter/s, 9.733s/100 iters), loss = 0.0163888
I0226 22:44:51.331867 2442 solver.cpp:258] Train net output #0: loss = 0.0163887 (* 1 = 0.0163887 loss)
I0226 22:44:51.331887 2442 sgd_solver.cpp:112] Iteration 4100, lr = 0.00772833
I0226 22:45:00.979555 2442 solver.cpp:239] Iteration 4200 (10.3659 iter/s, 9.647s/100 iters), loss = 0.0113799
I0226 22:45:00.979607 2442 solver.cpp:258] Train net output #0: loss = 0.0113797 (* 1 = 0.0113797 loss)
I0226 22:45:00.979621 2442 sgd_solver.cpp:112] Iteration 4200, lr = 0.00768748
I0226 22:45:10.706171 2442 solver.cpp:239] Iteration 4300 (10.2817 iter/s, 9.726s/100 iters), loss = 0.0473539
I0226 22:45:10.706226 2442 solver.cpp:258] Train net output #0: loss = 0.0473537 (* 1 = 0.0473537 loss)
I0226 22:45:10.706243 2442 sgd_solver.cpp:112] Iteration 4300, lr = 0.00764712
I0226 22:45:20.392432 2442 solver.cpp:239] Iteration 4400 (10.3242 iter/s,
9.686s/100 iters), loss = 0.0242695I0226 22:45:20.392488 2442 solver.cpp:258]Train net output #0: loss = 0.0242693 (* 1 = 0.0242693 loss)I0226 22:45:20.392509 2442 sgd_solver.cpp:112] Iteration 4400, lr = 0.00760726I0226 22:45:30.040961 2442 solver.cpp:347] Iteration 4500, Testing net (#0)I0226 22:45:35.948333 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:45:36.191329 2442 solver.cpp:414]Test net output #0: accuracy = 0.9873I0226 22:45:36.191385 2442 solver.cpp:414]Test net output #1: loss = 0.0384393 (* 1 = 0.0384393 loss)I0226 22:45:36.287822 2442 solver.cpp:239] Iteration 4500 (6.29129 iter/s, 15.895s/100 iters), loss = 0.00526828I0226 22:45:36.287874 2442 solver.cpp:258]Train net output #0: loss = 0.0052681 (* 1 = 0.0052681 loss)I0226 22:45:36.287909 2442 sgd_solver.cpp:112] Iteration 4500, lr = 0.00756788I0226 22:45:46.326690 2442 solver.cpp:239] Iteration 4600 (9.96214 iter/s, 10.038s/100 iters), loss = 0.0157644I0226 22:45:46.326746 2442 solver.cpp:258]Train net output #0: loss = 0.0157642 (* 1 = 0.0157642 loss)I0226 22:45:46.326761 2442 sgd_solver.cpp:112] Iteration 4600, lr = 0.00752897I0226 22:45:54.865051 2443 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:45:56.655055 2442 solver.cpp:239] Iteration 4700 (9.68242 iter/s, 10.328s/100 iters), loss = 0.00579816I0226 22:45:56.655112 2442 solver.cpp:258]Train net output #0: loss = 0.00579801 (* 1 = 0.00579801 loss)I0226 22:45:56.655128 2442 sgd_solver.cpp:112] Iteration 4700, lr = 0.00749052I0226 22:46:07.484783 2442 solver.cpp:239] Iteration 4800 (9.23446 iter/s, 10.829s/100 iters), loss = 0.0136092I0226 22:46:07.484933 2442 solver.cpp:258]Train net output #0: loss = 0.013609 (* 1 = 0.013609 loss)I0226 22:46:07.484949 2442 sgd_solver.cpp:112] Iteration 4800, lr = 0.00745253I0226 22:46:17.634359 2442 solver.cpp:239] Iteration 4900 (9.85319 iter/s, 10.149s/100 iters), loss = 0.00842682I0226 22:46:17.634413 2442 solver.cpp:258]Train net output #0: loss = 0.00842665 
(* 1 = 0.00842665 loss)I0226 22:46:17.634428 2442 sgd_solver.cpp:112] Iteration 4900, lr = 0.00741498I0226 22:46:27.167671 2442 solver.cpp:464] Snapshotting to binary proto file examples/mnist/lenet_iter_5000.caffemodelI0226 22:46:27.177868 2442 sgd_solver.cpp:284] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_5000.solverstateI0226 22:46:27.182534 2442 solver.cpp:347] Iteration 5000, Testing net (#0)I0226 22:46:32.830696 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:46:33.063828 2442 solver.cpp:414]Test net output #0: accuracy = 0.9899I0226 22:46:33.063880 2442 solver.cpp:414]Test net output #1: loss = 0.0316008 (* 1 = 0.0316008 loss)I0226 22:46:33.157487 2442 solver.cpp:239] Iteration 5000 (6.44205 iter/s, 15.523s/100 iters), loss = 0.0346689I0226 22:46:33.157538 2442 solver.cpp:258]Train net output #0: loss = 0.0346687 (* 1 = 0.0346687 loss)I0226 22:46:33.157557 2442 sgd_solver.cpp:112] Iteration 5000, lr = 0.00737788I0226 22:46:42.803390 2442 solver.cpp:239] Iteration 5100 (10.3681 iter/s, 9.645s/100 iters), loss = 0.0176471I0226 22:46:42.803572 2442 solver.cpp:258]Train net output #0: loss = 0.0176469 (* 1 = 0.0176469 loss)I0226 22:46:42.803607 2442 sgd_solver.cpp:112] Iteration 5100, lr = 0.0073412I0226 22:46:52.343144 2442 solver.cpp:239] Iteration 5200 (10.4833 iter/s, 9.539s/100 iters), loss = 0.00705752I0226 22:46:52.343199 2442 solver.cpp:258]Train net output #0: loss = 0.00705736 (* 1 = 0.00705736 loss)I0226 22:46:52.343215 2442 sgd_solver.cpp:112] Iteration 5200, lr = 0.00730495I0226 22:47:01.882566 2442 solver.cpp:239] Iteration 5300 (10.4833 iter/s, 9.539s/100 iters), loss = 0.00257235I0226 22:47:01.882619 2442 solver.cpp:258]Train net output #0: loss = 0.0025722 (* 1 = 0.0025722 loss)I0226 22:47:01.882635 2442 sgd_solver.cpp:112] Iteration 5300, lr = 0.00726911I0226 22:47:11.436738 2442 solver.cpp:239] Iteration 5400 (10.4668 iter/s, 9.554s/100 iters), loss = 0.00728917I0226 22:47:11.436789 2442 
solver.cpp:258]Train net output #0: loss = 0.00728904 (* 1 = 0.00728904 loss)I0226 22:47:11.436803 2442 sgd_solver.cpp:112] Iteration 5400, lr = 0.00723368I0226 22:47:20.922358 2442 solver.cpp:347] Iteration 5500, Testing net (#0)I0226 22:47:26.806264 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:47:27.066781 2442 solver.cpp:414]Test net output #0: accuracy = 0.9897I0226 22:47:27.066843 2442 solver.cpp:414]Test net output #1: loss = 0.0339334 (* 1 = 0.0339334 loss)I0226 22:47:27.164624 2442 solver.cpp:239] Iteration 5500 (6.35849 iter/s, 15.727s/100 iters), loss = 0.0067735I0226 22:47:27.164736 2442 solver.cpp:258]Train net output #0: loss = 0.00677338 (* 1 = 0.00677338 loss)I0226 22:47:27.164757 2442 sgd_solver.cpp:112] Iteration 5500, lr = 0.00719865I0226 22:47:36.799551 2442 solver.cpp:239] Iteration 5600 (10.3799 iter/s, 9.634s/100 iters), loss = 0.000724167I0226 22:47:36.799612 2442 solver.cpp:258]Train net output #0: loss = 0.000724045 (* 1 = 0.000724045 loss)I0226 22:47:36.799628 2442 sgd_solver.cpp:112] Iteration 5600, lr = 0.00716402I0226 22:47:38.824903 2443 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:47:46.896533 2442 solver.cpp:239] Iteration 5700 (9.90491 iter/s, 10.096s/100 iters), loss = 0.0032787I0226 22:47:46.896577 2442 solver.cpp:258]Train net output #0: loss = 0.00327857 (* 1 = 0.00327857 loss)I0226 22:47:46.896590 2442 sgd_solver.cpp:112] Iteration 5700, lr = 0.00712977I0226 22:47:57.388976 2442 solver.cpp:239] Iteration 5800 (9.53107 iter/s, 10.492s/100 iters), loss = 0.0372791I0226 22:47:57.389109 2442 solver.cpp:258]Train net output #0: loss = 0.037279 (* 1 = 0.037279 loss)I0226 22:47:57.389125 2442 sgd_solver.cpp:112] Iteration 5800, lr = 0.0070959I0226 22:48:06.790045 2442 solver.cpp:239] Iteration 5900 (10.6383 iter/s, 9.4s/100 iters), loss = 0.00496813I0226 22:48:06.790096 2442 solver.cpp:258]Train net output #0: loss = 0.00496801 (* 1 = 0.00496801 loss)I0226 22:48:06.790110 2442 
sgd_solver.cpp:112] Iteration 5900, lr = 0.0070624I0226 22:48:16.121098 2442 solver.cpp:347] Iteration 6000, Testing net (#0)I0226 22:48:21.66 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:48:21.844125 2442 solver.cpp:414]Test net output #0: accuracy = 0.9909I0226 22:48:21.844178 2442 solver.cpp:414]Test net output #1: loss = 0.0288429 (* 1 = 0.0288429 loss)I0226 22:48:21.934967 2442 solver.cpp:239] Iteration 6000 (6.60328 iter/s, 15.144s/100 iters), loss = 0.00401383I0226 22:48:21.935021 2442 solver.cpp:258]Train net output #0: loss = 0.00401372 (* 1 = 0.00401372 loss)I0226 22:48:21.935039 2442 sgd_solver.cpp:112] Iteration 6000, lr = 0.00702927I0226 22:48:31.014570 2442 solver.cpp:239] Iteration 6100 (11.0144 iter/s, 9.079s/100 iters), loss = 0.00256507I0226 22:48:31.014737 2442 solver.cpp:258]Train net output #0: loss = 0.00256497 (* 1 = 0.00256497 loss)I0226 22:48:31.014755 2442 sgd_solver.cpp:112] Iteration 6100, lr = 0.0069965I0226 22:48:40.115618 2442 solver.cpp:239] Iteration 6200 (10.989 iter/s, 9.1s/100 iters), loss = 0.00779089I0226 22:48:40.115670 2442 solver.cpp:258]Train net output #0: loss = 0.00779079 (* 1 = 0.00779079 loss)I0226 22:48:40.115684 2442 sgd_solver.cpp:112] Iteration 6200, lr = 0.00696408I0226 22:48:49.190863 2442 solver.cpp:239] Iteration 6300 (11.0193 iter/s, 9.075s/100 iters), loss = 0.0148993I0226 22:48:49.190919 2442 solver.cpp:258]Train net output #0: loss = 0.0148992 (* 1 = 0.0148992 loss)I0226 22:48:49.190935 2442 sgd_solver.cpp:112] Iteration 6300, lr = 0.00693201I0226 22:48:58.256402 2442 solver.cpp:239] Iteration 6400 (11.0314 iter/s, 9.065s/100 iters), loss = 0.00686445I0226 22:48:58.256455 2442 solver.cpp:258]Train net output #0: loss = 0.00686436 (* 1 = 0.00686436 loss)I0226 22:48:58.256474 2442 sgd_solver.cpp:112] Iteration 6400, lr = 0.00690029I0226 22:49:07.221040 2442 solver.cpp:347] Iteration 6500, Testing net (#0)I0226 22:49:12.603580 2444 data_layer.cpp:73] Restarting data prefetching from 
start.I0226 22:49:12.828657 2442 solver.cpp:414]Test net output #0: accuracy = 0.9896I0226 22:49:12.828713 2442 solver.cpp:414]Test net output #1: loss = 0.0310469 (* 1 = 0.0310469 loss)I0226 22:49:12.918390 2442 solver.cpp:239] Iteration 6500 (6.82082 iter/s, 14.661s/100 iters), loss = 0.00574645I0226 22:49:12.918442 2442 solver.cpp:258]Train net output #0: loss = 0.00574635 (* 1 = 0.00574635 loss)I0226 22:49:12.918457 2442 sgd_solver.cpp:112] Iteration 6500, lr = 0.0068689I0226 22:49:18.189663 2443 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:49:21.990790 2442 solver.cpp:239] Iteration 6600 (11.0229 iter/s, 9.072s/100 iters), loss = 0.0330389I0226 22:49:21.990844 2442 solver.cpp:258]Train net output #0: loss = 0.0330388 (* 1 = 0.0330388 loss)I0226 22:49:21.990856 2442 sgd_solver.cpp:112] Iteration 6600, lr = 0.00683784I0226 22:49:31.068985 2442 solver.cpp:239] Iteration 6700 (11.0156 iter/s, 9.078s/100 iters), loss = 0.0119388I0226 22:49:31.069037 2442 solver.cpp:258]Train net output #0: loss = 0.0119387 (* 1 = 0.0119387 loss)I0226 22:49:31.069051 2442 sgd_solver.cpp:112] Iteration 6700, lr = 0.00680711I0226 22:49:40.151279 2442 solver.cpp:239] Iteration 6800 (11.0108 iter/s, 9.082s/100 iters), loss = 0.00151411I0226 22:49:40.151388 2442 solver.cpp:258]Train net output #0: loss = 0.00151402 (* 1 = 0.00151402 loss)I0226 22:49:40.151407 2442 sgd_solver.cpp:112] Iteration 6800, lr = 0.0067767I0226 22:49:49.222928 2442 solver.cpp:239] Iteration 6900 (11.0241 iter/s, 9.071s/100 iters), loss = 0.00391463I0226 22:49:49.222980 2442 solver.cpp:258]Train net output #0: loss = 0.00391453 (* 1 = 0.00391453 loss)I0226 22:49:49.222995 2442 sgd_solver.cpp:112] Iteration 6900, lr = 0.0067466I0226 22:49:58.208765 2442 solver.cpp:347] Iteration 7000, Testing net (#0)I0226 22:50:03.612239 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:50:03.838024 2442 solver.cpp:414]Test net output #0: accuracy = 0.9904I0226 22:50:03.838075 2442 
solver.cpp:414]Test net output #1: loss = 0.0300491 (* 1 = 0.0300491 loss)I0226 22:50:03.927436 2442 solver.cpp:239] Iteration 7000 (6.80087 iter/s, 14.704s/100 iters), loss = 0.00687729I0226 22:50:03.927487 2442 solver.cpp:258]Train net output #0: loss = 0.0068772 (* 1 = 0.0068772 loss)I0226 22:50:03.927505 2442 sgd_solver.cpp:112] Iteration 7000, lr = 0.00671681I0226 22:50:12.997606 2442 solver.cpp:239] Iteration 7100 (11.0254 iter/s, 9.07s/100 iters), loss = 0.0138477I0226 22:50:12.997923 2442 solver.cpp:258]Train net output #0: loss = 0.0138476 (* 1 = 0.0138476 loss)I0226 22:50:12.997941 2442 sgd_solver.cpp:112] Iteration 7100, lr = 0.00668733I0226 22:50:22.059113 2442 solver.cpp:239] Iteration 7200 (11.0363 iter/s, 9.061s/100 iters), loss = 0.0080553I0226 22:50:22.059166 2442 solver.cpp:258]Train net output #0: loss = 0.0080552 (* 1 = 0.0080552 loss)I0226 22:50:22.059180 2442 sgd_solver.cpp:112] Iteration 7200, lr = 0.00665815I0226 22:50:31.139458 2442 solver.cpp:239] Iteration 7300 (11.0132 iter/s, 9.08s/100 iters), loss = 0.0308229I0226 22:50:31.139509 2442 solver.cpp:258]Train net output #0: loss = 0.0308228 (* 1 = 0.0308228 loss)I0226 22:50:31.139528 2442 sgd_solver.cpp:112] Iteration 7300, lr = 0.00662927I0226 22:50:40.195005 2442 solver.cpp:239] Iteration 7400 (11.0436 iter/s, 9.055s/100 iters), loss = 0.00373491I0226 22:50:40.195057 2442 solver.cpp:258]Train net output #0: loss = 0.00373482 (* 1 = 0.00373482 loss)I0226 22:50:40.195071 2442 sgd_solver.cpp:112] Iteration 7400, lr = 0.00660067I0226 22:50:48.808516 2443 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:50:49.173398 2442 solver.cpp:347] Iteration 7500, Testing net (#0)I0226 22:50:54.556639 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:50:54.781818 2442 solver.cpp:414]Test net output #0: accuracy = 0.99I0226 22:50:54.781873 2442 solver.cpp:414]Test net output #1: loss = 0.0321095 (* 1 = 0.0321095 loss)I0226 22:50:54.871702 2442 solver.cpp:239] 
Iteration 7500 (6.81385 iter/s, 14.676s/100 iters), loss = 0.00325872I0226 22:50:54.871752 2442 solver.cpp:258]Train net output #0: loss = 0.00325864 (* 1 = 0.00325864 loss)I0226 22:50:54.871767 2442 sgd_solver.cpp:112] Iteration 7500, lr = 0.00657236I0226 22:51:03.959707 2442 solver.cpp:239] Iteration 7600 (11.0047 iter/s, 9.087s/100 iters), loss = 0.0034388I0226 22:51:03.959767 2442 solver.cpp:258]Train net output #0: loss = 0.00343871 (* 1 = 0.00343871 loss)I0226 22:51:03.959781 2442 sgd_solver.cpp:112] Iteration 7600, lr = 0.00654433I0226 22:51:13.010762 2442 solver.cpp:239] Iteration 7700 (11.0497 iter/s, 9.05s/100 iters), loss = 0.0260679I0226 22:51:13.010815 2442 solver.cpp:258]Train net output #0: loss = 0.0260678 (* 1 = 0.0260678 loss)I0226 22:51:13.010829 2442 sgd_solver.cpp:112] Iteration 7700, lr = 0.00651658I0226 22:51:22.080538 2442 solver.cpp:239] Iteration 7800 (11.0266 iter/s, 9.069s/100 iters), loss = 0.00266021I0226 22:51:22.080646 2442 solver.cpp:258]Train net output #0: loss = 0.00266013 (* 1 = 0.00266013 loss)I0226 22:51:22.080663 2442 sgd_solver.cpp:112] Iteration 7800, lr = 0.00648911I0226 22:51:31.160694 2442 solver.cpp:239] Iteration 7900 (11.0132 iter/s, 9.08s/100 iters), loss = 0.00704976I0226 22:51:31.160763 2442 solver.cpp:258]Train net output #0: loss = 0.00704968 (* 1 = 0.00704968 loss)I0226 22:51:31.160782 2442 sgd_solver.cpp:112] Iteration 7900, lr = 0.0064619I0226 22:51:40.139366 2442 solver.cpp:347] Iteration 8000, Testing net (#0)I0226 22:51:45.531428 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:51:45.757876 2442 solver.cpp:414]Test net output #0: accuracy = 0.9901I0226 22:51:45.757930 2442 solver.cpp:414]Test net output #1: loss = 0.030962 (* 1 = 0.030962 loss)I0226 22:51:45.847748 2442 solver.cpp:239] Iteration 8000 (6.80921 iter/s, 14.686s/100 iters), loss = 0.00426692I0226 22:51:45.847800 2442 solver.cpp:258]Train net output #0: loss = 0.00426683 (* 1 = 0.00426683 loss)I0226 22:51:45.847815 2442 
sgd_solver.cpp:112] Iteration 8000, lr = 0.00643496I0226 22:51:54.916476 2442 solver.cpp:239] Iteration 8100 (11.0278 iter/s, 9.068s/100 iters), loss = 0.00814418I0226 22:51:54.916622 2442 solver.cpp:258]Train net output #0: loss = 0.00814409 (* 1 = 0.00814409 loss)I0226 22:51:54.916640 2442 sgd_solver.cpp:112] Iteration 8100, lr = 0.00640827I0226 22:52:04.000636 2442 solver.cpp:239] Iteration 8200 (11.0084 iter/s, 9.084s/100 iters), loss = 0.0103396I0226 22:52:04.000689 2442 solver.cpp:258]Train net output #0: loss = 0.0103395 (* 1 = 0.0103395 loss)I0226 22:52:04.000702 2442 sgd_solver.cpp:112] Iteration 8200, lr = 0.00638185I0226 22:52:13.055968 2442 solver.cpp:239] Iteration 8300 (11.0436 iter/s, 9.055s/100 iters), loss = 0.0209491I0226 22:52:13.056023 2442 solver.cpp:258]Train net output #0: loss = 0.020949 (* 1 = 0.020949 loss)I0226 22:52:13.056041 2442 sgd_solver.cpp:112] Iteration 8300, lr = 0.00635568I0226 22:52:22.116808 2442 solver.cpp:239] Iteration 8400 (11.0375 iter/s, 9.06s/100 iters), loss = 0.00696749I0226 22:52:22.116861 2442 solver.cpp:258]Train net output #0: loss = 0.00696739 (* 1 = 0.00696739 loss)I0226 22:52:22.116875 2442 sgd_solver.cpp:112] Iteration 8400, lr = 0.00632975I0226 22:52:25.107352 2443 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:52:31.087731 2442 solver.cpp:347] Iteration 8500, Testing net (#0)I0226 22:52:36.476758 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:52:36.702246 2442 solver.cpp:414]Test net output #0: accuracy = 0.9909I0226 22:52:36.702301 2442 solver.cpp:414]Test net output #1: loss = 0.0295619 (* 1 = 0.0295619 loss)I0226 22:52:36.792059 2442 solver.cpp:239] Iteration 8500 (6.81431 iter/s, 14.675s/100 iters), loss = 0.00802257I0226 22:52:36.792111 2442 solver.cpp:258]Train net output #0: loss = 0.00802247 (* 1 = 0.00802247 loss)I0226 22:52:36.792124 2442 sgd_solver.cpp:112] Iteration 8500, lr = 0.00630407I0226 22:52:45.857764 2442 solver.cpp:239] Iteration 8600 (11.0314 
iter/s, 9.065s/100 iters), loss = 0.000618256I0226 22:52:45.857817 2442 solver.cpp:258]Train net output #0: loss = 0.000618144 (* 1 = 0.000618144 loss)I0226 22:52:45.857832 2442 sgd_solver.cpp:112] Iteration 8600, lr = 0.00627864I0226 22:52:54.917385 2442 solver.cpp:239] Iteration 8700 (11.0387 iter/s, 9.059s/100 iters), loss = 0.00374745I0226 22:52:54.917438 2442 solver.cpp:258]Train net output #0: loss = 0.00374734 (* 1 = 0.00374734 loss)I0226 22:52:54.917454 2442 sgd_solver.cpp:112] Iteration 8700, lr = 0.00625344I0226 22:53:03.966257 2442 solver.cpp:239] Iteration 8800 (11.0522 iter/s, 9.048s/100 iters), loss = 0.00128679I0226 22:53:03.966358 2442 solver.cpp:258]Train net output #0: loss = 0.00128669 (* 1 = 0.00128669 loss)I0226 22:53:03.966377 2442 sgd_solver.cpp:112] Iteration 8800, lr = 0.00622847I0226 22:53:13.013511 2442 solver.cpp:239] Iteration 8900 (11.0534 iter/s, 9.047s/100 iters), loss = 0.000865528I0226 22:53:13.013563 2442 solver.cpp:258]Train net output #0: loss = 0.000865408 (* 1 = 0.000865408 loss)I0226 22:53:13.013576 2442 sgd_solver.cpp:112] Iteration 8900, lr = 0.00620374I0226 22:53:21.981493 2442 solver.cpp:347] Iteration 9000, Testing net (#0)I0226 22:53:27.345178 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:53:27.570178 2442 solver.cpp:414]Test net output #0: accuracy = 0.9905I0226 22:53:27.570235 2442 solver.cpp:414]Test net output #1: loss = 0.0283523 (* 1 = 0.0283523 loss)I0226 22:53:27.659744 2442 solver.cpp:239] Iteration 9000 (6.8278 iter/s, 14.646s/100 iters), loss = 0.0144745I0226 22:53:27.659796 2442 solver.cpp:258]Train net output #0: loss = 0.0144743 (* 1 = 0.0144743 loss)I0226 22:53:27.659812 2442 sgd_solver.cpp:112] Iteration 9000, lr = 0.00617924I0226 22:53:36.718863 2442 solver.cpp:239] Iteration 9100 (11.0387 iter/s, 9.059s/100 iters), loss = 0.00866055I0226 22:53:36.719024 2442 solver.cpp:258]Train net output #0: loss = 0.00866042 (* 1 = 0.00866042 loss)I0226 22:53:36.719039 2442 
sgd_solver.cpp:112] Iteration 9100, lr = 0.00615496I0226 22:53:45.783699 2442 solver.cpp:239] Iteration 9200 (11.0327 iter/s, 9.064s/100 iters), loss = 0.00280615I0226 22:53:45.783751 2442 solver.cpp:258]Train net output #0: loss = 0.00280601 (* 1 = 0.00280601 loss)I0226 22:53:45.783769 2442 sgd_solver.cpp:112] Iteration 9200, lr = 0.0061309I0226 22:53:54.853555 2442 solver.cpp:239] Iteration 9300 (11.0266 iter/s, 9.069s/100 iters), loss = 0.00804057I0226 22:53:54.853613 2442 solver.cpp:258]Train net output #0: loss = 0.00804044 (* 1 = 0.00804044 loss)I0226 22:53:54.853633 2442 sgd_solver.cpp:112] Iteration 9300, lr = 0.00610706I0226 22:54:01.190043 2443 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:54:03.901945 2442 solver.cpp:239] Iteration 9400 (11.0522 iter/s, 9.048s/100 iters), loss = 0.0300722I0226 22:54:03.90 2442 solver.cpp:258]Train net output #0: loss = 0.0300721 (* 1 = 0.0300721 loss)I0226 22:54:03.90 2442 sgd_solver.cpp:112] Iteration 9400, lr = 0.00608343I0226 22:54:12.897828 2442 solver.cpp:347] Iteration 9500, Testing net (#0)I0226 22:54:18.294420 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:54:18.524183 2442 solver.cpp:414]Test net output #0: accuracy = 0.9886I0226 22:54:18.524237 2442 solver.cpp:414]Test net output #1: loss = 0.0350763 (* 1 = 0.0350763 loss)I0226 22:54:18.614092 2442 solver.cpp:239] Iteration 9500 (6.79717 iter/s, 14.712s/100 iters), loss = 0.00340371I0226 22:54:18.614145 2442 solver.cpp:258]Train net output #0: loss = 0.00340357 (* 1 = 0.00340357 loss)I0226 22:54:18.614158 2442 sgd_solver.cpp:112] Iteration 9500, lr = 0.00606002I0226 22:54:27.679172 2442 solver.cpp:239] Iteration 9600 (11.0314 iter/s, 9.065s/100 iters), loss = 0.00235499I0226 22:54:27.679229 2442 solver.cpp:258]Train net output #0: loss = 0.00235484 (* 1 = 0.00235484 loss)I0226 22:54:27.679244 2442 sgd_solver.cpp:112] Iteration 9600, lr = 0.00603682I0226 22:54:36.747495 2442 solver.cpp:239] Iteration 9700 (11.0278 
iter/s, 9.068s/100 iters), loss = 0.00282941I0226 22:54:36.747551 2442 solver.cpp:258]Train net output #0: loss = 0.00282926 (* 1 = 0.00282926 loss)I0226 22:54:36.747570 2442 sgd_solver.cpp:112] Iteration 9700, lr = 0.00601382I0226 22:54:45.798738 2442 solver.cpp:239] Iteration 9800 (11.0485 iter/s, 9.051s/100 iters), loss = 0.00863724I0226 22:54:45.798919 2442 solver.cpp:258]Train net output #0: loss = 0.0086371 (* 1 = 0.0086371 loss)I0226 22:54:45.798943 2442 sgd_solver.cpp:112] Iteration 9800, lr = 0.00599102I0226 22:54:54.860203 2442 solver.cpp:239] Iteration 9900 (11.0363 iter/s, 9.061s/100 iters), loss = 0.00483852I0226 22:54:54.860255 2442 solver.cpp:258]Train net output #0: loss = 0.00483838 (* 1 = 0.00483838 loss)I0226 22:54:54.860270 2442 sgd_solver.cpp:112] Iteration 9900, lr = 0.00596843I0226 22:55:03.856971 2442 solver.cpp:464] Snapshotting to binary proto file examples/mnist/lenet_iter_10000.caffemodelI0226 22:55:03.867449 2442 sgd_solver.cpp:284] Snapshotting solver state to binary proto file examples/mnist/lenet_iter_10000.solverstateI0226 22:55:03.909601 2442 solver.cpp:327] Iteration 10000, loss = 0.00331884I0226 22:55:03.909649 2442 solver.cpp:347] Iteration 10000, Testing net (#0)I0226 22:55:09.291905 2444 data_layer.cpp:73] Restarting data prefetching from start.I0226 22:55:09.518225 2442 solver.cpp:414]Test net output #0: accuracy = 0.9916I0226 22:55:09.518285 2442 solver.cpp:414]Test net output #1: loss = 0.0280047 (* 1 = 0.0280047 loss)I0226 22:55:09.518299 2442 solver.cpp:332] Optimization Done.I0226 22:55:09.518311 2442 caffe.cpp:250] Optimization Done.
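The lr values printed by sgd_solver.cpp decay smoothly from 0.0086145 at iteration 2200 down to 0.00596843 at iteration 9900. They are consistent with Caffe's "inv" learning-rate policy, lr = base_lr * (1 + gamma * iter)^(-power). A minimal sketch that reproduces the logged values, assuming the stock examples/mnist/lenet_solver.prototxt hyperparameters (base_lr: 0.01, gamma: 0.0001, power: 0.75) since the solver file itself is not shown in the log:

```python
def inv_lr(iteration, base_lr=0.01, gamma=1e-4, power=0.75):
    """Caffe "inv" policy: lr = base_lr * (1 + gamma * iter)^(-power).

    The defaults here are assumed to match lenet_solver.prototxt.
    """
    return base_lr * (1.0 + gamma * iteration) ** (-power)

# Spot-check against two lr values printed in the log above.
assert abs(inv_lr(2200) - 0.0086145) < 1e-7
assert abs(inv_lr(9900) - 0.00596843) < 1e-7
```

The close match to the logged values suggests the run used the default solver settings unchanged.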

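Because glog interleaves training and test output, pulling the test accuracies out programmatically makes the trend (0.984 at iteration 2500, rising to 0.9916 at iteration 10000) easier to inspect. A small sketch using two sample lines copied from the log; the regex is illustrative and not part of Caffe:

```python
import re

# Two "Test net output" lines copied from the log above.
log = """\
I0226 22:41:57.967310 2442 solver.cpp:414] Test net output #0: accuracy = 0.984
I0226 22:55:09.518225 2442 solver.cpp:414] Test net output #0: accuracy = 0.9916
"""

# Extract every reported test accuracy, in order of appearance.
accuracies = [float(m) for m in re.findall(r"accuracy = ([0-9.]+)", log)]
assert accuracies == [0.984, 0.9916]
```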