
Reimplementing AlexNet in PyTorch for MNIST Handwritten Digit Recognition

Posted: 2018-11-02 18:58:03


Network overview:

AlexNet is one of the most classic network architectures in computer vision. It burst onto the scene in 2012 and won a number of competitions that year. The AlexNet architecture is shown below:

The architecture is fairly simple: five convolutional layers followed by three fully connected layers. The original authors trained the network across multiple GPUs; see the paper for the specifics.

AlexNet paper: http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf

The authors used the ReLU activation function, together with techniques such as Dropout to reduce overfitting; see the paper for more details.

Dataset

We use the MNIST handwritten digit dataset; torchvision ships with the dataset's download location built in, so it can be fetched automatically.

Defining the network

Simply define the layers one by one following the architecture diagram. Convolutional layers 1, 2, and 5 are each followed by a max pooling layer and a ReLU activation. The five convolutional layers produce the image's feature representation, which is then fed into the fully connected layers for classification.

# !/usr/bin/python3
# -*- coding:utf-8 -*-
# Author: WeiFeng Liu
# @Time: /11/2 3:25 PM

import torch
import torch.nn as nn


class AlexNet(nn.Module):
    def __init__(self, width_mult=1):
        super(AlexNet, self).__init__()
        # Define each convolutional block.
        self.layer1 = nn.Sequential(
            # Input image is 1x28x28.
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            # Pooling keeps the number of channels but halves the resolution.
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
        )
        self.layer2 = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=2, stride=2),
            nn.ReLU(inplace=True),
        )
        self.layer3 = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
        )
        self.layer4 = nn.Sequential(
            nn.Conv2d(128, 256, kernel_size=3, padding=1),
        )
        self.layer5 = nn.Sequential(
            nn.Conv2d(256, 256, kernel_size=3, padding=1),
            nn.MaxPool2d(kernel_size=3, stride=2),
            nn.ReLU(inplace=True),
        )
        # Fully connected classifier.
        self.fc1 = nn.Linear(256 * 3 * 3, 1024)
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 10)  # ten output classes

    def forward(self, x):
        x = self.layer1(x)
        x = self.layer2(x)
        x = self.layer3(x)
        x = self.layer4(x)
        x = self.layer5(x)
        x = x.view(-1, 256 * 3 * 3)
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        return x
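A quick way to confirm that `fc1`'s input size of 256 * 3 * 3 is correct is to push a dummy 1x28x28 tensor through the convolutional stack. The sketch below rebuilds the five blocks above as one `nn.Sequential` so it is self-contained; the layer parameters are copied from the class definition.

```python
import torch
import torch.nn as nn

# The five convolutional blocks from AlexNet above, flattened into one
# Sequential; comments trace the feature-map shape after each stage.
features = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, padding=1),    # -> 32x28x28
    nn.MaxPool2d(2, 2), nn.ReLU(inplace=True),     # -> 32x14x14
    nn.Conv2d(32, 64, kernel_size=3, padding=1),   # -> 64x14x14
    nn.MaxPool2d(2, 2), nn.ReLU(inplace=True),     # -> 64x7x7
    nn.Conv2d(64, 128, kernel_size=3, padding=1),  # -> 128x7x7
    nn.Conv2d(128, 256, kernel_size=3, padding=1), # -> 256x7x7
    nn.Conv2d(256, 256, kernel_size=3, padding=1), # -> 256x7x7
    nn.MaxPool2d(kernel_size=3, stride=2),         # -> 256x3x3
    nn.ReLU(inplace=True),
)

y = features(torch.randn(1, 1, 28, 28))  # dummy batch of one MNIST-sized image
print(y.shape)  # torch.Size([1, 256, 3, 3])
```

The final 3x3 maps come from MaxPool2d(3, 2) on a 7x7 input: floor((7 - 3) / 2) + 1 = 3, which is why `fc1` expects 256 * 3 * 3 inputs.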

Training

# !/usr/bin/python3
# -*- coding:utf-8 -*-
# Author: WeiFeng Liu
# @Time: /11/2 3:38 PM

import torch
import torch.nn as nn
import torchvision
import torch.optim as optim
import torchvision.transforms as transforms

from alexnet import AlexNet
from utils import plot_curve

# Use the GPU when available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hyperparameters.
epochs = 30
batch_size = 256
lr = 0.01

train_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST(
        'mnist_data',
        train=True,
        download=True,
        transform=transforms.Compose([
            transforms.ToTensor(),
            # Normalize with the MNIST mean and std.
            transforms.Normalize((0.1307,), (0.3081,)),
        ])),
    batch_size=batch_size,
    shuffle=True)

test_loader = torch.utils.data.DataLoader(
    torchvision.datasets.MNIST(
        'mnist_data/',
        train=False,
        download=True,
        transform=transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.1307,), (0.3081,)),
        ])),
    batch_size=256,
    shuffle=False)

# Loss function.
criterion = nn.CrossEntropyLoss()
# Network.
net = AlexNet().to(device)
# Optimizer.
optimizer = optim.SGD(net.parameters(), lr=lr, momentum=0.9)

# Training loop.
train_loss = []
for epoch in range(epochs):
    sum_loss = 0.0
    for batch_idx, (x, y) in enumerate(train_loader):
        x = x.to(device)
        y = y.to(device)
        # Zero the gradients before each step.
        optimizer.zero_grad()
        pred = net(x)
        loss = criterion(pred, y)
        loss.backward()
        optimizer.step()
        train_loss.append(loss.item())
        sum_loss += loss.item()
        if batch_idx % 100 == 99:
            print('[%d, %d] loss: %.03f' % (epoch + 1, batch_idx + 1, sum_loss / 100))
            sum_loss = 0.0
    # Save a checkpoint after each epoch.
    torch.save(net.state_dict(), '/home/lwf/code/pytorch学习/alexnet图像分类/model/model.pth')

plot_curve(train_loss)

The network is trained with the cross-entropy loss and the SGD optimizer, and the trained model is saved to disk.
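The saved `state_dict` can later be restored for inference. The sketch below shows the save/load round trip with a tiny hypothetical stand-in module (`TinyNet`, not part of the article's code) so it runs anywhere; with the real network you would construct `AlexNet()` and load the checkpoint path used in the training script.

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """Hypothetical stand-in for AlexNet, used only to demo save/load."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(28 * 28, 10)

    def forward(self, x):
        return self.fc(x.view(-1, 28 * 28))

net = TinyNet()
torch.save(net.state_dict(), 'model.pth')      # what the training loop does each epoch

restored = TinyNet()                           # must match the saved architecture
restored.load_state_dict(torch.load('model.pth'))
restored.eval()                                # disable training-time behaviour
with torch.no_grad():
    out = restored(torch.randn(2, 1, 28, 28))  # two dummy MNIST-sized inputs
print(out.shape)  # torch.Size([2, 10])
```

Saving only the `state_dict` (rather than the whole module) is the usual PyTorch pattern: the checkpoint stays portable as long as you can reconstruct the same class.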

Loss curve during training:

Test accuracy
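The article reports test accuracy but does not show the evaluation loop. Below is a hedged sketch of the standard pattern; to keep it runnable offline it uses a stand-in linear model and random tensors in a `TensorDataset`, where you would substitute the trained `net` and the `test_loader` from the scripts above.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Stand-in for the trained AlexNet and the real MNIST test_loader.
net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loader = DataLoader(
    TensorDataset(torch.randn(64, 1, 28, 28),        # fake images
                  torch.randint(0, 10, (64,))),      # fake labels
    batch_size=32)

net.eval()
correct = total = 0
with torch.no_grad():                                # no gradients needed at test time
    for x, y in loader:
        pred = net(x).argmax(dim=1)                  # class with the highest logit
        correct += (pred == y).sum().item()
        total += y.size(0)
print(f'accuracy: {correct / total:.4f}')
```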

Full code: /SPECTRELWF/pytorch-cnn-study/tree/main/Alexnet-MNIST

Personal homepage: http://liuweifeng.top:8090/
