4. Training Function
4.1 Calling the Training Function
train(epochs, net, train_loader, device, optimizer, test_loader, true_value)

Because we want to test the network's performance after every training epoch, the training function calls the test function once per epoch, so every parameter the test function needs must also be passed to the training function. These seven parameters are the minimum required to train a neural network.
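For orientation, here is a minimal sketch of how these seven arguments might be prepared before the call. The random tensors stand in for the MNIST arrays produced by the project's data-reading step, and the SGD settings are assumptions, not the project's actual choices; LeNet and train are the definitions from sections 5.2 and 4.2.

import torch
import torch.optim as optim
from torch.utils.data import DataLoader, TensorDataset

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
net = LeNet(num_classes=10).to(device)   # the LeNet class from section 5.2

# Hypothetical optimizer settings; the project's actual values are not shown in this section.
optimizer = optim.SGD(net.parameters(), lr=0.01, momentum=0.9)

# Random stand-ins for the MNIST tensors produced by the data-reading step.
train_x, train_y = torch.randn(60000, 1, 28, 28), torch.randint(0, 10, (60000,))
test_x, test_y = torch.randn(10000, 1, 28, 28), torch.randint(0, 10, (10000,))
train_loader = DataLoader(TensorDataset(train_x, train_y), batch_size=64, shuffle=True)
test_loader = DataLoader(TensorDataset(test_x, test_y), batch_size=1000)
true_value = test_y                      # ground-truth test labels, consumed by test()

epochs = 10
train(epochs, net, train_loader, device, optimizer, test_loader, true_value)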
4.2 The Training Function
Inside the training function the whole training set is iterated over multiple times (once per epoch), and each epoch in turn iterates over the data in batches.
import numpy as np
import torch.nn.functional as F

def train(epochs, net, train_loader, device, optimizer, test_loader, true_value):
    for epoch in range(1, epochs + 1):                   # iterate over the epochs
        net.train()                                      # put PyTorch in training mode
        all_train_loss = []                              # collects the loss of every batch in this epoch
        for batch_idx, (data, target) in enumerate(train_loader):  # fetch the data batch by batch
            data = data.to(device)                       # move the images to the GPU
            target = target.to(device)                   # move the labels to the GPU
            optimizer.zero_grad()                        # clear the accumulated gradients
            output = net(data)                           # forward pass of the current batch
            loss = F.cross_entropy(output, target)       # compute the loss from the output
            loss.backward()                              # backpropagation
            optimizer.step()                             # gradient-descent step
            cur_train_loss = loss.item()                 # pull the Python float out of the loss tensor
            all_train_loss.append(cur_train_loss)        # record the current batch's loss
        # Average the batch losses over the whole training set. The x1000 factor is
        # purely cosmetic: the raw loss is so small that scaling it makes its changes
        # easier to see. The result is rounded to two decimal places.
        train_loss = np.round(np.mean(all_train_loss) * 1000, 2)
        print('\nepoch step:', epoch)                    # print the current epoch
        print('training loss: ', train_loss)             # print the averaged training loss
        test(net, test_loader, device, true_value, epoch)  # evaluate the partially trained network
    print('\nTraining finished')
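The test function itself is defined elsewhere in this project. As a rough sketch of what train() expects, the following matches the output format in the log of section 5.2; the details are assumptions, and true_value (presumably the array of ground-truth test labels) is replaced here by the targets coming out of the loader.

import numpy as np
import torch
import torch.nn.functional as F

def test(net, test_loader, device, true_value, epoch):
    net.eval()                                   # evaluation mode: disables dropout
    all_test_loss = []
    correct = 0
    with torch.no_grad():                        # no gradients needed for evaluation
        for data, target in test_loader:
            data, target = data.to(device), target.to(device)
            output = net(data)
            all_test_loss.append(F.cross_entropy(output, target).item())
            correct += (output.argmax(dim=1) == target).sum().item()
    # Same x1000 scaling as the training loss, so the two curves are comparable.
    test_loss = np.round(np.mean(all_test_loss) * 1000, 2)
    accuracy = np.round(100.0 * correct / len(test_loader.dataset), 2)
    print('test loss: ', test_loss)
    print('test accuracy: ', accuracy, '%')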
5. LeNet
5.1 Network Structure
LeNet is widely regarded as the first convolutional neural network. It consists mainly of the following layers:
- a 5x5 2D convolution
- a sigmoid activation (this implementation uses ReLU instead)
- a 5x5 2D convolution
- a sigmoid activation (again ReLU here)
- flattening the data to one dimension
- a fully connected layer
- a fully connected layer
- a softmax classifier
Printing the network structure gives:

LeNet(
  (conv1): Conv2d(1, 10, kernel_size=(5, 5), stride=(1, 1))
  (conv2): Conv2d(10, 20, kernel_size=(5, 5), stride=(1, 1))
  (conv2_drop): Dropout2d(p=0.5, inplace=False)
  (fc1): Linear(in_features=320, out_features=50, bias=True)
  (fc2): Linear(in_features=50, out_features=10, bias=True)
)

The pooling, activation, and flatten steps do not show up in this printout because they are applied as functions inside forward() rather than registered as submodules.

5.2 Building LeNet in PyTorch
import torch.nn as nn
import torch.nn.functional as F

class LeNet(nn.Module):
    def __init__(self, num_classes):
        super(LeNet, self).__init__()
        self.conv1 = nn.Conv2d(1, 10, kernel_size=5)   # 1x28x28 -> 10x24x24
        self.conv2 = nn.Conv2d(10, 20, kernel_size=5)  # 10x12x12 -> 20x8x8
        self.conv2_drop = nn.Dropout2d()               # randomly zeroes whole feature maps
        self.fc1 = nn.Linear(320, 50)                  # 320 = 20 * 4 * 4 after the second pooling
        self.fc2 = nn.Linear(50, num_classes)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))                    # -> 10x12x12
        x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))   # -> 20x4x4
        x = x.view(-1, 320)                            # flatten to one dimension
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)       # dropout is only active in training mode
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)                 # log-probabilities over the classes

At this point the project is complete. Here is the console output from training for 10 epochs:

D:\conda\envs\pytorch\python.exe A:\0_MNIST\train.py
Reading data…
train_data: (60000, 28, 28) train_label (60000,)
test_data: (10000, 28, 28) test_label (10000,)
Initialize neural network
test loss: 2301.68
test accuracy: 11.3 %

epoch step: 1
training loss: 634.74
test loss: 158.03
test accuracy: 95.29 %

epoch step: 2
training loss: 324.04
test loss: 107.62
test accuracy: 96.55 %

epoch step: 3
training loss: 271.25
test loss: 88.43
test accuracy: 97.04 %

epoch step: 4
training loss: 236.69
test loss: 70.94
test accuracy: 97.61 %

epoch step: 5
training loss: 211.05
test loss: 69.69
test accuracy: 97.72 %

epoch step: 6
training loss: 199.28
test loss: 62.04
test accuracy: 97.98 %

epoch step: 7
training loss: 187.11
test loss: 59.65
test accuracy: 97.98 %

epoch step: 8
training loss: 178.79
test loss: 53.89
test accuracy: 98.2 %

epoch step: 9
training loss: 168.75
test loss: 51.83
test accuracy: 98.43 %

epoch step: 10
training loss: 160.83
test loss: 50.35
test accuracy: 98.4 %

Training finished

Process finished with exit code 0

As the log shows, a single epoch is already enough to reach a good level of accuracy; the remaining epochs bring only small improvements.
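To reproduce the printout from section 5.1 and sanity-check the flattened size of 320, the model can be instantiated and fed a dummy MNIST-shaped batch (a quick verification, not part of the original project):

import torch

net = LeNet(num_classes=10)
print(net)                          # prints the module list shown in section 5.1

dummy = torch.zeros(1, 1, 28, 28)   # one fake grayscale 28x28 image
out = net(dummy)
print(out.shape)                    # torch.Size([1, 10]): log-probabilities for 10 digits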