> This post is a study-log entry from the **365天深度学习训练营** (365-day deep learning training camp).
> Reference article: 365天深度学习训练营-第J1周: ResNet-50算法实战与解析
> Author: K同学啊

## Theoretical background

The deep residual network, ResNet, was proposed by Kaiming He et al. in 2015. Because it is both simple and effective, much later research has been built on top of ResNet-50 or ResNet-101.

ResNet mainly addresses the "degradation" problem that appears when a deep convolutional network is made deeper. In an ordinary CNN, the first problem caused by increasing depth is vanishing or exploding gradients; this was largely solved once Ioffe and Szegedy introduced batch normalization (BN). A BN layer normalizes each layer's outputs, so gradients keep a stable magnitude as they propagate backwards layer by layer, neither shrinking toward zero nor blowing up. However, the authors found that even with BN, networks remained hard to train once the depth kept growing, and they identified a second problem: accuracy degradation. When the number of layers grows past a certain point, accuracy saturates and then drops rapidly. This drop is caused neither by vanishing gradients nor by overfitting; rather, the network becomes so complex that unconstrained, "free-range" training can hardly reach an ideal error rate. The degradation problem is therefore not a flaw in the network architecture itself but in the training procedure: none of the widely used optimizers, whether SGD, RMSProp, or Adam, reach the theoretically optimal solution once the depth becomes large. One can also argue that, given an ideal training procedure, a deeper network must do at least as well as a shallower one. The argument is simple: append a few layers to a network A to form a network B. If the added layers compute only an identity mapping, i.e. A's output passes through them unchanged to become B's output, then A and B have exactly the same error rate, which shows that deepening a network should never make it worse.

## I. Preparations

### 1. Set up the GPU

```python
import torch
from torch import nn
import torchvision
from torchvision import transforms, datasets, models
import matplotlib.pyplot as plt
import os, PIL, pathlib

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
device
```

```
device(type='cuda')
```

### 2. Import the data

```python
data_dir = './J1/'
data_dir = pathlib.Path(data_dir)

image_count = len(list(data_dir.glob('*/*/*.jpg')))
print('Total number of images:', image_count)
```

```
Total number of images: 565
```

```python
classNames = [str(path).split("\\")[2] for path in data_dir.glob('bird_photos/*/')]
classNames
```

```
['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']
```

```python
train_transforms = transforms.Compose([
    transforms.Resize([224, 224]),
    transforms.RandomRotation(45),            # rotate by a random angle in [-45, 45] degrees
    # transforms.CenterCrop(224),             # crop from the center
    transforms.RandomHorizontalFlip(p=0.5),   # horizontal flip with probability p
    # transforms.RandomVerticalFlip(p=0.5),   # vertical flip with probability p
    # transforms.ColorJitter(brightness=0.2, contrast=0.1, saturation=0.1, hue=0.1),
    # transforms.RandomGrayscale(p=0.025),    # convert to (3-channel) grayscale with probability p
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],   # per-channel mean
                         [0.229, 0.224, 0.225])   # per-channel std
])

# test_transforms = transforms.Compose([
#     transforms.Resize([224, 224]),
#     transforms.ToTensor(),
#     transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
# ])
```
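As a quick sanity check on the `Normalize` step, here is a small NumPy-only sketch (the image array is a made-up stand-in) showing that normalizing with the ImageNet mean/std and then inverting it recovers the original pixels; the same inverse is used when plotting samples later:

```python
import numpy as np

mean = np.array([0.485, 0.456, 0.406])   # ImageNet channel means
std = np.array([0.229, 0.224, 0.225])    # ImageNet channel stds

img = np.random.rand(224, 224, 3)        # stand-in image, values in [0, 1], HWC layout
normalized = (img - mean) / std          # what transforms.Normalize does, channel by channel
restored = normalized * std + mean       # the inverse, used for visualization

print(np.allclose(img, restored))        # → True
```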
```python
total_data = datasets.ImageFolder('./J1/bird_photos/', transform=train_transforms)
total_data
```

```
Dataset ImageFolder
    Number of datapoints: 565
    Root location: ./J1/bird_photos/
    StandardTransform
Transform: Compose(
               Resize(size=[224, 224], interpolation=PIL.Image.BILINEAR)
               RandomRotation(degrees=[-45.0, 45.0], resample=False, expand=False)
               RandomHorizontalFlip(p=0.5)
               ToTensor()
               Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
           )
```

```python
classNames = total_data.classes
classNames
```

```
['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']
```

```python
total_data.class_to_idx
```

```
{'Bananaquit': 0, 'Black Skimmer': 1, 'Black Throated Bushtiti': 2, 'Cockatoo': 3}
```

### 3. Split the dataset

```python
train_size = int(0.8 * len(total_data))
test_size = len(total_data) - train_size
train_dataset, test_dataset = torch.utils.data.random_split(total_data, [train_size, test_size])
train_dataset, test_dataset
```

```
(<torch.utils.data.dataset.Subset at 0x1a6883fe310>,
 <torch.utils.data.dataset.Subset at 0x1a6883fe370>)
```

```python
train_size, test_size
```

```
(452, 113)
```

```python
batch_size = 32
train_dl = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True, num_workers=1)
test_dl = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True, num_workers=1)

imgs, labels = next(iter(train_dl))
imgs.shape
```

```
torch.Size([32, 3, 224, 224])
```
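The 80/20 split above can be checked with plain arithmetic: with 565 images, `int(0.8 * 565)` gives the 452/113 split that `random_split` reports, and the per-epoch batch counts follow by ceiling division:

```python
import math

total = 565                      # image_count from above
train_size = int(0.8 * total)    # floor of 452.0 -> 452
test_size = total - train_size   # 113

batch_size = 32
train_batches = math.ceil(train_size / batch_size)
test_batches = math.ceil(test_size / batch_size)

print(train_size, test_size)         # → 452 113
print(train_batches, test_batches)   # → 15 4
```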
### 4. Visualize the data

```python
import numpy as np

# figure of 20 (width) x 5 (height) inches
plt.figure(figsize=(20, 5))
for i, img in enumerate(imgs[:20]):
    npimg = img.numpy().transpose((1, 2, 0))
    # undo the normalization for display
    npimg = npimg * np.array((0.229, 0.224, 0.225)) + np.array((0.485, 0.456, 0.406))
    npimg = npimg.clip(0, 1)
    # split the figure into 2 rows x 10 columns and draw the (i+1)-th subplot
    plt.subplot(2, 10, i + 1)
    plt.imshow(npimg)
    plt.axis('off')
```

```python
for X, y in test_dl:
    print("Shape of X [N, C, H, W]:", X.shape)
    print("Shape of y:", y.shape)
    break
```

```
Shape of X [N, C, H, W]: torch.Size([32, 3, 224, 224])
Shape of y: torch.Size([32])
```

## II. Building the ResNet-50 network

```python
n_class = 4
```

**Same padding**

```python
def autopad(k, p=None):  # kernel, padding
    # pad to "same" output size
    if p is None:
        p = k // 2 if isinstance(k, int) else [x // 2 for x in k]  # auto-pad
    return p
```

**Identity Block**

```python
class IdentityBlock(nn.Module):
    def __init__(self, in_channel, kernel_size, filters):
        super(IdentityBlock, self).__init__()
        filters1, filters2, filters3 = filters
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channel, filters1, 1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(filters1),
            nn.ReLU(True))
        self.conv2 = nn.Sequential(
            nn.Conv2d(filters1, filters2, kernel_size, stride=1, padding=autopad(kernel_size), bias=False),
            nn.BatchNorm2d(filters2),
            nn.ReLU(True))
        self.conv3 = nn.Sequential(
            nn.Conv2d(filters2, filters3, 1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(filters3))
        self.relu = nn.ReLU(True)

    def forward(self, x):
        x1 = self.conv1(x)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        x = x1 + x          # skip connection: add the input back
        x = self.relu(x)
        return x
```

**Conv Block**

```python
class ConvBlock(nn.Module):
    def __init__(self, in_channel, kernel_size, filters, stride=2):
        super(ConvBlock, self).__init__()
        filters1, filters2, filters3 = filters
        self.conv1 = nn.Sequential(
            nn.Conv2d(in_channel, filters1, 1, stride=stride, padding=0, bias=False),
            nn.BatchNorm2d(filters1),
            nn.ReLU(True))
        self.conv2 = nn.Sequential(
            nn.Conv2d(filters1, filters2, kernel_size, stride=1, padding=autopad(kernel_size), bias=False),
            nn.BatchNorm2d(filters2),
            nn.ReLU(True))
        self.conv3 = nn.Sequential(
            nn.Conv2d(filters2, filters3, 1, stride=1, padding=0, bias=False),
            nn.BatchNorm2d(filters3))
        self.conv4 = nn.Sequential(
            nn.Conv2d(in_channel, filters3, 1, stride=stride, padding=0, bias=False),
            nn.BatchNorm2d(filters3))
        self.relu = nn.ReLU(True)

    def forward(self, x):
        x1 = self.conv1(x)
        x1 = self.conv2(x1)
        x1 = self.conv3(x1)
        x2 = self.conv4(x)   # projection shortcut: matches channels and stride
        x = x1 + x2
        x = self.relu(x)
        return x
```
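To see why `autopad` gives "same" spatial size, apply the standard conv output formula `out = (n + 2p - k)//s + 1`. A short pure-Python sketch mirroring the helper above (the sizes used are just illustrative):

```python
def autopad(k):                      # the int case of the helper above
    return k // 2

def conv_out(n, k, s=1, p=0):
    """Output size of a conv layer: floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# a 3x3 conv at stride 1 with autopad keeps the spatial size,
# so IdentityBlock's output can be added back to its input
assert conv_out(56, 3, s=1, p=autopad(3)) == 56

# ConvBlock's stride-2 1x1 convs halve the size on both branches,
# so the main path and the projection shortcut still match when added
print(conv_out(56, 1, s=2), conv_out(56, 1, s=2))  # → 28 28
```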
**Building ResNet-50**

```python
class ResNet50(nn.Module):
    def __init__(self, classes=1000):
        super(ResNet50, self).__init__()
        self.conv1 = nn.Sequential(
            nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False, padding_mode='zeros'),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.MaxPool2d(kernel_size=3, stride=2, padding=0))
        self.conv2 = nn.Sequential(
            ConvBlock(64, 3, [64, 64, 256], stride=1),
            IdentityBlock(256, 3, [64, 64, 256]),
            IdentityBlock(256, 3, [64, 64, 256]))
        self.conv3 = nn.Sequential(
            ConvBlock(256, 3, [128, 128, 512]),
            IdentityBlock(512, 3, [128, 128, 512]),
            IdentityBlock(512, 3, [128, 128, 512]),
            IdentityBlock(512, 3, [128, 128, 512]))
        self.conv4 = nn.Sequential(
            ConvBlock(512, 3, [256, 256, 1024]),
            IdentityBlock(1024, 3, [256, 256, 1024]),
            IdentityBlock(1024, 3, [256, 256, 1024]),
            IdentityBlock(1024, 3, [256, 256, 1024]),
            IdentityBlock(1024, 3, [256, 256, 1024]),
            IdentityBlock(1024, 3, [256, 256, 1024]))
        self.conv5 = nn.Sequential(
            ConvBlock(1024, 3, [512, 512, 2048]),
            IdentityBlock(2048, 3, [512, 512, 2048]),
            IdentityBlock(2048, 3, [512, 512, 2048]))
        self.pool = nn.AvgPool2d(kernel_size=7, stride=7, padding=0)
        self.fc = nn.Linear(2048, n_class)

    def forward(self, x):
        x = self.conv1(x)
        x = self.conv2(x)
        x = self.conv3(x)
        x = self.conv4(x)
        x = self.conv5(x)
        x = self.pool(x)
        x = torch.flatten(x, start_dim=1)
        x = self.fc(x)
        return x

model = ResNet50().to(device)

# inspect the network structure
import torchsummary
torchsummary.summary(model, (3, 224, 224))
print(model)
```

## III. Training the model

### 1. Optimizer settings

```python
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.92)  # multiply the lr by 0.92 every 5 epochs
loss_fn = nn.CrossEntropyLoss()
```

### 2. Training function

```python
def train(dataloader, model, loss_fn, optimizer):
    size = len(dataloader.dataset)   # size of the training set
    num_batches = len(dataloader)    # number of batches
    train_loss, train_acc = 0, 0     # running loss and accuracy

    for X, y in dataloader:          # fetch images and labels
        X, y = X.to(device), y.to(device)

        # compute the prediction error
        pred = model(X)              # network output
        loss = loss_fn(pred, y)      # loss between prediction and ground truth

        # backpropagation
        optimizer.zero_grad()        # reset gradients
        loss.backward()              # backward pass
        optimizer.step()             # update parameters

        # record acc and loss
        train_acc += (pred.argmax(1) == y).type(torch.float).sum().item()
        train_loss += loss.item()

    train_acc /= size
    train_loss /= num_batches
    return train_acc, train_loss
```
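A quick count confirms the "50" in the ResNet-50 defined above: one stem conv, three convs in each of the 3+4+6+3 bottleneck blocks, plus the final fully connected layer. The spatial sizes also work out so that `AvgPool2d(7)` leaves a 1×1×2048 tensor for `fc` (note this implementation's stem `MaxPool2d` uses padding 0, giving 55 rather than the canonical 56, but it still lands on 7):

```python
def conv_out(n, k, s, p=0):
    return (n + 2 * p - k) // s + 1

blocks = [3, 4, 6, 3]                        # bottleneck blocks in conv2..conv5
layers = 1 + sum(b * 3 for b in blocks) + 1  # stem + 3 convs per block + fc
print(layers)                                # → 50

size = conv_out(224, 7, 2, 3)                # 7x7 stem conv, stride 2 -> 112
size = conv_out(size, 3, 2, 0)               # 3x3 max pool, stride 2 -> 55
for _ in range(3):                           # conv3/conv4/conv5 each start with a stride-2 ConvBlock
    size = conv_out(size, 1, 2, 0)
print(size)                                  # → 7, so AvgPool2d(7) leaves 1x1x2048
```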
### 3. Test function

```python
def test(dataloader, model, loss_fn):
    size = len(dataloader.dataset)   # size of the test set (113 images here)
    num_batches = len(dataloader)    # number of batches: 113/32 rounded up = 4
    test_loss, test_acc = 0, 0

    # no training here, so disable gradient tracking to save memory
    with torch.no_grad():
        for imgs, target in dataloader:
            imgs, target = imgs.to(device), target.to(device)

            # compute loss
            target_pred = model(imgs)
            loss = loss_fn(target_pred, target)

            test_loss += loss.item()
            test_acc += (target_pred.argmax(1) == target).type(torch.float).sum().item()

    test_acc /= size
    test_loss /= num_batches
    return test_acc, test_loss
```

### 4. Run the training

```python
epochs = 20
train_loss = []
train_acc = []
test_loss = []
test_acc = []
best_acc = 0

for epoch in range(epochs):
    model.train()
    epoch_train_acc, epoch_train_loss = train(train_dl, model, loss_fn, optimizer)
    scheduler.step()  # learning-rate decay

    model.eval()
    epoch_test_acc, epoch_test_loss = test(test_dl, model, loss_fn)

    # keep the best model
    if epoch_test_acc > best_acc:
        best_acc = epoch_test_acc
        state = {
            'state_dict': model.state_dict(),  # keys are the layer names, values the trained weights
            'best_acc': best_acc,
            'optimizer': optimizer.state_dict(),
        }

    train_acc.append(epoch_train_acc)
    train_loss.append(epoch_train_loss)
    test_acc.append(epoch_test_acc)
    test_loss.append(epoch_test_loss)

    template = 'Epoch:{:2d}, Train_acc:{:.1f}%, Train_loss:{:.3f}, Test_acc:{:.1f}%, Test_loss:{:.3f}'
    print(template.format(epoch + 1, epoch_train_acc*100, epoch_train_loss, epoch_test_acc*100, epoch_test_loss))

print('Done')
print('best_acc:', best_acc)
```

```
Epoch:19, Train_acc:88.9%, Train_loss:0.264, Test_acc:87.6%, Test_loss:0.347
Epoch:20, Train_acc:86.1%, Train_loss:0.481, Test_acc:87.6%, Test_loss:0.319
Done
best_acc: 0.911504424778761
```
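The accuracy bookkeeping in `train`/`test` is just `pred.argmax(1) == y` summed over the epoch and divided by the dataset size. The same computation with plain lists and made-up scores:

```python
# rows are per-sample class scores, as model(X) would return for a batch of 3
preds = [[0.1, 0.7, 0.2],
         [0.8, 0.1, 0.1],
         [0.2, 0.3, 0.5]]
labels = [1, 0, 1]                       # made-up ground truth; the last sample is misclassified

def argmax(row):
    return max(range(len(row)), key=row.__getitem__)

correct = sum(argmax(p) == y for p, y in zip(preds, labels))
print(correct, correct / len(labels))    # → 2 0.6666666666666666
```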
## IV. Visualizing the results

### 1. Loss and Accuracy curves

```python
import matplotlib.pyplot as plt
import warnings

warnings.filterwarnings("ignore")            # suppress warnings
plt.rcParams['font.sans-serif'] = ['SimHei'] # display Chinese labels correctly
plt.rcParams['axes.unicode_minus'] = False   # display the minus sign correctly
plt.rcParams['figure.dpi'] = 100             # resolution

epochs_range = range(epochs)

plt.figure(figsize=(12, 3))

plt.subplot(1, 2, 1)
plt.plot(epochs_range, train_acc, label='Training Accuracy')
plt.plot(epochs_range, test_acc, label='Test Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, train_loss, label='Training Loss')
plt.plot(epochs_range, test_loss, label='Test Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
```

### 2. Predicting a specified image

```python
from PIL import Image

classes = list(total_data.class_to_idx)

def predict_one_img(image_path, model, transform, classes):
    test_img = Image.open(image_path).convert('RGB')
    plt.imshow(test_img)

    test_img = transform(test_img)
    img = test_img.to(device).unsqueeze(0)

    model.eval()
    output = model(img)

    _, pred = torch.max(output, 1)
    pred_class = classes[pred]
    print(f'Predicted class: {pred_class}')

predict_one_img('./J1/bird_photos/Bananaquit/047.jpg', model, train_transforms, classNames)
```

```
Predicted class: Bananaquit
```
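In `predict_one_img`, `torch.max(output, 1)` returns a (max value, argmax index) pair, and the index is used to look up the class name. A torch-free sketch of that last step, with invented logits for one image:

```python
import math

classes = ['Bananaquit', 'Black Skimmer', 'Black Throated Bushtiti', 'Cockatoo']
logits = [2.5, 0.3, -1.2, 0.1]           # made-up raw outputs for a single image

# optional: softmax turns logits into probabilities (not needed for the argmax itself)
exp = [math.exp(z) for z in logits]
probs = [e / sum(exp) for e in exp]

pred = max(range(len(logits)), key=logits.__getitem__)   # plays the role of torch.max(output, 1)[1]
print(classes[pred])                     # → Bananaquit
```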