Deep Learning Network Models: RepVGG Explained

Contents
0 Preface
1 The RepVGG Block
2 Structural Re-parameterization
2.1 Fusing Conv2d and BN
2.2 Conv2d + BN Fusion Experiment (PyTorch)
2.3 Converting a 1x1 Convolution into a 3x3 Convolution
2.4 Converting BN into a 3x3 Convolution
2.5 Merging the Branches
2.6 Structural Re-parameterization Experiment (PyTorch)
3 Model Configurations

Paper: RepVGG: Making VGG-style ConvNets Great Again
Paper link: https://arxiv.org/abs/2101.03697
Official PyTorch source code: https://github.com/DingXiaoH/RepVGG

0 Preface

1 The RepVGG Block

(The original post illustrated Sections 0 and 1 with figures that are not reproduced here.)

2 Structural Re-parameterization

2.1 Fusing Conv2d and BN

A convolution followed by BatchNorm can be folded into a single convolution. With conv kernel W, BN running mean mu, running variance var, scale gamma, shift beta and epsilon eps, the fused layer uses

    W' = W * gamma / sqrt(var + eps)
    b' = beta - mu * gamma / sqrt(var + eps)

2.2 Conv2d + BN Fusion Experiment (PyTorch)

```python
from collections import OrderedDict

import numpy as np
import torch
import torch.nn as nn


def main():
    torch.random.manual_seed(0)

    f1 = torch.randn(1, 2, 3, 3)
    module = nn.Sequential(OrderedDict(
        conv=nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3,
                       stride=1, padding=1, bias=False),
        bn=nn.BatchNorm2d(num_features=2)
    ))
    module.eval()

    with torch.no_grad():
        output1 = module(f1)
        print(output1)

    # fuse conv + bn
    kernel = module.conv.weight
    running_mean = module.bn.running_mean
    running_var = module.bn.running_var
    gamma = module.bn.weight
    beta = module.bn.bias
    eps = module.bn.eps
    std = (running_var + eps).sqrt()
    t = (gamma / std).reshape(-1, 1, 1, 1)  # [ch] -> [ch, 1, 1, 1]
    kernel = kernel * t
    bias = beta - running_mean * gamma / std
    fused_conv = nn.Conv2d(in_channels=2, out_channels=2, kernel_size=3,
                           stride=1, padding=1, bias=True)
    fused_conv.load_state_dict(OrderedDict(weight=kernel, bias=bias))

    with torch.no_grad():
        output2 = fused_conv(f1)
        print(output2)

    np.testing.assert_allclose(output1.numpy(), output2.numpy(), rtol=1e-03, atol=1e-05)
    print("convert module has been tested, and the result looks good!")


if __name__ == '__main__':
    main()
```

(The terminal output was shown as a screenshot in the original post.)

2.3 Converting a 1x1 Convolution into a 3x3 Convolution

2.4 Converting BN into a 3x3 Convolution

(The code for Sections 2.3 and 2.4 appeared as screenshots in the original post and is not reproduced here; the same logic is implemented in `_pad_1x1_to_3x3_tensor` and `_fuse_bn_tensor` in Section 2.6.)

2.5 Merging the Branches

(The code screenshot and accompanying illustration from the original post are not reproduced here; the merge is implemented in `get_equivalent_kernel_bias` in Section 2.6.)

2.6 Structural Re-parameterization Experiment (PyTorch)

```python
import time

import numpy as np
import torch
import torch.nn as nn


def conv_bn(in_channels, out_channels, kernel_size, stride, padding, groups=1):
    result = nn.Sequential()
    result.add_module("conv", nn.Conv2d(in_channels=in_channels,
                                        out_channels=out_channels,
                                        kernel_size=kernel_size,
                                        stride=stride,
                                        padding=padding,
                                        groups=groups,
                                        bias=False))
    result.add_module("bn", nn.BatchNorm2d(num_features=out_channels))
    return result


class RepVGGBlock(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3,
                 stride=1, padding=1, dilation=1, groups=1,
                 padding_mode="zeros", deploy=False):
        super(RepVGGBlock, self).__init__()
        self.deploy = deploy
        self.groups = groups
        self.in_channels = in_channels

        self.nonlinearity = nn.ReLU()

        if deploy:
            self.rbr_reparam = nn.Conv2d(in_channels=in_channels, out_channels=out_channels,
                                         kernel_size=kernel_size, stride=stride,
                                         padding=padding, dilation=dilation, groups=groups,
                                         bias=True, padding_mode=padding_mode)
        else:
            self.rbr_identity = nn.BatchNorm2d(num_features=in_channels) \
                if out_channels == in_channels and stride == 1 else None
            self.rbr_dense = conv_bn(in_channels=in_channels, out_channels=out_channels,
                                     kernel_size=kernel_size, stride=stride,
                                     padding=padding, groups=groups)
            self.rbr_1x1 = conv_bn(in_channels=in_channels, out_channels=out_channels,
                                   kernel_size=1, stride=stride, padding=0, groups=groups)

    def forward(self, inputs):
        if hasattr(self, "rbr_reparam"):
            return self.nonlinearity(self.rbr_reparam(inputs))

        if self.rbr_identity is None:
            id_out = 0
        else:
            id_out = self.rbr_identity(inputs)

        return self.nonlinearity(self.rbr_dense(inputs) + self.rbr_1x1(inputs) + id_out)

    def get_equivalent_kernel_bias(self):
        kernel3x3, bias3x3 = self._fuse_bn_tensor(self.rbr_dense)
        kernel1x1, bias1x1 = self._fuse_bn_tensor(self.rbr_1x1)
        kernelid, biasid = self._fuse_bn_tensor(self.rbr_identity)
        return (kernel3x3 + self._pad_1x1_to_3x3_tensor(kernel1x1) + kernelid,
                bias3x3 + bias1x1 + biasid)

    def _pad_1x1_to_3x3_tensor(self, kernel1x1):
        if kernel1x1 is None:
            return 0
        else:
            return torch.nn.functional.pad(kernel1x1, [1, 1, 1, 1])

    def _fuse_bn_tensor(self, branch):
        if branch is None:
            return 0, 0

        if isinstance(branch, nn.Sequential):
            kernel = branch.conv.weight
            running_mean = branch.bn.running_mean
            running_var = branch.bn.running_var
            gamma = branch.bn.weight
            beta = branch.bn.bias
            eps = branch.bn.eps
        else:
            assert isinstance(branch, nn.BatchNorm2d)
            if not hasattr(self, "id_tensor"):
                input_dim = self.in_channels // self.groups
                kernel_value = np.zeros((self.in_channels, input_dim, 3, 3), dtype=np.float32)
                for i in range(self.in_channels):
                    kernel_value[i, i % input_dim, 1, 1] = 1
                self.id_tensor = torch.from_numpy(kernel_value).to(branch.weight.device)
            kernel = self.id_tensor
            running_mean = branch.running_mean
            running_var = branch.running_var
            gamma = branch.weight
            beta = branch.bias
            eps = branch.eps

        std = (running_var + eps).sqrt()
        t = (gamma / std).reshape(-1, 1, 1, 1)
        return kernel * t, beta - running_mean * gamma / std

    def switch_to_deploy(self):
        if hasattr(self, "rbr_reparam"):
            return

        kernel, bias = self.get_equivalent_kernel_bias()
        self.rbr_reparam = nn.Conv2d(in_channels=self.rbr_dense.conv.in_channels,
                                     out_channels=self.rbr_dense.conv.out_channels,
                                     kernel_size=self.rbr_dense.conv.kernel_size,
                                     stride=self.rbr_dense.conv.stride,
                                     padding=self.rbr_dense.conv.padding,
                                     dilation=self.rbr_dense.conv.dilation,
                                     groups=self.rbr_dense.conv.groups,
                                     bias=True)
        self.rbr_reparam.weight.data = kernel
        self.rbr_reparam.bias.data = bias
        for para in self.parameters():
            para.detach_()
        self.__delattr__("rbr_dense")
        self.__delattr__("rbr_1x1")
        if hasattr(self, "rbr_identity"):
            self.__delattr__("rbr_identity")
        if hasattr(self, "id_tensor"):
            self.__delattr__("id_tensor")
        self.deploy = True


def main():
    f1 = torch.randn(1, 64, 64, 64)
    block = RepVGGBlock(in_channels=64, out_channels=64)
    block.eval()

    with torch.no_grad():
        output1 = block(f1)
        start_time = time.time()
        for _ in range(100):
            block(f1)
        print(f"consume time: {time.time() - start_time}")

        # re-parameterization
        block.switch_to_deploy()
        output2 = block(f1)
        start_time = time.time()
        for _ in range(100):
            block(f1)
        print(f"consume time: {time.time() - start_time}")

        np.testing.assert_allclose(output1.numpy(), output2.numpy(), rtol=1e-03, atol=1e-05)
        print("convert module has been tested, and the result looks good!")


if __name__ == '__main__':
    main()
```

(The terminal output was shown as a screenshot in the original post.) Comparing the two timings shows that inference is roughly twice as fast after structural re-parameterization, while the outputs before and after conversion remain identical.

3 Model Configurations

(The RepVGG model configuration tables were shown as images in the original post and are not reproduced here.)
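Since the code for Sections 2.3 and 2.4 survives only inside the RepVGGBlock class, the two conversions can also be checked in isolation. The sketch below (a minimal standalone check, not from the original post; variable names are my own) verifies numerically that a 1x1 convolution equals a 3x3 convolution whose kernel is the 1x1 kernel zero-padded to 3x3, and that an identity mapping equals a 3x3 convolution whose kernel has a 1 at the center of each channel's matching input plane.

```python
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F


def main():
    torch.random.manual_seed(0)
    x = torch.randn(1, 2, 4, 4)

    # Section 2.3: pad the 1x1 kernel with zeros on all four sides so the
    # original weight sits at the 3x3 kernel's center; with padding=1 the
    # 3x3 conv then computes exactly the same output as the 1x1 conv.
    conv1x1 = nn.Conv2d(2, 2, kernel_size=1, bias=False)
    kernel3x3 = F.pad(conv1x1.weight, [1, 1, 1, 1])  # pad last two dims
    with torch.no_grad():
        y1 = conv1x1(x)
        y2 = F.conv2d(x, kernel3x3, padding=1)
    np.testing.assert_allclose(y1.numpy(), y2.numpy(), rtol=1e-3, atol=1e-5)

    # Section 2.4: build a 3x3 "identity" kernel: output channel i reads a 1
    # from the center of input channel i and 0 everywhere else, so the conv
    # reproduces its input unchanged (the BN statistics are folded in
    # afterwards exactly as in Section 2.1).
    id_kernel = torch.zeros(2, 2, 3, 3)
    for i in range(2):
        id_kernel[i, i, 1, 1] = 1.0
    with torch.no_grad():
        y3 = F.conv2d(x, id_kernel, padding=1)
    np.testing.assert_allclose(x.numpy(), y3.numpy(), rtol=1e-3, atol=1e-5)
    print("1x1->3x3 and identity->3x3 conversions verified")


if __name__ == '__main__':
    main()
```

Once every branch is expressed as a 3x3 convolution plus bias this way, the branches merge by simply summing kernels and biases, which is what `get_equivalent_kernel_bias` does in Section 2.6.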