Model Deployment and Inference

We will convert a trained PyTorch model to the ONNX format and then run inference on it with ONNX Runtime.

1. ONNX

ONNX (Open Neural Network Exchange) was released jointly by Facebook (now Meta) and Microsoft in 2017 as a standard format for describing computation graphs. By defining a set of environment- and platform-independent standards, ONNX lets AI models be exchanged between different frameworks and runtimes. ONNX can be seen as a bridge between deep learning frameworks and deployment targets, much like the intermediate language of a compiler.

Because compatibility varies across frameworks, ONNX is usually used to represent static graphs, which are easier to deploy. Hardware and software vendors only need to optimize model performance against the ONNX standard, and every ONNX-compatible framework benefits.

ONNX mainly targets model inference: once a model trained in any framework has been converted to ONNX, it can easily be deployed in any ONNX-compatible runtime environment.

ONNX website: https://onnx.ai/
ONNX GitHub: https://github.com/onnx/onnx

2. ONNX Runtime

ONNX Runtime website: https://www.onnxruntime.ai/
ONNX Runtime GitHub: https://github.com/microsoft/onnxruntime

ONNX Runtime is a cross-platform machine learning inference accelerator maintained by Microsoft. It interfaces with ONNX directly: it reads a .onnx file and runs inference on it, with no further format conversion required.

With ONNX Runtime, PyTorch covers the "last mile" of deployment, forming a PyTorch → ONNX → ONNX Runtime pipeline.

Install onnx:

```
pip install onnx
```

Install onnxruntime:

```
pip install onnxruntime      # inference on CPU
pip install onnxruntime-gpu  # inference on GPU
```

Pay attention to the version compatibility between ONNX and ONNX Runtime, documented on the ONNX Runtime GitHub:
https://github.com/microsoft/onnxruntime/blob/master/docs/Versioning.md

The compatibility between ONNX Runtime and CUDA versions, as well as the matching ONNX Runtime, TensorRT, and CUDA combinations, is documented at:
https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html
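Before going further, it can help to confirm which versions and execution providers are actually installed. A minimal sanity-check sketch, assuming both packages are already installed:

```python
import onnx
import onnxruntime

# print the installed versions, then cross-check them against the
# compatibility tables linked above
print("onnx version:", onnx.__version__)
print("onnxruntime version:", onnxruntime.__version__)

# list the execution providers this build can actually use, e.g.
# ['CUDAExecutionProvider', 'CPUExecutionProvider'] for onnxruntime-gpu
print("providers:", onnxruntime.get_available_providers())
```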
3. Converting a Model to ONNX

torch.onnx.export() converts a PyTorch model to the ONNX format. Before exporting, we must call model.eval() or model.train(False) to make sure the model is in inference mode.

```python
import torch.onnx
import torchvision

# name of the exported ONNX file; the suffix must be .onnx
onnx_file_name = "resnet50.onnx"

# the model to convert; replace this with your own model
model = torchvision.models.resnet50(pretrained=True)

# load the weights; replace resnet50.pt with your own checkpoint
model.load_state_dict(torch.load("resnet50.pt"))

# model.eval() or model.train(False) must be called before export
model.eval()

# dummy_input is an example input; it only supplies shape and type information
batch_size = 1  # an arbitrary value; once dynamic_axes is set it barely matters
dummy_input = torch.randn(batch_size, 3, 224, 224, requires_grad=True)

# the model output for this input
output = model(dummy_input)

# export the model
torch.onnx.export(model,               # the model to export
                  dummy_input,         # an example input
                  onnx_file_name,      # file path/name to save to
                  export_params=True,  # export the parameters too (the default);
                                       # set it to False to export an untrained model
                  opset_version=10,    # ONNX operator-set version (15 is already available)
                  do_constant_folding=True,  # whether to apply constant-folding optimization
                  input_names=['conv1'],     # names for the model's input tensors
                  output_names=['fc'],       # names for the model's output tensors
                  # dynamic_axes marks the batch dimension as dynamic, so later
                  # inference inputs may use a batch size different from dummy_input's
                  dynamic_axes={'conv1': {0: 'batch_size'},
                                'fc': {0: 'batch_size'}})
```

Note: the operator-set reference is at https://github.com/onnx/onnx/blob/main/docs/Operators.md
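Because dynamic_axes marks dimension 0 of 'conv1' and 'fc' as dynamic, the exported file should accept batch sizes other than the dummy input's. A quick way to confirm this; a sketch assuming the export above succeeded:

```python
import numpy as np
import onnxruntime

# feed the exported model a batch size different from dummy_input's (4 instead of 1)
sess = onnxruntime.InferenceSession("resnet50.onnx", providers=["CPUExecutionProvider"])
batch = np.random.randn(4, 3, 224, 224).astype(np.float32)
out = sess.run(None, {"conv1": batch})[0]
print(out.shape)  # expected: (4, 1000)
```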
Validating the ONNX model

We should check that the exported model file is usable, which we do with onnx.checker.check_model():

```python
import onnx

# use exception handling to perform the check
try:
    # an exception is raised when the model is not usable
    onnx.checker.check_model(onnx_file_name)
except onnx.checker.ValidationError as e:
    print("The model is invalid: %s" % e)
else:
    # no exception is raised when the model is usable
    print("The model is valid!")
```

Visualizing the ONNX model

Use Netron for visualization; it is available at https://netron.app/. Opening the .onnx file in Netron shows the network graph together with the model's input and output information.
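The input/output information that Netron displays can also be read programmatically from an inference session. A small sketch, assuming the resnet50.onnx exported above:

```python
import onnxruntime

sess = onnxruntime.InferenceSession("resnet50.onnx", providers=["CPUExecutionProvider"])

# name, shape, and element type of every input and output
for inp in sess.get_inputs():
    print("input: ", inp.name, inp.shape, inp.type)
for out in sess.get_outputs():
    print("output:", out.name, out.shape, out.type)
# expected here: input:  conv1 ['batch_size', 3, 224, 224] tensor(float)
```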
Inference with ONNX Runtime

```python
import onnxruntime

# the ONNX model file to run
onnx_file_name = "xxxxxx.onnx"

# onnxruntime.InferenceSession creates an ONNX Runtime inference session
ort_session = onnxruntime.InferenceSession(onnx_file_name, providers=['CPUExecutionProvider'])
# ort_session = onnxruntime.InferenceSession("resnet50.onnx", providers=['CUDAExecutionProvider'])
# ort_session = onnxruntime.InferenceSession("resnet50.onnx", providers=['OpenVINOExecutionProvider'])

# build the input dict; the key must match the input_names used when the ONNX
# model was built, and input_img must be converted to ndarray format
# ort_inputs = {'conv1': input_img}
# the form below is recommended, since it avoids typing the key by hand
ort_inputs = {ort_session.get_inputs()[0].name: input_img}

# run() performs the inference; the first argument is the list of output tensor
# names (normally it can be set to None), the second is the input dict.
# The result comes back wrapped in a list, hence the [0] index.
ort_output = ort_session.run(None, ort_inputs)[0]
# output = ort_session.get_outputs()[0].name
# ort_output = ort_session.run([output], ort_inputs)[0]
```

Points to note:

- A PyTorch model takes tensors as input, while ONNX Runtime takes arrays, so the tensor must be converted, or the data read directly in array format (see the sketch below).
- The input array's shape should match the shape of the dummy_input used when exporting; if the image size differs, resize it first.
- The result of run() is a list, so an index operation is needed to obtain the result in array format.
- When building the input dict, the keys must match the input_names set when exporting to ONNX.
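A minimal sketch of that tensor-to-array conversion and shape check; the random tensor here merely stands in for a preprocessed image batch:

```python
import numpy as np
import torch

# ONNX Runtime expects float32 NumPy arrays, so detach the tensor and convert it
input_tensor = torch.randn(1, 3, 224, 224)       # stand-in for a preprocessed batch
input_img = input_tensor.detach().cpu().numpy()  # ndarray, dtype float32

# everything except the (dynamic) batch dimension must match the exported
# dummy_input's shape of (batch, 3, 224, 224)
assert input_img.shape[1:] == (3, 224, 224)
```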
Complete example

1. Installation and downloads

```python
#!pip install onnx -i https://pypi.tuna.tsinghua.edu.cn/simple
#!pip install onnxruntime -i https://pypi.tuna.tsinghua.edu.cn/simple
#!pip install torch -i https://pypi.tuna.tsinghua.edu.cn/simple

# Download ImageNet labels
#!wget https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt
```

2. Define the model

```python
import torch
import io
import time
from PIL import Image
import torchvision.transforms as transforms
from torchvision import datasets
import onnx
import onnxruntime
import torchvision
import numpy as np
from torch import nn
import torch.nn.init as init

onnx_file = "resnet50.onnx"
save_dir = "./resnet50.pt"

# download the pretrained model
Resnet50 = torchvision.models.resnet50(pretrained=True)

# save the model weights
torch.save(Resnet50.state_dict(), save_dir)

print(Resnet50)
```

Output (torchvision warns that the pretrained argument is deprecated in favor of weights, then prints the architecture, abbreviated here):

```
UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in
the future, please use 'weights' instead.
UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated
since 0.13 and may be removed in the future. The current behavior is equivalent to
passing `weights=ResNet50_Weights.IMAGENET1K_V1`. You can also use
`weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.
ResNet(
  (conv1): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
  (bn1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (relu): ReLU(inplace=True)
  (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
  (layer1): Sequential(... 3 Bottleneck blocks ...)
  (layer2): Sequential(... 4 Bottleneck blocks ...)
  (layer3): Sequential(... 6 Bottleneck blocks ...)
  (layer4): Sequential(... 3 Bottleneck blocks ...)
  (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
  (fc): Linear(in_features=2048, out_features=1000, bias=True)
)
```
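As the warning suggests, newer torchvision releases prefer the weights enum over pretrained. A sketch of the equivalent call, assuming torchvision >= 0.13:

```python
from torchvision.models import resnet50, ResNet50_Weights

# equivalent to torchvision.models.resnet50(pretrained=True)
Resnet50 = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)
# or take the most up-to-date weights:
# Resnet50 = resnet50(weights=ResNet50_Weights.DEFAULT)
```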
3. Export the model to ONNX

```python
batch_size = 1  # just a random number

# first build the model structure
loaded_model = torchvision.models.resnet50()
# then load the model weights
loaded_model.load_state_dict(torch.load(save_dir))
# single GPU:
# loaded_model.cuda()

# set the model to inference mode
loaded_model.eval()

# Input to the model
x = torch.randn(batch_size, 3, 224, 224, requires_grad=True)
torch_out = loaded_model(x)
torch_out
```

Output (abbreviated):

```
tensor([[-5.8050e-01,  7.5065e-02,  1.9404e-01, -9.1107e-01,  9.9716e-01,
         -1.2941e+00, -1.3402e-01, -6.4496e-01,  6.0434e-01, -1.6355e+00,
         ...
         -3.7416e+00, -2.0244e+00, -2.6461e+00, -1.1108e+00,  1.1864e+00]],
       grad_fn=<AddmmBackward0>)
```
```python
torch_out.size()
```

```
torch.Size([1, 1000])
```
```python
# export the model
torch.onnx.export(loaded_model,        # model being run
                  x,                   # model input (or a tuple for multiple inputs)
                  onnx_file,           # where to save the model (can be a file or file-like object)
                  export_params=True,  # store the trained parameter weights inside the model file
                  opset_version=10,    # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names=['conv1'],     # the model's input names
                  output_names=['fc'],       # the model's output names
                  # variable-length axes
                  dynamic_axes={'conv1': {0: 'batch_size'},
                                'fc': {0: 'batch_size'}})
```

```
========= Diagnostic Run torch.onnx.export version 2.0.0+cu117 =========
verbose: False, log level: Level.ERROR
============== 0 NONE 0 NOTE 0 WARNING 0 ERROR ==============
```
4. Validate the ONNX model

```python
# use exception handling to perform the check
try:
    # an exception is raised when the model is not usable
    onnx.checker.check_model(onnx_file)
except onnx.checker.ValidationError as e:
    print("The model is invalid: %s" % e)
else:
    # no exception is raised when the model is usable
    print("The model is valid!")
```

```
The model is valid!
```

5. Inference with ONNX Runtime

```python
import onnxruntime
import numpy as np

ort_session = onnxruntime.InferenceSession(onnx_file, providers=['CPUExecutionProvider'])

# convert a tensor to ndarray format
def to_numpy(tensor):
    return tensor.detach().cpu().numpy() if tensor.requires_grad else tensor.cpu().numpy()

# build the input dict and compute the output
ort_inputs = {ort_session.get_inputs()[0].name: to_numpy(x)}
ort_outs = ort_session.run(None, ort_inputs)

# compare the numerical accuracy of the PyTorch and ONNX Runtime results
np.testing.assert_allclose(to_numpy(torch_out), ort_outs[0], rtol=1e-03, atol=1e-05)

print("Exported model has been tested with ONNXRuntime, and the result looks good!")
```

```
Exported model has been tested with ONNXRuntime, and the result looks good!
```
6. Run a real prediction and visualize the result

```python
# inference data
from PIL import Image
from torchvision.transforms import transforms

# generate the inference image
image = Image.open("./images/cat.jpg")

# resize the image to the expected size
image = image.resize((224, 224))

# convert the image to RGB mode
image = image.convert("RGB")

image.save("./images/cat_224.jpg")

categories = []
# Read the categories
with open("./imagenet/imagenet_classes.txt", "r") as f:
    categories = [s.strip() for s in f.readlines()]

def get_class_name(probabilities):
    # Show top categories per image
    top5_prob, top5_catid = torch.topk(probabilities, 5)
    for i in range(top5_prob.size(0)):
        print(categories[top5_catid[i]], top5_prob[i].item())

# preprocessing
def pre_image(image_file):
    input_image = Image.open(image_file)
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    input_tensor = preprocess(input_image)
    inputs = input_tensor.unsqueeze(0)  # create a mini-batch as expected by the model
    # input_arr = inputs.cpu().detach().numpy()
    return inputs
```

```python
# inference with the PyTorch model
# first build the model structure
resnet50 = torchvision.models.resnet50()
# then load the model weights
resnet50.load_state_dict(torch.load(save_dir))

resnet50.eval()

# inference
input_batch = pre_image("./images/cat_224.jpg")

# move the input and model to GPU for speed if available
print("GPU Availability: ", torch.cuda.is_available())
if torch.cuda.is_available():
    input_batch = input_batch.to('cuda')
    resnet50.to('cuda')

with torch.no_grad():
    output = resnet50(input_batch)

# Tensor of shape 1000, with confidence scores over ImageNet's 1000 classes
# print(output[0])
# The output has unnormalized scores. To get probabilities, run a softmax on it.
probabilities = torch.nn.functional.softmax(output[0], dim=0)
get_class_name(probabilities)
```

```
GPU Availability:  False
Persian cat 0.6668420433998108
lynx 0.023987364023923874
bow tie 0.016234245151281357
hair slide 0.013150070793926716
Japanese spaniel 0.012279157526791096
```

```python
input_batch.size()
```

```
torch.Size([1, 3, 224, 224])
```

```python
# benchmark the PyTorch CPU inference latency
latency = []
for i in range(10):
    with torch.no_grad():
        start = time.time()
        output = resnet50(input_batch)
        probabilities = torch.nn.functional.softmax(output[0], dim=0)
        top5_prob, top5_catid = torch.topk(probabilities, 5)
        # for catid in top5_catid:
        #     print(categories[catid])
        latency.append(time.time() - start)
    print('{} model inference CPU time: cost {} ms'.format(str(i), format(sum(latency) * 1000 / len(latency), '.2f')))
```

```
0 model inference CPU time: cost 149.59 ms
1 model inference CPU time: cost 130.74 ms
2 model inference CPU time: cost 133.76 ms
3 model inference CPU time: cost 130.64 ms
4 model inference CPU time: cost 131.72 ms
5 model inference CPU time: cost 130.88 ms
6 model inference CPU time: cost 136.31 ms
7 model inference CPU time: cost 139.95 ms
8 model inference CPU time: cost 141.90 ms
9 model inference CPU time: cost 140.96 ms
```
```python
# Inference with ONNX Runtime
import onnxruntime
from onnx import numpy_helper
import time

onnx_file = "resnet50.onnx"
session_fp32 = onnxruntime.InferenceSession(onnx_file, providers=['CPUExecutionProvider'])
# session_fp32 = onnxruntime.InferenceSession("resnet50.onnx", providers=['CUDAExecutionProvider'])
# session_fp32 = onnxruntime.InferenceSession("resnet50.onnx", providers=['OpenVINOExecutionProvider'])

def softmax(x):
    """Compute softmax values for each set of scores in x."""
    e_x = np.exp(x - np.max(x))
    return e_x / e_x.sum()

latency = []

def run_sample(session, categories, inputs):
    start = time.time()
    input_arr = inputs
    ort_outputs = session.run([], {'conv1': input_arr})[0]
    output = ort_outputs.flatten()
    output = softmax(output)  # this is optional
    top5_catid = np.argsort(-output)[:5]
    # for catid in top5_catid:
    #     print(categories[catid])
    latency.append(time.time() - start)
    return ort_outputs

input_tensor = pre_image("./images/cat_224.jpg")
input_arr = input_tensor.cpu().detach().numpy()
for i in range(10):
    ort_output = run_sample(session_fp32, categories, input_arr)
    print('{} ONNX Runtime CPU Inference time = {} ms'.format(str(i), format(sum(latency) * 1000 / len(latency), '.2f')))
```

```
0 ONNX Runtime CPU Inference time = 67.66 ms
1 ONNX Runtime CPU Inference time = 56.30 ms
2 ONNX Runtime CPU Inference time = 53.90 ms
3 ONNX Runtime CPU Inference time = 58.18 ms
4 ONNX Runtime CPU Inference time = 64.53 ms
5 ONNX Runtime CPU Inference time = 62.79 ms
6 ONNX Runtime CPU Inference time = 61.75 ms
7 ONNX Runtime CPU Inference time = 60.51 ms
8 ONNX Runtime CPU Inference time = 59.35 ms
9 ONNX Runtime CPU Inference time = 57.57 ms
```

On the same CPU and input, ONNX Runtime runs this model at roughly half the latency of eager PyTorch in this run (around 60 ms versus around 135 ms per inference).

4. Further topics

- Model quantization (see the sketch below)
- Model pruning
- Engineering optimization
- Operator optimization
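Of these, model quantization is the one most directly supported by ONNX Runtime itself. A minimal sketch of post-training dynamic quantization, assuming the onnxruntime.quantization tools are available; the output file name is illustrative:

```python
from onnxruntime.quantization import quantize_dynamic, QuantType

# quantize the FP32 model exported above down to INT8 weights
quantize_dynamic(
    model_input="resnet50.onnx",        # the FP32 model exported above
    model_output="resnet50_int8.onnx",  # illustrative output path
    weight_type=QuantType.QInt8,        # store weights as signed 8-bit integers
)
```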