Deploying with Docker (CPU and GPU)

Contents: 1. CPU   2. GPU   3. Installing cuDNN   3.1 Prerequisites   3.2 Downloading cuDNN for Linux   3.3 Installation

1. CPU
These notes describe a deployment based on the DeepFace Docker image file deepface_image.tar.
# 1. Load the image
docker load -i deepface_image.tar
# 2. Create the model directory (and upload the downloaded model files into it)
mkdir -p /root/.deepface/weights/
# 3. Start the container
# Host networking: weaker network isolation, but better performance
docker run --name deepface --privileged=true --restart=always --net=host -v /root/.deepface/weights/:/root/.deepface/weights/ -d deepface_image
# Typical usage (bridge network)
docker run --name deepface --privileged=true --restart=always -p 5000:5000 -v /root/.deepface/weights/:/root/.deepface/weights/ -d deepface_image
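After starting the container with either command, it is worth checking that it actually came up before going further. A minimal check, assuming the container is named deepface as above:

# Confirm the container is running and watch its startup logs
docker ps --filter name=deepface
docker logs -f deepface    # press Ctrl-C to stop following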
# Start the container with the latest source code mounted
docker run --name deepface_src --privileged=true --restart=always --net=host \
-v /root/.deepface/weights/:/root/.deepface/weights/ \
-v /opt/test-facesearch/deepfacesrc/:/app/deepface/ \
-d deepface_image

Warning message
# Command executed
docker run --name deepface --privileged=true --restart=always --net=host -p 5000:5000 -v /root/.deepface/weights/:/root/.deepface/weights/ -d deepface_image
# Warning
WARNING: Published ports are discarded when using host network mode

This warning typically appears when Docker's host network mode is used. In that mode the container shares the host's network namespace, so ports opened inside the container are reachable on the host directly and no port forwarding takes place; publishing ports with -p is therefore ineffective and triggers the warning. To resolve it:

If you do not need the container's ports mapped to the host, drop the -p option. If you do need port mapping, use a different Docker network mode such as bridge. If you really must use host network mode, reach the service via the host IP address (and the port the service itself listens on) rather than relying on port publishing.
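With the service up, a quick request confirms the API is reachable. This is a minimal sketch, assuming the standard DeepFace API listening on port 5000 and a /verify endpoint that accepts img1_path/img2_path; the two image paths below are hypothetical and must be readable inside the container (for example, files placed under the mounted /root/.deepface/weights/ directory):

# Smoke test against the verification endpoint
curl -s -X POST http://127.0.0.1:5000/verify \
  -H "Content-Type: application/json" \
  -d '{"img1_path": "/root/.deepface/weights/img1.jpg", "img2_path": "/root/.deepface/weights/img2.jpg"}'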
2. GPU
First start the container and install TensorRT inside it:
pip install tensorrt -i https://pypi.tuna.tsinghua.edu.cn/simple

Start command after the installation:
docker run --name deepface --privileged=true --restart=always --net=host \
-e PATH=/usr/local/cuda-11.2/bin:$PATH -e LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64:$LD_LIBRARY_PATH \
-v /root/.deepface/weights/:/root/.deepface/weights/ \
-v /usr/local/cuda-11.2/:/usr/local/cuda-11.2/ \
-v /opt/xinan-facesearch-service-public/deepface/api/app.py:/app/app.py \
-d deepface_image

To test fastmtcnn, mount the latest source code into the container:
docker run --name deepface_gpu_src --privileged=true --restart=always --net=host \
-e PATH=/usr/local/cuda-11.2/bin:$PATH -e LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64:$LD_LIBRARY_PATH \
-v /root/.deepface/weights/:/root/.deepface/weights/ \
-v /usr/local/cuda-11.2/:/usr/local/cuda-11.2/ \
-v /opt/test-facesearch/deepfacesrc/:/app/deepface/ \
-v /opt/xinan-facesearch-service-public/deepface/api/app.py:/app/app.py \
-d deepface_image

Differences from the CPU deployment:
Two environment variables are set: -e PATH=/usr/local/cuda-11.2/bin:$PATH and -e LD_LIBRARY_PATH=/usr/local/cuda-11.2/lib64:$LD_LIBRARY_PATH. One extra directory is mounted: -v /usr/local/cuda-11.2/:/usr/local/cuda-11.2/. One extra file is mounted: -v /deepface/api/app.py:/app/app.py.
The file /deepface/api/app.py contains the following:
import tensorrt as tr
import tensorflow as tf
from flask import Flask
from routes import blueprint

def create_app():
    available = tf.config.list_physical_devices('GPU')
    print(f"available: {available}")
    app = Flask(__name__)
    app.register_blueprint(blueprint)
    return app

TensorRT must be imported before TensorFlow is called.
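The same import-order requirement can be used for a quick GPU check from the host. A minimal sketch, assuming the GPU container above is named deepface and that the image ships a python interpreter on its PATH:

# Import tensorrt first, then ask TensorFlow which GPUs it can see
docker exec deepface python -c "import tensorrt; import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"

An empty list usually means the CUDA/cuDNN libraries are not visible to the container; re-check the PATH/LD_LIBRARY_PATH variables and the /usr/local/cuda-11.2/ mount.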
3. Installing cuDNN
Official installation guide: https://docs.nvidia.com/deeplearning/cudnn/install-guide/index.html
cuDNN support matrix: https://docs.nvidia.com/deeplearning/cudnn/support-matrix/index.html
The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, attention, matmul, pooling, and normalization.
Installation environment:
[root@localhost ~]# cat /etc/centos-release
CentOS Linux release 7.7.1908 (Core)

3.1 Prerequisites
The following must already be installed: 1. the GPU driver and 2. the CUDA Toolkit; verify with nvidia-smi:
nvidia-smi
# Query result
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.27.04    Driver Version: 460.27.04    CUDA Version: 11.2     |
|-----------------------------------------------------------------------------+
and 3. zlib; verify with:
yum list installed | grep zlib
# Query result
zlib.x86_64 1.2.7-18.el7 anaconda
zlib-devel.x86_64 1.2.7-18.el7 base

3.2 Downloading cuDNN for Linux
Downloading cuDNN requires joining the NVIDIA Developer Program first: https://developer.nvidia.com/developer-program. On the download page https://developer.nvidia.com/cudnn, pick the platform and the matching version. The file downloaded here is cudnn-11.2-linux-x64-v8.1.1.33.tgz, about 1.2 GB. Browser downloads tend to fail, so you can copy the browser's download link and fetch it on the Linux server instead (about 12 MB/s on a Tencent Cloud server):
wget https://developer.download.nvidia.cn/compute/machine-learning/cudnn/secure/8.1.1.33/11.2_20210301/cudnn-11.2-linux-x64-v8.1.1.33.tgz?G2wTHq8E--2jJ9iEfgtFbqfMGX0I1XD6BIksPkVIiU9F3ttrupv_oYvURaZX1dV71EIqEI767WbG5svvSMBElcaVrqZl15UEOUORNWbYwKZDyxidGmwHmG44XiEo6yyM1Rt7ct6NGlVXnxx0etcI9pNJ1PiaHYddY86Lc_yaBLdJwy9hqku4TW6NSNr7XfuCYXvGOPvOmraR4EOfg6QteyJscyI6IndlYnNpdGUiLCJsc2QiOiJkZXZlbG9wZXIubnZpZGlhLmNvbS9jdWRhLTEwLjItZG93bmxvYWQtYXJjaGl2ZT90YXJnZXRfb3M9TGludXgifQ

3.3 Installation
The following steps describe how to build a cuDNN dependent program. Choose the installation method that meets your environment needs. For example, the tar file installation applies to all Linux platforms. The Debian package installation applies to Debian 11, Ubuntu 18.04, Ubuntu 20.04, and 22.04. The RPM package installation applies to RHEL7, RHEL8, and RHEL9. In the following sections:
your CUDA directory path is referred to as /usr/local/cuda/
your cuDNN download path is referred to as
Pick the installation method appropriate for your platform; the tar file works on all Linux platforms. The steps are:
Unpack the archive
tar -xvf cudnn-linux-$arch-8.x.x.x_cudaX.Y-archive.tar.xz
Copy the following files into the CUDA toolkit directory
$ sudo cp cudnn-*-archive/include/cudnn*.h /usr/local/cuda/include
$ sudo cp -P cudnn-*-archive/lib/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*

The file installed here is cudnn-11.2-linux-x64-v8.1.1.33.tgz; the actual steps were:
# 1. Unpack
tar -zxvf cudnn-11.2-linux-x64-v8.1.1.33.tgz
# 2. Copy and set permissions
# The unpacked directory is named cuda
# include (18 files)
cp ./cuda/include/cudnn*.h /usr/local/cuda/include
# lib64 (8 files, 15 symlinks); -P copies symbolic links as links instead of following them
cp -P ./cuda/lib64/libcudnn* /usr/local/cuda/lib64
# Make the files readable by all users
chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*

Another release ships as cudnn-linux-x86_64-8.6.0.163_cuda11-archive.tar.xz; for that file the steps are:
# 1. Unpack
tar -xvf cudnn-linux-x86_64-8.6.0.163_cuda11-archive.tar.xz
# 2. Copy and set permissions; include (18 files), lib (13 files, 20 symlinks)
cp ./cudnn-linux-x86_64-8.6.0.163_cuda11-archive/include/cudnn*.h /usr/local/cuda/include
cp -P ./cudnn-linux-x86_64-8.6.0.163_cuda11-archive/lib/libcudnn* /usr/local/cuda/lib64
chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
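After copying the files, you can confirm which cuDNN version landed in the CUDA directory. A minimal check, assuming a cuDNN 8.x install, where the version macros live in cudnn_version.h:

# Print the installed cuDNN version macros and list the copied libraries
grep -E 'define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' /usr/local/cuda/include/cudnn_version.h
ls -l /usr/local/cuda/lib64/libcudnn*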