Cloud Computing Platform Construction and Maintenance, Micro-course Edition (based on OpenStack and Kubernetes)
Lab Guide

Contents
Appendix: Lab Guide
Lab Environment Notes (micro-course video: 3 min)
Lab 1-1 CentOS Network Configuration (micro-course video: 10 min)
Lab 1-2 LVM (micro-course video: 20 min)
Lab 1-3 NFS (micro-course video: 5 min)
Lab 1-4 iSCSI (micro-course video: 15 min)
Lab 2-1 Virtual Bridges (micro-course video: 10 min)
Lab 2-2 Creating Virtual Machines (micro-course video: 20 min)
Lab 2-3 Virtual Machine Management (micro-course video: 20 min)
Lab 2-4 Virtual Machine Storage (micro-course video: 15 min)
Lab 2-5 Virtual Machine Networking (micro-course video: 20 min)
Lab 2-6 VXLAN (micro-course video: 15 min)
Lab 2-7 GRE (micro-course video: 10 min)
Lab 3-1 OpenStack Environment Preparation (micro-course video: 15 min)
Lab 3-2 Base Services and Software Installation (micro-course video: 15 min)
Lab 3-3 Installing and Configuring Keystone (micro-course video: 20 min)
Lab 3-4 Installing Glance (micro-course video: 20 min)
Lab 3-5 Installing and Configuring Nova (micro-course video: 20 min)
Lab 3-6 Installing and Configuring Neutron (micro-course video: 30 min)
Lab 3-7 Installing and Configuring the Dashboard (micro-course video: 10 min)
Lab 3-8 Creating a Virtual Machine from the Dashboard (micro-course video: 15 min)
Lab 3-9 Installing and Configuring Cinder (micro-course video: 25 min)
Lab 3-10 Installing and Configuring Swift (micro-course video: 30 min)
Lab 3-11 Creating an Instance with OpenStack Commands (micro-course video: 20 min)
Lab 4-1 Docker Installation (micro-course video: 15 min)
Lab 4-2 Image Operations (micro-course video: 10 min)
Lab 4-3 Building a Harbor Private Registry (micro-course video: 15 min)
Lab 4-4 Container Operations (micro-course video: 10 min)
Lab 4-5 Container Storage (micro-course video: 15 min)
Lab 4-6 Container Networking (micro-course video: 15 min)
Lab 4-7 Custom Images (micro-course video: 15 min)
Lab 5-1 Kubernetes Cluster Installation (micro-course video: 20 min)
Lab 5-2 Pods (micro-course video: 25 min)
Lab 5-3 Pod Storage (micro-course video: 30 min)
Lab 5-4 Dynamic Volumes (micro-course video: 20 min)
Lab 5-5 Services (micro-course video: 20 min)
Lab 5-6 Deployments (micro-course video: 20 min)
Lab 5-7 StatefulSets (micro-course video: 15 min)
Lab 5-8 DaemonSets (micro-course video: 15 min)
Lab 5-9 ConfigMaps (micro-course video: 15 min)
Lab 5-10 Secrets (micro-course video: 15 min)
Lab 5-11 Pod Security (micro-course video: 15 min)
Lab 5-12 Resource Management (micro-course video: 15 min)
Lab 5-13 Pod Scheduling (micro-course video: 20 min)
Lab 5-14 Deploying WordPress (micro-course video: 20 min)

Appendix: Lab Guide
Lab Environment Notes (micro-course video: 4 min)
1. Virtual machine hosts
The labs use VMware virtual machines as hosts; open them with VMware Workstation 12 or later.
The root password on every virtual host is 000000.

centos2009 (CentOS host)
    Hardware: 8 GB RAM; 2 CPU cores, virtualization not enabled; NIC: NAT mode, ens33; CD/DVD: CentOS-7-x86_64-DVD-2009.iso
    Snapshots: "命令行" (command-line interface only); "图形界面" (desktop environment installed)
controller (OpenStack controller node)
    Hardware: 8 GB RAM; 2 CPU cores, virtualization enabled; NIC 1: NAT mode, ens33, IP 192.168.9.100; NIC 2: host-only mode, ens36; CD/DVD1: CentOS-7-x86_64-DVD-2009.iso; CD/DVD2: openstack.iso
    Snapshots: "环境备好" (hardware and OS environment prepared); "基础服务" (time, database, and message services plus the common OpenStack packages installed); "keystone-installed", "glance-installed", "Nova-installed", "Neutron-installed", "Dashboard-installed", "cinder-installed", "Swift-installed" (installed up to Keystone, Glance, Nova, Neutron, Dashboard, Cinder, and Swift respectively)
compute (OpenStack compute node)
    Hardware: 8 GB RAM; 2 CPU cores, virtualization enabled; NIC 1: NAT mode, ens33, IP 192.168.9.101; NIC 2: host-only mode, ens36; no CD/DVD
    Snapshots: same as controller
master (Kubernetes control-plane node)
    Hardware: 8 GB RAM; 2 CPU cores, virtualization not enabled; NIC: NAT mode, ens33, IP 192.168.9.10; CD/DVD1: docker.iso; CD/DVD2: CentOS-7-x86_64-DVD-2009.iso
    Snapshots: "环境已设置" (hardware and OS environment prepared); "docker-installed"; "harbor-installed"; "kubernetes-installed"; "NFS动态卷" (NFS dynamic volume provisioning installed)
node (Kubernetes worker node)
    Hardware: 8 GB RAM; 2 CPU cores, virtualization not enabled; NIC: NAT mode, ens33, IP 192.168.9.11
    Snapshots: same as master

2. Software packages
CentOS-7-x86_64-DVD-2009.iso: the official CentOS 7 (2009) ISO image
openstack.iso: OpenStack Rocky packages; Open vSwitch packages; a CentOS 7 image; the official CentOS 6 ISO; a cirros image; Swift configuration files
docker.iso: Docker CE packages; Docker Compose; Harbor; assorted Docker images; template files used in the labs
3. VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor", as shown in the figure below.
Set the VMnet8 subnet IP to 192.168.9.0/24 and clear the checkbox marked as box 2 in the figure.
Set the VMnet1 subnet IP to 192.168.30.0/24 and clear the same checkbox.
Lab 1-1 CentOS Network Configuration (micro-course video: 18 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Select VMnet8 in box 1 and enter the subnet IP and subnet mask in box 2.
(2) Virtual host preparation
Clone a virtual machine from the "命令行" (command-line) snapshot of centos2009:
Name: centos
CD/DVD: use CentOS-7-x86_64-DVD-2009.iso as the virtual drive
2. Configure the network with commands
(1) Set the hostname to centos
# hostnamectl set-hostname centos
# hostnamectl
(2) Set the IP address to 192.168.9.100/24
Query the current address:
# ip address
Delete the old address (192.168.9.144 here; substitute whatever address the query showed):
# ip address del 192.168.9.144/24 dev ens33
Add the new address:
# ip address add 192.168.9.100/24 dev ens33
(3) Set the gateway to 192.168.9.2
# ip route
# ip route add default via 192.168.9.2
(4) Set the DNS server to 192.168.9.2
Edit /etc/resolv.conf as follows:
# vi /etc/resolv.conf
search localdomain
nameserver 192.168.9.2
(5) Test
# ping www.baidu.com
3. Configure the NIC with a configuration file
(1) Edit the NIC configuration as follows:
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.9.100
NETMASK=255.255.255.0
GATEWAY=192.168.9.2
DNS1=192.168.9.2
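The NETMASK line above is just the dotted-quad form of the /24 prefix used with the ip command earlier. A small sketch shows the conversion (the helper name prefix2mask is my own, not a system utility):

```shell
#!/bin/bash
# Convert a CIDR prefix length to a dotted-quad netmask.
prefix2mask() {
    local prefix=$1
    # Build a 32-bit mask with the top $prefix bits set.
    local mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))
    printf '%d.%d.%d.%d\n' \
        $(( mask >> 24 & 255 )) $(( mask >> 16 & 255 )) \
        $(( mask >>  8 & 255 )) $(( mask       & 255 ))
}

prefix2mask 24   # prints 255.255.255.0
prefix2mask 16   # prints 255.255.0.0
```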
(2) Restart the network service
# systemctl restart network
(3) Test
# ping www.baidu.com
4. Network namespaces
(1) Create and list a network namespace
# ip netns add ns1
# ip netns
# ls /var/run/netns/
(2) Create a veth pair
# ip link add veth0 type veth peer name veth1
# ip link
(3) Move veth1 into namespace ns1
# ip link set veth1 netns ns1
# ip netns exec ns1 ip link
(4) Assign IP addresses
# ip address add 192.168.100.1/24 dev veth0
# ip netns exec ns1 ip address add 192.168.100.2/24 dev veth1
(5) Bring up the virtual NICs
# ip link set veth0 up
# ip netns exec ns1 ip link set veth1 up
(6) Test
# ping -c 3 192.168.100.2
# ip netns exec ns1 ping -c 3 192.168.100.1
Lab 1-2 LVM (micro-course video: 12 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "命令行" (command-line) snapshot of centos2009:
IP address: 192.168.9.100
CD/DVD: use CentOS-7-x86_64-DVD-2009.iso as the virtual drive
2. Add a disk and partition it
(1) Add a disk
Power off:
# poweroff
Add the disk:
In the VMware Workstation menu choose "Virtual Machine" → "Settings" and add a 10 GB disk:
Add → Hard Disk
Disk type: SCSI
Create a new virtual disk
Disk size: 10 GB
When finished, start the virtual machine and check:
# lsblk
NAME            MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda               8:0    0  40G  0 disk
├─sda1            8:1    0   1G  0 part /boot
└─sda2            8:2    0  39G  0 part
  ├─centos-root 253:0    0  35G  0 lvm  /
  └─centos-swap 253:1    0   4G  0 lvm  [SWAP]
sdb               8:16   0  10G  0 disk
(2) Create three partitions
# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x8c5f129e.

Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-20971519, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-20971519, default 20971519): +3G
Partition 1 of type Linux and of size 3 GiB is set

Command (m for help): n
Partition type:
   p   primary (1 primary, 0 extended, 3 free)
   e   extended
Select (default p):
Using default response p
Partition number (2-4, default 2):
First sector (6293504-20971519, default 6293504):
Using default value 6293504
Last sector, +sectors or +size{K,M,G} (6293504-20971519, default 20971519): +3G
Partition 2 of type Linux and of size 3 GiB is set

Command (m for help): n
Partition type:
   p   primary (2 primary, 0 extended, 2 free)
   e   extended
Select (default p):
Using default response p
Partition number (3,4, default 3):
First sector (12584960-20971519, default 12584960):
Using default value 12584960
Last sector, +sectors or +size{K,M,G} (12584960-20971519, default 20971519):
Using default value 20971519
Partition 3 of type Linux and of size 4 GiB is set

Command (m for help): t
Partition number (1-3, default 3): 1
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): t
Partition number (1-3, default 3): 2
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): t
Partition number (1-3, default 3): 3
Hex code (type L to list all codes): 8e
Changed type of partition 'Linux' to 'Linux LVM'

Command (m for help): p

Disk /dev/sdb: 10.7 GB, 10737418240 bytes, 20971520 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x8c5f129e

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048     6293503     3145728   8e  Linux LVM
/dev/sdb2         6293504    12584959     3145728   8e  Linux LVM
/dev/sdb3        12584960    20971519     4193280   8e  Linux LVM

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
# lsblk
NAME            MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda               8:0    0  40G  0 disk
├─sda1            8:1    0   1G  0 part /boot
└─sda2            8:2    0  39G  0 part
  ├─centos-root 253:0    0  35G  0 lvm  /
  └─centos-swap 253:1    0   4G  0 lvm  [SWAP]
sdb               8:16   0  10G  0 disk
├─sdb1            8:17   0   3G  0 part
├─sdb2            8:18   0   3G  0 part
└─sdb3            8:19   0   4G  0 part
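The Blocks column in fdisk's print-out and the sizes in lsblk follow directly from the sector numbers: every sector is 512 bytes and a fdisk "block" is 1 KiB. A quick arithmetic check for /dev/sdb1 (sectors 2048 through 6293503):

```shell
#!/bin/bash
# /dev/sdb1 spans sectors 2048..6293503; each sector is 512 bytes.
first=2048
last=6293503
sectors=$(( last - first + 1 ))
echo "$(( sectors / 2 )) blocks"                       # fdisk 'Blocks' are 1 KiB units
echo "$(( sectors * 512 / 1024 / 1024 / 1024 )) GiB"   # size shown by lsblk
```

The two printed values, 3145728 blocks and 3 GiB, match the fdisk and lsblk output above.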
3. Create physical volumes
# pvcreate /dev/sdb1 /dev/sdb2 /dev/sdb3
  Physical volume "/dev/sdb1" successfully created.
  Physical volume "/dev/sdb2" successfully created.
  Physical volume "/dev/sdb3" successfully created.
# pvs
  PV         VG     Fmt  Attr PSize  PFree
  /dev/sda2  centos lvm2 a--  39.00g     0
  /dev/sdb1         lvm2 ---   3.00g  3.00g
  /dev/sdb2         lvm2 ---   3.00g  3.00g
  /dev/sdb3         lvm2 ---   4.00g  4.00g
4. Create a volume group
# vgcreate vg01 /dev/sdb1 /dev/sdb2
  Volume group "vg01" successfully created
# vgs
  VG     #PV #LV #SN Attr   VSize  VFree
  centos   1   2   0 wz--n- 39.00g     0
  vg01     2   0   0 wz--n-  5.99g 5.99g
5. Create a logical volume
# lvcreate -L 1G -n lv01 vg01
  Logical volume "lv01" created.
# lvs
  LV   VG     Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root centos -wi-ao---- 35.00g
  swap centos -wi-ao----  4.00g
  lv01 vg01   -wi-a-----  1.00g
6. Use the logical volume
# ls /dev/vg01
lv01
# mkfs.ext4 /dev/vg01/lv01
……
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
# mkdir /mnt/lv01
# mount /dev/vg01/lv01 /mnt/lv01/
# df -h
……
/dev/mapper/vg01-lv01 976M 2.6M 907M 1% /mnt/lv01
7. Extend the volume group
# vgextend vg01 /dev/sdb3
  Volume group "vg01" successfully extended
# vgdisplay
  --- Volume group ---
  VG Name               vg01
  System ID
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               9.99 GiB
  PE Size               4.00 MiB
  Total PE              2557
  Alloc PE / Size       256 / 1.00 GiB
  Free  PE / Size       2301 / 8.99 GiB
  VG UUID               WoZeWY-BFKj-NPxC-pgy5-4HW8-Pk1H-RqpLGz
8. Extend the logical volume
# lvextend -L 2G /dev/vg01/lv01
  Size of logical volume vg01/lv01 changed from 1.00 GiB (256 extents) to 2.00 GiB (512 extents).
  Logical volume vg01/lv01 successfully resized.
# df -h
Filesystem             Size  Used Avail Use% Mounted on
……
/dev/mapper/vg01-lv01  976M  2.6M  907M   1% /mnt/lv01
The file system has not grown yet; extend it as well:
# resize2fs /dev/vg01/lv01
# df -h
Filesystem             Size  Used Avail Use% Mounted on
……
/dev/mapper/vg01-lv01  2.0G  3.0M  1.9G   1% /mnt/lv01
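All the vgdisplay and lvextend figures above are multiples of the 4 MiB physical extent (PE); a one-liner sanity check confirms that 2557 extents of 4 MiB is the reported ~9.99 GiB, and that the 2 GiB logical volume occupies 512 extents:

```shell
#!/bin/bash
# VG size in MiB = Total PE x PE size; 2557 PEs x 4 MiB = 10228 MiB (~9.99 GiB).
total_pe=2557
pe_mib=4
echo "$(( total_pe * pe_mib )) MiB"       # prints: 10228 MiB
# A 2 GiB LV at a 4 MiB extent size uses:
echo "$(( 2 * 1024 / pe_mib )) extents"   # prints: 512 extents
```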
Lab 1-3 NFS (micro-course video: 10 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "基础服务" (base services) snapshot of controller; set CD/DVD1 to use openstack.iso and CD/DVD2 to use CentOS-7-x86_64-DVD-2009.iso as virtual drives.
Clone a virtual machine from the "基础服务" snapshot of compute.
2. Configure the clone of controller
(1) Install the software
# yum -y install nfs-utils rpcbind
(2) Configure the share
Create the shared directory:
# mkdir /share
Edit /etc/exports:
# vi /etc/exports
/share 192.168.9.0/24(rw,sync,no_root_squash,insecure)
Enable and start the services:
# systemctl enable rpcbind
# systemctl enable nfs
# systemctl start rpcbind
# systemctl start nfs
Export the shared directory:
# exportfs -a
List the exported directories:
# showmount -e
Export list for localhost.localdomain:
/share 192.168.9.0/24
(3) Check rpcbind
# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100000    3   tcp    111  portmapper
    100000    2   tcp    111  portmapper
    100000    4   udp    111  portmapper
    100000    3   udp    111  portmapper
    100000    2   udp    111  portmapper
    100005    1   udp  20048  mountd
    100005    1   tcp  20048  mountd
    100005    2   udp  20048  mountd
    100005    2   tcp  20048  mountd
    100005    3   udp  20048  mountd
    100005    3   tcp  20048  mountd
    100024    1   udp  48939  status
    100024    1   tcp  54692  status
    100003    3   tcp   2049  nfs
    100003    4   tcp   2049  nfs
    100227    3   tcp   2049  nfs_acl
    100003    3   udp   2049  nfs
    100003    4   udp   2049  nfs
    100227    3   udp   2049  nfs_acl
    100021    1   udp  49745  nlockmgr
    100021    3   udp  49745  nlockmgr
    100021    4   udp  49745  nlockmgr
    100021    1   tcp  46243  nlockmgr
    100021    3   tcp  46243  nlockmgr
    100021    4   tcp  46243  nlockmgr
3. Configure the clone of compute
(1) Install the software
# yum -y install nfs-utils
(2) List the server's exports
# showmount -e 192.168.9.100
Export list for 192.168.9.100:
/share 192.168.9.0/24
(3) Mount the share
# mkdir /nfsmount
# mount -t nfs 192.168.9.100:/share /nfsmount
Lab 1-4 iSCSI (micro-course video: 15 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "基础服务" (base services) snapshot of controller; set CD/DVD1 to use openstack.iso and CD/DVD2 to use CentOS-7-x86_64-DVD-2009.iso as virtual drives.
Clone a virtual machine from the "基础服务" snapshot of compute.
2. Configure the iSCSI target on the clone of controller
(1) Install targetcli
# yum install -y targetcli
(2) Start the service
# systemctl start target
# systemctl enable target
(3) Prepare a backing image file
# mkdir /iscsi
# dd if=/dev/zero of=/iscsi/data.img bs=1024k count=2048
(4) Start the targetcli shell
# targetcli
targetcli shell version 2.1.51
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
(5) Create the backing store
/> cd /backstores/fileio
/backstores/fileio> create data /iscsi/data.img 2G
Created fileio data with size 2147483648
(6) Set the iSCSI target name
/backstores/fileio> cd /iscsi
/iscsi> create iqn.2021-06.com.example:myserver
Created target iqn.2021-06.com.example:myserver.
Created TPG 1.
(7) Create a portal
/iscsi> cd iqn.2021-06.com.example:myserver/tpg1/
/iscsi/iqn.20...ver/tpg1> portals/ create
Using default IP port 3260
Binding to INADDR_ANY (0.0.0.0)
Created network portal 0.0.0.0:3260.
(8) Create a LUN
/iscsi/iqn.20...ver/tpg1> cd luns
/iscsi/iqn.20...ver/tpg1/luns> create /backstores/fileio/data
Created LUN 0.
(9) Create an ACL
/iscsi/iqn.20...ver/tpg1/luns> cd ../acls
/iscsi/iqn.20...ver/tpg1/acls> create iqn.2021-06.com.example:myclient
Created Node ACL for iqn.2021-06.com.example:myclient
Created mapped LUN 0.
(10) Set the username and password
/iscsi/iqn.20...ver/tpg1/acls> cd iqn.2021-06.com.example:myclient
/iscsi/iqn.20...mple:myclient> set auth userid=root
/iscsi/iqn.20...mple:myclient> set auth password=000000
3. Configure the iSCSI initiator on the clone of compute
(1) Install iscsi-initiator-utils
# yum -y install iscsi-initiator-utils
(2) Start the iSCSI service
# systemctl start iscsi
# systemctl enable iscsi
(3) Set the iSCSI initiator name
# vi /etc/iscsi/initiatorname.iscsi
InitiatorName=iqn.2021-06.com.example:myclient
(4) Configure authentication
Set the credentials in /etc/iscsi/iscsid.conf:
# vi /etc/iscsi/iscsid.conf
node.session.auth.authmethod = CHAP
node.session.auth.username = root
node.session.auth.password = 000000
(5) Discover iSCSI targets
# iscsiadm --mode discovery --type sendtargets --portal 192.168.9.100
(6) Log in to the iSCSI target
# iscsiadm --mode node \
  --targetname iqn.2021-06.com.example:myserver \
  --portal 192.168.9.100 --login
(7) Check for the new iSCSI device
# lsblk --scsi
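Both names used in this lab follow the IQN convention: the literal prefix iqn., a year-month, the reversed domain name, and an optional colon-separated identifier. A rough bash shape check (the regex below is my own sketch, not a full RFC 3720 validator):

```shell
#!/bin/bash
# Rough IQN shape: iqn.YYYY-MM.reversed.domain[:identifier]
is_iqn() {
    [[ $1 =~ ^iqn\.[0-9]{4}-(0[1-9]|1[0-2])\.[a-z0-9.-]+(:.+)?$ ]]
}

is_iqn "iqn.2021-06.com.example:myserver" && echo valid
is_iqn "myserver" || echo invalid
```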
Lab 2-1 Virtual Bridges (micro-course video: 18 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "基础服务" (base services) snapshot of controller; set CD/DVD1 to use openstack.iso and CD/DVD2 to use CentOS-7-x86_64-DVD-2009.iso as virtual drives.
2. Linux bridges
(1) Install the software
# yum install bridge-utils -y
(2) Create and manage a Linux bridge with commands
Create a bridge:
# brctl addbr br0
Show bridge information:
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000000000000       no
Add a port:
# brctl addif br0 ens32
# ip link set up dev br0
# brctl show br0
bridge name     bridge id               STP enabled     interfaces
br0             8000.000c29df42a4       no              ens32
Test:
# ip address add 192.168.30.2/24 dev br0
# ping 192.168.30.1
Note: the Windows firewall on the VMware host must be disabled; alternatively, ping 192.168.30.2 from Windows.
Remove the port:
# brctl delif br0 ens32
Delete the bridge:
# ip link set down dev br0
# brctl delbr br0
(3) Create a persistent bridge
Edit /etc/sysconfig/network-scripts/ifcfg-br0:
# vi /etc/sysconfig/network-scripts/ifcfg-br0
DEVICE=br0
TYPE=Bridge
NAME=br0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.30.2
NETMASK=255.255.255.0
Bridge the NIC ens32 to br0:
# vi /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
NAME=ens32
DEVICE=ens32
ONBOOT=yes
BRIDGE=br0
Restart the network service and check the bridge:
# systemctl restart network
# brctl show
bridge name     bridge id               STP enabled     interfaces
br0             8000.000000000000       no              ens32
# ping 192.168.30.1
3. Open vSwitch bridges
(1) Install Open vSwitch
# yum -y install openvswitch libibverbs
(2) Start Open vSwitch
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
(3) Create bridges
# ovs-vsctl add-br ovs-br1
# ovs-vsctl add-br ovs-br2
# ovs-vsctl list-br
ovs-br1
ovs-br2
(4) Delete a bridge
# ovs-vsctl del-br ovs-br2
# ovs-vsctl list-br
ovs-br1
(5) Show bridge details
# ovs-vsctl show
0d17bb11-42d8-4e6f-8224-9dc4b6a0a1b4
    Bridge ovs-br1
        Port ovs-br1
            Interface ovs-br1
                type: internal
    ovs_version: "2.11.0"
(6) Add an internal port to the bridge
# ovs-vsctl add-port ovs-br1 p0 -- set interface p0 type=internal
# ovs-vsctl list-ports ovs-br1
p0
(7) Show the bridge details again
# ovs-vsctl show
0d17bb11-42d8-4e6f-8224-9dc4b6a0a1b4
    Bridge ovs-br1
        Port ovs-br1
            Interface ovs-br1
                type: internal
        Port p0
            Interface p0
                type: internal
    ovs_version: "2.11.0"
(8) Give the bridge a persistent IP address
Edit /etc/sysconfig/network-scripts/ifcfg-ovs-br1:
# vi /etc/sysconfig/network-scripts/ifcfg-ovs-br1
DEVICE=ovs-br1
TYPE=OVSBridge
DEVICETYPE=ovs
ONBOOT=yes
BOOTPROTO=static
IPADDR=10.1.1.1
NETMASK=255.255.255.0
(9) Restart the network service
# systemctl restart network
# ip add
Lab 2-2 Creating Virtual Machines (micro-course video: 25 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "图形界面" (desktop) snapshot of centos2009:
Add a second CD/DVD.
CD/DVD1: use CentOS-7-x86_64-DVD-2009.iso as the virtual drive.
CD/DVD2: use openstack.iso as the virtual drive.
CPU: enable virtualization.
2. CPU check
Enable the CPU virtualization feature, then verify that the guest can see it:
# egrep "vmx|svm" /proc/cpuinfo
3. Install the virtualization packages
(1) Configure the yum repositories
Create the mount points:
# mkdir /mnt/centos
# mkdir /mnt/openstack
Unmount the already-mounted drives:
# umount /dev/sr0
# umount /dev/sr1
Remount the drives:
# mount -o loop /dev/sr0 /mnt/openstack/
# mount -o loop /dev/sr1 /mnt/centos/
Note: before mounting, check which ISO each of sr0 and sr1 holds; lsblk shows their sizes, which is enough to tell them apart.
Edit the repository file:
# rm -f /etc/yum.repos.d/C*
# vi /etc/yum.repos.d/local.repo
[centos]
name=centos 7 2009
baseurl=file:///mnt/centos
gpgcheck=0
enabled=1

[openstack]
name=openstack
baseurl=file:///mnt/openstack
gpgcheck=0
enabled=1
(2) Install the software
# yum install -y qemu-kvm libvirt virt-install bridge-utils virt-manager qemu-img virt-viewer
(3) Start the libvirtd daemon
# systemctl start libvirtd
# systemctl enable libvirtd
(4) Check the KVM kernel modules
# lsmod |grep kvm
kvm_intel             188740  0
kvm                   637289  1 kvm_intel
irqbypass              13503  1 kvm
If there is no output, run:
# modprobe kvm
4. Create a VM from an ISO image
(1) Set SELinux and the firewall
# setenforce 0
# systemctl stop firewalld
(2) Check whether port 5911 is in use; if it is, choose another port (VNC uses ports from 5900 up):
# netstat -tuln|grep 5911
(3) Run virt-install
# virt-install --name vm01 \
  --memory 1024 --vcpus 1 --network bridge=virbr0 \
  --disk size=20 --cdrom /mnt/openstack/images/CentOS-6.10-x86_64-minimal.iso \
  --boot hd,cdrom --graphics vnc,listen=0.0.0.0,port=5911
(4) On Windows, start a VNC client (e.g. TigerVNC Viewer), connect to ip:5911, and complete the installation.
(5) List the VMs on the system
# virsh list --all
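Port 5911 in the virt-install command corresponds to VNC display 11: VNC display N listens on TCP port 5900 + N, so a client address of "ip:5911" and the display form "ip:11" normally reach the same server. The mapping as a sketch:

```shell
#!/bin/bash
# VNC display number <-> TCP port (base port 5900).
display_to_port() { echo $(( 5900 + $1 )); }
port_to_display() { echo $(( $1 - 5900 )); }

display_to_port 11    # prints 5911
port_to_display 5911  # prints 11
```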
5. Create a VM from an already-installed disk
(1) Copy the image
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros01.img
# chown qemu:qemu /var/lib/libvirt/images/cirros01.img
# chmod 600 /var/lib/libvirt/images/cirros01.img
(2) Import the VM
# virt-install --name vm02 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros01.img \
  --import
(3) List the VMs on the system
# virsh list --all
6. Create a VM from a configuration file
(1) Copy the image
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros02.img
# chown qemu:qemu /var/lib/libvirt/images/cirros02.img
# chmod 600 /var/lib/libvirt/images/cirros02.img
(2) Dump a configuration file
# virsh dumpxml vm02 > vm03.xml
(3) Edit vm03.xml
Change the VM name:
<name>vm03</name>
Change the UUID:
<uuid>b589b8b0-4049-4a9c-9068-dc75c1e0be38</uuid>
Point the disk element at the new disk file:
<disk type='file' device='disk'>
  ……
  <source file='/var/lib/libvirt/images/cirros02.img'/>
  ……
</disk>
Change the NIC's MAC address:
<interface type='direct'>
  <mac address='52:54:00:0a:7e:8a'/>
  ……
</interface>
(4) Define the VM
# virsh define vm03.xml
(5) Start the VM
# virsh start vm03
(6) List the VMs on the system
# virsh list --all
Lab 2-3 Virtual Machine Management (micro-course video: 11 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "基础服务" (base services) snapshot of controller; set CD/DVD1 to use openstack.iso and CD/DVD2 to use CentOS-7-x86_64-DVD-2009.iso as virtual drives.
(3) Install the software
# yum install -y qemu-kvm libvirt virt-install bridge-utils qemu-img
# systemctl start libvirtd
# systemctl enable libvirtd
(4) Quickly create two VMs
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros01.img
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros02.img
# chown qemu:qemu /var/lib/libvirt/images/cirros01.img
# chown qemu:qemu /var/lib/libvirt/images/cirros02.img
# chmod 600 /var/lib/libvirt/images/cirros01.img
# chmod 600 /var/lib/libvirt/images/cirros02.img
# virt-install --name vm01 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros01.img \
  --import
# virt-install --name vm02 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros02.img \
  --import
2. Manage VMs with commands
(1) List VMs
# virsh list --all
(2) Start VMs
# virsh start vm01
# virsh start vm02
(3) Reboot a VM
# virsh reboot vm01
(4) Shut down VMs
# virsh shutdown vm01
# virsh shutdown vm02
(5) Force power-off
# virsh destroy vm01
# virsh destroy vm02
(6) Set vm02 to autostart with the host
# virsh autostart vm02
Reboot the host:
# reboot
List the VMs:
# virsh list --all
(7) Disable autostart
# virsh autostart --disable vm02
3. Connect to a VM with virsh console
(1) Start vm01
# virsh start vm01
(2) Connect to vm01
# virsh console vm01
(3) Set the IP address
# sudo vi /etc/network/interfaces
auto eth0
iface eth0 inet static
address 192.168.7.124
netmask 255.255.255.0
gateway 192.168.7.1
(4) Exit the console
Press Ctrl+]
Lab 2-4 Virtual Machine Storage (micro-course video: 25 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "基础服务" (base services) snapshot of controller; set CD/DVD1 to use openstack.iso and CD/DVD2 to use CentOS-7-x86_64-DVD-2009.iso as virtual drives.
(3) Install the software
# yum install -y qemu-kvm libvirt virt-install bridge-utils qemu-img
# systemctl start libvirtd
# systemctl enable libvirtd
(4) Quickly create two VMs
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros01.img
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros02.img
# chown qemu:qemu /var/lib/libvirt/images/cirros01.img
# chown qemu:qemu /var/lib/libvirt/images/cirros02.img
# chmod 600 /var/lib/libvirt/images/cirros01.img
# chmod 600 /var/lib/libvirt/images/cirros02.img
# virt-install --name vm01 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros01.img \
  --import
# virt-install --name vm02 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros02.img \
  --import
2. Manage storage pools with virsh
List the current storage pools:
# virsh pool-list --all
Create a directory and define a storage pool:
# mkdir /data
# virsh pool-define-as data --type dir --target /data
# virsh pool-list --all
 Name   State      Autostart
-----------------------------
 data   inactive   no
Build the storage pool:
# virsh pool-build data
# virsh pool-list --all
Start the storage pool:
# virsh pool-start data
# virsh pool-list --all
 Name   State    Autostart
---------------------------
 data   active   no
Set the storage pool to start automatically:
# virsh pool-autostart data
# virsh pool-list --all
Query the storage pool:
# virsh pool-info data
# virsh pool-dumpxml data
3. Manage volumes with virsh
Create volumes and query them:
# virsh vol-create-as data vol1 1G
# virsh vol-create-as data vol2 1G
# virsh vol-list data --details
# virsh vol-dumpxml vol1 data
Use a volume:
(1) Stop vm01
# virsh destroy vm01
(2) Edit vm01 and attach volume vol1
# virsh edit vm01
<disk type='file' device='disk'>
  <driver name='qemu' type='raw'/>
  <source file='/data/vol1'/>
  <target dev='hdb' bus='ide'/>
  <address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
(3) Start vm01
# virsh start vm01
(4) Connect with the console and check the volume
# virsh console vm01
# lsblk
Delete a volume:
# virsh vol-delete vol2 data
4. Manage volumes with qemu-img
(1) Create a volume
# qemu-img create -f qcow2 /var/lib/libvirt/images/vol1.qcow2 1G
# ls /var/lib/libvirt/images
# qemu-img info /var/lib/libvirt/images/vol1.qcow2
(2) Convert the volume format
# qemu-img convert -f qcow2 -O raw \
  /var/lib/libvirt/images/vol1.qcow2 /var/lib/libvirt/images/vol1.img
# ls /var/lib/libvirt/images
(3) Resize a volume
# qemu-img resize /var/lib/libvirt/images/vol1.qcow2 2G
5. Manage snapshots with virsh
(1) Stop vm01
# virsh destroy vm01
(2) Create a snapshot
# virsh snapshot-create-as --domain vm01 --name snap1
# virsh snapshot-list vm01
(3) Start vm01
# virsh start vm01
(4) Connect to vm01 with the console and create the file a.txt
# virsh console vm01
Connected to domain vm01
Escape character is ^]
cirros login: cirros
Password:
$ ls
$ touch a.txt
$ ls
a.txt
(5) Revert to the snapshot
# virsh destroy vm01
# virsh snapshot-revert vm01 snap1
(6) Reconnect with virsh console
# virsh start vm01
Domain vm01 started
# virsh console vm01
Connected to domain vm01
Escape character is ^]
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
$ ls
(7) Delete the snapshot
# virsh snapshot-delete vm01 snap1
Domain snapshot snap1 deleted
# virsh snapshot-list vm01
6. Manage snapshots with qemu-img
(1) Stop vm01
# virsh destroy vm01
(2) Create a snapshot
# qemu-img snapshot -c snap1 /var/lib/libvirt/images/cirros01.img
(3) List snapshots
# qemu-img snapshot -l /var/lib/libvirt/images/cirros01.img
(4) Start vm01, connect with virsh console, and create the file a.txt
# virsh start vm01
Domain vm01 started
# virsh console vm01
Connected to domain vm01
Escape character is ^]
cirros login: cirros
Password:
$ ls
$ touch a.txt
$ ls
a.txt
(5) Revert the snapshot
# virsh destroy vm01
# qemu-img snapshot -a snap1 /var/lib/libvirt/images/cirros01.img
(6) Reconnect with virsh console; a.txt is gone
# virsh start vm01
Domain vm01 started
# virsh console vm01
Connected to domain vm01
Escape character is ^]
login as 'cirros' user. default password: 'cubswin:)'. use 'sudo' for root.
cirros login: cirros
Password:
$ ls
(7) Delete the snapshot
# qemu-img snapshot -d snap1 /var/lib/libvirt/images/cirros01.img
# qemu-img snapshot -l /var/lib/libvirt/images/cirros01.img
Lab 2-5 Virtual Machine Networking (micro-course video: 20 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "基础服务" (base services) snapshot of controller; set CD/DVD1 to use openstack.iso and CD/DVD2 to use CentOS-7-x86_64-DVD-2009.iso as virtual drives.
(3) Install the software
# yum install -y qemu-kvm libvirt virt-install bridge-utils qemu-img
# yum -y install openvswitch libibverbs
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
# systemctl start libvirtd
# systemctl enable libvirtd
(4) Quickly create two VMs
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros01.img
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros02.img
# chown qemu:qemu /var/lib/libvirt/images/cirros01.img
# chown qemu:qemu /var/lib/libvirt/images/cirros02.img
# chmod 600 /var/lib/libvirt/images/cirros01.img
# chmod 600 /var/lib/libvirt/images/cirros02.img
# virt-install --name vm01 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros01.img \
  --import
# virt-install --name vm02 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros02.img \
  --import
(5) Enable Linux packet forwarding
# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
# sysctl -p
2. Virtual networks
(1) Create a NAT-mode virtual network
Create the configuration file:
# vi /etc/libvirt/qemu/networks/net1.xml
<network>
  <name>net1</name>
  <uuid>8fbb6cc4-7f6a-4dea-ab42-5cf8f9f6305d</uuid>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:77:fa:81'/>
  <ip address='192.168.200.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.200.2' end='192.168.200.254'/>
    </dhcp>
  </ip>
</network>
Create the virtual network:
# virsh net-create /etc/libvirt/qemu/networks/net1.xml
Start the network:
# virsh net-start net1
(2) Create a routed-mode virtual network
Create the configuration file:
# vi /etc/libvirt/qemu/networks/net2.xml
<network>
  <name>net2</name>
  <uuid>0fc53334-3fdd-47e9-a2f0-0d26c6e0e47c</uuid>
  <forward dev='ens33' mode='route'>
    <interface dev='ens33'/>
  </forward>
  <bridge name='virbr2' stp='on' delay='0'/>
  <mac address='52:54:00:ef:4c:56'/>
  <domain name='net2'/>
  <ip address='192.168.201.1' netmask='255.255.255.0'>
  </ip>
  <route family='ipv4' address='192.168.1.0' prefix='24' gateway='192.168.201.100'/>
</network>
Create the virtual network:
# virsh net-create /etc/libvirt/qemu/networks/net2.xml
Start the network:
# virsh net-start net2
(3) Create an isolated-mode virtual network
Create the configuration file:
# vi /etc/libvirt/qemu/networks/net3.xml
<network>
  <name>net3</name>
  <uuid>fc2d7b80-712c-44c4-8f60-e8f87da748f4</uuid>
  <bridge name='virbr3' stp='on' delay='0'/>
  <mac address='52:54:00:3f:d5:cc'/>
  <domain name='net3'/>
  <ip address='192.168.124.1' netmask='255.255.255.0'>
  </ip>
</network>
(4) List the virtual networks
# virsh net-list [--all]
(5) Configure a VM to use a virtual network
Edit the VM configuration so that vm01 uses net1:
# virsh edit vm01
<interface type='network'>
  <mac address='52:54:00:0a:7e:8a'/>
  <source network='net1'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
3. Bridged networking
(1) Create a bridge
# brctl addbr br1
(2) Configure a VM to use the bridge
Edit the VM configuration so that vm01 uses br1:
# virsh edit vm01
<interface type='bridge'>
  <mac address='52:54:00:0a:7e:8a'/>
  <source bridge='br1'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
4. Physical networking
Configure a VM to use a physical NIC directly.
Edit the VM configuration so that vm01 uses ens32:
# virsh edit vm01
<interface type='direct'>
  <mac address='52:54:00:0a:7e:8a'/>
  <source dev='ens32' mode='bridge'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
5. Open vSwitch bridges
(1) Create an Open vSwitch bridge
# ovs-vsctl add-br ovs-br0
(2) Use OVS via bridging
Edit the VM's configuration following this example:
<interface type='bridge'>
  <mac address='52:54:00:71:b1:b6'/>
  <source bridge='ovs-br0'/>
  <virtualport type='openvswitch'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
(3) Use OVS via a virtual network
Create a virtual network configuration file like the following:
# vi /etc/libvirt/qemu/networks/ovs-br0.xml
<network>
  <name>ovs-br0</name>
  <forward mode='bridge'/>
  <bridge name='ovs-br0'/>
  <virtualport type='openvswitch'/>
</network>
Define, start, and autostart the virtual network:
# virsh net-define /etc/libvirt/qemu/networks/ovs-br0.xml
# virsh net-start ovs-br0
# virsh net-autostart ovs-br0
Configure a VM to use the virtual network ovs-br0:
<interface type='network'>
  <mac address='52:54:00:0a:7e:8a'/>
  <source network='ovs-br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
Lab 2-6 VXLAN (micro-course video: 22 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor".
Set the VMnet8 subnet IP to 192.168.9.0/24.
Set the VMnet1 subnet IP to 192.168.30.0/24.
(2) Virtual host preparation
Clone a virtual machine from the "基础服务" (base services) snapshot of controller; set CD/DVD1 to use openstack.iso and CD/DVD2 to use CentOS-7-x86_64-DVD-2009.iso as virtual drives.
Clone a virtual machine from the "基础服务" snapshot of compute.
(3) Install the software (on both hosts)
# yum install -y qemu-kvm libvirt virt-install bridge-utils qemu-img
# yum -y install openvswitch libibverbs
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
# systemctl start libvirtd
# systemctl enable libvirtd
(4) Quickly create a VM (on both hosts)
On controller:
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros01.img
# chown qemu:qemu /var/lib/libvirt/images/cirros01.img
# chmod 600 /var/lib/libvirt/images/cirros01.img
# virt-install --name vm01 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros01.img \
  --import
On compute:
# scp controller:/mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros01.img
# chown qemu:qemu /var/lib/libvirt/images/cirros01.img
# chmod 600 /var/lib/libvirt/images/cirros01.img
# virt-install --name vm01 \
  --memory 1024 \
  --vcpus 1 \
  --network bridge=virbr0 \
  --disk /var/lib/libvirt/images/cirros01.img \
  --import
(5) Enable Linux packet forwarding (on both hosts)
# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
# sysctl -p
2. Connect Linux bridges on different hosts with VXLAN
(1) Create bridge br0 on each host
# brctl addbr br0
# ip link set dev br0 up
(2) Bridge the VM's network onto br0
# virsh destroy vm01
# virsh edit vm01
<interface type='bridge'>
  <mac address='52:54:00:0a:7e:8a'/>
  <source bridge='br0'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
# virsh start vm01
Note: although the two VMs run on different hosts, they must not share a MAC address.
(3) Start the VMs and set their IP addresses to 10.1.1.1/24 and 10.1.1.2/24
On controller:
# virsh console vm01
$ sudo ip a add 10.1.1.1/24 dev eth0
On compute:
# virsh console vm01
$ sudo ip a add 10.1.1.2/24 dev eth0
(4) Build the VXLAN tunnel
On controller:
# ip link add vxlan0 type vxlan id 100 remote 192.168.9.101 dstport 4789
# ip link set dev vxlan0 up
# brctl addif br0 vxlan0
On compute:
# ip link add vxlan0 type vxlan id 100 remote 192.168.9.100 dstport 4789
# ip link set dev vxlan0 up
# brctl addif br0 vxlan0
Note: remote is the peer host's IP address (in this lab, 192.168.9.100 is controller's address), and the VXLAN ID must be the same on both ends.
(5) Test
On controller:
# virsh console vm01
$ ping 10.1.1.2
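The note above says the two cloned vm01 instances must not share a MAC address. One way to pick a fresh one is to randomize the last three octets under the 52:54:00 prefix conventionally used for QEMU/KVM guests (a sketch; any locally administered unicast MAC would also work):

```shell
#!/bin/bash
# Generate a random MAC in the QEMU/KVM 52:54:00:xx:xx:xx range.
random_mac() {
    printf '52:54:00:%02x:%02x:%02x\n' \
        $(( RANDOM % 256 )) $(( RANDOM % 256 )) $(( RANDOM % 256 ))
}
random_mac
```

Paste the generated value into the <mac address='…'/> element of one of the two VMs before starting it.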
3. Connect OVS bridges with VXLAN
(1) Create bridge ovs-br0 on each host
# ovs-vsctl add-br ovs-br0
(2) Bridge the VM's network onto ovs-br0
# virsh destroy vm01
# virsh edit vm01
<interface type='bridge'>
  <mac address='52:54:00:71:b1:b6'/>
  <source bridge='ovs-br0'/>
  <virtualport type='openvswitch'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
# virsh start vm01
Note: although the two VMs run on different hosts, they must not share a MAC address.
(3) Start the VMs and set their IP addresses
On controller:
# virsh console vm01
$ sudo ip a add 10.1.1.1/24 dev eth0
On compute:
# virsh console vm01
$ sudo ip a add 10.1.1.2/24 dev eth0
(4) Build the VXLAN tunnel
On controller:
# ovs-vsctl add-port ovs-br0 vxlan1 -- set interface vxlan1 type=vxlan \
  options:remote_ip=192.168.9.101
On compute:
# ovs-vsctl add-port ovs-br0 vxlan1 -- set interface vxlan1 type=vxlan \
  options:remote_ip=192.168.9.100
Note: remote_ip is the peer host's IP address.
(5) Test
On controller:
# virsh console vm01
$ ping 10.1.1.2
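VXLAN wraps each guest frame in outer Ethernet, IPv4, UDP, and VXLAN headers, which over IPv4 adds roughly 50 bytes. On an underlay with a standard 1500-byte MTU, guest interfaces are therefore often set to an MTU of 1450 so encapsulated packets are not fragmented. The arithmetic:

```shell
#!/bin/bash
# Outer headers over IPv4: Ethernet 14 + IPv4 20 + UDP 8 + VXLAN 8 = 50 bytes.
overhead=$(( 14 + 20 + 8 + 8 ))
echo "$overhead"               # prints: 50
echo "$(( 1500 - overhead ))"  # guest MTU: 1450
```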
实训2-7 GRE实训微课视频5分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“基础服务”克隆一个虚拟机设置CD/DVD1使用openstack.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
从虚拟机compute快照“基础服务”克隆一个虚拟机。
3安装软件两个主机
# yum install -y qemu-kvm libvirt virt-install bridge-utils qemu-img
# yum -y install openvswitch libibverbs
# systemctl enable openvswitch.service
# systemctl start openvswitch.service
# systemctl start libvirtd
# systemctl enable libvirtd
4快速生成虚拟机两个主机
controller
# cp /mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros01.img
# chown qemu:qemu /var/lib/libvirt/images/cirros01.img
# chmod 600 /var/lib/libvirt/images/cirros01.img
# virt-install --name vm01 \
--memory 1024 \
--vcpus 1 \
--network bridgevirbr0 \
--disk /var/lib/libvirt/images/cirros01.img \
--import
compute
# scp controller:/mnt/openstack/images/cirros* /var/lib/libvirt/images/cirros01.img
# chown qemu:qemu /var/lib/libvirt/images/cirros01.img
# chmod 600 /var/lib/libvirt/images/cirros01.img
# virt-install --name vm01 \
--memory 1024 \
--vcpus 1 \
--network bridge=virbr0 \
--disk /var/lib/libvirt/images/cirros01.img \
--import
5开启Linux包转发功能（两个主机）
# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
# sysctl -p
2.基于GRE的OVS网桥连接
1两台主机分别创建网桥ovs-br1。
# ovs-vsctl add-br ovs-br1
2将虚拟机网络桥接到ovs-br1。
# virsh destroy vm01
# virsh edit vm01
<interface type='bridge'>
  <mac address='52:54:00:71:b1:b6'/>
  <source bridge='ovs-br1'/>
  <virtualport type='openvswitch'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
# virsh start vm01
说明：两个虚拟机虽然位于不同的主机，但MAC地址不能相同。
3启动虚拟机分别设置IP地址
controller
# virsh console vm01
$ sudo ip a add 10.1.1.1/24 dev eth0
compute
# virsh console vm01
$ sudo ip a add 10.1.1.2/24 dev eth0
4建立GRE隧道
controller
# ovs-vsctl add-port ovs-br1 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.9.101
说明：remote_ip设置为对端主机的IP地址。
compute
# ovs-vsctl add-port ovs-br1 gre0 -- set interface gre0 type=gre options:remote_ip=192.168.9.100
说明：remote_ip设置为对端主机的IP地址。
5测试
controller
# virsh console vm01
$ ping 10.1.1.2
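与VXLAN类似，GRE隧道也有封装开销，但更小：在承载以太帧且未启用GRE key选项时，大约是外层IP头20 + GRE基本头4 + 内层以太头14字节（启用key选项时GRE头再加4字节）。下面的算式是一个粗略估算：

```shell
# GRE（承载以太帧、未启用key选项）封装开销的粗略估算（字节）
overhead=$((20 + 4 + 14))       # 外层IP头 + GRE基本头 + 内层以太头
echo $overhead                  # 38
echo $((1500 - overhead))       # 1462，虚拟机内网卡建议的MTU上限
```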
实训3-1 OpenStack环境准备实训（微课视频15分钟）
1.VMware网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2.主机准备
1虚拟主机controller
从虚拟机centos2009快照“命令行”克隆一个虚拟机
主机名称controller
CPU开启虚拟化功能
内存8G
CD/DVD增加一个CD/DVD
CD/DVD1使用openstack.iso为虚拟光驱
CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
网卡增加一块“仅主机模式”网卡
从虚拟机centos2009快照“命令行”克隆一个虚拟机
主机名称compute
CPU开启虚拟化功能
内存8G
网卡增加一块“仅主机模式”网卡
CD/DVD删除所有CD/DVD
3.检查CPU是否已开启虚拟化功能
启动controller和compute执行
# grep -E 'svm|vmx' /proc/cpuinfo
4.设置网卡
1controller节点
网卡1NAT模式网卡
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.9.100
GATEWAY=192.168.9.2
DNS1=192.168.9.2
NETMASK=255.255.255.0
网卡2仅主机模式网卡
# vi /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens32
DEVICE=ens32
ONBOOT=yes
配置完成后重启网络服务并测试
# systemctl restart network
# ping www.baidu.com
2compute节点
网卡1NAT模式网卡
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.9.101
GATEWAY=192.168.9.2
DNS1=192.168.9.2
NETMASK=255.255.255.0
网卡2仅主机模式网卡
# vi /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
NAME=ens32
DEVICE=ens32
ONBOOT=yes
配置完成后重启网络服务并测试
# systemctl restart network
# ping www.baidu.com
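配置文件中的 NETMASK=255.255.255.0 与 CIDR 前缀 /24 等价。下面的 shell 算式演示两者的换算关系（仅为帮助理解，并非配置步骤）：

```shell
# 由前缀长度计算点分十进制子网掩码：/24 -> 255.255.255.0
prefix=24
mask=$(( (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF ))
echo $(( (mask >> 24) & 255 )).$(( (mask >> 16) & 255 )).$(( (mask >> 8) & 255 )).$(( mask & 255 ))
```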
5.设置防火墙
controller节点和compute节点都执行
# systemctl stop firewalld
# systemctl disable firewalld
6.设置SELinux
controller节点和compute节点都执行
# setenforce 0
controller节点和compute节点都修改文件/etc/selinux/config
# vi /etc/selinux/config
……
SELINUX=permissive
……
SELINUXTYPE=targeted
7.设置主机名和主机名映射
1设置主机名
controller节点
# hostnamectl set-hostname controller
# hostnamectl
compute节点
# hostnamectl set-hostname compute
# hostnamectl
2设置主机名映射
controller节点和compute节点一样设置
# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.9.100 controller
192.168.9.101 compute
设置完成后检查两个节点都要测试
# ping controller
# ping compute
8.设置内核参数
修改文件/etc/sysctl.conf增加或修改
net.ipv4.ip_forward = 1
修改完成后执行命令
# sysctl -p
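后面多个实训都要往/etc/sysctl.conf追加参数，重复执行容易产生重复行。下面是一个幂等追加的脚本示意（set_sysctl为本示例假设的函数名，为安全起见示例写入临时文件而非真实配置文件）：

```shell
# 幂等地追加内核参数：同名键已存在时不再重复写入
CONF=$(mktemp)                     # 演示用临时文件；实际使用时换成 /etc/sysctl.conf
set_sysctl() {
    grep -q "^$1" "$CONF" || echo "$1 = $2" >> "$CONF"
}
set_sysctl net.ipv4.ip_forward 1
set_sysctl net.ipv4.ip_forward 1   # 第二次调用是空操作
grep -c 'ip_forward' "$CONF"       # 输出 1，说明没有产生重复行
```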
实训3-2 基础服务和软件安装（微课视频15分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“环境备好”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“环境备好”克隆一个虚拟机。
2.配置yum源
1controller节点
挂载CD/DVD（注：挂载前确认一下sr0和sr1哪个是centos、哪个是openstack）
# mkdir /mnt/centos
# mkdir /mnt/openstack
# mount -o loop /dev/sr0 /mnt/centos
# mount -o loop /dev/sr1 /mnt/openstack
删除/etc/yum.repos.d/下的所有文件
新建/etc/yum.repos.d/openstack.repo文件内容如下
[centos]
name=centos7 2009
baseurl=file:///mnt/centos
gpgcheck=0
enabled=1

[openstack]
name=openstack rocky
baseurl=file:///mnt/openstack
gpgcheck=0
enabled=1
配置完成后执行下面的命令测试
# yum clean all
# yum list
设置vsftpd服务
# yum -y install vsftpd
# vi /etc/vsftpd/vsftpd.conf
anon_root=/mnt
# systemctl enable vsftpd
# systemctl start vsftpd
2compute节点
删除/etc/yum.repos.d/下的所有文件
新建/etc/yum.repos.d/openstack.repo文件内容如下
[centos]
name=centos7 2009
baseurl=ftp://controller/centos
gpgcheck=0
enabled=1

[openstack]
name=openstack rocky
baseurl=ftp://controller/openstack
gpgcheck=0
enabled=1
配置完成后执行下面的命令测试。
# yum clean all
# yum list
3. 配置时间服务
1Controller节点
安装软件
# yum -y install chrony
修改配置文件
# vi /etc/chrony.conf
allow all
local stratum 10
完成后启动服务
# systemctl restart chronyd
# systemctl enable chronyd
2Compute节点
安装软件
# yum -y install chrony
修改配置文件
# vi /etc/chrony.conf
server 192.168.9.100 iburst
完成后启动服务
# systemctl restart chronyd
# systemctl enable chronyd
测试
# chronyc sources
4.基础软件安装
在Controller节点和Compute节点都要安装
# yum install -y python-openstackclient
# yum install -y openstack-selinux
5.安装数据库
1controller节点安装mariadb
# yum install -y mariadb mariadb-server python2-PyMySQL
2数据库修改mariadb配置
# vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 192.168.9.100
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
3启动服务
# systemctl enable mariadb.service
# systemctl start mariadb.service
4安全设置
# mysql_secure_installation
Enter current password for root (enter for none):
Set root password? [Y/n] y
New password:
Re-enter new password:
Remove anonymous users? [Y/n] y
Disallow root login remotely? [Y/n] n
Remove test database and access to it? [Y/n] y
Reload privilege tables now? [Y/n] y
6.消息服务
1controller节点安装rabbitmq
# yum -y install rabbitmq-server
2启动服务
# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service
3增加用户和授权
# rabbitmqctl add_user openstack 000000
Creating user "openstack"
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/"
说明：三个".*"依次授予openstack用户的configure、write、read权限。
7.缓存服务memcached
1Controller节点安装memcached
# yum -y install memcached python-memcached
2修改配置
# vi /etc/sysconfig/memcached
OPTIONS="-l 127.0.0.1,::1,controller"
3启动服务
# systemctl enable memcached.service
# systemctl start memcached.service
实训3-3 安装和配置Keystone（微课视频20分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“基础服务”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“基础服务”克隆一个虚拟机。
2. 安装和配置keystone
1创建数据库
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE keystone;
2创建用户keystone并授权
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* \
TO 'keystone'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* \
TO 'keystone'@'%' IDENTIFIED BY '000000';
3安装keystone
# yum install -y openstack-keystone httpd mod_wsgi
4修改配置
# vi /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:000000@controller/keystone

[token]
provider = fernet
5初始化数据库
# su -s /bin/sh -c "keystone-manage db_sync" keystone
6初始化keystone
# keystone-manage fernet_setup --keystone-user keystone \
--keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone \
--keystone-group keystone
# keystone-manage bootstrap --bootstrap-password 000000 \
--bootstrap-admin-url http://controller:5000/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
7修改httpd的配置
# vi /etc/httpd/conf/httpd.conf
ServerName controller
# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
8启动httpd服务
# systemctl enable httpd.service
# systemctl start httpd.service
9设置环境变量
# vi ~/.bashrc
export OS_USERNAME=admin
export OS_PASSWORD=000000
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
# logout
10创建一个项目
# openstack project create --domain default --description "Service Project" service
11测试
# openstack project list
3.设置openstack命令的自动补全功能
1安装bash-completion软件
# yum -y install bash-completion
2修改~/.bashrc文件在最后加上
source <(openstack complete --shell bash)
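上面一行用到了bash的进程替换语法：<(命令) 会把命令的输出当作一个临时文件提供给source读取。下面用一个无副作用的小例子演示该语法（GREETING为示例变量名，与OpenStack无关）：

```shell
# 进程替换示意：source 读取的是 echo 输出的内容，等价于 source 一个临时文件
source <(echo 'GREETING=hello')
echo "$GREETING"    # hello
```

注意该语法是bash特性，在/bin/sh（dash）下不可用。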
实训3-4 安装Glance（微课视频20分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“keystone-installed”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“keystone-installed”克隆一个虚拟机。
2.Glance安装和配置
1创建数据库
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE glance;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY '000000';
2创建用户、角色和服务
# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
-------------------------------------------------------
| Field | Value |
-------------------------------------------------------
| domain_id | default |
| enabled | True |
| id | 326a6b23f0c44b68b9d1f1bae2626d55 |
| name | glance |
| options | {} |
| password_expires_at | None |
-------------------------------------------------------
# openstack role add --project service --user glance admin
# openstack service create --name glance --description "OpenStack Image" image
-----------------------------------------------
| Field | Value |
-----------------------------------------------
| description | OpenStack Image |
| enabled | True |
| id | c3ff2e79ef51415b9216ada990a0769f |
| name | glance |
| type | image |
-----------------------------------------------
3创建Endpoint
# openstack endpoint create --region RegionOne \
image public http://controller:9292
------------------------------------------------
| Field | Value |
------------------------------------------------
| enabled | True |
| id | 0de94e463329421e90723de928b8ec5b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | c3ff2e79ef51415b9216ada990a0769f |
| service_name | glance |
| service_type | image |
| url | http://controller:9292 |
------------------------------------------------
# openstack endpoint create --region RegionOne \
image internal http://controller:9292
# openstack endpoint create --region RegionOne \
image admin http://controller:9292
4安装软件
# yum -y install openstack-glance
5修改配置
修改/etc/glance/glance-api.conf
# vi /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 000000

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
修改/etc/glance/glance-registry.conf
# vi /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:000000@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 000000

[paste_deploy]
flavor = keystone
6初始化数据库
# su -s /bin/sh -c "glance-manage db_sync" glance
7使能和启动服务
# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service
8创建镜像
# glance image-create --name centos7 --disk-format qcow2 \
  --container-format bare --progress \
  /mnt/openstack/images/Centos-7-x86_64-2009.qcow2
# glance image-create --name cirros --disk-format qcow2 \
  --container-format bare --progress \
  /mnt/openstack/images/cirros-0.3.3-x86_64-disk.img
# glance image-list
-----------------------------------------------
| ID | Name |
-----------------------------------------------
| bf327e17-6e4f-43b0-b053-99d6c135de10 | centos7 |
| f05beacf-2783-42bc-82ba-66041e28eca0 | cirros |
-----------------------------------------------
实训3-5 安装和配置Nova（微课视频20分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“Glance-installed”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“Glance-installed”克隆一个虚拟机。
2.controller节点
1创建数据库
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;
MariaDB [(none)]> CREATE DATABASE placement;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' \
IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* \
TO 'nova'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* \
TO 'nova'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* \
TO 'nova'@'%' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* \
TO 'placement'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* \
TO 'placement'@'%' IDENTIFIED BY '000000';
2创建用户、角色和服务
# openstack user create --domain default --password-prompt nova
# openstack role add --project service --user nova admin
# openstack service create --name nova --description "OpenStack Compute" compute
# openstack user create --domain default --password-prompt placement
# openstack role add --project service --user placement admin
# openstack service create --name placement --description "Placement API" placement
3创建Endpoint
# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
# openstack endpoint create --region RegionOne placement public http://controller:8778
# openstack endpoint create --region RegionOne placement internal http://controller:8778
# openstack endpoint create --region RegionOne placement admin http://controller:8778
4安装软件
# yum -y install openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api
5修改配置
修改/etc/nova/nova.conf
# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.9.100
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
connection = mysql+pymysql://nova:000000@controller/nova_api

[database]
connection = mysql+pymysql://nova:000000@controller/nova

[placement_database]
connection = mysql+pymysql://placement:000000@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 000000

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000
修改/etc/httpd/conf.d/00-nova-placement-api.conf
# vi /etc/httpd/conf.d/00-nova-placement-api.conf
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
重启httpd
# systemctl restart httpd
6初始化数据库
# su -s /bin/sh -c "nova-manage api_db sync" nova
# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
# su -s /bin/sh -c "nova-manage db sync" nova
# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
7使能和启动服务
# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
3.compute节点
1安装软件
# yum -y install openstack-nova-compute
2修改配置
修改/etc/nova/nova.conf
# vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:000000@controller
my_ip = 192.168.9.101
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 000000

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 000000

[libvirt]
virt_type = qemu
3使能和启动服务
# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service
4.将compute节点加入集群
1在controller节点执行
列出计算节点:
# openstack compute service list --service nova-compute
将计算节点加入cell
# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
查询cell中的计算节点
# nova-manage cell_v2 list_hosts
实训3-6 安装和配置Neutron（微课视频30分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“Nova-installed”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“Nova-installed”克隆一个虚拟机。
2.controller节点
1创建数据库
# mysql -uroot -p
MariaDB [(none)]> CREATE DATABASE neutron;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* \
TO 'neutron'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* \
TO 'neutron'@'%' IDENTIFIED BY '000000';
2创建用户、角色和服务
# openstack user create --domain default --password-prompt neutron
# openstack role add --project service --user neutron admin
# openstack service create --name neutron --description "OpenStack Networking" network
3创建Endpoint
# openstack endpoint create --region RegionOne network public http://controller:9696
# openstack endpoint create --region RegionOne network internal http://controller:9696
# openstack endpoint create --region RegionOne network admin http://controller:9696
4安装软件
# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
# yum -y install libibverbs
5修改配置
修改/etc/neutron/neutron.conf
# vi /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:000000@controller/neutron

[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000

[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
修改/etc/neutron/plugins/ml2/ml2_conf.ini
# vi /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan,local
tenant_network_types = vxlan,local
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vlan]
network_vlan_ranges = provider:100:200

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens32

[vxlan]
enable_vxlan = true
local_ip = 192.168.9.100
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
加载模块
# lsmod|grep br_netfilter
# modprobe br_netfilter
修改内核参数
# vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
修改完后执行
# sysctl -p
修改/etc/neutron/l3_agent.ini
# vi /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
修改/etc/neutron/dhcp_agent.ini 设置dhcp服务
# vi /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
修改/etc/neutron/metadata_agent.ini设置metadata服务
# vi /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 000000
修改控制节点的/etc/nova/nova.conf让Nova使用Neutron
# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
service_metadata_proxy = true
metadata_proxy_shared_secret = 000000
建立符号链接
# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
6初始化数据库
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini \
upgrade head" neutron
7使能和启动服务
# systemctl restart openstack-nova-api.service
# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service
3.compute节点
1安装软件
# yum -y install openstack-neutron-linuxbridge ebtables ipset
# yum -y install libibverbs
2修改配置
修改/etc/neutron/neutron.conf
# vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 000000

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
修改/etc/neutron/plugins/ml2/linuxbridge_agent.ini
# vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens32

[vxlan]
enable_vxlan = true
local_ip = 192.168.9.101
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
加载模块
# lsmod|grep br_netfilter
# modprobe br_netfilter
修改内核参数
# vi /etc/sysctl.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
修改完后执行
# sysctl -p
修改/etc/nova/nova.conf让Nova使用Neutron
# vi /etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = 000000
3使能和启动服务
# systemctl restart openstack-nova-compute.service
# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service
实训3-7 安装和配置Dashboard（微课视频10分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“Neutron-installed”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“Neutron-installed”克隆一个虚拟机。
2.controller节点安装和配置
Dashboard只在控制节点安装。
1安装软件
# yum -y install openstack-dashboard
2修改配置
修改/etc/openstack-dashboard/local_settings
# vi /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*', 'two.example.com']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    }
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin"
TIME_ZONE = "Asia/Shanghai"
修改/etc/httpd/conf.d/openstack-dashboard.conf
# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}
3重启httpd和memcached服务
# systemctl restart httpd.service memcached.service
实训3-8 使用dashboard创建虚拟机（微课视频15分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“Dashboard-installed”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“Dashboard-installed”克隆一个虚拟机。
2.创建实例
1登录Dashboard
在浏览器（建议使用Chrome）地址栏输入http://192.168.9.100/dashboard/。
2创建镜像
先用下列命令查询镜像
# glance image-list
如果没有镜像则使用下面的命令创建镜像
# glance image-create --name centos7 --disk-format qcow2 \
  --container-format bare --progress \
  /mnt/openstack/images/CentOS_7.2_x86_64.qcow2
# glance image-create --name cirros --disk-format qcow2 \
  --container-format bare --progress \
  /mnt/openstack/images/cirros-0.3.3-x86_64-disk.img
3创建实例类型
在Dashboard左边栏依次选取“管理员”“计算”“实例类型”。
创建实例类型f1：1个VCPU、512M内存、1G磁盘。
4创建网络
1创建外网和内网
在Dashboard左边栏依次选取“管理员”“网络”“网络”在右边栏选创建网络。
创建外网wnet;
创建内网nnet。
2为外网和内网创建子网
创建外网子网。在网络列表中点击“wnet”
创建内网子网。在网络列表中点击“nnet”。
5创建安全组
创建安全组：在Dashboard左边栏依次选取“项目”“网络”“安全组”，创建安全组securitygroup1。
管理安全组规则协议分别选“所有ICMP协议”“所有TCP协议”“所有UDP协议”方向分别选“入口”“出口”共创建六条规则。
6创建路由器
创建路由器在Dashboard左边栏依次选取“项目”“网络”“路由”在右边页面点击“新建路由”
增加接口在页面中点击新创建的路由器名称在页面中依次选取“接口”“增加接口”选择内网创建接口。
7查询网络的拓扑结构
在Dashboard左边栏依次选取“项目”“网络”“网络拓扑”。
8创建实例虚拟机
在Dashboard左边栏依次选取“项目”“计算”“实例”。
9绑定浮动IP
在实例列表右边的下拉框选“绑定浮动IP”。
3.访问实例
1在Dashboard中访问实例
修改C:\Windows\System32\drivers\etc\hosts文件加上
192.168.9.100 controller
在实例列表中点击实例名称，进入“控制台”访问实例。
2在SecureCRT中访问实例
打开SecureCRT用实例的外网IP地址连接实例即可。
实训3-9 安装和配置Cinder（微课视频25分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“Dashboard-installed”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“Dashboard-installed”克隆一个虚拟机。
2.Controller节点的安装和配置
1创建数据库
# mysql -u root -p
MariaDB [(none)]> CREATE DATABASE cinder;
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* \
TO 'cinder'@'localhost' IDENTIFIED BY '000000';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* \
TO 'cinder'@'%' IDENTIFIED BY '000000';
2创建用户、角色和服务
# openstack user create --domain default --password-prompt cinder
# openstack role add --project service --user cinder admin
# openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2
# openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
3创建Endpoints
# openstack endpoint create --region RegionOne \
volumev2 public http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev2 internal http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev2 admin http://controller:8776/v2/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev3 public http://controller:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev3 internal http://controller:8776/v3/%\(project_id\)s
# openstack endpoint create --region RegionOne \
volumev3 admin http://controller:8776/v3/%\(project_id\)s
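端点URL末尾的%\(project_id\)s是模板占位符（反斜杠只是为了防止shell解析括号），Keystone返回端点时会把它替换成请求者的项目ID。下面用sed模拟这一替换过程，demo_project_id是假设的示例值：

```shell
# 模拟 Keystone 对端点URL模板中 %(project_id)s 占位符的替换（仅为示意）
template='http://controller:8776/v3/%(project_id)s'
project_id='demo_project_id'     # 假设的示例项目ID
echo "$template" | sed "s/%(project_id)s/$project_id/"
# 输出 http://controller:8776/v3/demo_project_id
```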
4安装软件
# yum -y install openstack-cinder
5修改配置
修改/etc/cinder/cinder.conf
# vi /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:000000@controller/cinder

[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.9.100

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 000000

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
6初始化数据库
# su -s /bin/sh -c "cinder-manage db sync" cinder
7配置Nova使用Cinder
修改Nova配置文件
# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
重启NovaAPI服务
# systemctl restart openstack-nova-api.service
8使能和启动服务
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service
3.存储节点（Compute节点）的安装和配置
1配置逻辑卷
安装LVM软件
# yum install lvm2 device-mapper-persistent-data
启动和使能LVM服务
# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
硬盘分区将/dev/sdb分成三个区
# fdisk /dev/sdb
创建逻辑卷组
# pvcreate /dev/sdb1
# vgcreate cinder-volumes /dev/sdb1
修改LVM配置
# vi /etc/lvm/lvm.conf
devices {
...
        filter = [ "a/sdb1/", "r/.*/" ]
}
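filter中a/…/表示接受（accept）、r/…/表示拒绝（reject），规则按顺序匹配、首条命中即生效，因此这里只有sdb1会被LVM扫描。下面用shell的case粗略模拟这条规则的匹配效果（仅帮助理解，并非LVM的实际实现）：

```shell
# 模拟 filter = [ "a/sdb1/", "r/.*/" ] 的匹配顺序：先接受 sdb1，再拒绝其余设备
lvm_filter() {
    case "$1" in
        *sdb1*) echo accept ;;   # 对应 a/sdb1/
        *)      echo reject ;;   # 对应 r/.*/
    esac
}
lvm_filter /dev/sdb1   # accept
lvm_filter /dev/sda1   # reject
```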
2安装Cinder软件
# yum -y install openstack-cinder targetcli python-keystone
3修改Cinder配置
# vi /etc/cinder/cinder.conf
[database]
connection = mysql+pymysql://cinder:000000@controller/cinder

[DEFAULT]
transport_url = rabbit://openstack:000000@controller
auth_strategy = keystone
my_ip = 192.168.9.101
enabled_backends = lvm
glance_api_servers = http://controller:9292

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = cinder
password = 000000

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
4使能和启动服务
# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service
4.验证
在控制节点执行
# openstack volume service list
实训3-10 安装和配置Swift（微课视频30分钟）
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机controller快照“cinder-installed”克隆一个虚拟机，设置CD/DVD1使用openstack.iso为虚拟光驱，设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
从虚拟机compute快照“cinder-installed”克隆一个虚拟机。
2.控制节点安装与配置
说明：使用/dev/sdb2和/dev/sdb3作为swift设备。
1创建用户、角色和服务
# openstack user create --domain default --password 000000 swift
# openstack role add --project service --user swift admin
# openstack service create --name swift \
--description "OpenStack Object Storage" object-store
2创建Endpoints
# openstack endpoint create --region RegionOne object-store \
public http://controller:8080/v1/AUTH_%\(project_id\)s
# openstack endpoint create --region RegionOne object-store \
internal http://controller:8080/v1/AUTH_%\(project_id\)s
# openstack endpoint create --region RegionOne object-store \
admin http://controller:8080/v1
3安装软件
# yum -y install openstack-swift-proxy python-swiftclient \
python-keystoneclient python-keystonemiddleware memcached
4配置proxy-server
复制配置文件
# cp /mnt/openstack/swiftconf/proxy-server.conf /etc/swift/proxy-server.conf
# chmod 640 /etc/swift/proxy-server.conf
修改配置文件/etc/swift/proxy-server.conf
# vi /etc/swift/proxy-server.conf
[DEFAULT]
bind_port = 8080
user = swift
swift_dir = /etc/swift

[pipeline:main]
pipeline = catch_errors gatekeeper healthcheck proxy-logging cache container_sync bulk ratelimit authtoken keystoneauth container-quotas account-quotas slo dlo versioned_writes proxy-logging proxy-server

[app:proxy-server]
use = egg:swift#proxy
account_autocreate = True

[filter:keystoneauth]
use = egg:swift#keystoneauth
operator_roles = admin,user

[filter:authtoken]
paste.filter_factory = keystonemiddleware.auth_token:filter_factory
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_id = default
user_domain_id = default
project_name = service
username = swift
password = 000000
delay_auth_decision = True

[filter:cache]
use = egg:swift#memcache
memcache_servers = controller:11211
3.对象存储节点Compute的安装与配置
1安装和配置rsyncd
安装软件
# yum install xfsprogs rsync
准备磁盘和挂载
# mkfs.xfs /dev/sdb2
# mkfs.xfs /dev/sdb3
# mkdir -p /srv/node/sdb2
# mkdir -p /srv/node/sdb3
# vi /etc/fstab
/dev/sdb2 /srv/node/sdb2 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
/dev/sdb3 /srv/node/sdb3 xfs noatime,nodiratime,nobarrier,logbufs=8 0 2
# mount /srv/node/sdb2
# mount /srv/node/sdb3
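上面的fstab条目也可以用脚本幂等地追加，避免重复执行时写入重复行。下面是一个示意脚本（假设设备与挂载点同前文；为便于演示，这里用mktemp生成的临时文件代替/etc/fstab）：

```shell
# 示意：幂等地向 fstab 追加 Swift 设备的挂载条目（用临时文件代替 /etc/fstab 演示）
FSTAB=$(mktemp)
add_mount() {
  local dev=$1 mnt=$2
  # 同一设备的条目已存在则跳过，避免重复写入
  grep -q "^$dev " "$FSTAB" || \
    echo "$dev $mnt xfs noatime,nodiratime,nobarrier,logbufs=8 0 2" >> "$FSTAB"
}
add_mount /dev/sdb2 /srv/node/sdb2
add_mount /dev/sdb2 /srv/node/sdb2   # 重复调用不会产生重复行
add_mount /dev/sdb3 /srv/node/sdb3
cat "$FSTAB"
```

实际使用时把FSTAB改为/etc/fstab即可，追加后再执行mount命令挂载。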
配置rsyncd
# vi /etc/rsyncd.conf
uid = swift
gid = swift
log file = /var/log/rsyncd.log
pid file = /var/run/rsyncd.pid
address = 192.168.9.101

[account]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/account.lock

[container]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/container.lock

[object]
max connections = 2
path = /srv/node/
read only = False
lock file = /var/lock/object.lock
启动rsyncd服务
# systemctl enable rsyncd.service
# systemctl start rsyncd.service
2安装Swift组件
安装软件
# yum -y install openstack-swift-account openstack-swift-container \
openstack-swift-object
复制account-server、container-server和object-server的配置文件
# scp controller:/mnt/openstack/swiftconf/account-server.conf /etc/swift/account-server.conf
# chmod 640 /etc/swift/account-server.conf
# scp controller:/mnt/openstack/swiftconf/container-server.conf /etc/swift/container-server.conf
# chmod 640 /etc/swift/container-server.conf
# scp controller:/mnt/openstack/swiftconf/object-server.conf /etc/swift/object-server.conf
# chmod 640 /etc/swift/object-server.conf
修改account-server的配置文件
# vi /etc/swift/account-server.conf
[DEFAULT]
bind_ip = 192.168.9.101
bind_port = 6202
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
修改container-server的配置文件
# vi /etc/swift/container-server.conf
[DEFAULT]
bind_ip = 192.168.9.101
bind_port = 6201
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon container-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
修改object-server的配置文件
# vi /etc/swift/object-server.conf
[DEFAULT]
bind_ip = 192.168.9.101
bind_port = 6200
user = swift
swift_dir = /etc/swift
devices = /srv/node
mount_check = True

[pipeline:main]
pipeline = healthcheck recon object-server

[filter:recon]
use = egg:swift#recon
recon_cache_path = /var/cache/swift
recon_lock_path = /var/lock
准备文件夹
# chown -R swift:swift /srv/node
# mkdir -p /var/cache/swift
# chown -R root:swift /var/cache/swift
# chmod -R 775 /var/cache/swift
3生成环
生成账户环
# cd /etc/swift
# swift-ring-builder account.builder create 8 2 1
# swift-ring-builder account.builder add --region 1 --zone 1 \
--ip 192.168.9.101 --port 6202 --device sdb2 --weight 100
# swift-ring-builder account.builder add --region 1 --zone 1 \
--ip 192.168.9.101 --port 6202 --device sdb3 --weight 100
验证环
# swift-ring-builder account.builder
平衡环
# swift-ring-builder account.builder rebalance
生成容器环
# cd /etc/swift
# swift-ring-builder container.builder create 8 2 1
# swift-ring-builder container.builder add --region 1 --zone 1 \
--ip 192.168.9.101 --port 6201 --device sdb2 --weight 100
# swift-ring-builder container.builder add --region 1 --zone 1 \
--ip 192.168.9.101 --port 6201 --device sdb3 --weight 100
验证环
# swift-ring-builder container.builder
平衡环
# swift-ring-builder container.builder rebalance
生成对象环
# cd /etc/swift
# swift-ring-builder object.builder create 8 2 1
# swift-ring-builder object.builder add --region 1 --zone 1 \
--ip 192.168.9.101 --port 6200 --device sdb2 --weight 100
# swift-ring-builder object.builder add --region 1 --zone 1 \
--ip 192.168.9.101 --port 6200 --device sdb3 --weight 100
验证环
# swift-ring-builder object.builder
平衡环
# swift-ring-builder object.builder rebalance
4分发环
将account.ring.gz、container.ring.gz、 object.ring.gz复制到所有节点包括存储节点和控制节点的 /etc/swift目录下。
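分发环文件可以写成一个循环。下面是一个示意脚本：实际环境中把cp换成scp "$f" "$n:/etc/swift/"，节点列表换成控制节点与各存储节点的主机名；这里用本地临时目录模拟远端节点，便于演示和验证循环逻辑。

```shell
# 示意：把三个环文件分发到所有节点的 /etc/swift（用本地目录模拟远端节点）
SWIFT_DIR=$(mktemp -d)        # 模拟本机的 /etc/swift
touch "$SWIFT_DIR"/account.ring.gz "$SWIFT_DIR"/container.ring.gz "$SWIFT_DIR"/object.ring.gz
DEST_BASE=$(mktemp -d)        # 模拟各节点的根目录
for n in controller compute; do
  mkdir -p "$DEST_BASE/$n"
  for f in "$SWIFT_DIR"/*.ring.gz; do
    cp "$f" "$DEST_BASE/$n/"  # 实际环境：scp "$f" "$n:/etc/swift/"
  done
done
ls "$DEST_BASE/compute"
```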
5最后配置
1复制配置文件
# scp controller:/mnt/openstack/swiftconf/swift.conf /etc/swift/swift.conf
# chmod 640 /etc/swift/swift.conf
2修改配置文件
# vi /etc/swift/swift.conf
[swift-hash]
swift_hash_path_suffix = HASH_PATH_SUFFIX
swift_hash_path_prefix = HASH_PATH_PREFIX

[storage-policy:0]
name = Policy-0
default = yes
3分发配置文件
把swift.conf文件复制到所有节点包括存储节点和控制节点的/etc/swift目录。
# chown -R root:swift /etc/swift
4在控制节点和其它代理节点运行
# systemctl enable openstack-swift-proxy.service memcached.service
# systemctl start openstack-swift-proxy.service memcached.service
5在存储节点运行
# systemctl enable openstack-swift-account.service \
openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service \
openstack-swift-account-replicator.service
# systemctl start openstack-swift-account.service \
openstack-swift-account-auditor.service \
openstack-swift-account-reaper.service \
openstack-swift-account-replicator.service
# systemctl enable openstack-swift-container.service \
openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service \
openstack-swift-container-updater.service
# systemctl start openstack-swift-container.service \
openstack-swift-container-auditor.service \
openstack-swift-container-replicator.service \
openstack-swift-container-updater.service
# systemctl enable openstack-swift-object.service \
openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service \
openstack-swift-object-updater.service
# systemctl start openstack-swift-object.service \
openstack-swift-object-auditor.service \
openstack-swift-object-replicator.service \
openstack-swift-object-updater.service
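上面12个服务单元的enable/start命令也可以用循环统一处理。下面是一个示意脚本：用bash的花括号展开生成全部单元名，并用echo代替真正的systemctl以便演示（systemctl的--now选项等价于enable后再start，这与原文分两条命令的写法效果相同）：

```shell
# 示意：用循环生成对 12 个 Swift 服务单元的 enable 命令（echo 代替 systemctl 演示）
units=$(for unit in \
    openstack-swift-account{,-auditor,-reaper,-replicator} \
    openstack-swift-container{,-auditor,-replicator,-updater} \
    openstack-swift-object{,-auditor,-replicator,-updater}; do
  echo "systemctl enable --now $unit.service"
done)
echo "$units"
echo "$units" | wc -l
```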
4.验证
在控制节点
# swift stat
# openstack container create container1
# openstack object create container1 FILE
# openstack object list container1
# openstack object save container1 FILE
实训3-11 使用Openstack命令创建实例微课视频11分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
直接使用虚拟机controller和compute。controller设置CD/DVD1使用openstack.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
2.创建镜像
1创建镜像
先检查是否有cirros镜像
# openstack image list
如果没有cirros镜像则创建cirros镜像
# openstack image create --file /mnt/openstack/images/cirros-0.3.3-x86_64-disk.img \
--disk-format qcow2 --container-format bare --public cirros
2查询镜像信息
# openstack image show cirros
3.创建实例类型
# openstack flavor create --id 2 --ram 1024 --disk 1 --vcpus 1 f2
4.创建网络
1创建外网
# openstack network create --project admin --provider-physical-network provider \
--provider-network-type flat --external ext-net
2创建内网
# openstack network create --project admin --provider-network-type vxlan --internal int-net
3创建外网子网
# openstack subnet create --project admin --dhcp --gateway 192.168.30.1 \
--subnet-range 192.168.30.0/24 --network ext-net \
--allocation-pool start=192.168.30.100,end=192.168.30.200 ext-subnet
4创建内网子网
# openstack subnet create --project admin --dhcp --gateway 10.1.1.1 \
--subnet-range 10.1.1.0/24 --network int-net int-subnet
5.创建路由器
1创建路由器
# openstack router create --project admin router1
2设置外网网关
# openstack router set --external-gateway ext-net --enable-snat router1
3连接内网
# openstack router add subnet router1 int-subnet
6.创建安全组与规则
1创建安全组
# openstack security group create --project admin sg-1
2创建安全组规则
# openstack security group rule create --remote-ip 0.0.0.0/0 --ethertype IPv4 \
--protocol icmp --ingress sg-1
# openstack security group rule create --remote-ip 0.0.0.0/0 --ethertype IPv4 \
--protocol icmp --egress sg-1
# openstack security group rule create --remote-ip 0.0.0.0/0 --ethertype IPv4 \
--protocol tcp --dst-port 1:65535 --ingress sg-1
# openstack security group rule create --remote-ip 0.0.0.0/0 --ethertype IPv4 \
--protocol tcp --dst-port 1:65535 --egress sg-1
# openstack security group rule create --remote-ip 0.0.0.0/0 --ethertype IPv4 \
--protocol udp --dst-port 1:65535 --ingress sg-1
# openstack security group rule create --remote-ip 0.0.0.0/0 --ethertype IPv4 \
--protocol udp --dst-port 1:65535 --egress sg-1
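上面六条规则命令只是协议（icmp/tcp/udp）和方向（ingress/egress）的组合，可以由嵌套循环生成。下面是一个示意脚本，用echo输出命令以便核对，实际执行时去掉echo直接运行openstack命令即可：

```shell
# 示意：用嵌套循环生成六条安全组规则命令（echo 代替实际执行）
rules=$(for proto in icmp tcp udp; do
  for dir in ingress egress; do
    extra=''
    [ "$proto" != icmp ] && extra='--dst-port 1:65535 '
    echo "openstack security group rule create --remote-ip 0.0.0.0/0 --ethertype IPv4 --protocol $proto $extra--$dir sg-1"
  done
done)
echo "$rules"
```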
7.创建实例
# openstack server create --image cirros --flavor f2 --security-group sg-1 \
--availability-zone nova --network int-net vm01
8.绑定浮动IP
1生成浮动IP
# openstack floating ip create ext-net
2绑定浮动IP
# openstack floating ip list
# openstack server add floating ip vm01 192.168.30.104
注192.168.30.104要根据实际查询结果更换。
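浮动IP也可以从查询结果中自动提取后再绑定，避免手工复制。下面是一个示意脚本：openstack floating ip list支持-f value -c "Floating IP Address"只输出IP列（这是cliff命令行框架的通用选项）；这里用一段示例输出代替真实命令，演示提取逻辑：

```shell
# 示意：从 floating ip list 的输出中提取第一个浮动IP（用示例输出代替真实命令）
# 实际环境：sample=$(openstack floating ip list -f value -c "Floating IP Address")
sample='192.168.30.104
192.168.30.105'
fip=$(echo "$sample" | head -n 1)
echo "openstack server add floating ip vm01 $fip"
```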
9.访问实例
使用SecureCRT连接vm01的浮动IP
实训4-1 Docker安装实训微课视频11分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机centos2009快照“命令行”克隆一个虚拟机
主机名称docker
内存8G
CD/DVD增加一个CD/DVD
CD/DVD1使用docker.iso为虚拟光驱
CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
2.配置yum源
1挂载CD/DVD
# mkdir /mnt/centos
# mkdir /mnt/docker
# mount -o loop /dev/sr0 /mnt/centos
# mount -o loop /dev/sr1 /mnt/docker
注：检查一下sr0和sr1是否分别对应centos和docker（4.4G的是centos）。
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:1 1 4.4G 0 rom
sr1 11:0 1 6.3G 0 rom
2删除/etc/yum.repos.d/下的所有文件
# rm -f /etc/yum.repos.d/*
3新建/etc/yum.repos.d/docker.repo文件内容如下
# vi /etc/yum.repos.d/docker.repo
[centos]
name=centos7 2009
baseurl=file:///mnt/centos
gpgcheck=0
enabled=1

[docker]
name=docker-ce
baseurl=file:///mnt/docker
gpgcheck=0
enabled=1
3.基本设置
1配置主机名
# hostnamectl set-hostname docker
2修改/etc/hosts配置主机名映射
192.168.9.10 docker
3网卡配置配置IP、子网掩码、网关和DNS。
# vi /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=192.168.9.10
NETMASK=255.255.255.0
GATEWAY=192.168.9.2
DNS1=192.168.9.2
# systemctl restart network
4关闭SELinux和防火墙
关闭SELinux修改/etc/selinux/config文件将SELINUX=enforcing改为SELINUX=disabled
# setenforce 0
关闭防火墙
# systemctl disable firewalld
# systemctl stop firewalld
5修改内核参数。
编辑文件/etc/sysctl.conf加上
# vi /etc/sysctl.conf
net.ipv4.ip_forward = 1
执行命令
# sysctl -p
4.安装软件
1安装Docker-ce。Docker-ce是Docker的社区版本。
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum install -y docker-ce
2启动Docker
# systemctl start docker
# systemctl enable docker
5.配置Docker
1修改dockerd配置
修改dockerd的配置文件/etc/docker/daemon.json。
# vi /etc/docker/daemon.json
{
  "insecure-registries": ["0.0.0.0/0"],
  "registry-mirrors": ["http://3tshx8jr.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
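daemon.json是标准JSON文件，少一个引号或逗号都会导致docker服务启动失败，重启前可以先校验语法。下面是一个示意脚本，把配置写入临时文件并用python3自带的json.tool校验（实际使用时直接校验/etc/docker/daemon.json即可）：

```shell
# 示意：写入并校验 daemon.json 的 JSON 语法（用临时文件演示）
DJ=$(mktemp)
cat > "$DJ" <<'EOF'
{
  "insecure-registries": ["0.0.0.0/0"],
  "registry-mirrors": ["http://3tshx8jr.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
# 语法正确时 json.tool 返回 0，随后才执行 systemctl restart docker
python3 -m json.tool "$DJ" > /dev/null && echo "daemon.json: OK"
```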
2重启服务
# systemctl daemon-reload
# systemctl restart docker
3查询Docker信息
# docker info
6.安装Docker-compose
1复制Docker-compose
# cp /mnt/docker/docker-compose/docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
2修改权限
# chmod +x /usr/local/bin/docker-compose
3验证
# docker-compose --version
实训4-2 镜像操作实训微课视频8分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机master快照“Docker-installed”克隆一个虚拟机。设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
2. 镜像管理
1查询本地镜像
# docker images
2镜像的载入与存出
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/httpd-2.2.31.tar
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
busybox latest 6858809bf669 11 months ago 1.23MB
httpd 2.2.31 c8a7fb36e3ab 4 years ago 170MB
# docker save busybox:latest -o busybox-latest.tar
2tag命令
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
# docker tag httpd:2.2.31 192.168.9.10/library/httpd:2.2.31
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.9.10/library/busybox latest 6858809bf669 11 months ago 1.23MB
busybox latest 6858809bf669 11 months ago 1.23MB
192.168.9.10/library/httpd 2.2.31 c8a7fb36e3ab 4 years ago 170MB
httpd 2.2.31 c8a7fb36e3ab 4 years ago 170MB
3拉取镜像
# docker pull nginx
4查询镜像详细信息docker inspect命令输出json格式的镜像信息。
# docker inspect httpd:2.2.31
# docker inspect -f {{.Id}} httpd:2.2.31
# docker inspect -f {{.Config.Hostname}} httpd:2.2.31
5删除镜像
# docker rmi 192.168.9.10/library/httpd:2.2.31
实训4-3 搭建Harbor私有镜像仓库微课视频11分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机master快照“Docker-installed”克隆一个虚拟机。设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
2.安装Harbor
1解压harbor
# tar xzvf /mnt/docker/harbor/harbor-offline-installer-v2.1.0.tgz -C /opt
2修改配置文件harbor.yml
# cd /opt/harbor/
# cp harbor.yml.tmpl harbor.yml
# vi harbor.yml
hostname: 192.168.9.10
harbor_admin_password: Harbor12345
# https related config
# https:
#   # https port for harbor, default is 443
#   port: 443
#   # The path of cert and key files for nginx
#   certificate: /your/certificate/path
#   private_key: /your/private/key/path
3载入镜像用docker load命令将/mnt/docker/images/goharbor/下的所有文件载入
# cd /mnt/docker/images/goharbor/
# ls
chartmuseum-photon-v2.1.0.tar harbor-db-v2.1.0.tar harbor-registryctl-v2.1.0.tar prepare-v2.1.0.tar clair-adapter-photon-v2.1.0.tar harbor-jobservice-v2.1.0.tar nginx-photon-v2.1.0.tar redis-photon-v2.1.0.tar clair-photon-v2.1.0.tar harbor-log-v2.1.0.tar notary-server-photon-v2.1.0.tar registry-photon-v2.1.0.tar harbor-core-v2.1.0.tar harbor-portal-v2.1.0.tar notary-signer-photon-v2.1.0.tar trivy-adapter-photon-v2.1.0.tar
# docker load -i chartmuseum-photon-v2.1.0.tar
重复使用以上命令将所有镜像载入
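逐个执行docker load也可以写成一个循环。下面是一个示意脚本：为便于演示，用临时目录和两个空文件代替/mnt/docker/images/goharbor，并用echo代替真正的docker load：

```shell
# 示意：循环载入目录下所有 tar 镜像包（echo 代替 docker load 演示遍历逻辑）
IMG_DIR=$(mktemp -d)   # 实际环境为 /mnt/docker/images/goharbor
touch "$IMG_DIR"/harbor-core-v2.1.0.tar "$IMG_DIR"/harbor-db-v2.1.0.tar
n=0
for f in "$IMG_DIR"/*.tar; do
  echo "docker load -i $f"   # 实际执行时直接 docker load -i "$f"
  n=$((n+1))
done
echo "loaded $n tarballs"
```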
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
goharbor/chartmuseum-photon v2.1.0 5bad3dce5fd5 10 months ago 172MB
goharbor/redis-photon v2.1.0 45fa455a8eeb 10 months ago 68.7MB
goharbor/trivy-adapter-photon v2.1.0 9b443d147b3d 10 months ago 106MB
goharbor/clair-adapter-photon v2.1.0 cee42542dfb2 10 months ago 57.9MB
goharbor/clair-photon v2.1.0 9741a40b433c 10 months ago 167MB
goharbor/notary-server-photon v2.1.0 e20ff73edec7 10 months ago 139MB
goharbor/notary-signer-photon v2.1.0 2b783b793805 10 months ago 136MB
goharbor/harbor-registryctl v2.1.0 98f466a61ebb 10 months ago 132MB
goharbor/registry-photon v2.1.0 09c818fabdd3 10 months ago 80.1MB
goharbor/nginx-photon v2.1.0 470ffa4a837e 10 months ago 40.1MB
goharbor/harbor-log v2.1.0 402802990707 10 months ago 82.1MB
goharbor/harbor-jobservice v2.1.0 ff65bef832b4 10 months ago 165MB
goharbor/harbor-core v2.1.0 26047bcb9ff5 10 months ago 147MB
goharbor/harbor-portal v2.1.0 5e97d5e230b9 10 months ago 49.5MB
goharbor/harbor-db v2.1.0 44c0be92f223 10 months ago 164MB
goharbor/prepare v2.1.0 58d0e7cee8cf 10 months ago 160MB
4首次启动Harbor
# cd /opt/harbor/
# ./prepare
# ./install.sh --with-clair
5设置开机时启动Harbor
编辑/etc/rc.d/rc.local加上
docker-compose -f /opt/harbor/docker-compose.yml up -d
然后执行
# chmod +x /etc/rc.d/rc.local
6使用Web界面
在浏览器中打开http://192.168.9.10
7拉取和上传镜像
1登录Harbor仓库
# docker login -u admin 192.168.9.10
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
2拉取和上传镜像
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
# docker push 192.168.9.10/library/busybox:latest
# docker rmi 192.168.9.10/library/busybox:latest
# docker pull 192.168.9.10/library/busybox:latest
实训4-4 容器操作实训微课视频11分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机master快照“Docker-installed”克隆一个虚拟机。设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
2.载入镜像
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/httpd-2.2.31.tar
# docker load -i /mnt/docker/images/httpd-2.2.32.tar
# docker load -i /mnt/docker/images/centos-centos7.5.1804.tar
# docker load -i /mnt/docker/images/nginx-latest.tar
3.容器的创建
1用docker run创建容器
用-ti选项运行容器
# docker run -ti busybox:latest sh
/ # exit
用-d在后台运行容器
# docker run -d busybox:latest sleep infinity
使用docker run的其它选项
# docker run -ti --add-host=myhost:10.1.1.1 busybox:latest sh
/ # cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
10.1.1.1 myhost
172.17.0.3 07e1edd94423
/ # exit
# docker run -d --name=test busybox:latest
2分步创建容器
先使用docker create命令创建容器。
# docker create httpd:2.2.31
34c3b78c029ff7654d1c0eb462e0714cf68f7c44511e6b2c57d0fd9ee8084703
然后使用docker start命令启动容器。
# docker start 34c3b78c029ff765
34c3b78c029ff765
# docker ps -a|grep httpd
34c3b78c029f httpd:2.2.31 httpd-foreground 30 seconds ago Up 12 seconds 80/tcp relaxed_williams
3在容器中执行命令env
# docker exec 34c3b78c029ff765 env
4.查询容器
1查询运行中的容器。
# docker ps
2选项-q只显示容器的ID
# docker ps -q
3查询所有状态的容器
# docker ps -a
5.查询容器内进程
1在容器外部查询容器的进程
# docker top 34c3b78c029ff765
UID PID PPID C STIME TTY TIME CMD
root 11784 11763 0 13:26 ? 00:00:00 httpd -DFOREGROUND
bin 11818 11784 0 13:26 ? 00:00:00 httpd -DFOREGROUND
……
2在容器内部查询容器的进程。条件容器内有ps命令
# docker exec 34c3b78c029ff765 ps -A
  PID TTY          TIME CMD
    1 ?        00:00:00 httpd
    8 ?        00:00:00 httpd
    9 ?        00:00:00 httpd
   10 ?        00:00:00 httpd
   11 ?        00:00:00 httpd
   12 ?        00:00:00 httpd
   20 ?        00:00:00 ps
6.查询容器日志
# docker logs 34c3b78c029ff765
7.查询容器的详细信息
1查询所有信息
# docker inspect 34c3b78c029ff765
2查询指定信息
# docker inspect -f {{.State.Status}} 34c3b78c029ff765
8.停止和删除容器
1停止容器
# docker stop 34c3b78c029ff765
2删除容器
# docker rm 34c3b78c029ff765
3删除一个正在运行中的容器
# docker rm -f 34c3b78c029ff765
实训4-5 容器存储实训微课视频6分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机master快照“Docker-installed”克隆一个虚拟机。设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
2.载入镜像
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/httpd-2.2.31.tar
# docker load -i /mnt/docker/images/httpd-2.2.32.tar
# docker load -i /mnt/docker/images/centos-centos7.5.1804.tar
# docker load -i /mnt/docker/images/nginx-latest.tar
3.管理卷
1创建卷
# docker volume create vol1
# docker volume create vol2
2列出卷
# docker volume ls
3查询卷的详细信息
# docker volume inspect vol1
4删除卷
# docker volume rm vol2
4.在容器中使用卷
1使用目录作为卷
# mkdir /data
# docker run -d --name=httpd1 -v /data:/usr/local/apache2/htdocs httpd:2.2.31
测试
# echo test > /data/index.html
# docker inspect httpd1|grep IPAdd
"IPAddress": "172.17.0.2",
# curl http://172.17.0.2
test
2使用预创建的卷
# docker run -d --name=httpd2 -v vol1:/usr/local/apache2/htdocs httpd:2.2.31
测试
# docker inspect httpd2|grep IPAdd
"IPAddress": "172.17.0.3",
# docker volume inspect vol1|grep Mountpoint
"Mountpoint": "/var/lib/docker/volumes/vol1/_data",
# echo httpd2-test > /var/lib/docker/volumes/vol1/_data/index.html
# curl http://172.17.0.3
httpd2-test
3使用其它容器的卷
# docker run -d --name=httpd3 --volumes-from=httpd2 httpd:2.2.31
测试
# docker inspect httpd3|grep IPAdd
"IPAddress": "172.17.0.4",
# curl http://172.17.0.4
httpd2-test
实训4-6 容器网络实训微课视频9分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机master快照“Docker-installed”克隆一个虚拟机。设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
2.载入镜像
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/httpd-2.2.31.tar
# docker load -i /mnt/docker/images/httpd-2.2.32.tar
# docker load -i /mnt/docker/images/centos-centos7.5.1804.tar
# docker load -i /mnt/docker/images/nginx-latest.tar
3.创建网络
1创建网络
# docker network create --driver bridge --subnet=172.28.0.0/16 \
--ip-range=172.28.5.0/24 --gateway=172.28.0.1 br0
2列出网络
# docker network ls
NETWORK ID NAME DRIVER SCOPE
2784c5cab5fd br0 bridge local
1d27412a771c bridge bridge local
abc8bef4918f host host local
88a048e26b71 none null local
3查询网络详情
# docker network inspect 2784c5cab5fd
4.容器的网络
1没有网络的容器busybox1
# docker run -d --name=busybox1 --network=none busybox sleep infinity
# docker exec busybox1 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
2使用默认网络的容器busybox2
# docker run -d --name=busybox2 busybox sleep infinity
# docker exec busybox2 ip a
13: eth0@if14: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue
    link/ether 02:42:ac:11:00:05 brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.5/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
# docker inspect -f {{.NetworkSettings.Networks}} busybox2
map[bridge:0xc0005c4000]
3使用host网络的容器busybox3
# docker run -d --name=busybox3 --network=host busybox sleep infinity
eaf31f68b05c5c9de2a037dc74c51f722cd2ffcd5daabf222ff14d8890baba0d
# docker inspect -f {{.NetworkSettings.Networks}} busybox3
map[host:0xc0005b2f00]
4使用自定义网络br0的容器busybox4
# docker run -d --name=busybox4 --network=br0 busybox sleep infinity
d35905c70744cafaa3dea34cab361a7b95494631d40c1a43d897578173bc294f
# docker inspect -f {{.NetworkSettings.Networks}} busybox4
map[br0:0xc000016d80]
5容器busybox5使用容器busybox4的网络
# docker run -d --name=busybox5 --network=container:busybox4 busybox sleep infinity
311eaee9ef8cee3218ca982dd9b0c2436febde235fd3f852d337c52a5b3017f4
# docker exec busybox4 ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
……
83: eth0@if84: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc
……
# docker exec busybox5 ip add
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
……
83: eth0@if84: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc
……
5.端口暴露
# mkdir /data
# docker run -d --name=httpd1 -p 8000:80 -v /data:/usr/local/apache2/htdocs httpd:2.2.31
# echo test > /data/index.html
# curl 192.168.9.10:8000
test
实训4-7 自定义镜像实训微课视频5分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机master快照“Docker-installed”克隆一个虚拟机。设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
2.载入镜像
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/httpd-2.2.31.tar
# docker load -i /mnt/docker/images/httpd-2.2.32.tar
# docker load -i /mnt/docker/images/centos-centos7.5.1804.tar
# docker load -i /mnt/docker/images/nginx-latest.tar
3.使用dockerfile创建镜像
1基于busybox创建镜像
创建Dockerfile
# vi Dockerfile
FROM busybox:latest
LABEL name=sleepbusybox
VOLUME ["/var/www"]
CMD ["sleep","infinity"]
创建镜像
# docker build -t sleepbusybox:latest ./
列出创建的镜像
# docker images|grep sleepbusybox
sleepbusybox latest 4490fb0ab1b0 About a minute ago 1.24MB
查询新镜像的Label
# docker inspect -f {{.ContainerConfig.Labels}} sleepbusybox:latest
map[name:sleepbusybox]
查询新镜像的Volume
# docker inspect -f {{.ContainerConfig.Volumes}} sleepbusybox:latest
map[/var/www:{}]
用新镜像运行一个容器
# docker run -d --name=sleep1 sleepbusybox:latest
69dcf840cb11c9166f65f990b5d134a5cdda6dc26da0cbf16debfcb3044b5b4a
查询生成的容器
# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
69dcf840cb11 sleepbusybox:latest "sleep infinity" 3 minutes ago Up sleep1
查询容器的卷
# docker inspect -f {{.Mounts}} sleep1
[{volume 4443c777394ec4872ca4bc3d287370450309c63d1f8e34036720b17308a5d661 /var/lib/docker/volumes/4443c777394ec4872ca4bc3d287370450309c63d1f8e34036720b17308a5d661/_data /var/www local true }]
4.使用docker commit命令创建镜像
# docker run -d --name=busybox1 busybox:latest sleep infinity
cf7a8edd046eeef5afab1465acdd150707139db9000aca24cc18fbc1757305d6
# docker commit cf7a8edd046eeef5afa busybox:new
注cf7a8edd046eeef5afa是上一步创建的容器的ID
sha256:26337213e90ddd80b1203a2f1d3a22678bdcb8e83d885412700a13188a120d12
# docker images|grep new
busybox new 26337213e90d 28 seconds ago 1.24MB
# docker inspect busybox:new
实训5-1 kubernetes集群安装微课视频25分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
从虚拟机master快照“Harbor-installed”克隆一个虚拟机设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱
从虚拟机node快照“Harbor-installed”克隆一个虚拟机。
2.基本设置master和node一样
1修改主机名
master节点
# hostnamectl set-hostname master
node节点
# hostnamectl set-hostname node
2修改/etc/hosts文件配置主机名映射
192.168.9.10 master
192.168.9.11 node
3关闭swap分区
# swapoff -a
# sed -i 's/.*swap.*/#&/' /etc/fstab
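这条sed命令中的&代表匹配到的整行，效果是在swap行前加#将其注释掉。正式修改/etc/fstab前，可以先在临时文件上验证表达式，下面是一个示意（文件内容为假设的示例fstab条目）：

```shell
# 示意：先在临时文件上验证注释 swap 行的 sed 表达式（& 引用匹配到的整行）
TMP=$(mktemp)
printf '%s\n' \
  '/dev/mapper/centos-root /    xfs  defaults 0 0' \
  '/dev/mapper/centos-swap swap swap defaults 0 0' > "$TMP"
sed -i 's/.*swap.*/#&/' "$TMP"
cat "$TMP"
```

验证无误后再对真正的/etc/fstab执行同样的命令。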
4加载br_netfilter模块
# modprobe br_netfilter
5创建/etc/sysctl.d/k8s.conf文件修改内核参数
# vi /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
然后执行命令使内核参数生效
# sysctl -p /etc/sysctl.d/k8s.conf
3.master节点安装配置
1安装软件并启动kubelet服务
# yum -y install kubeadm-1.20.6 kubectl-1.20.6 kubelet-1.20.6
# systemctl enable kubelet
# systemctl start kubelet
2载入控制面组件
# kubeadm config images list
用docker load命令载入/mnt/docker/images/registry.aliyuncs.com/google_containers/下所有镜像
# ls /mnt/docker/images/registry.aliyuncs.com/google_containers/
coredns-1.7.0.tar kube-apiserver-v1.20.6.tar kube-proxy-v1.20.6.tar pause-3.2.tar etcd-3.4.13-0.tar kube-controller-manager-v1.20.6.tar kube-scheduler-v1.20.6.tar
3初始化集群
# kubeadm init --kubernetes-version=1.20.6 \
--apiserver-advertise-address=192.168.9.10 \
--image-repository registry.aliyuncs.com/google_containers \
--pod-network-cidr=10.244.0.0/16
4建立配置文件
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
5安装网络插件
登录Harbor
# docker login -u admin 192.168.9.10
上传flannel镜像
# docker load -i /mnt/docker/images/quay.io-coreos-flannel-v0.13.0-rc2.tar
# docker tag quay.io/coreos/flannel:v0.13.0-rc2 192.168.9.10/library/flannel:v0.13.0-rc2
# docker push 192.168.9.10/library/flannel:v0.13.0-rc2
创建网络插件
# kubectl apply -f /mnt/docker/yaml/flannel/kube-flannel.yaml
6安装Dashboard
Dashboard为Kubernetes提供图形化界面是可选组件。
安装Dashboard
# docker load -i /mnt/docker/images/kubernetesui-dashboard-v2.0.0.tar
# docker load -i /mnt/docker/images/kubernetesui-metrics-scraper-v1.0.4.tar
# kubectl apply -f /mnt/docker/yaml/dashboard/recommended.yaml
# kubectl apply -f /mnt/docker/yaml/dashboard/dashboard-adminuser.yaml
访问dashboard打开浏览器输入https://192.168.9.10:30000访问dashboard
登录dashboard使用token登录下列命令可以获取Token
# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
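上面命令里$( )中grep|awk的作用是从kubectl get secret的输出中取出dashboard-admin开头的Secret名称（第一列）。这段提取逻辑可以单独验证，下面用一段示例输出代替真实的kubectl命令（Secret名称后缀为假设值）：

```shell
# 示意：验证 grep|awk 提取 Secret 名称的逻辑（用示例输出代替 kubectl get secret）
sample='dashboard-admin-token-x2b5k   kubernetes.io/service-account-token   3      5m
default-token-9zqpw           kubernetes.io/service-account-token   3      10m'
name=$(echo "$sample" | grep dashboard-admin | awk '{print $1}')
echo "$name"
```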
7查询集群状态
# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
如果scheduler和controller-manager的状态不是Healthy则分别编辑 /etc/kubernetes/manifests/kube-controller-manager.yaml和/etc/kubernetes/manifests/kube-scheduler.yaml把其中- --port=0注释掉前面加#如下
# - --port=0
8删除污点
# kubectl taint nodes master node-role.kubernetes.io/master-
4.node节点安装配置
1登录Harbor
# docker login -u admin 192.168.9.10
2安装软件并启动kubelet服务
# yum -y install kubeadm-1.20.6 kubectl-1.20.6 kubelet-1.20.6
# systemctl enable kubelet
# systemctl start kubelet
3准备proxy镜像
# scp master:/mnt/docker/images/registry.aliyuncs.com/google_containers/kube-proxy-v1.20.6.tar ./
# scp master:/mnt/docker/images/registry.aliyuncs.com/google_containers/pause-3.2.tar ./
# docker load -i kube-proxy-v1.20.6.tar
# docker load -i pause-3.2.tar
4将node节点加入集群在master节点执行
# kubeadm token create --print-join-command
kubeadm join 192.168.9.10:6443 --token zv3fee.9amc7pqalkuzxar1 --discovery-token-ca-cert-hash sha256:be3fc0caafcd9c3681916f76d2c4a309402840823171560fe609c8c79edadbbd
复制上面的输出结果在node节点执行
# kubeadm join 192.168.9.10:6443 --token zv3fee.9amc7pqalkuzxar1 --discovery-token-ca-cert-hash sha256:be3fc0caafcd9c3681916f76d2c4a309402840823171560fe609c8c79edadbbd
5.验证
在master节点执行
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7f89b7bc75-699wc 1/1 Running 0 17m
coredns-7f89b7bc75-gw4zv 1/1 Running 0 17m
etcd-master 1/1 Running 0 17m
kube-apiserver-master 1/1 Running 0 17m
kube-controller-manager-master 1/1 Running 0 17m
kube-flannel-ds-jx8jl 1/1 Running 0 14m
kube-flannel-ds-z8gf8 1/1 Running 0 7m58s
kube-proxy-nqjqd 1/1 Running 0 17m
kube-proxy-tcx5g 1/1 Running 0 7m58s
kube-scheduler-master 1/1 Running 0 17m
# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready control-plane,master 1d v1.20.6
node Ready <none> 1d v1.20.6
6.配置命令补全
1安装bash-completion
# yum install -y bash-completion
2修改~/.bashrc文件
# vi ~/.bashrc
……
source <(kubectl completion bash)
实训5-2 pod实训微课视频22分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
使用虚拟机master和虚拟机node。master设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
2.载入镜像
载入镜像
# docker load -i /mnt/docker/images/busybox-latest.tar
增加Tag
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
上传镜像
# docker push 192.168.9.10/library/busybox:latest
3.名字空间
1列出系统中的名字空间
# kubectl get ns
注ns是namespaces的缩写两者均可。
2用kubectl create namespace命令创建名字空间
# kubectl create namespace ns1
3用模板文件创建名字空间
# vi ns2.yaml
kind: Namespace
apiVersion: v1
metadata:
  name: ns2
# kubectl apply -f ns2.yaml
# kubectl get ns
4删除名字空间
# kubectl delete ns ns2
4.用kubectl run命令创建Pod
1创建Pod
# kubectl run busybox1 --image=192.168.9.10/library/busybox --image-pull-policy=IfNotPresent --command -- sleep infinity
2列出系统中的Pod
# kubectl get pods
NAME READY STATUS RESTARTS AGE
busybox1 1/1 Running 0 15s
# kubectl get pods -o custom-columns=IMAGE:.spec.containers[0].image,NAME:.metadata.name,IP:.status.podIP
IMAGE NAME IP
192.168.9.10/library/busybox busybox1 10.244.0.30
# kubectl get pods -o wide
# kubectl get pods busybox1 -o yaml
# kubectl describe pods busybox1
3查询容器日志
# kubectl logs busybox1
4查询pod详细描述信息
# kubectl describe pods busybox1
5.用模板文件创建Pod
1模板文件
# vi busybox2.yml
apiVersion: v1
kind: Pod
metadata:
  name: busybox2
spec:
  containers:
  - image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    command: ["sleep","infinity"]
    name: busybox2
2应用模板文件
# kubectl apply -f busybox2.yml
6. kubectl explain命令
# kubectl explain pod
# kubectl explain pod.spec
7.在Pod中执行命令
# kubectl exec busybox1 -- ps -A
PID USER TIME COMMAND
1 root 0:00 sleep infinity
7 root 0:00 ps -A
# kubectl exec busybox2 -c busybox2 -- ip address
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
3: eth0@if70: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1450 qdisc noqueue
    link/ether 32:3a:93:e4:45:1c brd ff:ff:ff:ff:ff:ff
    inet 10.244.0.33/24 brd 10.244.0.255 scope global eth0
       valid_lft forever preferred_lft forever
8.管理Pod
1加标签
# kubectl label pod busybox2 app=busybox
2显示标签
# kubectl get pod busybox2 --show-labels
NAME READY STATUS RESTARTS AGE LABELS
busybox2 1/1 Running 0 4m27s app=busybox
3加注释
# kubectl annotate pod busybox2 note1="This is a test pod"
4显示注释
# kubectl get pod busybox2 -o yaml
# kubectl get pod busybox2 -o json
# kubectl get pod busybox2 -o custom-columns=NOTE1:.metadata.annotations.note1
5在线修改Pod的模板文件
# kubectl edit pod busybox2
6删除Pod
# kubectl delete pod busybox1
# kubectl delete pods --force busybox2
9. Pod与名字空间
1使用命令在名字空间创建Pod
# kubectl run busybox3 --image=192.168.9.10/library/busybox --image-pull-policy=IfNotPresent --namespace=ns1 -- sleep infinity
pod/busybox3 created
# kubectl get pods -n ns1
NAME READY STATUS RESTARTS AGE
busybox3 1/1 Running 0 6s
2使用模板文件在名字空间创建Pod
# vi busybox4.yaml
kind: Pod
apiVersion: v1
metadata:
  name: busybox4
  namespace: ns1
spec:
  containers:
  - name: c1
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    command: ["sleep","infinity"]
# kubectl apply -f busybox4.yaml
pod/busybox4 created
# kubectl get pods -n ns1
NAME READY STATUS RESTARTS AGE
busybox3 1/1 Running 0 2m29s
busybox4 1/1 Running 0 11s
实训5-3 pod存储实训微课视频20分钟
1.实训环境准备
1VMWare网络设置
打开VMware workstation在菜单中选“编辑”→“虚拟网络编辑器”。
设置VMnet8的子网IP192.168.9.0/24
设置VMnet1的子网IP192.168.30.0/24。
2虚拟主机准备
使用虚拟机master和虚拟机node。master设置CD/DVD1使用docker.iso为虚拟光驱设置CD/DVD2使用CentOS-7-x86_64-DVD-2009.iso为虚拟光驱。
2.载入镜像
载入镜像
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/nginx-latest.tar
增加Tag
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
# docker tag nginx:latest 192.168.9.10/library/nginx:latest
上传镜像
# docker push 192.168.9.10/library/busybox:latest
# docker push 192.168.9.10/library/nginx:latest
3.使用emptyDir卷
1创建模板文件
# vi pod-vol-emptydir.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod-vol-emptydir
  labels:
    app: volumetest
spec:
  containers:
  - name: ct-vol-empty
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    args: ["sleep","infinity"]
    volumeMounts:
    - name: vol1
      mountPath: /etc/vol1
  volumes:
  - name: vol1
    emptyDir: {}
(2) Create the Pod
# kubectl apply -f pod-vol-emptydir.yaml
pod/pod-vol-emptydir created
(3) Query the Pod's information
# kubectl get pods pod-vol-emptydir -o yaml | grep volumeMounts: -A 2
    volumeMounts:
    - mountPath: /etc/vol1
      name: vol1
4. Using a hostPath volume
(1) Create the directory (run on both the master and node nodes)
# mkdir -p /data/nginx/html
(2) Create the manifest file
# vi pod-vol-hostpath.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod-vol-hostpath
  labels:
    app: hostpath
spec:
  containers:
  - image: 192.168.9.10/library/nginx:latest
    name: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: html
  volumes:
  - name: html
    hostPath:
      path: /data/nginx/html
(3) Create the Pod
# kubectl apply -f pod-vol-hostpath.yaml
(4) Find the node the Pod is running on
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-vol-hostpath 1/1 Running 0 17s 10.244.1.35 node <none> <none>
(5) Test
On both the master and node nodes:
# echo test > /data/nginx/html/index.html
# curl 10.244.1.35
test
5. Using an NFS volume
(1) Following Lab 1-3, configure the master node as an NFS server and the node node as an NFS client.
(2) Create the manifest file
# vi pod-vol-nfs.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod-vol-nfs
  labels:
    app: volumenfs
spec:
  containers:
  - image: 192.168.9.10/library/nginx:latest
    name: nginx-nfs
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: html
  volumes:
  - name: html
    nfs:
      path: /share
      server: 192.168.9.10
(3) Create the Pod
# kubectl apply -f pod-vol-nfs.yaml
(4) Query the Pod
# kubectl get pods pod-vol-nfs -o wide
pod-vol-nfs 1/1 Running 0 29s 10.244.1.36 node
(5) Test on the master node
# echo nfs-test > /share/index.html
# curl 10.244.1.36
nfs-test
6. Using persistent volumes
(1) Following Lab 1-3, configure the master node as an NFS server and the node node as an NFS client.
(2) Create a persistent volume
Create the manifest file:
# vi pv-1.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-1
spec:
  capacity:
    storage: 10Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.9.10
    path: /share
Create the persistent volume:
# kubectl apply -f pv-1.yaml
persistentvolume/pv-1 created
Query the persistent volume:
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-1 10Gi RWO,ROX Retain Available 10s
(3) Create a persistent volume claim
Create the manifest file:
# vi pvc-1.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1
spec:
  resources:
    requests:
      storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: ""
Create the persistent volume claim:
# kubectl apply -f pvc-1.yaml
persistentvolumeclaim/pvc-1 created
Query the persistent volume claim:
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pvc-1 Bound pv-1 10Gi RWO,ROX 9s
# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pv-1 10Gi RWO,ROX Retain Bound default/pvc-1 6m48s
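The ACCESS MODES column in the output above abbreviates the modes declared in the PV/PVC spec. As a reminder of the mapping (RWO = ReadWriteOnce, ROX = ReadOnlyMany, RWX = ReadWriteMany), a minimal shell sketch:

```shell
# Map a Kubernetes access mode to the abbreviation kubectl prints
abbrev() {
  case "$1" in
    ReadWriteOnce) echo RWO ;;
    ReadOnlyMany)  echo ROX ;;
    ReadWriteMany) echo RWX ;;
    *)             echo unknown ;;
  esac
}

abbrev ReadWriteOnce   # prints RWO
abbrev ReadOnlyMany    # prints ROX
```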
(4) Use the persistent volume claim in a Pod
(1) Create the manifest file
# vi pod-pv.yaml
kind: Pod
apiVersion: v1
metadata:
  name: pod-pv
  labels:
    app: pvtest
spec:
  containers:
  - image: 192.168.9.10/library/nginx:latest
    name: nginx
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: html
  volumes:
  - name: html
    persistentVolumeClaim:
      claimName: pvc-1
(2) Create the Pod
# kubectl apply -f pod-pv.yaml
pod/pod-pv created
(3) Query the Pod
# kubectl get pod pod-pv -o yaml | grep volumeMounts: -A 2
    volumeMounts:
    - mountPath: /usr/share/nginx/html
      name: html
(4) Test
# kubectl get pod pod-pv -o wide
pod-pv 1/1 Running 0 5m41s 10.244.1.37 node
# echo pv-test > /share/1.html
# curl 10.244.1.37/1.html
pv-test
Lab 5-4: Dynamic Volumes (video: 12 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the VMs master and node. On each, set CD/DVD1 to docker.iso and CD/DVD2 to CentOS-7-x86_64-DVD-2009.iso as virtual optical drives.
2. Load images
Load the images:
# docker load -i /mnt/docker/images/nginx-latest.tar
# docker load -i /mnt/docker/images/quay.io-external_storage-nfs-client-provisioner-latest.tar
Add tags:
# docker tag nginx:latest 192.168.9.10/library/nginx:latest
# docker tag quay.io/external_storage/nfs-client-provisioner:latest 192.168.9.10/library/nfs-client-provisioner:latest
Push the images:
# docker push 192.168.9.10/library/nginx:latest
# docker push 192.168.9.10/library/nfs-client-provisioner:latest
3.配置apiserver服务
1修改/etc/kubernetes/manifests/kube-apiserver.yaml 文件
添加添加- --feature-gatesRemoveSelfLinkfalse
等待所有Pod正常运行
# kubectl get pods -A
4. NFS dynamic volumes
(1) Following Lab 1-3, configure the master node as an NFS server and the node node as an NFS client.
Note: the NFS export options must include rw.
(2) Create the manifest files
# vi rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "list", "watch", "create", "delete"]
- apiGroups: [""]
  resources: ["persistentvolumeclaims"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
- apiGroups: [""]
  resources: ["endpoints"]
  verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
- kind: ServiceAccount
  name: nfs-client-provisioner
  namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# vi deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: 192.168.9.10/library/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 192.168.9.10
        - name: NFS_PATH
          value: /share
      volumes:
      - name: nfs-client-root
        nfs:
          server: 192.168.9.10
          path: /share
# vi class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs
parameters:
  archiveOnDelete: "false"
# vi test-claim.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: managed-nfs-storage
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
# vi test-pod.yaml
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: 192.168.9.10/library/nginx:latest
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: nfs-pvc
      mountPath: /usr/share/nginx/html
  restartPolicy: Never
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
(3) Create the ServiceAccount, Role, and ClusterRole, and bind them
# kubectl apply -f rbac.yaml
(4) Create the NFS provisioner
# kubectl apply -f deployment.yaml
# kubectl get deployment.apps/nfs-client-provisioner
NAME READY UP-TO-DATE AVAILABLE AGE
nfs-client-provisioner 1/1 1 1 50s
(5) Create the storage class
# kubectl apply -f class.yaml
(6) Create the persistent volume claim
# kubectl apply -f test-claim.yaml
# kubectl get pvc,pv
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/test-claim Bound pvc-9105515f-16b0-4b1f-9908-9f364f350c49 1Mi RWX managed-nfs-storage 36s

NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pvc-9105515f-16b0-4b1f-9908-9f364f350c49 1Mi RWX Delete Bound default/test-claim managed-nfs-storage 36s
# kubectl describe persistentvolume/pvc-9105515f-16b0-4b1f-9908-9f364f350c49
……
Annotations: pv.kubernetes.io/provisioned-by: fuseim.pri/ifs
……
Path: /share/default-test-claim-pvc-9105515f-16b0-4b1f-9908-9f364f350c49
……
# ls /share/
default-test-claim-pvc-9105515f-16b0-4b1f-9908-9f364f350c49
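The directory name listed above follows the naming convention used by nfs-client-provisioner: it joins the claim's namespace, the PVC name, and the generated PV name. A small shell sketch of that convention, using the IDs from this run:

```shell
# nfs-client-provisioner names the backing directory <namespace>-<pvcName>-<pvName>
namespace=default
pvc_name=test-claim
pv_name=pvc-9105515f-16b0-4b1f-9908-9f364f350c49

dir="${namespace}-${pvc_name}-${pv_name}"
echo "$dir"   # default-test-claim-pvc-9105515f-16b0-4b1f-9908-9f364f350c49
```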
(7) Create a Pod that uses the dynamic volume
# kubectl apply -f test-pod.yaml
# kubectl get pod/test-pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
test-pod 1/1 Running 0 21s 10.244.1.45 node <none> <none>
(8) Test
# echo test-nfs > /share/default-test-claim-pvc-9105515f-16b0-4b1f-9908-9f364f350c49/index.html
# curl 10.244.1.45
test-nfs
Lab 5-5: Services (video: 28 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the VMs master and node. On each, set CD/DVD1 to docker.iso and CD/DVD2 to CentOS-7-x86_64-DVD-2009.iso as virtual optical drives.
2. Load images
Load the images:
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/httpd-2.2.31.tar
# docker load -i /mnt/docker/images/httpd-2.2.32.tar
# docker load -i /mnt/docker/images/centos-centos7.5.1804.tar
# docker load -i /mnt/docker/images/nginx-latest.tar
Add tags:
# docker tag httpd:2.2.32 192.168.9.10/library/httpd:2.2.32
# docker tag httpd:2.2.31 192.168.9.10/library/httpd:2.2.31
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
# docker tag centos:centos7.5.1804 192.168.9.10/library/centos:centos7.5.1804
# docker tag nginx:latest 192.168.9.10/library/nginx:latest
Push the images:
# docker push 192.168.9.10/library/busybox:latest
# docker push 192.168.9.10/library/httpd:2.2.32
# docker push 192.168.9.10/library/httpd:2.2.31
# docker push 192.168.9.10/library/centos:centos7.5.1804
# docker push 192.168.9.10/library/nginx:latest
3. Port forwarding
(1) Create a Pod
# kubectl run nginx1 --image=192.168.9.10/library/nginx:latest --port=80 --image-pull-policy=IfNotPresent
(2) Forward the Pod's port 80 to port 8000 on 127.0.0.1
# kubectl port-forward nginx1 8000:80
Forwarding from 127.0.0.1:8000 -> 80
Forwarding from [::1]:8000 -> 80
Handling connection for 8000
(3) Access port 8000 on 127.0.0.1
# curl 127.0.0.1:8000
4. Exposing ports
(1) Expose the port
# kubectl expose pod nginx1 --port=8000 --protocol=TCP --target-port=80
(2) Query the created Service
# kubectl get svc
(3) Access the Pod through the Service
# curl 10.100.47.149:8000
Note: 10.100.47.149 is the Service IP address; replace it with the actual value.
5. ClusterIP Services
(1) Create a Pod
# vi pod-http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-http
  labels:
    app: http
spec:
  containers:
  - image: 192.168.9.10/library/httpd:2.2.32
    imagePullPolicy: IfNotPresent
    name: c-http
    ports:
    - containerPort: 80
      protocol: TCP
# kubectl apply -f pod-http.yaml
(2) Create a Service
# vi svc-http.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-http
  namespace: default
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: http
# kubectl apply -f svc-http.yaml
service/svc-http created
(3) Query and access the Service
# kubectl get service/svc-http
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc-http ClusterIP 10.98.131.213 <none> 8000/TCP 92s
# curl 10.98.131.213:8000
<html><body><h1>It works!</h1></body></html>
(4) Query the Endpoints associated with the Service
# kubectl describe svc svc-http
……
Endpoints: 10.244.0.93:80,10.244.0.96:80
6. ExternalName Services
(1) Create the Service and Endpoints
# vi svc-external.yaml
apiVersion: v1
kind: Service
metadata:
  name: svce
spec:
  type: ExternalName
  externalName: www.126.com
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Endpoints
metadata:
  name: svce
  namespace: default
subsets:
- addresses:
  - ip: 123.126.96.210
  ports:
  - name: http
    port: 80
    protocol: TCP
# kubectl apply -f svc-external.yaml
(2) Test
# kubectl run -ti test --image=192.168.9.10/library/centos:centos7.5.1804 -- bash
[root@test /]# curl svce.default.svc.cluster.local
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
[root@test /]# exit
7. NodePort Services
(1) Create the Service
# vi svc-http-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-http-nodeport
  namespace: default
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30888
  selector:
    app: http
# kubectl apply -f svc-http-nodeport.yaml
(2) Test
Enter http://192.168.9.11:30888 in a browser's address bar.
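A nodePort value must fall within the cluster's node port range, which defaults to 30000-32767 (configurable via the kube-apiserver --service-node-port-range flag). A quick shell check of the value used above:

```shell
# Validate a nodePort against the default range 30000-32767
node_port=30888
if [ "$node_port" -ge 30000 ] && [ "$node_port" -le 32767 ]; then
  result=valid
else
  result="out of range"
fi
echo "$result"   # prints valid
```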
8. Ingress
(1) Create a backend Pod and Service
# vi pod-http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-http
  labels:
    app: http
spec:
  containers:
  - image: 192.168.9.10/library/httpd:2.2.32
    imagePullPolicy: IfNotPresent
    name: c-http
    ports:
    - containerPort: 80
      protocol: TCP
# vi svc-http.yaml
apiVersion: v1
kind: Service
metadata:
  name: svc-http
  namespace: default
spec:
  ports:
  - port: 8000
    protocol: TCP
    targetPort: 80
  selector:
    app: http
# kubectl apply -f pod-http.yaml
# kubectl apply -f svc-http.yaml
(2) Deploy the Ingress-nginx controller
Load the image:
# docker load -i /mnt/docker/images/nginx-ingress-controller-0.30.0.tar
# docker tag nginx-ingress-controller:0.30.0 192.168.9.10/library/nginx-ingress-controller:0.30.0
# docker push 192.168.9.10/library/nginx-ingress-controller:0.30.0
Deploy the Ingress-nginx controller:
# kubectl apply -f /mnt/docker/yaml/lab5-5/mandatory.yaml
(3) Create an externally reachable Service (e.g. a NodePort Service)
# kubectl apply -f /mnt/docker/yaml/lab5-5/svc-ingress-nginx.yaml
(4) Create the Ingress resource that defines the access rules
# vi ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: www.ingress.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-http
            port:
              number: 80
# kubectl apply -f ingress.yaml
(5) Inspect the configuration change inside the nginx-ingress-controller
# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-55b6c67bd-d28wd 1/1 Running 0 6m44s
# kubectl exec nginx-ingress-controller-55b6c67bd-d28wd -n ingress-nginx -- cat /etc/nginx/nginx.conf
The following section appears in the output:
## start server www.ingress.com
server {
    server_name www.ingress.com ;
    listen 80 ;
    listen 443 ssl http2 ;
    ……
}
## end server www.ingress.com
(6) Test
Add "192.168.9.11 node www.ingress.com" to /etc/hosts.
# curl www.ingress.com
<html><body><h1>It works!</h1></body></html>
Lab 5-6: Deployments (video: 12 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the VMs master and node. On each, set CD/DVD1 to docker.iso and CD/DVD2 to CentOS-7-x86_64-DVD-2009.iso as virtual optical drives.
2. Load images
Load the images:
# docker load -i /mnt/docker/images/httpd-2.2.31.tar
# docker load -i /mnt/docker/images/httpd-2.2.32.tar
Add tags:
# docker tag httpd:2.2.32 192.168.9.10/library/httpd:2.2.32
# docker tag httpd:2.2.31 192.168.9.10/library/httpd:2.2.31
Push the images:
# docker push 192.168.9.10/library/httpd:2.2.32
# docker push 192.168.9.10/library/httpd:2.2.31
3. Create a Deployment
(1) Create the manifest file
# vi dp-http.yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: dp-http
  labels:
    app: http
spec:
  replicas: 2
  selector:
    matchLabels:
      app: http
  template:
    metadata:
      name: pod-http
      labels:
        app: http
    spec:
      containers:
      - name: c-http
        image: 192.168.9.10/library/httpd:2.2.31
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
(2) Create the Deployment
# kubectl apply -f dp-http.yaml
deployment.apps/dp-http created
(3) Query
# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
dp-http 2/2 2 2 28s
# kubectl get rs
NAME DESIRED CURRENT READY AGE
dp-http-65b8f484f7 2 2 2 55s
# kubectl get pods
NAME READY STATUS RESTARTS AGE
dp-http-65b8f484f7-m2gp2 1/1 Running 0 66s
dp-http-65b8f484f7-xhv7q 1/1 Running 0 66s
(4) Check the Deployment rollout status
# kubectl rollout status deployment dp-http
deployment "dp-http" successfully rolled out
4. Update the Deployment
(1) Update the image
# kubectl set image deployment/dp-http c-http=192.168.9.10/library/httpd:2.2.32 --record
(2) Check the rollout status
# kubectl rollout status deployment dp-http
5. Roll back the Deployment
(1) Check the Deployment rollout history
# kubectl rollout history deploy dp-http
deployment.apps/dp-http
REVISION CHANGE-CAUSE
1 <none>
2 kubectl set image deploy/dp-http c-http=httpd:2.2.32 --record=true
(2) Query the details of revision 2
# kubectl rollout history deploy dp-http --revision=2
deployment.apps/dp-http with revision #2
Pod Template:
  Labels: app=http
    pod-template-hash=6b7fdd6fd4
  Annotations: kubernetes.io/change-cause: kubectl set image deploy/dp-http c-http=httpd:2.2.32 --record=true
  Containers:
   c-http:
    Image: httpd:2.2.32
    Port: 80/TCP
    Host Port: 0/TCP
    Environment: <none>
    Mounts: <none>
  Volumes: <none>
(3) Roll back
Roll back to revision 1:
# kubectl rollout undo deployment/dp-http --to-revision=1
6. Scale the Deployment
(1) Query the ReplicaSet and Pod information
# kubectl get rs
NAME DESIRED CURRENT READY AGE
dp-http-65b8f484f7 2 2 2 46h
# kubectl get pods
NAME READY STATUS RESTARTS AGE
dp-http-65b8f484f7-4j8nh 1/1 Running 0 5m10s
dp-http-65b8f484f7-klqhs 1/1 Running 0 5m13s
(2) Scale dp-http to 3 replicas
# kubectl scale deployment/dp-http --replicas=3
deployment.apps/dp-http scaled
(3) Query the ReplicaSet and Pod information again
# kubectl get rs
NAME DESIRED CURRENT READY AGE
dp-http-65b8f484f7 3 3 3 46h
# kubectl get pods
NAME READY STATUS RESTARTS AGE
dp-http-65b8f484f7-4j8nh 1/1 Running 0 5m53s
dp-http-65b8f484f7-7bltx 1/1 Running 0 11s
dp-http-65b8f484f7-klqhs 1/1 Running 0 5m56s
7. Delete the Deployment
# kubectl delete deployment/dp-http
Lab 5-7: StatefulSets (video: 5 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the VMs master and node. On each, set CD/DVD1 to docker.iso and CD/DVD2 to CentOS-7-x86_64-DVD-2009.iso as virtual optical drives.
2. Load images
Load the image:
# docker load -i /mnt/docker/images/nginx-latest.tar
Add a tag:
# docker tag nginx:latest 192.168.9.10/library/nginx:latest
Push the image:
# docker push 192.168.9.10/library/nginx:latest
3. Create persistent volumes
(1) Following Lab 1-3, configure the master node as an NFS server and the node node as an NFS client.
(2) Create the persistent volumes
# vi pv-statefulset.yaml
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-statefulset-1
spec:
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.9.10
    path: /share
---
kind: PersistentVolume
apiVersion: v1
metadata:
  name: pv-statefulset-2
spec:
  capacity:
    storage: 1Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.9.10
    path: /share
# kubectl apply -f pv-statefulset.yaml
# kubectl get pv
4. Create the StatefulSet
(1) Create the manifest file (the image is taken from the local registry loaded above, and the claim template carries no storageClassName so that the claims bind to the classless PVs created in step 3)
# vi statefulset-demo.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: web
  clusterIP: None
  selector:
    app: nginx
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx # has to match .spec.template.metadata.labels
  serviceName: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx # has to match .spec.selector.matchLabels
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - name: nginx
        image: 192.168.9.10/library/nginx:latest
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
(2) Create the StatefulSet
# kubectl apply -f statefulset-demo.yaml
(3) Query the related information
# kubectl get statefulset
NAME READY AGE
web 2/2 10s
# kubectl get pods
NAME READY STATUS RESTARTS AGE
web-0 1/1 Running 0 16s
web-1 1/1 Running 0 14s
Lab 5-8: DaemonSets (video: 3 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the VMs master and node. On each, set CD/DVD1 to docker.iso and CD/DVD2 to CentOS-7-x86_64-DVD-2009.iso as virtual optical drives.
2. Load images
Load the image:
# docker load -i /mnt/docker/images/nginx-latest.tar
Add a tag:
# docker tag nginx:latest 192.168.9.10/library/nginx:latest
Push the image:
# docker push 192.168.9.10/library/nginx:latest
3. Create a DaemonSet
(1) Create the manifest file
# vi daemonset-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-test
  labels:
    app: daemonset
spec:
  selector:
    matchLabels:
      name: ds-nginx
  template:
    metadata:
      labels:
        name: ds-nginx
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      containers:
      - name: c-ds-nginx
        image: 192.168.9.10/library/nginx:latest
(2) Create the DaemonSet
# kubectl apply -f daemonset-demo.yaml
daemonset.apps/daemonset-test created
(3) Query the related information
# kubectl get daemonset
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset-test 2 2 2 2 2 <none> 11s
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
daemonset-test-fc95h 1/1 Running 0 44s 10.244.1.72 node
daemonset-test-zvf2k 1/1 Running 0 44s 10.244.0.4 master
Lab 5-9: ConfigMaps (video: 16 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the VMs master and node. On each, set CD/DVD1 to docker.iso and CD/DVD2 to CentOS-7-x86_64-DVD-2009.iso as virtual optical drives.
2. Load images
Load the image:
# docker load -i /mnt/docker/images/busybox-latest.tar
Add a tag:
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
Push the image:
# docker push 192.168.9.10/library/busybox:latest
3. Create ConfigMaps
(1) Create a ConfigMap from a directory
1) Create a directory
# mkdir -p configmap
2) Create two files in the directory
# vi configmap/keystone.conf
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=service
username=nova
password=000000
# vi configmap/vnc.conf
enabled=true
server_listen=my_ip
server_proxyclient_address=my_ip
3) Create the ConfigMap
# kubectl create configmap configmap1 --from-file=configmap
configmap/configmap1 created
4) Query the ConfigMap's information
# kubectl describe configmaps configmap1
# kubectl get configmaps configmap1 -o yaml
(2) Create ConfigMaps from files
Create a ConfigMap without specifying a key:
# kubectl create configmap configmap2 --from-file=configmap/keystone.conf
Create a ConfigMap specifying a key:
# kubectl create configmap configmap3 --from-file=key1=configmap/keystone.conf
Query the ConfigMaps' information:
# kubectl get configmap configmap2 -o yaml
# kubectl get configmap configmap3 -o yaml
(3) Create a ConfigMap from an env-file
Create the ConfigMap:
# kubectl create configmap configmap4 --from-env-file=configmap/keystone.conf
Query the ConfigMap's information:
# kubectl get configmap configmap4 -o yaml
(4) Create a ConfigMap from literal values
Create the ConfigMap:
# kubectl create configmap configmap5 --from-literal=user=admin --from-literal=password=123456
Query the ConfigMap's information:
# kubectl get configmap configmap5 -o yaml
4. Use ConfigMaps
(1) Define a container environment variable from a specific ConfigMap entry
Create the manifest file:
# vi pod-config-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-config-1
spec:
  containers:
  - name: pod-config-1
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    command: [ "sleep", "infinity" ]
    env:
    - name: USER_NAME
      valueFrom:
        configMapKeyRef:
          name: configmap5
          key: user
Create the Pod:
# kubectl apply -f pod-config-1.yaml
Query the environment variables in the Pod:
# kubectl exec pod-config-1 -- env
(2) Configure all key-value pairs of a ConfigMap as container environment variables
Create the manifest file:
# vi pod-config-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-config-2
spec:
  containers:
  - name: pod-config-2
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    command: [ "sleep", "infinity" ]
    envFrom:
    - configMapRef:
        name: configmap5
Create the Pod:
# kubectl apply -f pod-config-2.yaml
pod/pod-config-2 created
Query the environment variables in the Pod:
# kubectl exec pod-config-2 -- env
(3) Use a ConfigMap in Pod commands
Create the manifest file:
# vi pod-config-3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-config-3
spec:
  containers:
  - name: pod-config-3
    image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    command: [ "/bin/sh", "-c", "echo $(USER_NAME)" ]
    env:
    - name: USER_NAME
      valueFrom:
        configMapKeyRef:
          name: configmap5
          key: user
Create the Pod:
# kubectl apply -f pod-config-3.yaml
pod/pod-config-3 created
Verify:
# kubectl logs pod-config-3
admin
(4) Populate a volume from a ConfigMap
Create the manifest file:
# vi pod-config-4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-config-4
spec:
  containers:
  - name: pod-config-4
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    command: [ "sleep", "infinity" ]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config
  volumes:
  - name: config-volume
    configMap:
      name: configmap1
Create the Pod:
# kubectl apply -f pod-config-4.yaml
List the files in the volume:
# kubectl exec pod-config-4 -- ls -l /etc/config
Lab 5-10: Secrets (video: 18 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the VMs master and node. On each, set CD/DVD1 to docker.iso and CD/DVD2 to CentOS-7-x86_64-DVD-2009.iso as virtual optical drives.
2. Load images
Load the images:
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/httpd-2.2.32.tar
Add tags:
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
# docker tag httpd:2.2.32 192.168.9.10/library/httpd:2.2.32
Push the images:
# docker push 192.168.9.10/library/busybox:latest
# docker push 192.168.9.10/library/httpd:2.2.32
3. Opaque Secrets
(1) Create an Opaque Secret
Create an Opaque Secret with a command:
# kubectl create secret generic opaque-secret-1 \
  --from-literal=username=admin \
  --from-literal=password=123456
To create an Opaque Secret from a manifest file, first base64-encode the values:
# echo 123456 | base64
# echo admin | base64
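Note that a plain `echo` appends a newline, and the newline is encoded into the Secret value; `echo -n` avoids this. The trailing `K` in `YWRtaW4K` used in the manifest below is the encoded newline:

```shell
# base64 of "admin\n" vs "admin": the trailing newline changes the encoding
echo admin | base64      # prints YWRtaW4K  (encodes "admin\n")
echo -n admin | base64   # prints YWRtaW4=  (encodes "admin")
```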
Manifest file using base64-encoded values:
# vi opaque-secret-2.yaml
apiVersion: v1
kind: Secret
metadata:
  name: opaque-secret-2
type: Opaque
data:
  username: YWRtaW4K
  password: MWYyZDFlMmU2N2Rm
Manifest file without base64 encoding (stringData values must be strings, so the numeric password is quoted):
# vi opaque-secret-3.yaml
apiVersion: v1
kind: Secret
metadata:
  name: opaque-secret-3
type: Opaque
stringData:
  username: admin
  password: "123456"
Manifest file that embeds a file:
# vi opaque-secret-4.yaml
apiVersion: v1
kind: Secret
metadata:
  name: opaque-secret-4
type: Opaque
stringData:
  config.yaml: |
    apiUrl: https://my.api.com/api/v1
    username: admin
    password: 123456
(2) Create the Secrets
# kubectl apply -f opaque-secret-2.yaml
# kubectl apply -f opaque-secret-3.yaml
# kubectl apply -f opaque-secret-4.yaml
(3) Query Opaque Secrets
List the Secrets:
# kubectl get secret
Show a Secret's details:
# kubectl get secret opaque-secret-1 -o yaml
4. Use a Secret as a volume
(1) Create the manifest file
# vi pod-secret-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-1
spec:
  containers:
  - name: pod-secret
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    args: ["sleep","infinity"]
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: opaque-secret-1
(2) Create the Pod
# kubectl apply -f pod-secret-1.yaml
(3) Inspect the volume's contents
# kubectl exec pod-secret-1 -- ls /etc/foo
# kubectl exec pod-secret-1 -- cat /etc/foo/username
5. Map Secret keys to specific paths and files
(1) Create the manifest file
# vi pod-secret-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-2
spec:
  containers:
  - name: pod-secret-1
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    args: ["sleep","infinity"]
    volumeMounts:
    - name: foo
      mountPath: /etc/foo
      readOnly: true
  volumes:
  - name: foo
    secret:
      secretName: opaque-secret-1
      items:
      - key: username
        path: u/username
(2) Create the Pod
# kubectl apply -f pod-secret-2.yaml
(3) Query the file and its contents
# kubectl exec pod-secret-2 -- ls -l /etc/foo/u/username
# kubectl exec pod-secret-2 -- cat /etc/foo/u/username
6. Use Secrets as environment variables
(1) Create the manifest file
# vi pod-secret-env.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secret-env
spec:
  containers:
  - name: pod-secret-env
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    args: ["sleep","infinity"]
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: opaque-secret-1
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: opaque-secret-1
          key: password
(2) Create the Pod
# kubectl apply -f pod-secret-env.yaml
(3) Query the environment variables in the Pod
# kubectl exec pod-secret-env -- env
7. TLS Secrets
(1) Create a private key and certificate
# openssl genrsa -out tls.key 2048
# openssl req -new -x509 -key tls.key -out tls.cert -days 360 -subj /CN=example.com
(2) Create the Secret
# kubectl create secret tls tls-secret --cert=tls.cert --key=tls.key
(3) Create an Ingress that uses the Secret
# vi ingress2.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
  - hosts:
    - www.ingress.com
    secretName: tls-secret
  rules:
  - host: www.ingress.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc-http
            port:
              number: 80
# kubectl apply -f ingress2.yaml
(4) Create a backend Pod and Service
# kubectl apply -f /mnt/docker/yaml/lab5-5/pod-http.yaml
# kubectl apply -f /mnt/docker/yaml/lab5-5/svc-http.yaml
(5) Test
Add "192.168.9.11 node www.ingress.com" to /etc/hosts.
# curl -k https://www.ingress.com
<html><body><h1>It works!</h1></body></html>
Lab 5-11: Pod Security (video: 21 min)
1. Lab environment preparation
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the VMs master and node. On each, set CD/DVD1 to docker.iso and CD/DVD2 to CentOS-7-x86_64-DVD-2009.iso as virtual optical drives.
2. Load images
Load the images:
# docker load -i /mnt/docker/images/busybox-latest.tar
# docker load -i /mnt/docker/images/nginx-latest.tar
Add tags:
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
# docker tag nginx:latest 192.168.9.10/library/nginx:latest
Push the images:
# docker push 192.168.9.10/library/busybox:latest
# docker push 192.168.9.10/library/nginx:latest
3. Security contexts
(1) Set a security context for a Pod
Create the manifest file:
# vi security-context-demo-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-1
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    fsGroup: 2000
  volumes:
  - name: sec-ctx-vol
    emptyDir: {}
  containers:
  - name: sec-ctx-demo
    image: 192.168.9.10/library/busybox
    command: [ "sh", "-c", "sleep 1h" ]
    volumeMounts:
    - name: sec-ctx-vol
      mountPath: /data/demo
    securityContext:
      allowPrivilegeEscalation: false
Create the Pod:
# kubectl apply -f security-context-demo-1.yaml
Query:
# kubectl exec security-context-demo-1 -- ls -ld /data/demo
drwxrwsrwx 2 root 2000 6 Aug 12 10:51 /data/demo
# kubectl exec security-context-demo-1 -- ps -A
PID   USER     TIME  COMMAND
    1 1000      0:00 sleep 1h
   14 1000      0:00 ps -A
(2) Set a security context for a container
Create the manifest file:
# vi security-context-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo-2
spec:
  securityContext:
    runAsUser: 1000
  containers:
  - name: sec-ctx-demo-2
    image: 192.168.9.10/library/busybox
    command: [ "sh", "-c", "sleep 1h" ]
    securityContext:
      runAsUser: 2000
      allowPrivilegeEscalation: false
Create the Pod:
# kubectl apply -f security-context-demo-2.yaml
Query (the container-level runAsUser: 2000 overrides the Pod-level 1000):
# kubectl exec security-context-demo-2 -- ps aux
PID   USER     TIME  COMMAND
    1 2000      0:00 sleep 1h
   27 2000      0:00 ps aux
4. Pod的RBAC访问控制
1查询是否启用了RBAC
# cat /etc/kubernetes/manifests/kube-apiserver.yaml |grep authorization-mode - --authorization-modeNode,RBAC
如果结果中没有RBAC则编辑/etc/kubernetes/manifests/kube-apiserver.yaml
# vi /etc/kubernetes/manifests/kube-apiserver.yaml
# kubectl apply -f /etc/kubernetes/manifests/kube-apiserver.yaml
(2) Create ServiceAccounts
Create a ServiceAccount with a command:
# kubectl create serviceaccount sa-demo-1
Create a ServiceAccount from a manifest file:
# vi sa-demo-2.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: sa-demo-2
# kubectl apply -f sa-demo-2.yaml
(3) Create a Role and a ClusterRole
# vi role-demo.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pv-reader
rules:
- apiGroups: [""]
  resources: ["persistentvolumes"]
  verbs: ["get", "watch", "list"]
# kubectl apply -f role-demo.yaml
(4) Create a RoleBinding
# vi rolebinding-demo.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: rolebinding-demo
subjects:
- kind: ServiceAccount
  name: sa-demo-1
  namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
# kubectl apply -f rolebinding-demo.yaml
(5) Create a ClusterRoleBinding
# vi clusterrolebinding-demo.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: clusterrolebinding-demo
subjects:
- kind: ServiceAccount
  name: sa-demo-1
  namespace: default
roleRef:
  kind: ClusterRole
  name: pv-reader
  apiGroup: rbac.authorization.k8s.io
# kubectl apply -f clusterrolebinding-demo.yaml
(6) Create the Pods
# vi pod-rbac-demo.yaml
kind: Pod
apiVersion: v1
metadata:
  name: rbac-pod1
spec:
  serviceAccountName: sa-demo-1
  containers:
  - name: rbac-pod-c1
    image: 192.168.9.10/library/nginx:latest
    imagePullPolicy: IfNotPresent
---
kind: Pod
apiVersion: v1
metadata:
  name: rbac-pod2
spec:
  serviceAccountName: sa-demo-2
  containers:
  - name: rbac-pod-c2
    image: 192.168.9.10/library/nginx:latest
    imagePullPolicy: IfNotPresent
# kubectl apply -f pod-rbac-demo.yaml
(7) Test
Query the Service:
# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 8d
Test from inside rbac-pod1:
# kubectl exec -it rbac-pod1 -- bash
root@node:/# token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
root@node:/# curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $token" https://10.96.0.1/api/v1/namespaces/default/pods
Test from inside rbac-pod2:
# kubectl exec -it rbac-pod2 -- bash
root@node:/# token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
root@node:/# curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $token" https://10.96.0.1/api/v1/namespaces/default/pods
4. Kubernetes用户的RBAC访问控制
1创建和查询User
# kubectl config set-credentials test
# kubectl config get-users
2设定User的证书和密钥
创建密钥
# openssl genrsa -out a.key 2048
创建签名请求文件
# openssl req -new -key a.key -out a.csr -subj /CNtest/Oaaa
生成证书
# openssl x509 -req -in a.csr -CA /etc/kubernetes/pki/ca.crt -CAkey /etc/kubernetes/pki/ca.key -CAcreateserial -out a.crt -days 180
设置用户的证书和密钥
# clientcertificatedata$(cat a.crt|base64 --wrap0)
# clientkeydata$(cat a.key | base64 --wrap0)
# kubectl config set users.test.client-key-data $clientkeydata
# kubectl config set users.test.client-certificate-data $clientcertificatedata
Query the configuration:
# kubectl config view
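The `client-certificate-data` and `client-key-data` fields in the kubeconfig must hold the whole base64 blob on a single line, which is why the commands above pass `--wrap=0` (GNU base64 otherwise wraps its output at 76 columns). A small standalone sketch of the round trip, using a placeholder string instead of a real certificate:

```shell
# GNU base64 wraps output every 76 characters by default;
# --wrap=0 keeps the whole blob on one line, as kubeconfig expects.
data=$(printf 'certificate' | base64 --wrap=0)
echo "$data"                          # single line, no embedded newline

decoded=$(printf '%s' "$data" | base64 -d)
echo "$decoded"                       # back to the original text
```

`kubectl config view` redacts these fields by default; add `--raw` to see the encoded values.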
(3) Bind the User to a Role
# kubectl create rolebinding testrolebinding --role=pod-reader --user=test
# kubectl create clusterrolebinding testclusterrolebinding --clusterrole=pv-reader --user=test
(4) Create and query a context.
# kubectl config set-context test-context
# kubectl config get-contexts
(5) Set the context's cluster and user.
# kubectl config set contexts.test-context.cluster kubernetes
# kubectl config set contexts.test-context.user test
(6) Set and query the current context.
# kubectl config current-context
# kubectl config use-context test-context
(7) Test
# kubectl get pods
# kubectl get pods -A
(8) Restore the context and test
# kubectl config use-context kubernetes-admin@kubernetes
# kubectl config current-context
# kubectl get pods -A
Lab 5-12 Resource Management (video: 17 minutes)
1. Prepare the lab environment
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the virtual machines master and node. On master, attach docker.iso to CD/DVD 1 and CentOS-7-x86_64-DVD-2009.iso to CD/DVD 2 as virtual drives.
2. Load the image
Load the image:
# docker load -i /mnt/docker/images/nginx-latest.tar
Add a tag:
# docker tag nginx:latest 192.168.9.10/library/nginx:latest
Push the image:
# docker push 192.168.9.10/library/nginx:latest
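The tag added above is what tells `docker push` where to send the image: the first path component of the reference is the registry host (here the Harbor instance at 192.168.9.10). A quick sketch of how such a reference decomposes, using plain shell parameter expansion on the name from this lab:

```shell
# An image reference's leading path component is the registry host;
# docker push uses it to pick the destination registry.
img=192.168.9.10/library/nginx:latest

registry=${img%%/*}     # strip everything after the first '/'  -> 192.168.9.10
rest=${img#*/}          # strip the registry prefix            -> library/nginx:latest
repo=${rest%:*}         # strip the tag                        -> library/nginx
tag=${img##*:}          # keep only the tag                    -> latest
echo "$registry $repo $tag"
```

Without the registry prefix, `docker push nginx:latest` would try to push to Docker Hub instead of the local Harbor.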
3. Pod resource requests and limits
(1) Create a namespace
# kubectl create ns ns1
(2) Query node resources
# kubectl describe node master
# kubectl describe node node
(3) Create the Pod
Create the template file:
# vi resource-pod-demo-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: resource-pod-demo-1
  namespace: ns1
spec:
  containers:
  - name: app
    image: 192.168.9.10/library/nginx:latest
    resources:
      requests:
        memory: 64Mi
        cpu: 250m
      limits:
        memory: 128Mi
        cpu: 500m
Create the Pod:
# kubectl apply -f resource-pod-demo-1.yaml
(4) Query node resources again
# kubectl describe node master
# kubectl describe node node
(5) Delete the Pod
# kubectl delete -f resource-pod-demo-1.yaml
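In the manifest above, `250m` means 250 millicores (0.25 of a CPU) and `64Mi` is 64 mebibytes (64 × 2^20 bytes). The "Allocated resources" section of `kubectl describe node` simply sums these quantities across Pods. A cluster-free sketch of that arithmetic:

```shell
# 'm' suffix = millicores (1000m = 1 full CPU); 'Mi' = mebibytes (2^20 bytes).
# Sum a request and a limit the way `kubectl describe node` tallies them.
req=250m; lim=500m
total_mcpu=$(( ${req%m} + ${lim%m} ))    # strip the 'm' suffix, then add
echo "${total_mcpu}m"

mem_mi=64
mem_bytes=$(( mem_mi * 1024 * 1024 ))    # 64Mi expressed in bytes
echo "$mem_bytes"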
4. Resource quotas
(1) Enable resource quotas (enabled by default)
# vi /etc/kubernetes/manifests/kube-apiserver.yaml
Add ResourceQuota to the --enable-admission-plugins option.
This is a static Pod manifest, so the kubelet detects the change and restarts kube-apiserver automatically; no kubectl command is needed.
(2) Create ResourceQuota objects
Create the template file:
# vi resourcequota-demo.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-high
  spec:
    hard:
      cpu: "1000"
      memory: 200Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["high"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-medium
  spec:
    hard:
      cpu: "10"
      memory: 20Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["medium"]
- apiVersion: v1
  kind: ResourceQuota
  metadata:
    name: pods-low
  spec:
    hard:
      cpu: "5"
      memory: 10Gi
      pods: "10"
    scopeSelector:
      matchExpressions:
      - operator: In
        scopeName: PriorityClass
        values: ["low"]
Create and query the ResourceQuota objects:
# kubectl apply -f resourcequota-demo.yaml
# kubectl describe quota
(3) Create a Pod with priority class high
Create the template file:
# vi resourcequota-pod-demo.yaml
kind: PriorityClass
apiVersion: scheduling.k8s.io/v1
metadata:
  name: high
value: 1000000
preemptionPolicy: Never
globalDefault: false
---
apiVersion: v1
kind: Pod
metadata:
  name: high-priority
spec:
  containers:
  - name: high-priority
    image: 192.168.9.10/library/nginx:latest
    resources:
      requests:
        memory: 10Gi
        cpu: 500m
      limits:
        memory: 10Gi
        cpu: 500m
  priorityClassName: high
Create the Pod and query the ResourceQuota:
# kubectl apply -f resourcequota-pod-demo.yaml
# kubectl describe quota
5. LimitRange
(1) Create a namespace
# kubectl create namespace constraints-cpu-example
(2) Create the LimitRange
# vi LimitRange-demo.yaml
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-min-max-demo-lr
spec:
  limits:
  - max:
      cpu: 800m
    min:
      cpu: 200m
    type: Container
# kubectl apply -f LimitRange-demo.yaml --namespace=constraints-cpu-example
Query the LimitRange details:
# kubectl get limitrange cpu-min-max-demo-lr -o yaml -n constraints-cpu-example
(3) Create a Pod
# vi LimitRange-pod-demo-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo
spec:
  containers:
  - name: constraints-cpu-demo-ctr
    image: 192.168.9.10/library/nginx
    resources:
      limits:
        cpu: 800m
      requests:
        cpu: 500m
# kubectl apply -f LimitRange-pod-demo-1.yaml --namespace=constraints-cpu-example
Query the Pod's details:
# kubectl get pod constraints-cpu-demo -o yaml -n constraints-cpu-example
    resources:
      limits:
        cpu: 800m
      requests:
        cpu: 500m
(4) Create a Pod that exceeds the maximum CPU limit
# vi LimitRange-pod-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo-2
spec:
  containers:
  - name: constraints-cpu-demo-2-ctr
    image: 192.168.9.10/library/nginx
    resources:
      limits:
        cpu: "1.5"
      requests:
        cpu: 500m
# kubectl apply -f LimitRange-pod-demo-2.yaml --namespace=constraints-cpu-example
(5) Create a Pod that does not satisfy the minimum CPU request
# vi LimitRange-pod-demo-3.yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo-3
spec:
  containers:
  - name: constraints-cpu-demo-3-ctr
    image: 192.168.9.10/library/nginx
    resources:
      limits:
        cpu: 800m
      requests:
        cpu: 100m
# kubectl apply -f LimitRange-pod-demo-3.yaml --namespace=constraints-cpu-example
(6) Create a Pod that declares neither a CPU request nor a CPU limit
# vi LimitRange-pod-demo-4.yaml
apiVersion: v1
kind: Pod
metadata:
  name: constraints-cpu-demo-4
spec:
  containers:
  - name: constraints-cpu-demo-4-ctr
    image: 192.168.9.10/library/nginx
# kubectl apply -f LimitRange-pod-demo-4.yaml --namespace=constraints-cpu-example
Query the Pod's details:
# kubectl get pod constraints-cpu-demo-4 -n constraints-cpu-example -o yaml
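The admission check behind steps (3)-(5) boils down to "min ≤ request and limit ≤ max" per container. The sketch below is a hypothetical helper (not kube-apiserver code) that applies the 200m/800m bounds from cpu-min-max-demo-lr to the three Pods above:

```shell
# Hypothetical sketch of the LimitRange check in this lab:
# a container is admitted only if min <= request and limit <= max.
MIN=200; MAX=800   # millicores, from cpu-min-max-demo-lr

check_cpu() {      # usage: check_cpu <request_millicores> <limit_millicores>
  if [ "$1" -lt "$MIN" ] || [ "$2" -gt "$MAX" ]; then
    echo rejected
  else
    echo admitted
  fi
}

check_cpu 500 800    # constraints-cpu-demo:   within bounds
check_cpu 500 1500   # constraints-cpu-demo-2: limit above max
check_cpu 100 800    # constraints-cpu-demo-3: request below min
```

Pod (6) is the special case: with nothing declared, the LimitRange's defaults are injected before this check runs, which is why `kubectl get pod ... -o yaml` shows resource values the manifest never set.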
Lab 5-13 Pod Scheduling (video: 17 minutes)
1. Prepare the lab environment
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the virtual machines master and node. On master, attach docker.iso to CD/DVD 1 and CentOS-7-x86_64-DVD-2009.iso to CD/DVD 2 as virtual drives.
2. Load the image
Load the image:
# docker load -i /mnt/docker/images/busybox-latest.tar
Add a tag:
# docker tag busybox:latest 192.168.9.10/library/busybox:latest
Push the image:
# docker push 192.168.9.10/library/busybox:latest
3. Schedule a Pod to a node using a label selector
(1) Add labels to the nodes
# kubectl label node master app=app1
# kubectl label node node app=app2
(2) Create a Pod with a nodeSelector field
Create the template file:
# vi schedule-pod-demo-1.yaml
apiVersion: v1
kind: Pod
metadata:
  name: schedule-pod-demo-1
  labels:
    env: test
spec:
  containers:
  - name: c-1
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
  nodeSelector:
    app: app1
Create the Pod:
# kubectl apply -f schedule-pod-demo-1.yaml
Query the Pod:
# kubectl get pod/schedule-pod-demo-1 -o wide
Delete the Pod:
# kubectl delete pod/schedule-pod-demo-1
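A nodeSelector is the simplest scheduling constraint: every key=value pair in it must appear verbatim among the node's labels (a logical AND; there are no operators). The snippet below is a hypothetical illustration of that matching rule, using the labels set in step (1):

```shell
# Hypothetical sketch of nodeSelector matching: every selector pair
# must be present, verbatim, in the node's label set.
node_labels="app=app1 kubernetes.io/hostname=master"

matches() {   # usage: matches "<space-separated key=value pairs>"
  local pair
  for pair in $1; do
    case " $node_labels " in
      *" $pair "*) ;;            # this pair is present; keep checking
      *) echo no; return ;;      # one miss fails the whole selector
    esac
  done
  echo yes
}

matches "app=app1"    # schedule-pod-demo-1 fits the master node
matches "app=app2"    # it would not fit a node labeled app=app1
```

If no node satisfies the selector, the Pod simply stays Pending, which is easy to observe with `kubectl get pod -o wide`.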
4. Schedule a Pod to a node using nodeName
(1) Create a Pod with a nodeName field
Create the template file:
# vi schedule-pod-demo-2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: schedule-pod-demo-2
  labels:
    env: test
spec:
  containers:
  - name: c-2
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
  nodeName: master
Create the Pod:
# kubectl apply -f schedule-pod-demo-2.yaml
Query the Pod:
# kubectl get pod/schedule-pod-demo-2 -o wide
Delete the Pod:
# kubectl delete pod/schedule-pod-demo-2
5. Node affinity
(1) Create a Pod with node affinity
Create the template file:
# vi node-affinity-pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-affinity-pod-demo
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: app
            operator: In
            values:
            - app1
  containers:
  - name: node-affinity-pod-demo
    image: 192.168.9.10/library/busybox:latest
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
Create the Pod:
# kubectl apply -f node-affinity-pod-demo.yaml
Query the Pod:
# kubectl get pod/node-affinity-pod-demo -o wide
Delete the Pod:
# kubectl delete pod/node-affinity-pod-demo
6. Inter-pod affinity and anti-affinity
(1) Inter-pod affinity
Create the template file:
# vi pod-affinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-1
  labels:
    app: pod-affinity
spec:
  nodeName: master
  containers:
  - name: pod-affinity-1
    image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-2
  labels:
    app: pod-affinity
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - pod-affinity
        topologyKey: kubernetes.io/hostname
  containers:
  - name: pod-affinity-2
    image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
Create the Pods:
# kubectl apply -f pod-affinity-demo.yaml
Query the Pods:
# kubectl get pods -o wide
Delete the Pods:
# kubectl delete -f pod-affinity-demo.yaml
(2) Inter-pod anti-affinity
Create the template file:
# vi pod-antiaffinity-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-3
  labels:
    app: pod-affinity
spec:
  nodeName: master
  containers:
  - name: pod-affinity-3
    image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-4
  labels:
    app: pod-affinity
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: app
            operator: In
            values:
            - pod-affinity
        topologyKey: kubernetes.io/hostname
  containers:
  - name: pod-affinity-4
    image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
Create the Pods:
# kubectl apply -f pod-antiaffinity-demo.yaml
Query the Pods:
# kubectl get pods -o wide
Delete the Pods:
# kubectl delete -f pod-antiaffinity-demo.yaml
7. Taints and tolerations
(1) Add a taint to the master node
# kubectl taint nodes master taint-1=test:NoSchedule
(2) Create two Pods
1) Create the template file:
# vi pod-taint.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-taint-1
  labels:
    app: pod-taint
spec:
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: app
            operator: In
            values:
            - app1
  containers:
  - name: pod-taint-1
    image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-taint-2
  labels:
    app: pod-taint
spec:
  tolerations:
  - key: taint-1
    operator: Equal
    value: test
    effect: NoSchedule
  affinity:
    nodeAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        preference:
          matchExpressions:
          - key: app
            operator: In
            values:
            - app1
  containers:
  - name: pod-taint-2
    image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
2) Create the Pods and query them:
# kubectl apply -f pod-taint.yaml
# kubectl get pods -o wide
(3) Remove the taint from the master node
# kubectl taint nodes master taint-1=test:NoSchedule-
(4) Recreate the Pods and query them:
# kubectl delete -f pod-taint.yaml
# kubectl apply -f pod-taint.yaml
# kubectl get pods -o wide
(5) Delete the Pods:
# kubectl delete -f pod-taint.yaml
8. Priority
(1) Create PriorityClass objects
# vi priority-demo.yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority-nonpreempting
value: 1000000
preemptionPolicy: Never
globalDefault: false
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority
value: 1000000
globalDefault: false
# kubectl apply -f priority-demo.yaml
(2) Use the PriorityClass in a Pod
# vi pod-priority-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-priority-demo
  labels:
    env: test
spec:
  containers:
  - name: pod-priority-demo
    image: 192.168.9.10/library/busybox
    imagePullPolicy: IfNotPresent
    args: ["sleep", "infinity"]
  priorityClassName: high-priority
# kubectl apply -f pod-priority-demo.yaml
# kubectl get pod/pod-priority-demo -o yaml
# kubectl delete -f pod-priority-demo.yaml
Lab 5-14 Deploying WordPress (video: 8 minutes)
1. Prepare the lab environment
(1) VMware network settings
Open VMware Workstation and choose "Edit" → "Virtual Network Editor" from the menu.
Set the subnet IP of VMnet8 to 192.168.9.0/24.
Set the subnet IP of VMnet1 to 192.168.30.0/24.
(2) Virtual machine preparation
Use the virtual machines master and node. On master, attach docker.iso to CD/DVD 1 and CentOS-7-x86_64-DVD-2009.iso to CD/DVD 2 as virtual drives.
2. Load the images
Load the images:
# docker load -i /mnt/docker/images/mysql-5.6.tar
# docker load -i /mnt/docker/images/wordpress-latest.tar
Add tags:
# docker tag mysql:5.6 192.168.9.10/library/mysql:5.6
# docker tag wordpress:latest 192.168.9.10/library/wordpress:latest
Push the images:
# docker push 192.168.9.10/library/wordpress:latest
# docker push 192.168.9.10/library/mysql:5.6
3. Create the Secret
# kubectl create secret generic mysql-pass --from-literal=password=123456
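Kubernetes stores Secret values base64-encoded in the API object, so `kubectl get secret mysql-pass -o yaml` shows an encoded string under `.data.password`, not the literal. The encoding can be reproduced locally without a cluster:

```shell
# Secret data is stored base64-encoded; this reproduces the value that
# would appear under .data.password for the literal created above.
encoded=$(printf '%s' 123456 | base64)
echo "$encoded"

decoded=$(printf '%s' "$encoded" | base64 -d)
echo "$decoded"
```

Note that base64 is an encoding, not encryption; anyone who can read the Secret object can recover the password.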
4. Create the persistent volumes
Configure the NFS server as in Lab 1-3, then use the template file below to create two PVs.
# vi wordpress-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-1
spec:
  capacity:
    storage: 2Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.9.10
    path: /share
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-2
spec:
  capacity:
    storage: 3Gi
  persistentVolumeReclaimPolicy: Retain
  accessModes:
  - ReadWriteOnce
  - ReadOnlyMany
  nfs:
    server: 192.168.9.10
    path: /share
5. Create MySQL
(1) Create the template file:
# vi mysql-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  ports:
  - port: 3306
  selector:
    app: wordpress
    tier: mysql
  clusterIP: None
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    app: wordpress
spec:
  storageClassName: managed-nfs-storage
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-mysql
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - image: 192.168.9.10/library/mysql:5.6
        name: mysql
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-pv-claim
(2) Create MySQL:
# kubectl apply -f mysql-deployment.yaml
6. Create WordPress
(1) Create the template file:
# vi wordpress-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30888
  selector:
    app: wordpress
    tier: frontend
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wp-pv-claim
  labels:
    app: wordpress
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: managed-nfs-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  selector:
    matchLabels:
      app: wordpress
      tier: frontend
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: frontend
    spec:
      containers:
      - image: 192.168.9.10/library/wordpress
        name: wordpress
        env:
        - name: WORDPRESS_DB_HOST
          value: wordpress-mysql
        - name: WORDPRESS_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 80
          name: wordpress
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: wp-pv-claim
(2) Create WordPress:
# kubectl apply -f wordpress-deployment.yaml
7. Access
Open http://192.168.9.10:30888 in a browser.