To make our services highly available we need a distributed storage setup. Our current environment rules out Ceph (all of our machines use hardware RAID controller cards, and Ceph should not run on disks that are not passed through directly), so we chose GlusterFS as the distributed storage cluster solution.
Hardware requirements
1. Platform: LXC
The official tutorial covers Xen, VMware ESX and Workstation, VirtualBox, and KVM; a privileged LXC container also works, so there is no hard requirement here.
2. Disks: 2
VirtIO disks (ignore this for LXC): one 32 GB disk for the system is enough, plus a storage disk sized however you like.
3. NICs: 2
Dual NICs are not strictly required; they only serve to separate management traffic from data traffic.
This example uses three nodes, with IPs 172.27.0.201-203.
Important: do not simply clone KVM virtual machines, because the clones end up with identical UUIDs and that leads to errors!
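If you suspect two nodes were cloned from the same image, one thing worth checking once glusterd is installed and has started (later in this guide) is that every node reports a unique glusterd UUID. The file path and the regeneration trick below are a hedged sketch based on common GlusterFS behavior, not part of the original steps:

# every node should print a different UUID= line here
cat /var/lib/glusterd/glusterd.info

# if two nodes share a UUID, a commonly used fix is to let glusterd
# generate a fresh one BEFORE the node joins the cluster:
systemctl stop glusterd
rm /var/lib/glusterd/glusterd.info
systemctl start glusterd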
Virtual machine preparation
1. Deploy an LXC container from the official Proxmox CentOS 7 template, make it a privileged container, and enable the FUSE feature (the Proxmox CLI equivalent is sketched after this list).
2. Add the NICs and a mount point at /storage with quota enabled.
3. After starting the containers, test connectivity between them:
[root@Gluster-JS-1001 ~]# ping 172.27.0.202
PING 172.27.0.202 (172.27.0.202) 56(84) bytes of data.
64 bytes from 172.27.0.202: icmp_seq=1 ttl=64 time=0.610 ms
64 bytes from 172.27.0.202: icmp_seq=2 ttl=64 time=0.212 ms
^C
--- 172.27.0.202 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.212/0.411/0.610/0.199 ms
[root@Gluster-JS-1001 ~]# ping 172.27.0.203
PING 172.27.0.203 (172.27.0.203) 56(84) bytes of data.
64 bytes from 172.27.0.203: icmp_seq=1 ttl=64 time=0.693 ms
64 bytes from 172.27.0.203: icmp_seq=2 ttl=64 time=0.441 ms
^C
--- 172.27.0.203 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1006ms
rtt min/avg/max/mdev = 0.441/0.567/0.693/0.126 ms
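For reference, steps 1 and 2 look roughly like this when done from the Proxmox CLI. This is a hedged sketch: the VMID 201, the template file name, the storage name local-lvm, the bridge names, the addresses and the disk size are placeholders for whatever your environment actually uses.

# create a privileged CentOS 7 container (unprivileged=0)
pct create 201 local:vztmpl/centos-7-default_20190926_amd64.tar.xz \
    --hostname Gluster-JS-1001 --unprivileged 0 \
    --net0 name=eth0,bridge=vmbr0,ip=172.27.0.201/24,gw=172.27.0.1

# enable FUSE inside the container
pct set 201 --features fuse=1

# optional second NIC for storage traffic, and the /storage mount point with quota
pct set 201 --net1 name=eth1,bridge=vmbr1,ip=10.0.0.201/24
pct set 201 --mp0 local-lvm:100,mp=/storage,quota=1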
Install GlusterFS
The official tutorial is linked here. This article is updated from time to time, and the versions used below were the latest at the time of writing; if you want something newer, get it from the official CentOS tutorial.
1. Add the GlusterFS repository, update, and install GlusterFS:
yum install centos-release-gluster -y && yum update -y && yum install glusterfs-server -y
2. Start the glusterd service and enable it at boot:
systemctl enable glusterd && systemctl start glusterd
3. Check the glusterd service status:
systemctl status glusterd
The output looks like this:
[root@Gluster-JS-1001 ~]# systemctl status glusterd
● glusterd.service - GlusterFS, a clustered file-system server
   Loaded: loaded (/usr/lib/systemd/system/glusterd.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-11-21 19:35:33 UTC; 46s ago
     Docs: man:glusterd(8)
  Process: 925 ExecStart=/usr/sbin/glusterd -p /var/run/glusterd.pid --log-level $LOG_LEVEL $GLUSTERD_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 926 (glusterd)
   CGroup: /system.slice/glusterd.service
           └─926 /usr/sbin/glusterd -p /var/run/glusterd.pid --log-level INFO...

Nov 21 19:35:33 Gluster-JS-1001 systemd[1]: Starting GlusterFS, a clustered ....
Nov 21 19:35:33 Gluster-JS-1001 systemd[1]: Started GlusterFS, a clustered f....
Hint: Some lines were ellipsized, use -l to show in full.
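As a quick sanity check (not in the original steps), you can confirm the version the repository pulled in and, if firewalld happens to be active inside the containers, make sure the nodes can reach each other's GlusterFS ports:

# print the installed GlusterFS version
gluster --version

# only needed if firewalld is running; recent firewalld releases ship a
# predefined glusterfs service covering the management and brick ports
firewall-cmd --permanent --add-service=glusterfs && firewall-cmd --reload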
Configure the GlusterFS cluster
1. Probe the peer nodes:
On 201:
[root@Gluster-JS-1001 ~]# gluster peer probe 172.27.0.202
peer probe: success.
On 202:
[root@Gluster-JS-2001 ~]# gluster peer probe 172.27.0.201
peer probe: success. Host 172.27.0.201 port 24007 already in peer list
[root@Gluster-JS-2001 ~]# gluster peer probe 172.27.0.203
peer probe: success.
2. Check the cluster status:
[root@Gluster-JS-1001 ~]# gluster peer status
Number of Peers: 2

Hostname: 172.27.0.202
Uuid: 0147f0d0-10de-43b6-8bf4-0bc23671ee48
State: Peer in Cluster (Connected)

Hostname: 172.27.0.203
Uuid: efb59bd2-cc35-4ae0-8d55-4d03955dc1c9
State: Peer in Cluster (Connected)
3. Create the brick directories and the replicated volume:
mkdir /storage/glusterfs    # run this on all three nodes
# the following only needs to be run on one node
gluster volume create gv0 replica 3 172.27.0.201:/storage/glusterfs 172.27.0.202:/storage/glusterfs 172.27.0.203:/storage/glusterfs
# volume create: gv0: success: please start the volume to access data
4. Check the volume info:
[root@Gluster-JS-1001 ~]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: c321e9fe-0c24-43cd-aa9f-ee5f7b5dc1ff
Status: Created
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.27.0.201:/storage/glusterfs
Brick2: 172.27.0.202:/storage/glusterfs
Brick3: 172.27.0.203:/storage/glusterfs
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
5. Start the volume:
# gluster volume start gv0
volume start: gv0: success
6. Set the client IP whitelist:
gluster volume set gv0 auth.allow 172.27.0.1,172.27.1.1,172.27.2.1
# volume set: success
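Before mounting anything, it is worth confirming that the bricks are actually online and that the whitelist was applied. These two checks are not part of the original steps, just standard gluster CLI calls:

# every brick and self-heal daemon should report Online: Y
gluster volume status gv0

# show the auth.allow value that was just set
gluster volume get gv0 auth.allow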
Test the volume
# mount the volume on a client (here one of the nodes itself)
mount -t glusterfs 172.27.0.201:/gv0 /mnt

# copy 100 files onto the volume
for i in `seq -w 1 100`; do cp -rp /var/log/messages /mnt/copy-test-$i; done

# count the files on the mount, then look at the local brick
ls -lA /mnt | wc -l
ls -lA /storage/glusterfs
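Because gv0 is a three-way replica, the ls -lA /mnt | wc -l count should reflect the 100 copied files (plus the "total" line that ls -l prints), and /storage/glusterfs on every node should contain the same 100 files. If you want the client mount to persist across reboots, a typical fstab entry looks like this (a hedged example; adjust the server address and mount point to your setup):

# /etc/fstab -- mount the GlusterFS volume at boot, after the network is up
172.27.0.201:/gv0  /mnt  glusterfs  defaults,_netdev  0 0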