Two nodes are actually enough for my GlusterFS use case, but the cluster I originally set up has three, so here I remove one node.
I ran into a few problems along the way, so this is a record of the process.
The node to remove is 172.27.0.202. First, check the cluster status:
[root@Gluster-JS-1001 ~]# gluster peer status
Number of Peers: 2

Hostname: 172.27.0.202
Uuid: 0147f0d0-10de-43b6-8bf4-0bc23671ee48
State: Peer in Cluster (Connected)

Hostname: 172.27.0.203
Uuid: efb59bd2-cc35-4ae0-8d55-4d03955dc1c9
State: Peer in Cluster (Connected)
There are two peers, and the one to remove is 202. My first attempt was to detach it directly from node 201:
[root@Gluster-JS-1001 ~]# gluster peer detach 172.27.0.202
All clients mounted through the peer which is getting detached need to be remounted using one of the other active peers in the trusted storage pool to ensure client gets notification on any changes done on the gluster configuration and if the same has been done do you want to proceed? (y/n) y
peer detach: failed: Brick(s) with the peer 172.27.0.202 exist in cluster
This failed because the peer still hosts a brick of the volume, so the brick has to be removed from the volume first. Start by checking the volume info:
[root@Gluster-JS-1001 ~]# gluster volume info

Volume Name: gv0
Type: Replicate
Volume ID: c321e9fe-0c24-43cd-aa9f-ee5f7b5dc1ff
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 172.27.0.201:/storage/glusterfs
Brick2: 172.27.0.202:/storage/glusterfs
Brick3: 172.27.0.203:/storage/glusterfs
Options Reconfigured:
auth.allow: 172.27.0.1,172.27.1.1,172.27.2.1
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
Then remove the 202 brick from the volume, dropping the volume from three replicas to two (the replica parameter in the command):
[root@Gluster-JS-1001 ~]# gluster volume remove-brick gv0 replica 2 172.27.0.202:/storage/glusterfs force
Replica 2 volumes are prone to split-brain. Use Arbiter or Replica 3 to avoid this. See: http://docs.gluster.org/en/latest/Administrator%20Guide/Split%20brain%20and%20ways%20to%20deal%20with%20it/.
Do you still want to continue? (y/n) y
volume remove-brick commit force: success
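At this point it is worth a quick sanity check on the volume. A minimal sketch, assuming the same volume name; the exact output will depend on your setup:

# Expect "Number of Bricks: 1 x 2 = 2" and no 172.27.0.202 entry in the brick list
gluster volume info gv0

# Clients that were mounted through 172.27.0.202 should be remounted via
# 172.27.0.201 or 172.27.0.203, as the detach prompt above warned.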
That did it: the brick is out of the volume.
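To finish removing the node, the peer detach that failed earlier should now go through, since 202 no longer holds any bricks. A minimal sketch, assuming it is run from node 201:

# Answer y at the remount prompt; it should now report "peer detach: success"
gluster peer detach 172.27.0.202

# Verify the trusted storage pool is down to one peer from 201's point of view
gluster peer status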