Environment:

IP | Hostname
---|---
192.168.1.5 | zk1
192.168.1.6 | zk2
192.168.1.7 | zk3
Concepts are not covered again here; we go straight to the steps. At the end of the article you will find some commonly used ZooKeeper tuning parameters.
I. Preparing the environment
① Distribute the environment
[root@kuting1 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.5 zk1
192.168.1.6 zk2
192.168.1.7 zk3
[root@kuting1 ~]# ssh-keygen -t rsa
[root@kuting1 ~]# for i in `tail -3 /etc/hosts | awk '{print $2}'`; do ssh-copy-id $i; done
[root@kuting1 ~]# for i in `tail -3 /etc/hosts | awk '{print $2}'`; do scp /etc/hosts $i:/etc/hosts; done
② Download the zookeeper tarball
[root@kuting1 ~]# wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.11/zookeeper-3.4.11.tar.gz
[root@kuting1 ~]# tar zxf zookeeper-3.4.11.tar.gz
[root@kuting1 ~]# mkdir -p /data/server                   # program directory
[root@kuting1 ~]# mkdir -p /data/data/zookeeper/0{0..2}   # data directories
[root@kuting1 ~]# mkdir -p /data/logs/zookeeper/0{0..2}   # log directories
③ ZooKeeper requires a Java environment with the java command on the default path; the Java installation itself is omitted here.
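As a sanity check for step ①, the host-list extraction that the ssh-copy-id/scp loops rely on can be exercised in isolation. This sketch runs against a throwaway copy of the hosts file (the /tmp/hosts.demo path is made up for the demo) so it works anywhere:

```shell
# Write a sample hosts file with the same layout as /etc/hosts above.
cat > /tmp/hosts.demo <<'EOF'
127.0.0.1 localhost
::1 localhost
192.168.1.5 zk1
192.168.1.6 zk2
192.168.1.7 zk3
EOF
# The cluster hosts are the last three entries; print the hostname column.
tail -3 /tmp/hosts.demo | awk '{print $2}'
```

This prints zk1, zk2 and zk3, the same list the distribution loops iterate over.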
II. Building a single-host pseudo-distributed ZooKeeper cluster
① Configure zookeeper00
[root@kuting1 ~]# mv zookeeper-3.4.11 /data/server/zookeeper00
[root@kuting1 ~]# cd /data/server/zookeeper00/conf/
[root@kuting1 conf]# cp zoo_sample.cfg zoo.cfg
[root@kuting1 conf]# vim zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/data/zookeeper/00
dataLogDir=/data/logs/zookeeper/00
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890
[root@kuting1 ~]# echo 1 > /data/data/zookeeper/00/myid
② Configure zookeeper01
[root@kuting1 ~]# cp -rf /data/server/zookeeper00 /data/server/zookeeper01
[root@kuting1 ~]# vim /data/server/zookeeper01/conf/zoo.cfg
The file is identical to zookeeper00's zoo.cfg except for these lines:
dataDir=/data/data/zookeeper/01    # per-instance directory name
dataLogDir=/data/logs/zookeeper/01
clientPort=2182                    # the client port must be changed
[root@kuting1 ~]# echo 2 > /data/data/zookeeper/01/myid
③ Configure zookeeper02
[root@kuting1 ~]# cp -rf /data/server/zookeeper00 /data/server/zookeeper02
[root@kuting1 ~]# vim /data/server/zookeeper02/conf/zoo.cfg
Again identical to zookeeper00's zoo.cfg except for:
dataDir=/data/data/zookeeper/02
dataLogDir=/data/logs/zookeeper/02
clientPort=2183
[root@kuting1 ~]# echo 3 > /data/data/zookeeper/02/myid
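The three instance setups above differ only in dataDir, dataLogDir, clientPort and myid, so for repeatable builds the whole layout can be generated in one loop. A minimal sketch; the /tmp/zkdemo base path is invented so the demo runs anywhere, and a real deployment would use the /data paths above:

```shell
base=/tmp/zkdemo
for i in 0 1 2; do
  # Per-instance conf and data directories.
  mkdir -p $base/zookeeper0$i/conf $base/data/0$i
  # Generate a zoo.cfg; only the directories and client port vary.
  cat > $base/zookeeper0$i/conf/zoo.cfg <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=$base/data/0$i
dataLogDir=$base/logs/0$i
clientPort=$((2181 + i))
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.5:2889:3889
server.3=192.168.1.5:2890:3890
EOF
  # myid must match this instance's server.N entry.
  echo $((i + 1)) > $base/data/0$i/myid
done
grep clientPort $base/zookeeper0*/conf/zoo.cfg
```

The final grep shows the three distinct client ports (2181, 2182, 2183), one per instance.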
④ Test the setup
Start the first node
[root@kuting1 ~]# cd /data/server/zookeeper00/bin
[root@kuting1 bin]# ./zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper00/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@kuting1 bin]# ss -anpt | grep java
LISTEN 0 50 :::2181 :::* users:(("java",pid=13285,fd=25))
LISTEN 0 50 :::46475 :::* users:(("java",pid=13285,fd=19))
LISTEN 0 50 ::ffff:192.168.1.5:3888 :::* users:(("java",pid=13285,fd=26))
Start the second node
[root@kuting1 bin]# ../../zookeeper01/bin/zkServer.sh start
Start the third node
[root@kuting1 bin]# ../../zookeeper02/bin/zkServer.sh start
[root@kuting1 bin]# ss -anpt | grep java
LISTEN 0 50 :::37985 :::* users:(("java",pid=14163,fd=19))
LISTEN 0 50 :::2181 :::* users:(("java",pid=13285,fd=25))
LISTEN 0 50 :::2182 :::* users:(("java",pid=14163,fd=25))
LISTEN 0 50 :::2183 :::* users:(("java",pid=14207,fd=25))
LISTEN 0 50 ::ffff:192.168.1.5:2889 :::* users:(("java",pid=14163,fd=28))
LISTEN 0 50 :::46475 :::* users:(("java",pid=13285,fd=19))
LISTEN 0 50 ::ffff:192.168.1.5:3888 :::* users:(("java",pid=13285,fd=26))
LISTEN 0 50 ::ffff:192.168.1.5:3889 :::* users:(("java",pid=14163,fd=26))
LISTEN 0 50 ::ffff:192.168.1.5:3890 :::* users:(("java",pid=14207,fd=26))
LISTEN 0 50 :::42517 :::* users:(("java",pid=14207,fd=19))
ESTAB 0 0 ::ffff:192.168.1.5:41592 ::ffff:192.168.1.5:3888 users:(("java",pid=14207,fd=27))
ESTAB 0 0 ::ffff:192.168.1.5:38194 ::ffff:192.168.1.5:2889 users:(("java",pid=14207,fd=29))
ESTAB 0 0 ::ffff:192.168.1.5:41080 ::ffff:192.168.1.5:3889 users:(("java",pid=14207,fd=28))
ESTAB 0 0 ::ffff:192.168.1.5:41584 ::ffff:192.168.1.5:3888 users:(("java",pid=14163,fd=27))
ESTAB 0 0 ::ffff:192.168.1.5:3889 ::ffff:192.168.1.5:41080 users:(("java",pid=14163,fd=30))
ESTAB 0 0 ::ffff:192.168.1.5:2889 ::ffff:192.168.1.5:38194 users:(("java",pid=14163,fd=31))
ESTAB 0 0 ::ffff:192.168.1.5:38188 ::ffff:192.168.1.5:2889 users:(("java",pid=13285,fd=28))
ESTAB 0 0 ::ffff:192.168.1.5:2889 ::ffff:192.168.1.5:38188 users:(("java",pid=14163,fd=29))
ESTAB 0 0 ::ffff:192.168.1.5:3888 ::ffff:192.168.1.5:41584 users:(("java",pid=13285,fd=27))
ESTAB 0 0 ::ffff:192.168.1.5:3888 ::ffff:192.168.1.5:41592 users:(("java",pid=13285,fd=29))
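To pull just the listening ports out of ss output like the above, a small awk filter is enough. It is demonstrated here against a captured sample string so it runs without a live cluster; against real output you would pipe `ss -anpt | grep java` into the same filter:

```shell
# Two sample lines in the format printed by `ss -anpt`.
sample='LISTEN 0 50 :::2181 :::* users:(("java",pid=13285,fd=25))
LISTEN 0 50 ::ffff:192.168.1.5:3888 :::* users:(("java",pid=13285,fd=26))'
# Field 4 is the local address; the port is its last colon-separated part.
echo "$sample" | awk '$1 == "LISTEN" { n = split($4, a, ":"); print a[n] }'
```

This prints 2181 and 3888, the client port and election port of the first instance.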
⑤ Create some test znodes
Log in to the first node (client port 2181) and create a znode
[root@kuting1 00]# ./zkCli.sh -server 127.0.0.1:2181
[zk: 127.0.0.1:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 127.0.0.1:2181(CONNECTED) 1] create /data test-data
Created /data
[zk: 127.0.0.1:2181(CONNECTED) 2] ls /
[zookeeper, data]
[zk: 127.0.0.1:2181(CONNECTED) 3] quit
Log in to the second node (client port 2182) and check whether the znode has been synchronized
[root@kuting1 bin]# ./zkCli.sh -server 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] ls /
[zookeeper, data]
[zk: 127.0.0.1:2182(CONNECTED) 1] get /data
test-data        # the data is consistent
cZxid = 0x100000002
ctime = Sat Aug 04 18:31:39 CST 2018
mZxid = 0x100000002
mtime = Sat Aug 04 18:31:39 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 127.0.0.1:2182(CONNECTED) 2] quit
Log in to the third node (client port 2183) and check whether the znode has been synchronized
[root@kuting1 bin]# ./zkCli.sh -server 127.0.0.1:2183
[zk: 127.0.0.1:2183(CONNECTED) 0] ls /
[zookeeper, data]
[zk: 127.0.0.1:2183(CONNECTED) 1] get /data
test-data
cZxid = 0x100000002
ctime = Sat Aug 04 18:31:39 CST 2018
mZxid = 0x100000002
mtime = Sat Aug 04 18:31:39 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 127.0.0.1:2183(CONNECTED) 2] quit
⑥ Check the cluster status
Cluster status and leader/follower roles are checked with ./zkServer.sh status. Running it node by node is tedious with several instances, so a small shell script batches the command:
[root@kuting1 ~]# cat checkzk.sh
#!/bin/bash
n=(0 1 2)
for i in ${n[@]}; do
  echo $i
  /data/server/zookeeper0$i/bin/zkServer.sh status
done
[root@kuting1 ~]# chmod +x checkzk.sh
[root@kuting1 ~]# ./checkzk.sh
0
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper00/bin/../conf/zoo.cfg
Mode: follower
1
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper01/bin/../conf/zoo.cfg
Mode: leader
2
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper02/bin/../conf/zoo.cfg
Mode: follower
One node has been elected leader and the others follow, and the data is in sync: the single-host pseudo-distributed cluster is complete.
III. Building a distributed ZooKeeper cluster
① Prepare the environment
The hosts file and SSH keys were already distributed while building the single-host version. Turn off the firewall, and make sure all three machines have a Java environment.
Remove the pseudo-distributed instances, keeping one to copy to the other machines
[root@kuting1 ~]# rm -rf /data/server/zookeeper0{1..2}
[root@kuting1 ~]# mv /data/server/zookeeper00/ /data/server/zookeeper
On the other nodes, create the /data/server program directory, the /data/data/zookeeper data directory and the /data/logs/zookeeper log directory
[root@kuting1 ~]# rsync -az /data/server/zookeeper zk2:/data/server/
[root@kuting1 ~]# rsync -az /data/server/zookeeper zk3:/data/server/
On each node, append the following variables to the end of /etc/profile and reload it with source
export ZOOKEEPER_HOME=/data/server/zookeeper
export JAVA_HOME=/data/server/java
export PATH=$PATH:/data/server/java/bin:/data/server/zookeeper/bin
② Configure zookeeper1
[root@kuting1 ~]# vim /data/server/zookeeper/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/data/data/zookeeper/
dataLogDir=/data/logs/zookeeper/
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
autopurge.purgeInterval=48
server.1=192.168.1.5:2888:3888
server.2=192.168.1.6:2888:3888
server.3=192.168.1.7:2888:3888
Note: the server.N lines only assign each host an ID; which node becomes leader is decided by election, not by the ID number.
[root@kuting1 ~]# echo 1 > /data/data/zookeeper/myid
③ Configure zookeeper2
[root@kuting2 ~]# vim /data/server/zookeeper/conf/zoo.cfg
The zoo.cfg contents are identical on all three nodes.
[root@kuting2 ~]# echo 2 > /data/data/zookeeper/myid
④ Configure zookeeper3
[root@kuting3 ~]# vim /data/server/zookeeper/conf/zoo.cfg
The zoo.cfg contents are identical on all three nodes.
[root@kuting3 ~]# echo 3 > /data/data/zookeeper/myid
⑤ Start the cluster and test
[root@kuting1 conf]# zkServer.sh start
[root@kuting2 conf]# zkServer.sh start
[root@kuting3 conf]# zkServer.sh start
Check the status on all three nodes (an election runs right after startup; make sure the firewall is off)
[root@kuting1 zookeeper]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@kuting2 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@kuting3 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
⑥ Create a znode and check that it is synchronized
[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.7:2181
[zk: 192.168.1.7:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: 192.168.1.7:2181(CONNECTED) 1] create /real-culster real-data
Created /real-culster
[zk: 192.168.1.7:2181(CONNECTED) 2] ls /
[zookeeper, real-culster]
[zk: 192.168.1.7:2181(CONNECTED) 3] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.7:2181(CONNECTED) 4] quit
[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.5:2181
[zk: 192.168.1.5:2181(CONNECTED) 0] ls /
[zookeeper, real-culster]
[zk: 192.168.1.5:2181(CONNECTED) 1] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.5:2181(CONNECTED) 2] quit
[root@kuting3 zookeeper]# zkCli.sh -server 192.168.1.6:2181
[zk: 192.168.1.6:2181(CONNECTED) 0] ls /
[zookeeper, real-culster]
[zk: 192.168.1.6:2181(CONNECTED) 1] get /real-culster
real-data
cZxid = 0x100000002
ctime = Sat Sep 29 11:09:40 CST 2018
mZxid = 0x100000002
mtime = Sat Sep 29 11:09:40 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 9
numChildren = 0
[zk: 192.168.1.6:2181(CONNECTED) 2] quit
The other two nodes have synchronized the znode.
⑦ Test election and high availability
Stop the leader node
[root@kuting1 zookeeper]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@kuting1 zookeeper]# zkServer.sh stop
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
Now check the follower nodes to see whether one has been elected leader
[root@kuting2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader
[root@kuting2 ~]# ip a | grep inet | grep ens33$
inet 192.168.1.6/24 brd 192.168.1.255 scope global noprefixroute ens33
[root@kuting1 conf]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@kuting1 conf]# ip a | grep inet | grep ens33$
inet 192.168.1.5/24 brd 192.168.1.255 scope global noprefixroute ens33
A follower has been elected leader. Now start the stopped ZooKeeper again
[root@kuting3 zookeeper]# zkServer.sh start
[root@kuting3 zookeeper]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: follower
It does not become leader again; it rejoins the cluster as a follower
[root@kuting2 ~]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/server/zookeeper/bin/../conf/zoo.cfg
Mode: leader
The leader role has not changed: nodes only switch roles during an election.
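The behaviour above follows from ZooKeeper's majority-quorum rule: the ensemble keeps serving as long as a strict majority of servers is alive, which is why this 3-node cluster survives one failure and simply re-elects a leader. A quick calculation:

```shell
# Quorum size for an ensemble of N servers: floor(N/2) + 1.
quorum() { echo $(( $1 / 2 + 1 )); }

for n in 3 5 7; do
  q=$(quorum $n)
  echo "ensemble=$n quorum=$q tolerates=$((n - q)) failure(s)"
done
```

This also shows why even ensemble sizes buy nothing: 4 servers need a quorum of 3 and still tolerate only 1 failure, the same as 3 servers.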
IV. Building a ZooKeeper cluster with docker
① Install docker and the docker-compose tool
② Write a docker-compose.yml for the zk cluster
[root@kuting1 ~]# docker pull zookeeper
[root@kuting1 ~]# mkdir -p /data/docker/docker-compose/zookeeper-cluster
[root@kuting1 ~]# cd $_
[root@kuting1 zookeeper-cluster]# vim docker-compose.yml
version: '2'
services:
  zk1:
    image: zookeeper
    restart: always
    container_name: zk1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
  zk2:
    image: zookeeper
    restart: always
    container_name: zk2
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
  zk3:
    image: zookeeper
    restart: always
    container_name: zk3
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zk1:2888:3888 server.2=zk2:2888:3888 server.3=zk3:2888:3888
[root@kuting1 zookeeper-cluster]# docker-compose up -d
[root@kuting1 zookeeper-cluster]# docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------------
zk1 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2181->2181/tcp, 2888/tcp, 3888/tcp
zk2 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2182->2181/tcp, 2888/tcp, 3888/tcp
zk3 /docker-entrypoint.sh zkSe ... Up 0.0.0.0:2183->2181/tcp, 2888/tcp, 3888/tcp
③ Check the zookeeper cluster status
[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2182
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:40116[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: follower
Node count: 4
[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2181
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:55510[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/33/66
Received: 3
Sent: 2
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: follower
Node count: 4
[root@kuting1 zookeeper-cluster]# echo stat | nc 127.0.0.1 2183
Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
Clients:
 /172.18.0.1:34678[0](queued=0,recved=1,sent=0)
Latency min/avg/max: 0/0/0
Received: 1
Sent: 0
Connections: 1
Outstanding: 0
Zxid: 0x100000002
Mode: leader
Node count: 4
Proposal sizes last/min/max: 32/32/36
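When scripting health checks against stat output like the above, usually only the Mode: line matters. A sketch of extracting it, shown against a captured sample so no live ensemble is needed; against a real cluster you would replace the sample with the output of `echo stat | nc 127.0.0.1 2181`:

```shell
# Captured fragment of `echo stat | nc <host> <port>` output.
sample='Zookeeper version: 3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03
Mode: leader
Node count: 4'
# Print the value after "Mode: ".
echo "$sample" | awk -F': ' '/^Mode/ { print $2 }'
```

Looping this over ports 2181-2183 gives a one-shot role report for the whole compose cluster.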
④ Check znode data synchronization
[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2181    # port 2181 is zk1; the others follow the port mappings in the yml file
[zk: 127.0.0.1:2181(CONNECTED) 3] create /data test-data
[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2182
[zk: 127.0.0.1:2182(CONNECTED) 0] ls /
[zookeeper, data]
[root@kuting1 ~]# zkCli.sh -server 127.0.0.1:2183
[zk: 127.0.0.1:2183(CONNECTED) 0] ls /
[zookeeper, data]
The cluster state and data are both correct; the setup is complete.
V. Common ZooKeeper tuning parameters
① tickTime: the heartbeat interval, in milliseconds, between ZooKeeper servers and between clients and servers; one heartbeat is sent every tickTime. ZooKeeper sessions work much like web sessions, and the minimum session expiry is twice tickTime.
② initLimit: the maximum number of ticks (multiples of tickTime) a follower may take to connect and sync to the leader when it first joins. If the follower exceeds initLimit * tickTime, the connection fails.
③ syncLimit: the maximum number of ticks allowed between a request and its acknowledgement when the leader and a follower exchange messages. A follower that cannot communicate with the leader within syncLimit * tickTime is dropped.
④ 4lw.commands.whitelist: the whitelist of four-letter-word commands; anything not listed is disabled, e.g. 4lw.commands.whitelist=stat,ruok,conf,isro
⑤ Server names and addresses: the cluster membership list (server ID, server address, leader/follower communication port, election port).
⑥ These entries use a special format, server.N=YYY:A:B, where N is the server ID, YYY the address, A the leader/follower port and B the election port:
server.1=itcast05:2888:3888
server.2=itcast06:2888:3888
server.3=itcast07:2888:3888
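The real-world timeouts implied by parameters ①-③ are simple products of tickTime. This sketch computes them for the values used throughout this article:

```shell
tickTime=2000   # ms per tick
initLimit=10    # ticks a follower may take for initial sync
syncLimit=5     # ticks allowed between a request and its ack

echo "initial sync window    : $(( tickTime * initLimit )) ms"
echo "request/ack window     : $(( tickTime * syncLimit )) ms"
echo "minimum session timeout: $(( tickTime * 2 )) ms"
```

So with the settings above a follower gets 20 s to join the ensemble, is dropped after 10 s of silence, and client sessions cannot expire faster than 4 s.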