ZooKeeper 3.8.0 Cluster Deployment

1. Environment Preparation

An odd number of nodes is recommended for a ZooKeeper ensemble (it uses majority quorum, so an even count adds no extra fault tolerance). Here we deploy a 3-node ZooKeeper cluster on the following machines:

Hostname   IP               Architecture  Operating System
hadoop01   192.168.194.133  x86_64        CentOS Linux 7 (Core)
hadoop02   192.168.194.134  x86_64        CentOS Linux 7 (Core)
hadoop03   192.168.194.135  x86_64        CentOS Linux 7 (Core)

ZooKeeper is written in Java, so a Java runtime must be in place before deploying it. Here we install OpenJDK 8; download addresses:

https://www.openlogic.com/openjdk-downloads

# To speed up the download, you can also fetch it from a mirror inside China, e.g.:
https://mirrors.tuna.tsinghua.edu.cn/Adoptium

After downloading OpenJDK, extract it to a suitable directory and configure the environment variables (this must be done on all three nodes), e.g.:

# Append the following to /etc/profile
export JAVA_HOME=/opt/openjdk-8u332-b09
export PATH=$PATH:$JAVA_HOME/bin

# After saving /etc/profile, apply it with
source /etc/profile

# To verify that the variables took effect, use the java or javac command
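The verification mentioned above can be run directly; the exact output wording varies by JDK build, but for the directory configured here it should identify itself as an 8u332 build:

```shell
# Both commands should resolve from $JAVA_HOME/bin and print the JDK version
java -version
javac -version
```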

2. Deploying ZooKeeper

First, download ZooKeeper from the official site, choosing the desired version; version 3.8.0 is used here. Download addresses:

# Official download page
https://zookeeper.apache.org/releases.html#download

# If the download is slow, a mirror inside China can be used instead, e.g.:
https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper

# Note: download the archive whose name ends in -bin; the one without -bin is the source package!

After downloading, extract the archive on one server first, e.g. hadoop01 (192.168.194.133). Here it is extracted into /opt/bigdata, giving:

/opt/bigdata/zookeeper-3.8.0
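A sketch of this step, run on hadoop01. Note that the -bin tarball extracts to a directory named apache-zookeeper-3.8.0-bin (standard Apache naming), which is then renamed to match the shorter zookeeper-3.8.0 path used throughout this guide:

```shell
mkdir -p /opt/bigdata
# Assumes the tarball has already been downloaded to the current directory
tar -xzf apache-zookeeper-3.8.0-bin.tar.gz -C /opt/bigdata
# Rename the extracted directory to the path used in the rest of this guide
mv /opt/bigdata/apache-zookeeper-3.8.0-bin /opt/bigdata/zookeeper-3.8.0
```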

2.1 Modify the Configuration

Under the zookeeper-3.8.0 directory, create two new folders, data and logs, to serve as the data directory and the transaction-log directory respectively.

Copy zoo_sample.cfg in the conf folder to zoo.cfg:

cp zoo_sample.cfg zoo.cfg

Then edit zoo.cfg so that it reads:

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/bigdata/zookeeper-3.8.0/data
dataLogDir=/opt/bigdata/zookeeper-3.8.0/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# https://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

## Metrics Providers
#
# https://prometheus.io Metrics Exporter
#metricsProvider.className=org.apache.zookeeper.metrics.prometheus.PrometheusMetricsProvider
#metricsProvider.httpHost=0.0.0.0
#metricsProvider.httpPort=7000
#metricsProvider.exportJvmInfo=true

# Hostnames are used here, so every node's /etc/hosts must map these names to IPs
server.1=hadoop01:2888:3888
server.2=hadoop02:2888:3888
server.3=hadoop03:2888:3888
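For the server.N lines above to resolve, each node's /etc/hosts needs the hostname mappings; based on the IP table in section 1, the entries would look like:

```
192.168.194.133 hadoop01
192.168.194.134 hadoop02
192.168.194.135 hadoop03
```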

Then create a file named myid in the data directory and write 1 into it (matching this host's server.1 entry).
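A minimal sketch, assuming the data directory path configured above:

```shell
# On hadoop01: the id must match this host's server.N line in zoo.cfg
mkdir -p /opt/bigdata/zookeeper-3.8.0/data
echo 1 > /opt/bigdata/zookeeper-3.8.0/data/myid
cat /opt/bigdata/zookeeper-3.8.0/data/myid
```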

Next, copy the zookeeper-3.8.0 directory to the hadoop02 and hadoop03 servers:

scp -r zookeeper-3.8.0 root@hadoop02:/opt/bigdata/
scp -r zookeeper-3.8.0 root@hadoop03:/opt/bigdata/

Note: after copying, change the content of the myid file under data to 2 on hadoop02, and to 3 on hadoop03.
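Assuming SSH access as root (the same access the scp step uses), the remote edits can be done in one line per host:

```shell
# Each node's myid must match its own server.N entry in zoo.cfg
ssh root@hadoop02 'echo 2 > /opt/bigdata/zookeeper-3.8.0/data/myid'
ssh root@hadoop03 'echo 3 > /opt/bigdata/zookeeper-3.8.0/data/myid'
```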

2.2 Configure Environment Variables

The ZooKeeper environment variables must be configured on all three nodes, i.e. add the following to /etc/profile:

export ZK_HOME=/opt/bigdata/zookeeper-3.8.0

# PATH must be changed to
export PATH=$PATH:$JAVA_HOME/bin:$ZK_HOME/bin

# Make the variables take effect
source /etc/profile

2.3 Startup

Because the three ZooKeeper nodes communicate with each other, it is simplest to turn off the firewall on all three nodes:

systemctl stop firewalld
systemctl disable firewalld
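If disabling the firewall entirely is not acceptable, an alternative sketch (using firewalld, the default on CentOS 7) is to open only the three ports ZooKeeper uses: 2181 for clients, 2888 for follower-to-leader traffic, and 3888 for leader election:

```shell
firewall-cmd --permanent --add-port=2181/tcp   # client connections
firewall-cmd --permanent --add-port=2888/tcp   # quorum peer traffic
firewall-cmd --permanent --add-port=3888/tcp   # leader election
firewall-cmd --reload
```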

Then start the ZooKeeper service on each of the three nodes in turn:

zkServer.sh start

Once all are started, check each node's status:

zkServer.sh status

# If Mode in the output is follower, the node is a follower; if leader, it is the leader. For example:
ZooKeeper JMX enabled by default
Using config: /opt/bigdata/zookeeper-3.8.0/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
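As a final smoke test, the bundled CLI can create and read a znode through any node; the /demo path and its value are just illustrative names:

```shell
# zkCli.sh accepts a single command after the connection options
zkCli.sh -server hadoop01:2181 create /demo "hello"
zkCli.sh -server hadoop01:2181 get /demo
```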
Author: lzj09

Published: 2022-07-13

Updated: 2022-07-13
