Deploying ZooKeeper on Kubernetes


1. Official Documentation

  https://kubernetes.io/zh-cn/docs/tutorials/stateful-application/zookeeper/

2. Kubernetes Deployment

2.1. Deployment Manifest

zookeeper.yaml

apiVersion: v1
kind: Service
metadata:
  name: zk-hs
  labels:
    app: zk
spec:
  ports:
    - port: 2888
      name: server
    - port: 3888
      name: leader-election
  clusterIP: None
  selector:
    app: zk
---

apiVersion: v1
kind: Service
metadata:
  name: zk-cs
  labels:
    app: zk
spec:
  ports:
    - port: 2181
      name: client
  selector:
    app: zk

---
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: zk-pdb
spec:
  selector:
    matchLabels:
      app: zk
  maxUnavailable: 1

---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk
spec:
  selector:
    matchLabels:
      app: zk
  serviceName: zk-hs
  replicas: 3
  updateStrategy:
    type: RollingUpdate
  podManagementPolicy: OrderedReady
  template:
    metadata:
      labels:
        app: zk
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: "app"
                    operator: In
                    values:
                      - zk
              topologyKey: "kubernetes.io/hostname"
      containers:
        - name: kubernetes-zookeeper
          imagePullPolicy: Always
          image: "registry.k8s.io/kubernetes-zookeeper:1.0-3.4.10"
          resources:
            requests:
              memory: "200Mi"
              cpu: "0.1"
          ports:
            - containerPort: 2181
              name: client
            - containerPort: 2888
              name: server
            - containerPort: 3888
              name: leader-election
          command:
            - sh
            - -c
            - "start-zookeeper \
              --servers=3 \
              --data_dir=/var/lib/zookeeper/data \
              --data_log_dir=/var/lib/zookeeper/data/log \
              --conf_dir=/opt/zookeeper/conf \
              --client_port=2181 \
              --election_port=3888 \
              --server_port=2888 \
              --tick_time=2000 \
              --init_limit=10 \
              --sync_limit=5 \
              --heap=512M \
              --max_client_cnxns=60 \
              --snap_retain_count=3 \
              --purge_interval=12 \
              --max_session_timeout=40000 \
              --min_session_timeout=4000 \
              --log_level=INFO"
          readinessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          livenessProbe:
            exec:
              command:
                - sh
                - -c
                - "zookeeper-ready 2181"
            initialDelaySeconds: 10
            timeoutSeconds: 5
          volumeMounts:
            - name: datadir
              mountPath: /var/lib/zookeeper
  volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 200Mi
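Once applied, the headless service zk-hs gives each StatefulSet replica a stable DNS name of the form `<pod>.<serviceName>.<namespace>.svc.cluster.local`, which is exactly what the `server.N` entries in zoo.cfg are built from. A minimal sketch, assuming the `default` namespace and the names used in the manifest above:

```shell
# Derive the expected zoo.cfg server entries from the StatefulSet
# name (zk), headless service (zk-hs), and replica count (3).
REPLICAS=3
for i in $(seq 0 $((REPLICAS - 1))); do
  # ZooKeeper server ids are 1-based while StatefulSet ordinals are 0-based
  echo "server.$((i + 1))=zk-$i.zk-hs.default.svc.cluster.local:2888:3888"
done
```

These generated lines should match the tail of the zoo.cfg shown in section 3.1.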

2.2. Deployment Notes

2.2.1. PersistentVolume configuration:

ZooKeeper depends on PersistentVolumes: three PersistentVolumes must be created first to satisfy the volumeClaimTemplates below.

volumeClaimTemplates:
    - metadata:
        name: datadir
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 200Mi

2.2.2. PersistentVolume deployment:

PersistentVolume.yaml
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv1
  labels:
    type: zookeeper
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/kafka_data1"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv2
  labels:
    type: zookeeper
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/kafka_data2"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv3
  labels:
    type: zookeeper
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/kafka_data3"

The hostPath of each of the three PersistentVolumes must be a distinct path; only if the volumes live on different machines can the same path be reused.

sudo kubectl apply -f PersistentVolume.yaml
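Since the three PV definitions differ only in their ordinal (name and hostPath), the manifest can also be generated with a small loop instead of being written out by hand. A sketch, using the same names and paths as the manifest above:

```shell
# Generate PersistentVolume.yaml: three hostPath PVs that differ
# only in the trailing ordinal of their name and path.
OUT=PersistentVolume.yaml
: > "$OUT"
for i in 1 2 3; do
  cat >> "$OUT" <<EOF
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kafka-pv$i
  labels:
    type: zookeeper
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/kafka_data$i"
EOF
done
```

The result is applied the same way with `kubectl apply -f PersistentVolume.yaml`.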

2.2.3. User configuration:

By default the container runs ZooKeeper as its internal zookeeper user, with uid 1000 and gid 1000:

securityContext:
  runAsUser: 1000
  fsGroup: 1000

That user must be granted access to the hostPath directories backing the PersistentVolumes:

sudo chown -R 1000:1000 /opt/kafka_data1
sudo chown -R 1000:1000 /opt/kafka_data2
sudo chown -R 1000:1000 /opt/kafka_data3
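The three directories can be created and re-owned in one loop. The sketch below introduces a BASE variable (an assumption, so it can be tried outside /opt); on the real node BASE would be /opt and the chown must run as root:

```shell
# Create the hostPath directories and hand them to uid/gid 1000
# (the in-container zookeeper user). BASE=/opt on the real node.
BASE="${BASE:-$(mktemp -d)}"
for i in 1 2 3; do
  mkdir -p "$BASE/kafka_data$i"
  # chown requires root on the real node; fall back to a note otherwise
  chown -R 1000:1000 "$BASE/kafka_data$i" 2>/dev/null \
    || echo "chown of $BASE/kafka_data$i needs root (use sudo on the node)"
done
ls "$BASE"
```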

2.2.4. Affinity configuration:

affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
            - key: "app"
              operator: In
              values:
                - zk
        topologyKey: "kubernetes.io/hostname"

The manifest ships with a required podAntiAffinity rule, so at most one ZooKeeper pod can be scheduled on each node.

If you have fewer nodes than replicas, remove this rule before deploying; I only had two nodes, so I deleted the affinity section.
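An alternative to deleting the rule outright is to make it a soft preference: the scheduler then tries to spread the pods across nodes but will co-locate them rather than leave replicas Pending. A sketch, reusing the same labels as the manifest above:

```yaml
affinity:
  podAntiAffinity:
    # preferred (soft) instead of required (hard): pods spread across
    # nodes when possible, but can still share a node if they must.
    preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchExpressions:
              - key: "app"
                operator: In
                values:
                  - zk
          topologyKey: "kubernetes.io/hostname"
```

Note that co-located replicas weaken the cluster's fault tolerance: losing that node can take down more than one ZooKeeper server at once.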

2.3. Deploy

  sudo kubectl apply -f zookeeper.yaml

3. Verifying Basic Functionality

3.1. View a pod's configuration file

sudo kubectl exec -it zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it zk-0 -- cat /opt/zookeeper/conf/zoo.cfg
    #This file was autogenerated DO NOT EDIT
    clientPort=2181
    dataDir=/var/lib/zookeeper/data
    dataLogDir=/var/lib/zookeeper/data/log
    tickTime=2000
    initLimit=10
    syncLimit=5
    maxClientCnxns=60
    minSessionTimeout=4000
    maxSessionTimeout=40000
    autopurge.snapRetainCount=3
    autopurge.purgeInteval=12
    server.1=zk-0.zk-hs.default.svc.cluster.local:2888:3888
    server.2=zk-1.zk-hs.default.svc.cluster.local:2888:3888
    server.3=zk-2.zk-hs.default.svc.cluster.local:2888:3888
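In this configuration, initLimit and syncLimit are counted in ticks of tickTime milliseconds: followers get 20 s to connect to the leader and sync at startup, and may lag at most 10 s before being considered out of sync. A quick check of the effective windows:

```shell
# tick-based timing parameters from the zoo.cfg above
TICK_MS=2000
INIT_LIMIT=10   # ticks a follower may take to connect and sync
SYNC_LIMIT=5    # ticks a follower may lag behind the leader
echo "initLimit window: $((TICK_MS * INIT_LIMIT)) ms"   # 20000 ms
echo "syncLimit window: $((TICK_MS * SYNC_LIMIT)) ms"   # 10000 ms
```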

3.2. Get the fully qualified hostnames of the ZooKeeper pods

for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
 qiteck@server:~/program/docker_service/zookeeper$ for i in 0 1 2; do kubectl exec zk-$i -- hostname -f; done
    zk-0.zk-hs.default.svc.cluster.local
    zk-1.zk-hs.default.svc.cluster.local
    zk-2.zk-hs.default.svc.cluster.local

 

3.3. View cluster status: the roles of the three nodes

for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
 qiteck@server:~/program/docker_service/zookeeper$ for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: follower
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: leader
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: follower

3.4. View each myid

for i in 0 1 2; do echo -n "zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
 qiteck@server:~/program/docker_service/zookeeper$ for i in 0 1 2; do echo -n "zk-$i"; kubectl exec zk-$i -- cat /var/lib/zookeeper/data/myid; done
    zk-01
    zk-11
    zk-21
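Because `echo -n` suppresses the newline, each line above is the pod name immediately followed by its myid: zk-0 has myid 1, zk-1 has myid 2, and so on. The image derives myid from the StatefulSet ordinal plus one (ZooKeeper ids are 1-based), which a sketch makes explicit:

```shell
# myid = StatefulSet ordinal + 1 (ZooKeeper server ids are 1-based)
for i in 0 1 2; do
  echo "zk-$i -> myid $((i + 1))"
done
```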

3.5. Ping the ZooKeeper hosts from another pod

sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-0.zk-hs.default.svc.cluster.local
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-0.zk-hs.default.svc.cluster.local
    PING zk-0.zk-hs.default.svc.cluster.local (10.244.1.9) 56(84) bytes of data.
    64 bytes from zk-0.zk-hs.default.svc.cluster.local (10.244.1.9): icmp_seq=1 ttl=64 time=0.340 ms
    64 bytes from zk-0.zk-hs.default.svc.cluster.local (10.244.1.9): icmp_seq=2 ttl=64 time=0.071 ms
    64 bytes from zk-0.zk-hs.default.svc.cluster.local (10.244.1.9): icmp_seq=3 ttl=64 time=0.053 ms
    ^C
    --- zk-0.zk-hs.default.svc.cluster.local ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2031ms
sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-1.zk-hs.default.svc.cluster.local
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-1.zk-hs.default.svc.cluster.local
    PING zk-1.zk-hs.default.svc.cluster.local (10.244.1.10) 56(84) bytes of data.
    64 bytes from 10-244-1-10.zk-cs.default.svc.cluster.local (10.244.1.10): icmp_seq=1 ttl=64 time=0.226 ms
    64 bytes from 10-244-1-10.zk-cs.default.svc.cluster.local (10.244.1.10): icmp_seq=2 ttl=64 time=0.044 ms
    64 bytes from 10-244-1-10.zk-cs.default.svc.cluster.local (10.244.1.10): icmp_seq=3 ttl=64 time=0.042 ms
    ^C
    --- zk-1.zk-hs.default.svc.cluster.local ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2038ms
    rtt min/avg/max/mdev = 0.042/0.104/0.226/0.086 ms
sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-2.zk-hs.default.svc.cluster.local
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it account-6468847985-2fqpf -- ping zk-2.zk-hs.default.svc.cluster.local
    PING zk-2.zk-hs.default.svc.cluster.local (10.244.1.11) 56(84) bytes of data.
    64 bytes from zk-2.zk-hs.default.svc.cluster.local (10.244.1.11): icmp_seq=1 ttl=64 time=0.308 ms
    64 bytes from zk-2.zk-hs.default.svc.cluster.local (10.244.1.11): icmp_seq=2 ttl=64 time=0.080 ms
    64 bytes from zk-2.zk-hs.default.svc.cluster.local (10.244.1.11): icmp_seq=3 ttl=64 time=0.295 ms
    ^C
    --- zk-2.zk-hs.default.svc.cluster.local ping statistics ---
    3 packets transmitted, 3 received, 0% packet loss, time 2024ms
    rtt min/avg/max/mdev = 0.080/0.227/0.308/0.105 ms

4. Verifying Cluster Functionality

4.1. Create data in pod zk-0

 

Enter pod zk-0, start the zkCli.sh client, run create /zk-test hsssss, then check the result with get /zk-test:
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it zk-0 -- /bin/bash
    root@zk-0:/# zkCli.sh
    Connecting to localhost:2181
    2023-01-31 08:49:55,163 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
    2023-01-31 08:49:55,166 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-0.zk-hs.default.svc.cluster.local
    2023-01-31 08:49:55,167 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
    2023-01-31 08:49:55,170 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
    2023-01-31 08:49:55,170 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
    2023-01-31 08:49:55,170 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
    2023-01-31 08:49:55,171 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
    2023-01-31 08:49:55,171 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
    2023-01-31 08:49:55,171 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
    2023-01-31 08:49:55,172 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
    2023-01-31 08:49:55,172 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
    2023-01-31 08:49:55,172 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.15.0-50-generic
    2023-01-31 08:49:55,173 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
    2023-01-31 08:49:55,173 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
    2023-01-31 08:49:55,173 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
    2023-01-31 08:49:55,175 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
    Welcome to ZooKeeper!
    2023-01-31 08:49:55,234 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    JLine support is enabled
    2023-01-31 08:49:55,410 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
    [zk: localhost:2181(CONNECTING) 0] 2023-01-31 08:49:55,494 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x18606fc8b4a0000, negotiated timeout = 30000
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    
    [zk: localhost:2181(CONNECTED) 0]
    [zk: localhost:2181(CONNECTED) 0] create /zk-test hsssss
    Created /zk-test
    [zk: localhost:2181(CONNECTED) 1] get /zk-test
    hsssss
    cZxid = 0x100000002
    ctime = Tue Jan 31 08:50:53 UTC 2023
    mZxid = 0x100000002
    mtime = Tue Jan 31 08:50:53 UTC 2023
    pZxid = 0x100000002
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 6
    numChildren = 0

 

4.2. Read the data from pod zk-1

Enter pod zk-1, start the zkCli.sh client, and verify the data with get /zk-test:
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it zk-1 -- /bin/bash
    root@zk-1:/# zkCli.sh
    Connecting to localhost:2181
    2023-01-31 08:51:55,701 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
    2023-01-31 08:51:55,710 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-1.zk-hs.default.svc.cluster.local
    2023-01-31 08:51:55,713 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
    2023-01-31 08:51:55,717 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
    2023-01-31 08:51:55,721 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
    2023-01-31 08:51:55,721 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
    2023-01-31 08:51:55,722 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
    2023-01-31 08:51:55,722 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
    2023-01-31 08:51:55,722 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
    2023-01-31 08:51:55,723 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
    2023-01-31 08:51:55,723 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
    2023-01-31 08:51:55,724 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.15.0-50-generic
    2023-01-31 08:51:55,725 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
    2023-01-31 08:51:55,725 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
    2023-01-31 08:51:55,726 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
    2023-01-31 08:51:55,730 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
    Welcome to ZooKeeper!
    2023-01-31 08:51:55,777 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    JLine support is enabled
    2023-01-31 08:51:55,943 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
    [zk: localhost:2181(CONNECTING) 0] 2023-01-31 08:51:56,014 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x28606fc8ba00000, negotiated timeout = 30000
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    
    [zk: localhost:2181(CONNECTED) 0] get /zk-test
    hsssss
    cZxid = 0x100000002
    ctime = Tue Jan 31 08:50:53 UTC 2023
    mZxid = 0x100000002
    mtime = Tue Jan 31 08:50:53 UTC 2023
    pZxid = 0x100000002
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 6
    numChildren = 0

The value hsssss stored at /zk-test is retrieved successfully.

4.3. Delete pods to verify the cluster recovers when pods go down:

qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl get pods -l app=zk
    NAME   READY   STATUS    RESTARTS   AGE
    zk-0   1/1     Running   0          15m
    zk-1   1/1     Running   0          15m
    zk-2   1/1     Running   0          15m
qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl delete pods zk-1 zk-0
    pod "zk-1" deleted
    pod "zk-0" deleted
qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl get pods -l app=zk -w
    NAME   READY   STATUS    RESTARTS   AGE
    zk-0   1/1     Running   0          65s
    zk-1   1/1     Running   0          44s
    zk-2   1/1     Running   0          16m

As shown, zk-0 and zk-1 are automatically recreated after deletion, and the cluster recovers.

4.4. Verify the state of the recovered cluster:

for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
 qiteck@server:~/program/docker_service/zookeeper$ for i in 0 1 2; do kubectl exec zk-$i -- zkServer.sh status; done
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: follower
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: leader
    ZooKeeper JMX enabled by default
    Using config: /usr/bin/../etc/zookeeper/zoo.cfg
    Mode: follower

4.5. Verify the data on the recreated pods:

Enter pod zk-0 again, start the zkCli.sh client, and fetch the data with get /zk-test:
 qiteck@server:~/program/docker_service/zookeeper$ sudo kubectl exec -it zk-0 -- /bin/bash
    root@zk-0:/# zkCli.sh
    Connecting to localhost:2181
    2023-01-31 09:00:14,553 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.10-39d3a4f269333c922ed3db283be479f9deacaa0f, built on 03/23/2017 10:13 GMT
    2023-01-31 09:00:14,556 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=zk-0.zk-hs.default.svc.cluster.local
    2023-01-31 09:00:14,556 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_131
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/usr/bin/../build/classes:/usr/bin/../build/lib/*.jar:/usr/bin/../share/zookeeper/zookeeper-3.4.10.jar:/usr/bin/../share/zookeeper/slf4j-log4j12-1.6.1.jar:/usr/bin/../share/zookeeper/slf4j-api-1.6.1.jar:/usr/bin/../share/zookeeper/netty-3.10.5.Final.jar:/usr/bin/../share/zookeeper/log4j-1.2.16.jar:/usr/bin/../share/zookeeper/jline-0.9.94.jar:/usr/bin/../src/java/lib/*.jar:/usr/bin/../etc/zookeeper:
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
    2023-01-31 09:00:14,559 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=5.15.0-50-generic
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
    2023-01-31 09:00:14,560 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
    2023-01-31 09:00:14,561 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/
    2023-01-31 09:00:14,562 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@22d8cfe0
    Welcome to ZooKeeper!
    2023-01-31 09:00:14,594 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1032] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
    JLine support is enabled
    2023-01-31 09:00:14,738 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@876] - Socket connection established to localhost/127.0.0.1:2181, initiating session
    [zk: localhost:2181(CONNECTING) 0] 2023-01-31 09:00:14,808 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1299] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x186070b91d80000, negotiated timeout = 30000
    
    WATCHER::
    
    WatchedEvent state:SyncConnected type:None path:null
    
    [zk: localhost:2181(CONNECTED) 0] get /zk-test
    hsssss
    cZxid = 0x100000002
    ctime = Tue Jan 31 08:50:53 UTC 2023
    mZxid = 0x100000002
    mtime = Tue Jan 31 08:50:53 UTC 2023
    pZxid = 0x100000002
    cversion = 0
    dataVersion = 0
    aclVersion = 0
    ephemeralOwner = 0x0
    dataLength = 6
    numChildren = 0

The value of /zk-test is still hsssss.

This confirms that the ZooKeeper cluster is functioning correctly.

Tags: k8s