Kubernetes v1.20 Project: Binary Installation and Deployment of the Worker Node

The following operations are still performed mainly on the master node. Why? Partly because I'm short on machines, haha, but mostly to save resources: the master can also serve as a worker node.

Enough talk, let's get to it.

Download the required resources
	Link: https://pan.baidu.com/s/1emtDOy7bzxlR_hUw6vY2GQ
	Extraction code: a7j4
	**Some of the files contain IP addresses or other settings that must be changed to match your own environment**
Create the working directories
[root@k8s-master01 ~]# mkdir -p /opt/kubernetes/{bin,cfg,ssl,logs}
Copy kubelet and kube-proxy
[root@k8s-master01 bin]# cd /root/TLS/k8s/kubernetes/server/bin/
[root@k8s-master01 bin]# cp kubelet kube-proxy /opt/kubernetes/bin

Deploy kubelet

Create the configuration file
[root@k8s-master01 bin]# cat > /opt/kubernetes/cfg/kubelet.conf << EOF
> KUBELET_OPTS="--logtostderr=false \\
> --v=2 \\
> --log-dir=/opt/kubernetes/logs \\
> --hostname-override=k8s-master1 \\
> --network-plugin=cni \\
> --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
> --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
> --config=/opt/kubernetes/cfg/kubelet-config.yml \\
> --cert-dir=/opt/kubernetes/ssl \\
> --pod-infra-container-image=lizhenliang/pause-amd64:3.0"
> EOF
[root@k8s-master01 bin]# cat /opt/kubernetes/cfg/kubelet.conf
KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-master1 \
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"
Create the parameters file
[root@k8s-master01 k8s]# cat > /opt/kubernetes/cfg/kubelet-config.yml << EOF
> kind: KubeletConfiguration
> apiVersion: kubelet.config.k8s.io/v1beta1
> address: 0.0.0.0
> port: 10250
> readOnlyPort: 10255
> cgroupDriver: cgroupfs
> clusterDNS:
> - 10.0.0.2
> clusterDomain: cluster.local 
> failSwapOn: false
> authentication:
>   anonymous:
>     enabled: false
>   webhook:
>     cacheTTL: 2m0s
>     enabled: true
>   x509:
>     clientCAFile: /opt/kubernetes/ssl/ca.pem 
> authorization:
>   mode: Webhook
>   webhook:
>     cacheAuthorizedTTL: 5m0s
>     cacheUnauthorizedTTL: 30s
> evictionHard:
>   imagefs.available: 15%
>   memory.available: 100Mi
>   nodefs.available: 10%
>   nodefs.inodesFree: 5%
> maxOpenFiles: 1000000
> maxPods: 110
> EOF
[root@k8s-master01 k8s]# cat /opt/kubernetes/cfg/kubelet-config.yml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local 
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem 
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
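One thing worth double-checking: the cgroupDriver value above must match the cgroup driver Docker is actually using, otherwise the kubelet will fail to start. A quick way to compare them (assuming Docker was already installed as in the earlier parts of this series):

# Check Docker's cgroup driver; it should report "cgroupfs" to match the config above
docker info 2>/dev/null | grep -i "cgroup driver"
# If Docker reports "systemd", either change cgroupDriver in kubelet-config.yml to systemd,
# or adjust "exec-opts" in /etc/docker/daemon.json so the two agree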
Generate the bootstrap kubeconfig file used by kubelet to join the cluster for the first time
[root@k8s-master01 k8s]# KUBE_CONFIG="/opt/kubernetes/cfg/bootstrap.kubeconfig"
[root@k8s-master01 k8s]# KUBE_APISERVER="https://192.168.100.13:6443"
[root@k8s-master01 k8s]# TOKEN="fc2197011151c0c4fb7667395818ef97"
[root@k8s-master01 k8s]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER} \
>   --kubeconfig=${KUBE_CONFIG}
Cluster "kubernetes" set.
[root@k8s-master01 k8s]# kubectl config set-credentials "kubelet-bootstrap" \
>   --token=${TOKEN} \
>   --kubeconfig=${KUBE_CONFIG}
User "kubelet-bootstrap" set.
[root@k8s-master01 k8s]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user="kubelet-bootstrap" \
>   --kubeconfig=${KUBE_CONFIG}
Context "default" created.
[root@k8s-master01 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Switched to context "default".
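Note that the TOKEN variable above must be exactly the token that was written into /opt/kubernetes/cfg/token.csv when the apiserver was deployed, otherwise the bootstrap request will be rejected. Assuming the token.csv format used in the master deployment article (the token is the first comma-separated field), it can be read out instead of typed by hand:

# Read the bootstrap token from token.csv (first field) - assumes the format from the master deployment
TOKEN=$(awk -F',' '{print $1}' /opt/kubernetes/cfg/token.csv)
echo $TOKEN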
Manage kubelet with systemd
[root@k8s-master01 k8s]# cat > /usr/lib/systemd/system/kubelet.service << EOF
> [Unit]
> Description=Kubernetes Kubelet
> After=docker.service
> 
> [Service]
> EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
> ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
> Restart=on-failure
> LimitNOFILE=65536
> 
> [Install]
> WantedBy=multi-user.target
> EOF
[root@k8s-master01 k8s]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet.conf
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Start kubelet and enable it at boot
[root@k8s-master01 k8s]# systemctl daemon-reload
[root@k8s-master01 k8s]# systemctl start kubelet
[root@k8s-master01 k8s]# systemctl enable kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-master01 k8s]# 

[root@k8s-master01 k8s]# ps -ef | grep kubelet
root      20881      1  5 22:38 ?        00:02:48 /opt/kubernetes/bin/kube-apiserver --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --etcd-servers=https://192.168.100.13:2379,https://192.168.100.14:2379,https://192.168.100.15:2379 --bind-address=192.168.100.13 --secure-port=6443 --advertise-address=192.168.100.13 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth=true --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-32767 --kubelet-client-certificate=/opt/kubernetes/ssl/server.pem --kubelet-client-key=/opt/kubernetes/ssl/server-key.pem --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --service-account-issuer=api --service-account-signing-key-file=/opt/kubernetes/ssl/server-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem --proxy-client-cert-file=/opt/kubernetes/ssl/server.pem --proxy-client-key-file=/opt/kubernetes/ssl/server-key.pem --requestheader-allowed-names=kubernetes --requestheader-extra-headers-prefix=X-Remote-Extra- --requestheader-group-headers=X-Remote-Group --requestheader-username-headers=X-Remote-User --enable-aggregator-routing=true --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/opt/kubernetes/logs/k8s-audit.log
root      21129      1  1 23:31 ?        00:00:00 /opt/kubernetes/bin/kubelet --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --hostname-override=k8s-master1 --network-plugin=cni --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet-config.yml --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=lizhenliang/pause-amd64:3.0
root      21157   9026  0 23:32 pts/3    00:00:00 grep --color=auto kubelet
Approve the kubelet certificate request and join the node to the cluster
[root@k8s-master01 k8s]# kubectl get csr
NAME                                                   AGE   SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-vziEqAe-ZTSuOX7F1MOWpM4ATlsNwUCosgmAOzcbP8Y   89s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending

### Approve the request
[root@k8s-master01 k8s]# kubectl certificate approve node-csr-vziEqAe-ZTSuOX7F1MOWpM4ATlsNwUCosgmAOzcbP8Y
certificatesigningrequest.certificates.k8s.io/node-csr-vziEqAe-ZTSuOX7F1MOWpM4ATlsNwUCosgmAOzcbP8Y approved

### Check the node
[root@k8s-master01 k8s]# kubectl get node
NAME          STATUS     ROLES    AGE   VERSION
k8s-master1   NotReady   <none>   31s   v1.20.5
The node shows NotReady because the network plugin has not been deployed yet.
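If you want to confirm that this is the only reason, the node's conditions can be inspected; the message typically mentions that the CNI plugin is not initialized:

# Show the node's conditions; expect a "network plugin is not ready" type message
kubectl describe node k8s-master1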

Deploy kube-proxy

Create the configuration file
[root@k8s-master01 k8s]# cat > /opt/kubernetes/cfg/kube-proxy.conf << EOF
> KUBE_PROXY_OPTS="--logtostderr=false \\
> --v=2 \\
> --log-dir=/opt/kubernetes/logs \\
> --config=/opt/kubernetes/cfg/kube-proxy-config.yml"
> EOF
[root@k8s-master01 k8s]# cat /opt/kubernetes/cfg/kube-proxy.conf
KUBE_PROXY_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--config=/opt/kubernetes/cfg/kube-proxy-config.yml"
Create the parameters file
[root@k8s-master01 k8s]# cat > /opt/kubernetes/cfg/kube-proxy-config.yml << EOF
> kind: KubeProxyConfiguration
> apiVersion: kubeproxy.config.k8s.io/v1alpha1
> bindAddress: 0.0.0.0
> metricsBindAddress: 0.0.0.0:10249
> clientConnection:
>   kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
> hostnameOverride: k8s-master1
> clusterCIDR: 10.244.0.0/16
> EOF
[root@k8s-master01 k8s]# cat /opt/kubernetes/cfg/kube-proxy-config.yml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.244.0.0/16
Generate the kube-proxy.kubeconfig file
[root@k8s-master01 k8s]# cd ~/TLS/k8s
[root@k8s-master01 k8s]# cat > kube-proxy-csr.json << EOF
> {
>   "CN": "system:kube-proxy",
>   "hosts": [],
>   "key": {
>     "algo": "rsa",
>     "size": 2048
>   },
>   "names": [
>     {
>       "C": "CN",
>       "L": "BeiJing",
>       "ST": "BeiJing",
>       "O": "k8s",
>       "OU": "System"
>     }
>   ]
> }
> EOF
[root@k8s-master01 k8s]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
Generate the certificate
[root@k8s-master01 k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2021/11/16 23:40:44 [INFO] generate received request
2021/11/16 23:40:44 [INFO] received CSR
2021/11/16 23:40:44 [INFO] generating key: rsa-2048
2021/11/16 23:40:45 [INFO] encoded CSR
2021/11/16 23:40:45 [INFO] signed certificate with serial number 280940741575309175182701275828099785157071883906
2021/11/16 23:40:45 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
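A quick sanity check that cfssl produced the expected files before building the kubeconfig:

# kube-proxy.pem and kube-proxy-key.pem should both exist in the current directory
ls kube-proxy*.pem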
Generate the kubeconfig file
[root@k8s-master01 k8s]# KUBE_CONFIG="/opt/kubernetes/cfg/kube-proxy.kubeconfig"
[root@k8s-master01 k8s]# KUBE_APISERVER="https://192.168.100.13:6443"
[root@k8s-master01 k8s]# kubectl config set-cluster kubernetes \
>   --certificate-authority=/opt/kubernetes/ssl/ca.pem \
>   --embed-certs=true \
>   --server=${KUBE_APISERVER} \
>   --kubeconfig=${KUBE_CONFIG}
Cluster "kubernetes" set.
[root@k8s-master01 k8s]# kubectl config set-credentials kube-proxy \
>   --client-certificate=./kube-proxy.pem \
>   --client-key=./kube-proxy-key.pem \
>   --embed-certs=true \
>   --kubeconfig=${KUBE_CONFIG}
User "kube-proxy" set.
[root@k8s-master01 k8s]# kubectl config set-context default \
>   --cluster=kubernetes \
>   --user=kube-proxy \
>   --kubeconfig=${KUBE_CONFIG}
Context "default" created.
[root@k8s-master01 k8s]# kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
Switched to context "default".
Manage kube-proxy with systemd
[root@k8s-master01 k8s]# cat > /usr/lib/systemd/system/kube-proxy.service << EOF
> [Unit]
> Description=Kubernetes Proxy
> After=network.target
> 
> [Service]
> EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
> ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
> Restart=on-failure
> LimitNOFILE=65536
> 
> [Install]
> WantedBy=multi-user.target
> EOF
[root@k8s-master01 k8s]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-proxy.conf
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
Start kube-proxy and enable it at boot
[root@k8s-master01 k8s]# systemctl daemon-reload
[root@k8s-master01 k8s]# systemctl start kube-proxy
[root@k8s-master01 k8s]# systemctl enable kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-master01 k8s]# ps -ef | grep kube-proxy
root      23313      1  0 23:44 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-proxy-config.yml
root      23457   9026  0 23:44 pts/3    00:00:00 grep --color=auto kube-proxy
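As an optional check, kube-proxy serves Prometheus metrics on the metricsBindAddress configured above (0.0.0.0:10249), so a simple curl confirms it is answering requests:

# kube-proxy metrics endpoint, as configured in kube-proxy-config.yml
curl -s http://127.0.0.1:10249/metrics | head -n 5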

Deploy the network component

This step needs a calico.yaml, which you can download from the netdisk share above or from the official manifest. Calico is a pure layer-3 data center networking solution and is currently one of the mainstream network options for Kubernetes.
URL: https://docs.projectcalico.org/manifests/calico.yaml

[root@k8s-master01 k8s]# ls calico.yaml 
calico.yaml
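One thing to verify in calico.yaml before applying it (the exact lines depend on the Calico version you downloaded): the CALICO_IPV4POOL_CIDR environment variable should match the clusterCIDR set earlier in kube-proxy-config.yml, i.e. 10.244.0.0/16. In many versions it is commented out and defaults to 192.168.0.0/16.

# Locate the pod CIDR setting; uncomment it and set the value to 10.244.0.0/16 if needed
grep -n -A1 CALICO_IPV4POOL_CIDR calico.yaml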
Deploy Calico
[root@k8s-master01 k8s]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
[root@k8s-master01 k8s]# kubectl get pods -n kube-system
NAME                                      READY   STATUS     RESTARTS   AGE
calico-kube-controllers-97769f7c7-hwgnf   0/1     Pending    0          15s
calico-node-grcgk                         0/1     Init:0/3   0          14s
[root@k8s-master01 k8s]# 

Once all the Calico Pods are Running, the node will become Ready as well; this may take a little while.

## In my case I waited half an hour before discovering that SELinux had not been disabled, haha, which goes to show how important the base environment configuration is.
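If you hit the same problem, SELinux can be checked and disabled like this (a reminder of the base environment setup, not something specific to this step):

# Show the current SELinux mode
getenforce
# Disable it for the running system and persistently across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config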

[root@k8s-master01 k8s]# kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
calico-kube-controllers-97769f7c7-hwgnf   1/1     Running   0          34m
calico-node-grcgk                         1/1     Running   0          34m

[root@k8s-master01 k8s]# kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   50m   v1.20.5
Authorize the apiserver to access the kubelet
[root@k8s-master01 k8s]# cat > apiserver-to-kubelet-rbac.yaml << EOF
> apiVersion: rbac.authorization.k8s.io/v1
> kind: ClusterRole
> metadata:
>   annotations:
>     rbac.authorization.kubernetes.io/autoupdate: "true"
>   labels:
>     kubernetes.io/bootstrapping: rbac-defaults
>   name: system:kube-apiserver-to-kubelet
> rules:
>   - apiGroups:
>       - ""
>     resources:
>       - nodes/proxy
>       - nodes/stats
>       - nodes/log
>       - nodes/spec
>       - nodes/metrics
>       - pods/log
>     verbs:
>       - "*"
> ---
> apiVersion: rbac.authorization.k8s.io/v1
> kind: ClusterRoleBinding
> metadata:
>   name: system:kube-apiserver
>   namespace: ""
> roleRef:
>   apiGroup: rbac.authorization.k8s.io
>   kind: ClusterRole
>   name: system:kube-apiserver-to-kubelet
> subjects:
>   - apiGroup: rbac.authorization.k8s.io
>     kind: User
>     name: kubernetes
> EOF



[root@k8s-master01 k8s]# cat apiserver-to-kubelet-rbac.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes


[root@k8s-master01 k8s]# kubectl apply -f apiserver-to-kubelet-rbac.yaml
clusterrole.rbac.authorization.k8s.io/system:kube-apiserver-to-kubelet created
clusterrolebinding.rbac.authorization.k8s.io/system:kube-apiserver created
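With this ClusterRole and binding in place, commands that go through the apiserver to the kubelet, such as kubectl logs and kubectl exec, should work. A quick way to confirm, using the calico-node pod created above (substitute your own pod name):

# Should print the pod's logs instead of a Forbidden error
kubectl logs -n kube-system calico-node-grcgk --tail=5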

Add a new worker node

Copy the node-related files already deployed on the master to node01, the node being added
[root@k8s-master01 k8s]# scp -r /opt/kubernetes root@192.168.100.14:/opt/
root@192.168.100.14's password: 
kube-apiserver                                                                                                 100%  113MB  43.7MB/s   00:02    
kube-scheduler                                                                                                 100%   42MB  37.5MB/s   00:01    
kube-controller-manager                                                                                        100%  108MB  35.9MB/s   00:02    
kubelet                                                                                                        100%  109MB  27.2MB/s   00:04    
kube-proxy                                                                                                     100%   38MB  30.9MB/s   00:01    
token.csv                                                                                                      100%   84     3.5KB/s   00:00    
kube-apiserver.conf                                                                                            100% 1709   757.3KB/s   00:00    
kube-controller-manager.conf                                                                                   100%  582   256.6KB/s   00:00    
kube-controller-manager.kubeconfig                                                                             100% 6344     2.9MB/s   00:00    
kube-scheduler.kubeconfig                                                                                      100% 6306    49.6KB/s   00:00    
kube-scheduler.conf                                                                                            100%  188    89.1KB/s   00:00    
kubelet.conf                                                                                                   100%  394    12.7KB/s   00:00    
kubelet-config.yml                                                                                             100%  611    37.8KB/s   00:00    
bootstrap.kubeconfig                                                                                           100% 2168    32.1KB/s   00:00    
kubelet.kubeconfig                                                                                             100% 2297    14.1KB/s   00:00    
kube-proxy.conf                                                                                                100%  132    75.5KB/s   00:00    
kube-proxy-config.yml               
******************************* (many more files omitted)

[root@k8s-master01 k8s]# scp -r /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.100.14:/usr/lib/systemd/system
root@192.168.100.14's password: 
kubelet.service                                                                                                100%  246    19.0KB/s   00:00    
kube-proxy.service                                                                                             100%  253    88.7KB/s   00:00    

[root@k8s-master01 k8s]# scp /opt/kubernetes/ssl/ca.pem root@192.168.100.14:/opt/kubernetes/ssl
root@192.168.100.14's password: 
ca.pem                                                                                                         100% 1359     1.1MB/s   00:00    
[root@k8s-master01 k8s]#
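Before continuing on node01, the same base environment prepared for the master is assumed to be in place there as well: Docker installed and running, SELinux disabled, and the hostname set. A few quick reminders to check on node01 (these assume the base setup from earlier in the series):

systemctl is-active docker   # should print "active"
getenforce                   # should print "Disabled" or "Permissive"
hostnamectl --static         # confirm the hostname, here assumed to be k8s-node01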
Delete the kubelet certificate and kubeconfig file

This step is performed on node01, i.e. the new node being added.

[root@k8s-node01 ~]# rm -f /opt/kubernetes/cfg/kubelet.kubeconfig
[root@k8s-node01 ~]# rm -f /opt/kubernetes/ssl/kubelet*

Note: these files were generated automatically after the certificate request was approved; they are unique to each node and must be deleted.
Modify the hostname in the files copied over via scp
[root@k8s-node01 ~]# vi /opt/kubernetes/cfg/kubelet.conf

KUBELET_OPTS="--logtostderr=false \
--v=2 \
--log-dir=/opt/kubernetes/logs \
--hostname-override=k8s-node01 \				## change this
--network-plugin=cni \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet-config.yml \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=lizhenliang/pause-amd64:3.0"

[root@k8s-node01 ~]# vi /opt/kubernetes/cfg/kube-proxy-config.yml

kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /opt/kubernetes/cfg/kube-proxy.kubeconfig
hostnameOverride: k8s-node01     ## change this
clusterCIDR: 10.244.0.0/16
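Instead of editing both files by hand with vi, the hostname can also be replaced in one go; the sed below assumes the files were copied unchanged from the master (where the value was k8s-master1):

# Replace the master's hostname override with this node's name in both config files
sed -i 's/k8s-master1/k8s-node01/' /opt/kubernetes/cfg/kubelet.conf /opt/kubernetes/cfg/kube-proxy-config.yml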
Start the services and enable them at boot
[root@k8s-node01 ~]# systemctl daemon-reload
[root@k8s-node01 ~]# systemctl start kubelet kube-proxy
[root@k8s-node01 ~]# systemctl enable kubelet kube-proxy
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.

[root@k8s-node01 ~]# ps -ef | grep ube-proxy
root      19537      1  0 00:40 ?        00:00:00 /opt/kubernetes/bin/kube-proxy --logtostderr=false --v=2 --log-dir=/opt/kubernetes/logs --config=/opt/kubernetes/cfg/kube-proxy-config.yml
root      19641   9025  0 00:41 pts/3    00:00:00 grep --color=auto ube-proxy

Approve the new node's kubelet certificate request on the master

The following is performed on the master.

# Check the certificate request
[root@k8s-master01 k8s]# kubectl get csr
NAME                                                   AGE     SIGNERNAME                                    REQUESTOR           CONDITION
node-csr-TowW7hgpq7z_969BHf82EMfszn35riOtF0tKfQKzNTo   2m20s   kubernetes.io/kube-apiserver-client-kubelet   kubelet-bootstrap   Pending
#### Approve the request
[root@k8s-master01 k8s]# kubectl certificate approve node-csr-TowW7hgpq7z_969BHf82EMfszn35riOtF0tKfQKzNTo
certificatesigningrequest.certificates.k8s.io/node-csr-TowW7hgpq7z_969BHf82EMfszn35riOtF0tKfQKzNTo approved

Check the node status

[root@k8s-master01 k8s]# kubectl get node
NAME          STATUS   ROLES    AGE   VERSION
k8s-master1   Ready    <none>   70m   v1.20.5
k8s-node01    Ready    <none>   72s   v1.20.5
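The ROLES column shows <none> because binary-deployed nodes carry no role label by default. If you prefer the output to show a role, a label can be added; this is purely cosmetic:

# Optional: label the nodes so "kubectl get node" shows a role
kubectl label node k8s-node01 node-role.kubernetes.io/worker=
kubectl label node k8s-master1 node-role.kubernetes.io/master=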

Closing remarks

Keep going; believe in yourself and you will get there.
Previous: Kubernetes v1.20 Project: Binary Installation and Deployment of the Master Node | Next: Kubernetes v1.20 Project: Binary Installation and Deployment of Dashboard and CoreDNS


Tags: k8s