
Building the install packages (kubernetes binary install packages, including certificates, configuration files, scripts, and other files)


Introduction

This chapter prepares everything needed for a binary installation of k8s 1.34: the binaries, the certificate configuration files and scripts, and the configuration files and startup scripts used by etcd, the master components, and the worker components. You can also download all the software yourself following "Part 1: System initialization and k8s cluster planning". All configuration files and scripts in this article are available from: https://gitee.com/eeenet/k8s-install

=== Article navigation ===

[k8s-1.34.2 install guide] 1. System initialization and k8s cluster planning
[k8s-1.34.2 install guide] 2. Preparing the kubernetes binaries, certificates, configuration files, and scripts
[k8s-1.34.2 install guide] 3. Installing an etcd-v3.6.6 TLS cluster
[k8s-1.34.2 install guide] 4. Installing the kubernetes master components kube-apiserver, kube-controller-manager, and kube-scheduler
[k8s-1.34.2 install guide] 5. Installing containerd 2.2.1 and kubelet-1.34.2 on the worker side
[k8s-1.34.2 install guide] 6. Production-grade deployment of the cilium-1.18.4 network plugin
[k8s-1.34.2 install guide] 7. Installing coredns-v1.13.1
[k8s-1.34.2 install guide] 8. Installing metric-server-0.8.0
[k8s-1.34.2 install guide] 9. Installing the xkube k8s management platform
[k8s-1.34.2 install guide] 10. Installing Gateway API v1.4.0
[k8s-1.34.2 install guide] 11. Installing metallb-v0.15.2

1. Software download and client tool installation

1.1. Installing the client tools

For the download links, see the previous chapter. Extract the command binaries from the packages, place them in /usr/local/bin, and make them executable.
The packages are:

1. Certificate generation tools:
   cfssl-certinfo_1.6.5_linux_amd64   # rename to cfssl-certinfo
   cfssljson_1.6.5_linux_amd64        # rename to cfssljson
   cfssl_1.6.5_linux_amd64            # rename to cfssl
2. k8s network plugin:
   cilium-linux-amd64.tar.gz          # the cilium client; extract the cilium binary
   cni-plugins-linux-amd64-v1.8.0.tgz
3. containerd and its runtime:
   containerd-2.2.0-linux-amd64.tar.gz
   runc.amd64-1.3.3
4. etcd:
   etcd-v3.6.6-linux-amd64.tar.gz
5. k8s package manager, used to install cilium:
   helm-v4.0.0-linux-amd64.tar.gz     # extract the helm binary
6. k8s server package:
   kubernetes-server-linux-amd64.tar.gz  # extract kubectl, kubectl-convert, and kubeadm

Extract the following files from the packages above, make them executable (chmod +x *), and copy them to /usr/local/bin:

drwxr-xr-x 2 root root      4096 Dec 16 11:59 ./
drwxr-xr-x 9 root root      4096 Dec 16 11:59 ../
-rwxr-xr-x 1 root root  11890840 May 10  2024 cfssl*
-rwxr-xr-x 1 root root   8413336 May 10  2024 cfssl-certinfo*
-rwxr-xr-x 1 root root   6205592 May 10  2024 cfssljson*
-rwxr-xr-x 1 root root 139694264 Oct 23 01:46 cilium*
-rwxr-xr-x 1 root root  65491128 Nov 12 19:39 helm*
-rwxr-xr-x 1 root root  74027192 Nov 12 03:26 kubeadm*
-rwxr-xr-x 1 root root  60559544 Nov 12 03:26 kubectl*
-rwxr-xr-x 1 root root  59642040 Nov 12 03:26 kubectl-convert*
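The chmod-and-copy step above can be sketched as the loop below. This is an illustrative sketch only: "staging" and "install_dir" are hypothetical names introduced here (on a real host install_dir would be /usr/local/bin), and a stub script stands in for an actual extracted binary.

```shell
install_dir="./usr-local-bin"      # stand-in for /usr/local/bin in this sketch
mkdir -p staging "$install_dir"
# Placeholder standing in for a real extracted binary such as cfssl:
printf '#!/bin/sh\necho cfssl-stub\n' > staging/cfssl
# Make everything in the staging directory executable, then install it:
chmod +x staging/*
cp staging/* "$install_dir"/
"$install_dir"/cfssl
```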

2. Preparing the certificate configuration files and generating the certificates

2.1. Certificate configuration files

Create a csr directory. The configuration files can be fetched from: https://gitee.com/eeenet/k8s-install
Place the following files in the csr directory. If you do not download them from git, you can copy the certificate configurations from sections 2.3-2.10.

-rw-r--r-- 1 root root 245 Feb 24  2023 admin-csr.json
-rw-r--r-- 1 root root 330 Feb 23  2023 ca-config.json
-rw-r--r-- 1 root root 284 Nov 21 17:24 ca-csr.json
-rw-r--r-- 1 root root 410 Dec  5 12:16 etcd-csr.json
-rw-r--r-- 1 root root 458 Dec  5 12:15 kube-apiserver-csr.json
-rw-r--r-- 1 root root 292 Feb 23  2023 kube-controller-manager-csr.json
-rw-r--r-- 1 root root 274 Feb 23  2023 kube-scheduler-csr.json
-rw-r--r-- 1 root root 272 Feb 23  2023 proxy-client-csr.json

2.2. Preparing the certificate generation script and generating the certificates

Script: create-cert.sh. Copy it into the same directory as the csr folder and make it executable. Once the certificate configuration files above are in place, run the script: it creates a cert directory under the current directory and generates all certificates into it.
Note on the certificate configurations: the hosts section of each certificate can be changed to the IPs or domain names you actually need. To make future scale-out easier, you can also plan a wildcard domain.

#!/bin/sh
cert_dir="cert"
[ -d $cert_dir ] || mkdir -p $cert_dir

echo "create ca.pem ca-key.pem======="
cfssl gencert -initca csr/ca-csr.json | cfssljson -bare $cert_dir/ca
rm $cert_dir/ca.csr

echo "create etcd.pem etcd-key.pem======="
cfssl gencert -ca=$cert_dir/ca.pem -ca-key=$cert_dir/ca-key.pem -config=csr/ca-config.json -profile=kubernetes csr/etcd-csr.json | cfssljson -bare $cert_dir/etcd
rm -f $cert_dir/etcd.csr

echo "create kube-apiserver.pem kube-apiserver-key.pem======="
cfssl gencert -ca=$cert_dir/ca.pem -ca-key=$cert_dir/ca-key.pem -config=csr/ca-config.json -profile=kubernetes csr/kube-apiserver-csr.json | cfssljson -bare $cert_dir/kube-apiserver
rm -f $cert_dir/kube-apiserver.csr

echo "create kube-scheduler.pem kube-scheduler-key.pem======="
cfssl gencert -ca=$cert_dir/ca.pem -ca-key=$cert_dir/ca-key.pem -config=csr/ca-config.json -profile=kubernetes csr/kube-scheduler-csr.json | cfssljson -bare $cert_dir/kube-scheduler
rm -f $cert_dir/kube-scheduler.csr

echo "create kube-controller-manager.pem kube-controller-manager-key.pem======="
cfssl gencert -ca=$cert_dir/ca.pem -ca-key=$cert_dir/ca-key.pem -config=csr/ca-config.json -profile=kubernetes csr/kube-controller-manager-csr.json | cfssljson -bare $cert_dir/kube-controller-manager
rm -f $cert_dir/kube-controller-manager.csr

echo "create proxy-client.pem proxy-client-key.pem======="
cfssl gencert -ca=$cert_dir/ca.pem -ca-key=$cert_dir/ca-key.pem -config=csr/ca-config.json -profile=kubernetes csr/proxy-client-csr.json | cfssljson -bare $cert_dir/proxy-client
rm -f $cert_dir/proxy-client.csr

echo "create admin.pem admin-key.pem======="
cfssl gencert -ca=$cert_dir/ca.pem -ca-key=$cert_dir/ca-key.pem -config=csr/ca-config.json -profile=kubernetes csr/admin-csr.json | cfssljson -bare $cert_dir/admin
rm -fv $cert_dir/admin.csr
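After a successful run, cert/ should contain a .pem/-key.pem pair for each CSR. The sanity check below is a sketch added here, not part of the original script; it touches empty placeholder files so the loop is demonstrable end to end, whereas on a real run the files come from cfssl.

```shell
mkdir -p cert
for name in ca etcd kube-apiserver kube-scheduler kube-controller-manager proxy-client admin; do
    # Placeholders for this sketch; create-cert.sh produces the real files.
    touch "cert/${name}.pem" "cert/${name}-key.pem"
done

# The actual check: report any missing certificate or key file.
missing=0
for name in ca etcd kube-apiserver kube-scheduler kube-controller-manager proxy-client admin; do
    for f in "cert/${name}.pem" "cert/${name}-key.pem"; do
        [ -f "$f" ] || { echo "missing: $f"; missing=1; }
    done
done
[ "$missing" -eq 0 ] && echo "all 14 certificate files present"
```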

2.3.ca-config.json

Defines the CA certificate's expiry time; used to generate the CA certificate.

{
  "signing": {
    "default": {
      "expiry": "175200h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "175200h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

2.4.ca-csr.json

Defines the CA certificate's key algorithm, locality, and organizational unit; used to generate the CA certificate.

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

2.5.etcd-csr.json

Defines the domain names, IPs, key algorithm, and organizational unit in the etcd certificate. In hosts you can list the planned etcd hostnames, the etcd VIP, or the etcd domain names; to allow for future scale-out, you can use a wildcard domain, e.g. *.cluster.local

{
  "CN": "etcd",
  "hosts": [
    "etcd01.my-k8s.local",
    "etcd02.my-k8s.local",
    "etcd03.my-k8s.local",
    "*.my-k8s.local",
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong"
    }
  ]
}

2.6.kube-apiserver-csr.json

Defines the domain names, IPs, key algorithm, and organizational unit in the apiserver certificate. In hosts you can list the master IPs, the apiserver IPs, the planned apiserver VIP or the domain name used to reach the apiserver, plus the first IP of the planned service CIDR: 10.96.0.1

{
  "CN": "kubernetes",
  "hosts": [
    "apiserver.my-k8s.local",
    "*.my-k8s.local",
    "127.0.0.1",
    "10.96.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Guangzhou",
      "ST": "Guangdong",
      "O": "k8s",
      "OU": "system"
    }
  ]
}

2.7.kube-controller-manager-csr.json

Defines the node IPs in hosts plus the key algorithm and organizational unit in the kube-controller-manager certificate.

{
  "CN": "system:kube-controller-manager",
  "hosts": [
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangdong",
      "L": "Guangzhou",
      "O": "system:kube-controller-manager",
      "OU": "system"
    }
  ]
}

2.8.kube-scheduler-csr.json

Defines the node IPs in hosts plus the key algorithm and organizational unit in the kube-scheduler certificate.

{
  "CN": "system:kube-scheduler",
  "hosts": [
    "127.0.0.1"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangdong",
      "L": "Guangzhou",
      "O": "system:kube-scheduler",
      "OU": "system"
    }
  ]
}

2.9.admin-csr.json

This configuration file provides the public and private keys needed when generating the kubeconfig for kubectl, the k8s management client.

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangdong",
      "L": "Guangzhou",
      "O": "system:masters",
      "OU": "system"
    }
  ]
}

2.10.proxy-client-csr.json

Another way to access kube-apiserver is through a proxy, and this certificate is what supports that SSL-proxied access. In this mode, the client sends a plain HTTP request to the proxy service; the proxy then forwards the request to kube-apiserver, first adding the certificate information to the request headers.

{
  "CN": "aggregator",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Guangdong",
      "L": "Guangzhou",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

3. Generating the kubeconfig files

3.1. Preparing the script: create-kubeconfig.sh

This step requires that the certificates from the previous step have already been generated. Change KUBE_APISERVER to your planned apiserver domain name, then run the script: it creates a kubeconfig directory under the current directory and writes the configs there. Note that the script must sit in the same directory as the csr folder.

#!/bin/bash
cert_dir="cert"
kube_dir="kubeconfig"
KUBE_APISERVER="https://apiserver.my-k8s.local:6443"
[ -d $kube_dir ] || mkdir -p $kube_dir

echo "create token ====="
cat > $kube_dir/token.csv << EOF
$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:bootstrappers"
EOF

echo "create kube-controller-manager.kubeconfig ====="
kubectl config set-cluster kubernetes \
        --certificate-authority=$cert_dir/ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=$kube_dir/kube-controller-manager.kubeconfig
kubectl config set-credentials system:kube-controller-manager \
        --client-certificate=$cert_dir/kube-controller-manager.pem \
        --client-key=$cert_dir/kube-controller-manager-key.pem \
        --embed-certs=true \
        --kubeconfig=$kube_dir/kube-controller-manager.kubeconfig
kubectl config set-context system:kube-controller-manager \
        --cluster=kubernetes \
        --user=system:kube-controller-manager \
        --kubeconfig=$kube_dir/kube-controller-manager.kubeconfig
kubectl config use-context system:kube-controller-manager --kubeconfig=$kube_dir/kube-controller-manager.kubeconfig

echo "create kube-scheduler.kubeconfig ====="
kubectl config set-cluster kubernetes \
        --certificate-authority=$cert_dir/ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=$kube_dir/kube-scheduler.kubeconfig
kubectl config set-credentials system:kube-scheduler \
        --client-certificate=$cert_dir/kube-scheduler.pem \
        --client-key=$cert_dir/kube-scheduler-key.pem \
        --embed-certs=true \
        --kubeconfig=$kube_dir/kube-scheduler.kubeconfig
kubectl config set-context system:kube-scheduler \
        --cluster=kubernetes \
        --user=system:kube-scheduler \
        --kubeconfig=$kube_dir/kube-scheduler.kubeconfig
kubectl config use-context system:kube-scheduler --kubeconfig=$kube_dir/kube-scheduler.kubeconfig

echo "create kubelet-bootstrap.kubeconfig ====="
TOKEN=$(awk -F "," '{print $1}' $kube_dir/token.csv)
kubectl config set-cluster kubernetes \
          --certificate-authority=$cert_dir/ca.pem \
          --embed-certs=true \
          --server=${KUBE_APISERVER} \
          --kubeconfig=$kube_dir/kubelet-bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
          --token=${TOKEN} \
          --kubeconfig=$kube_dir/kubelet-bootstrap.kubeconfig
kubectl config set-context default \
          --cluster=kubernetes \
          --user=kubelet-bootstrap \
          --kubeconfig=$kube_dir/kubelet-bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=$kube_dir/kubelet-bootstrap.kubeconfig

echo "create client kube.config ====="
kubectl config set-cluster kubernetes \
        --certificate-authority=$cert_dir/ca.pem \
        --embed-certs=true \
        --server=${KUBE_APISERVER} \
        --kubeconfig=$kube_dir/kube.kubeconfig
kubectl config set-credentials admin \
        --client-certificate=$cert_dir/admin.pem \
        --client-key=$cert_dir/admin-key.pem \
        --embed-certs=true \
        --kubeconfig=$kube_dir/kube.kubeconfig
kubectl config set-context kubernetes \
        --cluster=kubernetes \
        --user=admin \
        --kubeconfig=$kube_dir/kube.kubeconfig
kubectl config use-context kubernetes --kubeconfig=$kube_dir/kube.kubeconfig
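The token.csv line written by the script above has the shape `<32-hex-char-token>,kubelet-bootstrap,10001,"system:bootstrappers"`. The sketch below generates one such line standalone so the format is visible; it only exercises the token line, not the kubectl calls.

```shell
# 16 random bytes, hex-dumped without addresses, spaces stripped -> 32 hex chars.
token=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# Same CSV layout as the script above: token, user name, uid, group.
echo "${token},kubelet-bootstrap,10001,\"system:bootstrappers\"" > token.csv
cat token.csv
```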

4. Preparing the etcd configuration files and startup script

4.1.etcd.conf

Configuration for the etcd01 machine (each machine's config differs):

#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/opt/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd01.my-k8s.local:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd01.my-k8s.local:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://etcd01.my-k8s.local:2380,etcd02=https://etcd02.my-k8s.local:2380,etcd03=https://etcd03.my-k8s.local:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token-my-k8s"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_LISTEN_METRICS_URLS="http://0.0.0.0:2381"

Configuration for etcd02 (each machine's config differs):

#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/opt/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd02.my-k8s.local:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd02.my-k8s.local:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://etcd01.my-k8s.local:2380,etcd02=https://etcd02.my-k8s.local:2380,etcd03=https://etcd03.my-k8s.local:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token-my-k8s"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_LISTEN_METRICS_URLS="http://0.0.0.0:2381"

Configuration for etcd03 (each machine's config differs):

#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/opt/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="https://0.0.0.0:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://etcd03.my-k8s.local:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://etcd03.my-k8s.local:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://etcd01.my-k8s.local:2380,etcd02=https://etcd02.my-k8s.local:2380,etcd03=https://etcd03.my-k8s.local:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token-my-k8s"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_LISTEN_METRICS_URLS="http://0.0.0.0:2381"

4.2. etcd startup script (etcd.service)

[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/opt/etcd/conf/etcd.conf
WorkingDirectory=/opt/etcd/
ExecStart=/opt/etcd/bin/etcd \
  --cert-file=/opt/etcd/ssl/etcd.pem \
  --key-file=/opt/etcd/ssl/etcd-key.pem \
  --trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-cert-file=/opt/etcd/ssl/etcd.pem \
  --peer-key-file=/opt/etcd/ssl/etcd-key.pem \
  --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem \
  --peer-client-cert-auth \
  --client-cert-auth
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5. Preparing the master configuration files

5.1.kube-apiserver.conf

Mind the file and certificate paths in the config, and note these parameters:
--etcd-servers: the etcd hostnames; the apiserver must be able to resolve them, so add mappings to /etc/hosts.
--service-cluster-ip-range: the service CIDR, using the range planned in Part 1.

KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --secure-port=6443 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all=true \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.96.0.0/16 \
  --token-auth-file=/opt/kubernetes/conf/token.csv \
  --service-node-port-range=30000-50000 \
  --tls-cert-file=/opt/kubernetes/ssl/kube-apiserver.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kube-apiserver-key.pem \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --kubelet-client-certificate=/opt/kubernetes/ssl/kube-apiserver.pem \
  --kubelet-client-key=/opt/kubernetes/ssl/kube-apiserver-key.pem \
  --kubelet-preferred-address-types=InternalIP,Hostname,InternalDNS,ExternalDNS \
  --service-account-issuer=https://kubernetes.default.svc \
  --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --service-account-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --etcd-cafile=/opt/etcd/ssl/ca.pem \
  --etcd-certfile=/opt/etcd/ssl/etcd.pem \
  --etcd-keyfile=/opt/etcd/ssl/etcd-key.pem \
  --etcd-servers=https://etcd01.my-k8s.local:2379,https://etcd02.my-k8s.local:2379,https://etcd03.my-k8s.local:2379 \
  --allow-privileged=true \
  --audit-log-maxage=5 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/opt/kubernetes/logs/kube-apiserver-audit.log \
  --requestheader-allowed-names=aggregator \
  --requestheader-group-headers=X-Remote-Group \
  --requestheader-username-headers=X-Remote-User \
  --requestheader-extra-headers-prefix=X-Remote-Extra- \
  --requestheader-client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --proxy-client-cert-file=/opt/kubernetes/ssl/proxy-client.pem \
  --proxy-client-key-file=/opt/kubernetes/ssl/proxy-client-key.pem \
  --v=4"

5.2.kube-controller-manager.conf

Mind the file and certificate paths in the config, and note these parameters:
--service-cluster-ip-range: the service CIDR
--cluster-cidr: the pod CIDR

KUBE_CONTROLLER_MANAGER_OPTS="--v=2 \
  --kubeconfig=/opt/kubernetes/conf/kube-controller-manager.kubeconfig \
  --horizontal-pod-autoscaler-sync-period=10s \
  --service-cluster-ip-range=10.96.0.0/16 \
  --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
  --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --allocate-node-cidrs=true \
  --cluster-cidr=10.244.0.0/16 \
  --cluster-signing-duration=175200h \
  --root-ca-file=/opt/kubernetes/ssl/ca.pem \
  --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --tls-cert-file=/opt/kubernetes/ssl/kube-controller-manager.pem \
  --tls-private-key-file=/opt/kubernetes/ssl/kube-controller-manager-key.pem \
  --use-service-account-credentials=true"

5.3.kube-scheduler.conf

Mind the file paths in the config.

KUBE_SCHEDULER_OPTS="--kubeconfig=/opt/kubernetes/conf/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --v=2"

5.4.kubelet.yaml

Note these parameters:
clusterDNS: the coredns service IP.
resolvConf: /run/systemd/resolve/resolv.conf is the path to systemd-resolved's DNS configuration. Without this setting, kubelet reads /etc/resolv.conf, which is a symlink to /run/systemd/resolve/stub-resolv.conf; that file points at the local caching stub resolver on 127.0.0.53:53, which conflicts with k8s DNS.

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: systemd
clusterDNS:
- 10.96.0.10
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /opt/kubernetes/ssl/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 2048000
maxPods: 200
resolvConf: /run/systemd/resolve/resolv.conf

5.5. containerd configuration file

The containerd configuration file can be produced by exporting the default config from the containerd command, then changing the sandbox image address and setting SystemdCgroup to true. Reference commands:

containerd config default | sudo tee /etc/containerd/config.toml
sed -i 's#SystemdCgroup.*#SystemdCgroup = true#' /etc/containerd/config.toml
sed -i 's#sandbox_image.*#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10.1"#' /etc/containerd/config.toml
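The effect of the two sed edits can be demonstrated on a minimal stand-in for the full `containerd config default` output (the real file is much larger; the two lines below are only the ones the edits touch):

```shell
# Two representative lines from a default containerd config:
cat > config.toml <<'EOF'
    SystemdCgroup = false
    sandbox_image = "registry.k8s.io/pause:3.10"
EOF
# Same substitutions as above; each replaces from the key name to end of line.
sed -i 's#SystemdCgroup.*#SystemdCgroup = true#' config.toml
sed -i 's#sandbox_image.*#sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.10.1"#' config.toml
cat config.toml
```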

config.toml configuration

5.6. kube-apiserver.service startup script

kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-apiserver.conf
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5.7. kube-controller-manager.service startup script

kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-controller-manager.conf
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

5.8. kube-scheduler.service startup script

kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/conf/kube-scheduler.conf
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

5.9. kubelet.service startup script

kubelet.service

[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=containerd.service
Requires=containerd.service

[Service]
# Set --hostname-override below to the node's real hostname.
ExecStart=/opt/kubernetes/bin/kubelet \
  --hostname-override=node-hostname \
  --bootstrap-kubeconfig=/opt/kubernetes/conf/kubelet-bootstrap.kubeconfig \
  --cert-dir=/opt/kubernetes/ssl \
  --client-ca-file=/opt/kubernetes/ssl/ca.pem \
  --kubeconfig=/opt/kubernetes/conf/kubelet.kubeconfig \
  --config=/opt/kubernetes/conf/kubelet.yaml \
  --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

5.10. containerd.service startup script

containerd.service

# Copyright The containerd Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target dbus.service

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd
Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

6. Building the etcd, master, containerd, and node install packages

6.1. etcd install package

1. Create an etcd directory; extract etcd-v3.6.6-linux-amd64.tar.gz and copy the three files etcd, etcdctl, and etcdutl into etcd/bin.
2. Copy etcd.conf into etcd/conf; each of the three machines gets its own copy (etcd01.conf, etcd02.conf, etcd03.conf).
3. Copy the generated certificates from the cert directory into the ssl directory. Note: this step requires that create-cert.sh and create-kubeconfig.sh have already been run. Do not run them more than once, and do not copy cert and kubeconfig files from separate runs into different packages: run once, and copy the output of that single run into all the packages.
4. etcd.service is the startup script; at install time copy it to /usr/lib/systemd/system/.

The etcd directory structure is as follows:
├── bin
│   ├── etcd
│   ├── etcdctl
│   └── etcdutl
├── conf
│   └── etcd.conf
├── etcd.service
├── logs
└── ssl
    ├── ca-key.pem
    ├── ca.pem
    ├── etcd-key.pem
    └── etcd.pem
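Once the tree is assembled, the package itself is just a tarball of it. The sketch below uses empty placeholder directories and files so it is demonstrable standalone; "etcd-pkg.tar.gz" is an illustrative package name, not from the article.

```shell
# Skeleton of the etcd package tree above (real files come from the steps above):
mkdir -p etcd/bin etcd/conf etcd/logs etcd/ssl
: > etcd/etcd.service                 # placeholder for the real unit file
# Archive the tree into a distributable package and list its contents:
tar czf etcd-pkg.tar.gz etcd
tar tzf etcd-pkg.tar.gz
```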

6.2. Building the master node install package

1. Besides kube-controller-manager, kube-scheduler, and kube-apiserver, the master also needs containerd and kubelet installed. The four .service files are startup scripts; at install time copy them to /usr/lib/systemd/system/.

Copy the generated certificates from the cert directory into the ssl directory, and the kubeconfig files into the conf directory; the files to copy are shown in the directory structure below.

Master node directory structure:
├── bin
│   ├── kube-apiserver
│   ├── kube-controller-manager
│   ├── kubelet
│   └── kube-scheduler
├── conf  # kubeconfig files copied from the kubeconfig directory
│   ├── kube-apiserver.conf
│   ├── kube-controller-manager.conf
│   ├── kube-controller-manager.kubeconfig
│   ├── kubelet-bootstrap.kubeconfig
│   ├── kubelet.yaml
│   ├── kube-scheduler.conf
│   ├── kube-scheduler.kubeconfig
│   └── token.csv  # from the kubeconfig directory
├── kube-apiserver.service
├── kube-controller-manager.service
├── kubelet.service
├── kube-scheduler.service
├── logs
└── ssl  # copied from the cert directory
    ├── ca-key.pem
    ├── ca.pem
    ├── kube-apiserver-key.pem
    ├── kube-apiserver.pem
    ├── kube-controller-manager-key.pem
    ├── kube-controller-manager.pem
    ├── kube-scheduler-key.pem
    ├── kube-scheduler.pem
    ├── proxy-client-key.pem
    └── proxy-client.pem

6.3. Node install package

1. A node only needs containerd and kubelet installed. The .service file is the startup script; at install time copy it to /usr/lib/systemd/system/.

Copy the generated certificates from the cert directory into the ssl directory, and the kubeconfig files into the conf directory; the files to copy are shown in the directory structure below.

Node directory structure:
├── bin
│   ├── kubelet
│   └── kube-proxy
├── conf  # kubeconfig files copied from the kubeconfig directory
│   ├── kubelet-bootstrap.kubeconfig
│   └── kubelet.yaml
├── kubelet.service
├── logs
└── ssl  # copied from the cert directory
    ├── ca-key.pem
    └── ca.pem

6.4. containerd install package

Starting with containerd 2.0, runc and cni-plugins must be installed alongside it. This package bundles runc, the cni-plugins, the containerd binaries, the configuration, and the startup script. At install time: copy the files under bin to /usr/local/bin, the cni directory to /opt/, sbin/runc to /usr/local/sbin, config.toml to /etc/containerd, crictl.yaml to /etc/, and containerd.service to /etc/systemd/system/. See the directory structure below.

Download links:
https://gitee.com/eeenet/k8s-install
https://github.com/kubernetes-sigs/cri-tools/releases
https://github.com/containerd/containerd/releases
https://github.com/opencontainers/runc/releases
https://github.com/containernetworking/plugins/releases

├── etc
│   ├── containerd
│   │   └── config.toml
│   ├── crictl.yaml
│   └── systemd
│       └── system
│           └── containerd.service
├── opt
│   └── cni
│       └── bin
│           ├── bandwidth
│           ├── bridge
│           ├── dhcp
│           ├── dummy
│           ├── firewall
│           ├── host-device
│           ├── host-local
│           ├── ipvlan
│           ├── LICENSE
│           ├── loopback
│           ├── macvlan
│           ├── portmap
│           ├── ptp
│           ├── README.md
│           ├── sbr
│           ├── static
│           ├── tap
│           ├── tuning
│           ├── vlan
│           └── vrf
└── usr
    └── local
        ├── bin
        │   ├── containerd
        │   ├── containerd-shim-runc-v2
        │   ├── containerd-stress
        │   ├── crictl
        │   └── ctr
        └── sbin
            └── runc
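Because this package mirrors the filesystem layout (etc/, opt/, usr/), the whole install can be a single archive extraction instead of the individual copies listed above. The sketch below demonstrates this with placeholder files and a "fakeroot" demo directory (both hypothetical names introduced here); on a real host the final extraction would use -C / instead.

```shell
# Stage a skeleton of the package tree with empty placeholder files:
mkdir -p pkg/etc/containerd pkg/usr/local/bin pkg/usr/local/sbin pkg/opt/cni/bin
: > pkg/etc/containerd/config.toml
: > pkg/usr/local/bin/containerd
: > pkg/usr/local/sbin/runc
# Pack it relative to the package root, then unpack onto a target root:
tar czf containerd-pkg.tar.gz -C pkg .
mkdir -p fakeroot
tar xzf containerd-pkg.tar.gz -C fakeroot    # on a real host: tar xzf ... -C /
ls fakeroot/etc/containerd
```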