
Setting up a Kubernetes (v1.9) cluster with kubeadm

2018-06-05


# Setting up the environment with kubeadm

**Note: apart from Docker, none of the installation steps below require access to the public internet.**

Docker version 18.07; for the installation procedure, see the post "Installing docker-ce on CentOS 7.2":

https://blog.csdn.net/z770816239/article/details/80560747

## Install Docker

Configure Docker's registry mirror (`vi /etc/docker/daemon.json`):

```
{
  "registry-mirrors": ["https://mdvdy488.mirror.aliyuncs.com"]
}
```

```shell
mkdir -p /etc/systemd/system/docker.service.d

# Only needed if the host reaches registries through a proxy;
# ${PROXY_IP} must be set in the shell before running this.
cat >> /etc/systemd/system/docker.service.d/http-proxy.conf <<EOF
[Service]
Environment="HTTP_PROXY=http://${PROXY_IP}:8118"
EOF

systemctl enable docker && systemctl restart docker
```

Save the configuration and restart Docker.
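As a quick sanity check (my addition, not from the original post), `docker info` confirms the mirror is active and shows the cgroup driver, which the kubelet configuration must match later:

```shell
docker info | grep -i -A1 "registry mirrors"   # was the mirror picked up?
docker info | grep -i "cgroup driver"          # remember this for the kubelet step
```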

## Disable swap, SELinux, and the firewall

```shell
swapoff -a       # kubelet refuses to run with swap enabled
setenforce 0     # switch SELinux to permissive for the current boot
sed -i "s|SELINUX=.*|SELINUX=disabled|g" /etc/selinux/config   # persist across reboots
systemctl stop firewalld
```
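Not all of the commands above survive a reboot; a small follow-up sketch (my addition, not from the original post) makes them permanent:

```shell
sed -i '/ swap / s/^/#/' /etc/fstab   # comment out the swap mount entry
systemctl disable firewalld           # keep firewalld off after reboot
```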

## Check that net.bridge.bridge-nf-call-iptables is set to 1

`sysctl net.bridge.bridge-nf-call-iptables`

`sysctl net.bridge.bridge-nf-call-ip6tables`

```shell
# The target file name is an assumption; any file under /etc/sysctl.d/
# is read by `sysctl --system`.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

sysctl --system
```
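If the two keys do not exist at all, the `br_netfilter` kernel module is likely not loaded; loading it first (my addition, not from the original post) makes the settings above take effect:

```shell
modprobe br_netfilter        # provides the bridge-nf-call-* sysctls
lsmod | grep br_netfilter    # verify the module is loaded
```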

## Copy the installation files to the machine

The `rpms` and `yaml` directories.

## Install kubelet, kubectl, and kubeadm

```shell
cd rpms
rpm -ivh *
```
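A quick version check (my addition, not from the original post) confirms the packages installed cleanly:

```shell
kubeadm version
kubelet --version
kubectl version --client
```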

#### Adjust the kubelet configuration

- In `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf`, change `Environment="KUBELET_CGROUP_ARGS=--cgroup-driver=cgroupfs"` so that the cgroup driver matches the one reported by `docker info` (see the sketch after this list).

- Add a custom image for the infra ("pause") container that kubelet starts to hold each pod's network/IPC namespaces:

`vi /etc/systemd/system/kubelet.service.d/10-kubeadm.conf`

`Environment="KUBELET_EXTRA_ARGS=--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"`
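A minimal sketch of the first edit (my addition; assumes GNU sed and that `docker info` prints a "Cgroup Driver" line):

```shell
# Read Docker's cgroup driver and point kubelet's flag at the same value.
DRIVER=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/ {print $2}')
sed -i "s|--cgroup-driver=[a-z]*|--cgroup-driver=${DRIVER}|" \
    /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
```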

#### Restart the kubelet service

`systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet`

(`daemon-reload` is needed so systemd picks up the edits to `10-kubeadm.conf`.)

## Initialize the cluster

`kubeadm init --config yaml/kubeadm.yaml`

```shell
[init] Using Kubernetes version: v1.9.2
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.0-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[preflight] Starting the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [localhost.hcm.domain.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 ]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 157.104694 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node localhost.hcm.domain.com as master by adding a label and a taint
[markmaster] Master localhost.hcm.domain.com tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token:
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

kubeadm join --token : --discovery-token-ca-cert-hash sha256:
```

Set up kubectl for the current user, as the output instructs:

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
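At this point `kubectl` should respond (a quick check of my own, not from the original post); the node stays `NotReady` until a pod network is installed:

```shell
kubectl get nodes
```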

## Install flannel

Until a pod network is installed, kube-dns remains in the Pending state:

```shell
[root@localhost ~]# kubectl get pods --namespace=kube-system
NAME                                               READY     STATUS    RESTARTS   AGE
etcd-localhost.hcm.domain.com                      1/1       Running   0          38s
kube-apiserver-localhost.hcm.domain.com            1/1       Running   1          1m
kube-controller-manager-localhost.hcm.domain.com   1/1       Running   0          40s
kube-dns-6f4fd4bdf-dndm6                           0/3       Pending   0          26s
kube-scheduler-localhost.hcm.domain.com            1/1       Running   0          1m
```

### Run the installation

`kubectl apply -f yaml/kube-flannel.yml`

### Post-installation check

```shell
[root@localhost ~]# kubectl get pods --namespace=kube-system
NAME                                               READY     STATUS    RESTARTS   AGE
etcd-localhost.hcm.domain.com                      1/1       Running   0          5m
kube-apiserver-localhost.hcm.domain.com            1/1       Running   1          5m
kube-controller-manager-localhost.hcm.domain.com   1/1       Running   0          5m
kube-dns-6f4fd4bdf-dndm6                           3/3       Running   0          4m
kube-flannel-ds-gj992                              1/1       Running   0          4m
kube-proxy-pct9t                                   1/1       Running   0          4m
kube-scheduler-localhost.hcm.domain.com            1/1       Running   0          6m
```

## Allow scheduling on the master

By default the master is tainted so that no regular pods are scheduled on it. To also use the master as a worker, run the command below.

`kubectl taint nodes --all node-role.kubernetes.io/master-`
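To confirm the taint is gone (my addition, not from the original post; the hostname is the one from the init log above):

```shell
kubectl describe node localhost.hcm.domain.com | grep Taints
```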

## Install the dashboard

`kubectl create -f kubernetes-dashboard.yaml`

Access the dashboard:

`kubectl proxy --address 0.0.0.0 --accept-hosts '.*'`

http://192.168.122.115:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/ingress?namespace=default

http://192.168.122.115:8001/ui
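A quick way to confirm the dashboard pod is up (my addition, not from the original post):

```shell
kubectl get pods --namespace=kube-system | grep dashboard
```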

## Add worker nodes

#### Set the hostname and /etc/hosts

These two settings must be unique on every node; joining nodes with duplicate hostnames confuses the cluster and causes failures.

#### Install Docker

#### Disable swap, SELinux, and the firewall

#### Check that net.bridge.bridge-nf-call-iptables is set to 1

### Install kubelet, kubectl, and kubeadm

Copy the rpms to the machine:

```shell
rpm -ivh *
systemctl enable kubelet && systemctl start kubelet
```

#### Join the node to the cluster

Run the following command on the node to be added.

`kubeadm join --token : --discovery-token-ca-cert-hash sha256:`

#### Getting the token

`kubeadm token list`

#### Getting the CA cert hash

`openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'`
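Putting the two together on the master (a sketch of my own, not from the original post; `MASTER_IP` and port 6443 are placeholders for your API server address):

```shell
TOKEN=$(kubeadm token list | awk 'NR==2 {print $1}')   # first token in the list
HASH=$(openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "kubeadm join --token ${TOKEN} ${MASTER_IP}:6443 --discovery-token-ca-cert-hash sha256:${HASH}"
```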

Output on success:

```shell
[preflight] Running pre-flight checks.
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.09.1-ce. Max validated version: 17.03
[WARNING FileExisting-crictl]: crictl not found in system path
[discovery] Trying to connect to API Server ":"
[discovery] Created cluster-info discovery client, requesting info from "https://:"
[discovery] Requesting info from "https://:" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server ":"
[discovery] Successfully established connection with API Server ":"

This node has joined the cluster:

* Certificate signing request was sent to master and a response
  was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```

## Tear down the cluster

```shell
kubectl drain --delete-local-data --force --ignore-daemonsets   # evict workloads from the node
kubectl delete node                                             # remove the node object
kubeadm reset                                                   # wipe kubeadm state on the node itself
```
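`kubeadm reset` does not necessarily clean up CNI configuration or iptables rules; removing them by hand (my addition, not from the original post) avoids surprises when re-joining:

```shell
rm -rf /etc/cni/net.d                  # leftover CNI configs (e.g. flannel's)
iptables -F && iptables -t nat -F      # flush rules added by kube-proxy/flannel
ip link delete flannel.1 2>/dev/null   # drop flannel's VXLAN interface if present
```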

## Appendix: kubeadm.yaml (used by `kubeadm init` above)

```yaml
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
etcd:
  extraArgs:
    'listen-client-urls': 'http://127.0.0.1:2379'
    'advertise-client-urls': 'http://127.0.0.1:2379'
    'listen-peer-urls': 'http://127.0.0.1:2380'
    'data-dir': '/var/lib/etcd'
  image: gcr.io/google_containers/etcd-amd64:3.1.11
kubernetesVersion: 1.9.2
tokenTTL: 0s
networking:
  podSubnet: 10.244.0.0/16
```

## Appendix: kube-flannel.yml (applied in the flannel step above)

```yaml
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "type": "flannel",
      "delegate": {
        "isDefaultGateway": true
      }
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      hostNetwork: true
      nodeSelector:
        beta.kubernetes.io/arch: amd64
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.9.1-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conf
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.9.1-amd64
        command: [ "/opt/bin/flanneld", "--ip-masq", "--kube-subnet-mgr" ]
        securityContext:
          privileged: true
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
```
