Server environment

Use Sunlogin (向日葵) remote desktop to access the client machine; the Sunlogin account/password is 815793896/1234567.

The four servers are:

10.126.13.198 Nginx

10.126.13.200 application

10.126.13.203 application

10.126.13.202 application + database

All servers share the account/password root/cico@123.

Wen Jian (文坚) has already installed the trial version of the application and deployed the Docker project on both 10.126.13.200 and 10.126.13.203.

Plan

10.126.13.198 Nginx
10.126.13.200 application            Master1
10.126.13.202 application + database Node1
10.126.13.203 application            Node2

Install on Master1

Set the hostname on each node (important: otherwise the node has no name when it joins the cluster):

  • master node:
    hostnamectl set-hostname master1

  • node1 node:
    hostnamectl set-hostname node1

  • node2 node:
    hostnamectl set-hostname node2

  • db-nfs node:
    hostnamectl set-hostname db-nfs
    
cat <<EOF >> /etc/hosts
192.168.88.200 master1
192.168.88.211 node1
192.168.88.212 node2
192.168.88.213 db-nfs
EOF
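A quick sanity check that every planned hostname got an entry — a sketch that dry-runs against a sample copy of the file (the `/tmp` path and sample data are illustrative, mirroring the entries appended above):

```shell
#!/bin/sh
# Sanity check: every cluster hostname from the plan must appear in the
# hosts file. HOSTS_FILE points at a sample copy so this is a dry run.
HOSTS_FILE=/tmp/hosts.sample

# Sample data mirroring the entries appended above (illustrative).
cat > "$HOSTS_FILE" <<EOF
192.168.88.200 master1
192.168.88.211 node1
192.168.88.212 node2
192.168.88.213 db-nfs
EOF

missing=0
for h in master1 node1 node2 db-nfs; do
    # grep -w: match the hostname as a whole word, not as a prefix
    if ! grep -qw "$h" "$HOSTS_FILE"; then
        echo "missing hosts entry: $h"
        missing=1
    fi
done
[ "$missing" -eq 0 ] && echo "all hosts entries present"
```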

** Disable the firewall and SELinux. On CentOS:

$ systemctl stop firewalld && systemctl disable firewalld

$ setenforce 0

$ vi /etc/selinux/config

SELINUX=disabled

** Disable swap

swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab
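The fstab edit above can be dry-run against a sample file first to confirm only the swap line is dropped (the `/tmp` paths and sample entries below are illustrative):

```shell
#!/bin/sh
# Dry run of the swap-line removal on a sample fstab (illustrative paths).
FSTAB=/tmp/fstab.sample
cat > "$FSTAB" <<'EOF'
/dev/mapper/centos-root /        xfs  defaults 0 0
UUID=abcd-1234          /boot    xfs  defaults 0 0
/dev/mapper/centos-swap swap     swap defaults 0 0
EOF

cp "$FSTAB" "${FSTAB}_bak"              # keep a backup, as in the real procedure
grep -v swap "${FSTAB}_bak" > "$FSTAB"  # drop every line mentioning swap
echo "remaining lines: $(wc -l < "$FSTAB")"
```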

** Add the following kernel parameters (written to /etc/sysctl.d/k8s.conf)

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

sysctl --system
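Before reloading with `sysctl --system`, the fragment can be checked for the three expected keys — a side-effect-free sketch that writes to `/tmp` instead of `/etc/sysctl.d`:

```shell
#!/bin/sh
# Verify the three expected kernel parameters are present in the fragment.
# Written to /tmp here so the check has no side effects.
CONF=/tmp/k8s.conf
cat > "$CONF" <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

for key in net.bridge.bridge-nf-call-ip6tables \
           net.bridge.bridge-nf-call-iptables \
           net.ipv4.ip_forward; do
    grep -q "^$key = 1" "$CONF" || { echo "missing: $key"; exit 1; }
done
echo "sysctl fragment ok"
```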

** Configure the yum repositories

  • Switch to the Aliyun mirror:
    curl -o /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo
    

** Docker yum repository

cat > /etc/yum.repos.d/docker.repo <<EOF
[docker-repo]
name=Docker Repository
baseurl=http://mirrors.aliyun.com/docker-engine/yum/repo/main/centos/7
enabled=1
gpgcheck=0
EOF

** Kubernetes yum repository

cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
EOF

yum clean all
yum makecache

Install docker, kubeadm, etc.

Run on all nodes:

yum install -y docker --disableexcludes=docker-repo
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet


# Configure the Docker registry mirrors (Aliyun)
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{"registry-mirrors":["https://registry.docker-cn.com","https://kxv08zer.mirror.aliyuncs.com"]}
EOF
systemctl restart docker
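dockerd refuses to start if daemon.json is malformed, so a syntax check before restarting is cheap insurance — a sketch writing to a `/tmp` copy (real path is /etc/docker/daemon.json; python3 is assumed to be available):

```shell
#!/bin/sh
# Write the mirror config to a temp file and validate the JSON syntax
# before touching the real /etc/docker/daemon.json.
DAEMON_JSON=/tmp/daemon.json
cat > "$DAEMON_JSON" <<EOF
{"registry-mirrors":["https://registry.docker-cn.com","https://kxv08zer.mirror.aliyuncs.com"]}
EOF

# json.tool exits nonzero on invalid JSON
python3 -m json.tool "$DAEMON_JSON" > /dev/null && echo "daemon.json: valid JSON"
```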

Initialize the cluster with kubeadm

Run on the master:

kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.0

** The output should look like the following (it differs on every install — regenerate it, do not reuse this one):

kubeadm join 192.168.88.200:6443 --token nd5n07.s06kgvfmu3qgni0o \
    --discovery-token-ca-cert-hash sha256:0d9f70f29264f9ae65bbb52c7aeab1207da465e3930670a4543a05602738b853

** Note: save the `kubeadm join …………` line from the output.

** If you lose it, regenerate the command by running:

kubeadm token create --print-join-command --ttl 0
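If the token and CA hash are needed separately (e.g. for scripted joins), they can be pulled out of the saved join line. A sketch, using the sample init output above (not a live token):

```shell
#!/bin/sh
# Extract the token and discovery hash from a saved `kubeadm join` line.
# JOIN_CMD is the sample from the init output above, not a live token.
JOIN_CMD='kubeadm join 192.168.88.200:6443 --token nd5n07.s06kgvfmu3qgni0o --discovery-token-ca-cert-hash sha256:0d9f70f29264f9ae65bbb52c7aeab1207da465e3930670a4543a05602738b853'

# awk: print the field that follows each flag
TOKEN=$(echo "$JOIN_CMD"   | awk '{for (i=1;i<NF;i++) if ($i=="--token") print $(i+1)}')
CA_HASH=$(echo "$JOIN_CMD" | awk '{for (i=1;i<NF;i++) if ($i=="--discovery-token-ca-cert-hash") print $(i+1)}')

echo "token:   $TOKEN"
echo "ca hash: $CA_HASH"
```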

To make kubectl work for a non-root user, run the following commands (also part of the kubeadm init output):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Or, as root, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

Install the flannel network

Run on the master:


kubectl apply -f kube-flannel-v0.12.0.yaml
Note: a big pitfall was hit here :-) Nodes only came up as Ready after adding

      "cniVersion": "0.3.1",

under line 108 of flannel/Documentation/kube-flannel.yml (commit d893bcb), i.e. in the CNI config section of the manifest.
** Run kubectl get pod -n kube-system to check pod status, and wait until every pod is Running.
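Waiting for the pods can be scripted. The readiness test below only parses `kubectl get pod` style text, so it is demonstrated here on a canned sample (the pod names are illustrative); in a live loop you would pipe the real `kubectl get pod -n kube-system` output into it:

```shell
#!/bin/sh
# Succeed only when every pod line reports STATUS == Running.
all_running() {
    # NR>1 skips the header row; column 3 is STATUS
    awk 'NR>1 && $3 != "Running" { bad=1 } END { exit bad }'
}

# Canned sample of `kubectl get pod -n kube-system` output (illustrative).
SAMPLE='NAME                          READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-abcde      1/1     Running   0          5m
kube-flannel-ds-amd64-xyz12   1/1     Running   0          2m'

if echo "$SAMPLE" | all_running; then
    echo "all pods running"
else
    echo "still waiting"
fi
```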

Add worker nodes

Add the nodes to the k8s cluster.

** Run on each worker node:

Run the kubeadm join ………… command saved in the step above.

Check node status on the master: kubectl get nodes

Cleanup if installation fails

If cluster initialization runs into trouble, clean up with the following commands:

kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/

If you forget the join command:

kubeadm token create --print-join-command

Deploy the dashboard

** Run on the master:

Install:

kubectl apply \
  -f heapster.yaml \
  -f recommended.yaml \
  -f admin-token.yaml

Create the dashboard admin-token

** Create an admin-token.yaml file with the following contents:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
  • Create the user:
kubectl create -f admin-token.yaml
  • Get the login token:
kubectl describe secret/$(kubectl get secret -nkube-system |grep admin|awk '{print $1}') -nkube-system
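The pipeline above first grabs the admin ServiceAccount's secret name; its grep/awk part can be sanity-checked on canned `kubectl get secret -n kube-system` output (the secret names below are illustrative):

```shell
#!/bin/sh
# Reproduce the name-extraction part of the token lookup on canned
# `kubectl get secret -n kube-system` output (names are illustrative).
SECRETS='NAME                  TYPE                                  DATA   AGE
admin-token-x7k2p     kubernetes.io/service-account-token   3      10m
default-token-9q8rs   kubernetes.io/service-account-token   3      30m'

# grep picks the admin secret's row; awk prints its NAME column
NAME=$(echo "$SECRETS" | grep admin | awk '{print $1}')
echo "secret: $NAME"
# The live lookup is then:
#   kubectl describe secret/$NAME -n kube-system
```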

Configure external access (ingress-nginx)

wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml

The kube-flannel-v0.12.0.yaml file in this directory can be used.

Reference: https://www.cnblogs.com/panwenbin-logs/p/9915927.html

Install a load balancer (deploy MetalLB)

First edit the kube-proxy configuration:

kubectl edit configmap -n kube-system kube-proxy

(If kube-proxy runs in IPVS mode, set strictARP: true here — MetalLB's layer2 mode requires it.)

Now MetalLB itself can be deployed:

kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/namespace.yaml
kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.9.3/manifests/metallb.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"

Create the config file defining the address pool — free addresses in the same network segment as the master:

cat > metallb-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250
EOF

Services can now request an external IP by setting type: LoadBalancer.
