Installing Kubernetes with kubeadm

Kubernetes (k8s) is Google's open-source container cluster management system (based on Google's internal Borg). It is a leading distributed-architecture solution built on container technology.

Building on Docker, it provides containerized applications with a complete set of capabilities, including deployment and operation, resource scheduling, service discovery, and dynamic scaling, making large-scale container cluster management far more convenient.

Kubernetes is a complete distributed system platform with full cluster management capabilities: multi-layered security and admission control, multi-tenant application support, transparent service registration and discovery, a built-in intelligent load balancer, strong fault detection and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource scheduling mechanism, and multi-granularity resource quota management. Kubernetes also provides comprehensive management tools covering development, deployment, testing, and operations monitoring.

Installing Kubernetes on CentOS 7
About kubeadm
kubeadm is the official Kubernetes tool for quickly installing a Kubernetes cluster. It is updated in lockstep with every Kubernetes release, and it adjusts certain cluster-configuration practices over time, so experimenting with kubeadm is a good way to learn the official best practices for cluster configuration.
Installation options
1. kubeadm: simple and convenient
2. Installing each component one by one, running them as daemons
The environment is as follows:
IP address      Hostname
172.19.33.174   master.zj.com
172.19.33.175   node1.zj.com
172.19.33.176   node2.zj.com
Configure /etc/hosts
Add the same three entries on master, node1, and node2:
172.19.33.174 master.zj.com
172.19.33.175 node1.zj.com
172.19.33.176 node2.zj.com
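A quick way to confirm the entries work on each machine (a minimal sketch using the hostnames defined above):
for h in master.zj.com node1.zj.com node2.zj.com; do
  ping -c1 -W1 $h >/dev/null && echo "$h OK" || echo "$h FAILED"
done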
Configure passwordless SSH login
ssh-keygen   # run on all three machines
# On master:
ssh-copy-id node1.zj.com
ssh-copy-id node2.zj.com
# On node1:
ssh-copy-id master.zj.com
ssh-copy-id node2.zj.com
# On node2:
ssh-copy-id master.zj.com
ssh-copy-id node1.zj.com
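To confirm the key exchange succeeded, each host should be able to run a remote command without a password prompt (a small check; BatchMode makes ssh fail instead of prompting):
for h in master.zj.com node1.zj.com node2.zj.com; do
  ssh -o BatchMode=yes $h hostname
done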
Configure the Aliyun yum mirrors
Docker repository:
cd /etc/yum.repos.d/
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
Kubernetes repository:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Rebuild the yum cache
yum clean all
yum makecache
# If yum complains about an existing lock, delete /var/run/yum.pid and it will work normally again: rm -rf /var/run/yum.pid
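As an optional check (the package names match the pinned versions installed later in this guide), confirm the 1.14.2 builds are visible through the new mirror:
yum list kubeadm kubelet kubectl --showduplicates | grep 1.14.2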
Disable the firewall
# Disable it on every server; otherwise the firewall will interfere with Kubernetes networking
systemctl stop firewalld && systemctl disable firewalld
Disable swap
swapoff -a
free -h   # verify: the swap total should now be 0
# Edit /etc/fstab and comment out the /dev/mapper/centos-swap swap entry:
sed -i "s/\/dev\/mapper\/centos-swap/# \/dev\/mapper\/centos-swap/" /etc/fstab
cat /etc/fstab
Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
cat /etc/selinux/config
SELINUX=disabled
Configure iptables bridge settings
# Run on all three machines
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
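To confirm the settings are active (a minimal check; the br_netfilter module must be loaded for these keys to exist):
modprobe br_netfilter   # load the bridge netfilter module if the keys are missing
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   # both should report 1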
Install networking and base utility packages
yum install net-tools checkpolicy gcc dkms foomatic openssh-server bash-completion -y
Install the components
# Install on both master and nodes
yum install docker-ce-18.06.2.ce -y
yum install kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 -y
Configure a domestic Docker registry mirror and switch the cgroup driver to systemd
mkdir -p /etc/docker
cat > /etc/docker/daemon.json <<EOF
{
  "registry-mirrors": ["https://okxfxhu4.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
Start Docker
systemctl daemon-reload
systemctl enable docker
systemctl start docker
Verify the cgroup driver
docker info |grep Cgroup
Load the IPVS kernel modules
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
Add them to /etc/rc.local so they are loaded at boot:
cat <<EOF >> /etc/rc.local
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
EOF
source /etc/rc.local
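Two caveats worth checking here: on CentOS 7, /etc/rc.local only runs at boot when it is executable, and lsmod confirms the modules are actually loaded. A minimal sketch:
chmod +x /etc/rc.d/rc.local   # /etc/rc.local is a symlink to this file on CentOS 7
lsmod | grep ip_vs            # should list ip_vs_rr, ip_vs_wrr, ip_vs_sh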
Install the master node

Because Google's image registry is not reachable from mainland China, the workaround is to pull the images from another registry and re-tag them. Make sure the versions you pull match your kubeadm version; here we use v1.14.2. Run the following shell script:

#!/bin/bash
kube_version=:v1.14.2
kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
addon_images=(etcd-amd64:3.3.10 coredns:1.3.1 pause-amd64:3.1)

for imageName in ${kube_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
done

for imageName in ${addon_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

docker tag k8s.gcr.io/etcd-amd64:3.3.10 k8s.gcr.io/etcd:3.3.10
docker image rm k8s.gcr.io/etcd-amd64:3.3.10
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker image rm k8s.gcr.io/pause-amd64:3.1
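To verify the re-tagged images line up with what kubeadm actually expects for this release, you can print kubeadm's own image list and compare it against docker images:
kubeadm config images list --kubernetes-version v1.14.2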
Check the images on the master
docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.14.2             5c24210246bb        7 days ago          82.1MB
k8s.gcr.io/kube-apiserver            v1.14.2             5eeff402b659        7 days ago          210MB
k8s.gcr.io/kube-controller-manager   v1.14.2             8be94bdae139        7 days ago          158MB
k8s.gcr.io/kube-scheduler            v1.14.2             ee18f350636d        7 days ago          81.6MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        4 months ago        40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        5 months ago        258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        17 months ago       742kB
Enable kubelet at boot
systemctl enable kubelet
Configure kubelet to tolerate swap
sed -i "s/KUBELET_EXTRA_ARGS=/KUBELET_EXTRA_ARGS=\"--fail-swap-on=false\"/" /etc/sysconfig/kubelet
cat /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--fail-swap-on=false"
Use kubeadm init to install the master node automatically; the Kubernetes version must be specified.
Run the init
kubeadm init --kubernetes-version=v1.14.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
# After the control plane comes up, follow the printed instructions:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
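Before going further, a quick sanity check with standard kubectl commands is useful; note the master will report NotReady until the pod network is installed in a later step:
kubectl cluster-info   # prints the API server and KubeDNS endpoints
kubectl get nodes      # the master appears, NotReady until flannel is deployed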
Check the status
kubectl get pod -n kube-system -owide
NAME                                    READY   STATUS    RESTARTS   AGE    IP              NODE            NOMINATED NODE   READINESS GATES
coredns-fb8b8dccf-br8lt                 0/1     Pending   0          118s   <none>          <none>          <none>           <none>
coredns-fb8b8dccf-kv5st                 0/1     Pending   0          118s   <none>          <none>          <none>           <none>
etcd-zj.ops.master                      1/1     Running   0          78s    172.19.33.174   zj.ops.master   <none>           <none>
kube-apiserver-zj.ops.master            1/1     Running   0          71s    172.19.33.174   zj.ops.master   <none>           <none>
kube-controller-manager-zj.ops.master   1/1     Running   0          79s    172.19.33.174   zj.ops.master   <none>           <none>
kube-proxy-hg24l                        1/1     Running   0          118s   172.19.33.174   zj.ops.master   <none>           <none>
kube-scheduler-zj.ops.master            1/1     Running   0          60s
Configure the pod network
# A pod network add-on is required so that pods can communicate with each other. The network must be deployed before any applications and before kube-dns starts; kubeadm only supports CNI-based networks.

# Many pod network plugins are available, such as Calico, Canal, Flannel, Romana, and Weave Net. Because we initialized the cluster with --pod-network-cidr=10.244.0.0/16, we use Flannel.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
# Check that everything comes up; pulling the flannel image can take a while.
kubectl get pods --all-namespaces
NAMESPACE     NAME                                    READY   STATUS              RESTARTS   AGE
kube-system   coredns-fb8b8dccf-hzlc7                 0/1     ContainerCreating   0          3m54s
kube-system   coredns-fb8b8dccf-xsxrf                 0/1     ContainerCreating   0          3m54s
kube-system   etcd-master.zj.com                      1/1     Running             0          2m57s
kube-system   kube-apiserver-master.zj.com            1/1     Running             0          3m15s
kube-system   kube-controller-manager-master.zj.com   1/1     Running             0          3m8s
kube-system   kube-flannel-ds-amd64-h2mcw             1/1     Running             0          25s
kube-system   kube-proxy-m4tng                        1/1     Running             0          3m53s
kube-system   kube-scheduler-master.zj.com            1/1     Running             0          3m12s
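Rather than re-running kubectl get pods by hand, you can block until CoreDNS is ready (a sketch; k8s-app=kube-dns is the label kubeadm's CoreDNS pods carry):
kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=300s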
Troubleshooting tips (a helper sketch follows this list):
1. Confirm that ports and containers started properly, and check /var/log/messages for errors.
2. Use docker logs ID to view container startup logs, especially for containers that keep getting recreated.
3. Use kubectl --namespace=kube-system describe pod POD-NAME to inspect pods in an error state.
4. Use kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME} to see the concrete error.
5. Calico, Canal, and Flannel have been officially validated; other network plugins may have pitfalls, and whether you can climb out of them is up to you.
6. The most common errors are wrong image names or versions, or images that cannot be downloaded.
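The pod-level checks from items 3 and 4 can be bundled into a small helper script (a sketch; the script name and arguments are made up for illustration):
#!/bin/bash
# Usage: ./k8s-debug.sh POD-NAME [CONTAINER-NAME]
pod=$1
container=$2
# Show the pod's events and status first
kubectl --namespace=kube-system describe pod "$pod"
# Then pull the logs, optionally for a specific container
if [ -n "$container" ]; then
  kubectl -n kube-system logs "$pod" -c "$container"
else
  kubectl -n kube-system logs "$pod"
fi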
Install the nodes
#!/bin/bash

kube_version=:v1.14.2
coredns_version=1.3.1
pause_version=3.1

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
Check the downloaded images
docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.14.2             5c24210246bb        8 days ago          82.1MB
k8s.gcr.io/coredns      1.3.1               eb516548c180        4 months ago        40.3MB
k8s.gcr.io/pause        3.1                 da86e6ba6ca1        17 months ago       742kB
Join the nodes (node1 as an example)
kubeadm join 172.19.33.174:6443 --token d58cn9.j4oa27ryu0h5901h \
    --discovery-token-ca-cert-hash sha256:6cbc7333b1e054fd3ebb2c68e4f148c8ad73f29b7aee38070eac535e3b9b2b3b
# The token above was generated by kubeadm init and authorizes a node to join the master
kubeadm token create --print-join-command
# Prints the current join command for the cluster
# If the token has expired, run kubeadm token create on the master to generate a new one
# If you have forgotten the token, list existing tokens with kubeadm token list
# Run the same steps on node2

On the master, you can now see the nodes joining the cluster.

Check the nodes
kubectl get nodes
NAME            STATUS     ROLES    AGE     VERSION
master.zj.com   Ready      master   64m     v1.14.2
node1.zj.com    Ready      <none>   2m38s   v1.14.2
node2.zj.com    NotReady   <none>   6s      v1.14.2
kubectl configuration on the nodes
# Copy the master's kubeconfig to each node so the nodes can use kubectl as well.
# The target directory may not exist on a node yet:
mkdir -p /root/.kube

# node1
scp /etc/kubernetes/admin.conf  172.16.8.140:/root/.kube/config
# node2
scp /etc/kubernetes/admin.conf  172.16.8.143:/root/.kube/config
# node3
scp /etc/kubernetes/admin.conf  172.16.8.155:/root/.kube/config
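Once the config is in place, kubectl on any node should reach the API server just like on the master:
kubectl get nodes   # run on a node; should list the whole cluster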
Check pod status from the master
kubectl get pods -n kube-system -o wide
NAME                                          READY   STATUS                  RESTARTS   AGE     IP             NODE                  NOMINATED NODE   READINESS GATES
coredns-fb8b8dccf-4ckv9                       1/1     Running                 0          38m     10.244.1.2     node1.liudehan.com    <none>           <none>
coredns-fb8b8dccf-dlrb9                       1/1     Running                 0          38m     10.244.1.3     node1.liudehan.com    <none>           <none>
etcd-master.liudehan.com                      1/1     Running                 0          37m     172.16.8.153   master.liudehan.com   <none>           <none>
kube-apiserver-master.liudehan.com            1/1     Running                 0          37m     172.16.8.153   master.liudehan.com   <none>           <none>
kube-controller-manager-master.liudehan.com   1/1     Running                 0          37m     172.16.8.153   master.liudehan.com   <none>           <none>
kube-flannel-ds-amd64-8vd2h                   1/1     Running                 0          9m16s   172.16.8.143   node2.liudehan.com    <none>           <none>
kube-flannel-ds-amd64-9mb9m                   1/1     Running                 0          10m     172.16.8.140   node1.liudehan.com    <none>           <none>
kube-flannel-ds-amd64-n28tx                   1/1     Running                 0          9m9s    172.16.8.155   node3.liudehan.com    <none>           <none>
kube-flannel-ds-amd64-nh2dl                   0/1     Init:ImagePullBackOff   0          30m     172.16.8.153   master.liudehan.com   <none>           <none>
kube-proxy-6knch                              1/1     Running                 0          9m16s   172.16.8.143   node2.liudehan.com    <none>           <none>
kube-proxy-l7gvg                              1/1     Running                 0          9m9s    172.16.8.155   node3.liudehan.com    <none>           <none>
kube-proxy-lm4lz                              1/1     Running                 0          10m     172.16.8.140   node1.liudehan.com    <none>           <none>
kube-proxy-qc7sn                              1/1     Running                 0          38m     172.16.8.153   master.liudehan.com   <none>           <none>
kube-scheduler-master.liudehan.com            1/1     Running                 0          37m     172.16.8.153   master.liudehan.com   <none>           <none>

# kube-flannel runs on every node
Change the kube-proxy configuration
kubectl edit configmap kube-proxy -n kube-system
# Find the following section:

    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999
mode is empty by default, which means iptables mode; change it to ipvs. scheduler is also empty by default, which selects the round-robin load-balancing algorithm.
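Note that editing the ConfigMap alone does not reconfigure running proxies; the kube-proxy pods have to be recreated to pick up IPVS mode (a sketch; k8s-app=kube-proxy is the label on kubeadm's kube-proxy DaemonSet pods):
kubectl -n kube-system delete pod -l k8s-app=kube-proxy
# The DaemonSet recreates the pods; confirm the mode in their logs:
kubectl -n kube-system logs -l k8s-app=kube-proxy | grep -i ipvs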
Remove a node
Cordon the node and drain its pods
kubectl drain node3.liudehan.com --delete-local-data --force --ignore-daemonsets
Delete node3
kubectl delete node node3.liudehan.com
node "node3.liudehan.com" deleted
# Checking again, node3 is gone from the master's node list
kubectl get nodes
NAME                  STATUS     ROLES    AGE   VERSION
master.liudehan.com   NotReady   master   57m   v1.14.2
node1.liudehan.com    Ready      <none>   28m   v1.14.2
node2.liudehan.com    Ready      <none>   28m   v1.14.2
The pods from node3 have been rescheduled onto node2
[root@master] ~$ kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE     IP           NODE               NOMINATED NODE   READINESS GATES
curl-66959f6557-r4crd    1/1     Running   1          8m34s   10.244.2.7   slave2.hanli.com   <none>           <none>
nginx-58db6fdb58-5wt7p   1/1     Running   0          3d6h    10.244.1.4   slave1.hanli.com   <none>           <none>
nginx-58db6fdb58-bhmcv   1/1     Running   0          55s     10.244.2.8   slave2.hanli.com   <none>           <none>

# Run on slave3 (the removed node)
# After a node has been deleted, reset everything kubeadm set up on it:
kubeadm reset
Rejoin the node to the cluster
kubeadm join 172.16.8.153:6443 --token 92sot3.g1nyzz82hehvgqh8 \
--discovery-token-ca-cert-hash sha256:8d69226b405ab30faa52c884502a00485390fcbc2e0669f6c2c7c5fa2f85392d
Deploy a Whoami application on k8s
Create the Whoami deployment on the master
kubectl create deployment whoami --image=idoall/whoami
deployment.apps/whoami created
Check the deployment status
kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
whoami   5/5     5            5           59m
Inspect the deployment details
kubectl get deployments
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
whoami   5/5     5            5           59m
[root@master _src]# kubectl describe deployment whoami
Name:                   whoami
Namespace:              default
CreationTimestamp:      Sat, 25 May 2019 18:09:18 +0800
Labels:                 app=whoami
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               app=whoami
Replicas:               5 desired | 5 updated | 5 total | 5 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=whoami
  Containers:
   whoami:
    Image:        idoall/whoami
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Progressing    True    NewReplicaSetAvailable
  Available      True    MinimumReplicasAvailable
OldReplicaSets:  <none>
NewReplicaSet:   whoami-8657469579 (5/5 replicas created)
Events:          <none>
Inspect the Whoami pod details
kubectl describe po whoami

Name:               whoami-8657469579-6thrj
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node3.liudehan.com/172.16.8.155
Start Time:         Sat, 25 May 2019 18:15:27 +0800
Labels:             app=whoami
                    pod-template-hash=8657469579
Annotations:        <none>
Status:             Running
IP:                 10.244.4.3
Controlled By:      ReplicaSet/whoami-8657469579
Containers:
  whoami:
    Container ID:   docker://57676edcf1d7620fc3710a5ab376b94212d1ab9bc89b6e3b48301b73d8748918
    Image:          idoall/whoami
    Image ID:       docker-pullable://idoall/whoami@sha256:6e79f7182eab032c812f6dafdaf55095409acd64d98a825c8e4b95e173e198f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 25 May 2019 18:15:36 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4zswl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4zswl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4zswl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Name:               whoami-8657469579-k2db4
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node3.liudehan.com/172.16.8.155
Start Time:         Sat, 25 May 2019 18:15:27 +0800
Labels:             app=whoami
                    pod-template-hash=8657469579
Annotations:        <none>
Status:             Running
IP:                 10.244.4.2
Controlled By:      ReplicaSet/whoami-8657469579
Containers:
  whoami:
    Container ID:   docker://cfae50bf7fd21386fb6fca2986adaf091f047b4dbe759de177317d5d828e73d1
    Image:          idoall/whoami
    Image ID:       docker-pullable://idoall/whoami@sha256:6e79f7182eab032c812f6dafdaf55095409acd64d98a825c8e4b95e173e198f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 25 May 2019 18:15:35 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4zswl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4zswl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4zswl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Name:               whoami-8657469579-rtsr4
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2.liudehan.com/172.16.8.143
Start Time:         Sat, 25 May 2019 18:15:23 +0800
Labels:             app=whoami
                    pod-template-hash=8657469579
Annotations:        <none>
Status:             Running
IP:                 10.244.2.3
Controlled By:      ReplicaSet/whoami-8657469579
Containers:
  whoami:
    Container ID:   docker://89f518bd04ff6830e252e6e45004eb0d7aa64d0acc7550d432ccffda12d5f2d7
    Image:          idoall/whoami
    Image ID:       docker-pullable://idoall/whoami@sha256:6e79f7182eab032c812f6dafdaf55095409acd64d98a825c8e4b95e173e198f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 25 May 2019 18:15:37 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4zswl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4zswl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4zswl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Name:               whoami-8657469579-vkv92
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node2.liudehan.com/172.16.8.143
Start Time:         Sat, 25 May 2019 18:15:23 +0800
Labels:             app=whoami
                    pod-template-hash=8657469579
Annotations:        <none>
Status:             Running
IP:                 10.244.2.2
Controlled By:      ReplicaSet/whoami-8657469579
Containers:
  whoami:
    Container ID:   docker://9418e3560da086e1bcfe831d4168797ed9b8ff1f303b82633e142d53282d2d71
    Image:          idoall/whoami
    Image ID:       docker-pullable://idoall/whoami@sha256:6e79f7182eab032c812f6dafdaf55095409acd64d98a825c8e4b95e173e198f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 25 May 2019 18:15:36 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4zswl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4zswl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4zswl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>

Name:               whoami-8657469579-xqb7j
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               node1.liudehan.com/172.16.8.140
Start Time:         Sat, 25 May 2019 18:09:23 +0800
Labels:             app=whoami
                    pod-template-hash=8657469579
Annotations:        <none>
Status:             Running
IP:                 10.244.1.4
Controlled By:      ReplicaSet/whoami-8657469579
Containers:
  whoami:
    Container ID:   docker://09bb5819083dfebfc067680fb4af8434b07d1a3a3fc4e3630d7a0a194fa01578
    Image:          idoall/whoami
    Image ID:       docker-pullable://idoall/whoami@sha256:6e79f7182eab032c812f6dafdaf55095409acd64d98a825c8e4b95e173e198f2
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Sat, 25 May 2019 18:09:34 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4zswl (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4zswl:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-4zswl
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
Expose Whoami through a NodePort service
kubectl create service nodeport whoami --tcp=80:80
service/whoami created
# The command above creates a publicly reachable service for the Whoami deployment.
# Because this is a NodePort service, Kubernetes assigns it a port on every host from the default 30000-32767 range
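If you want a fixed, predictable port rather than a random assignment, you can define the Service explicitly (a sketch applied via a heredoc; nodePort 31203 simply mirrors the port the cluster assigned in the output below, and it must fall inside the 30000-32767 range):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: whoami
spec:
  type: NodePort
  selector:
    app: whoami
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 31203
EOF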
Check the current service status
kubectl get svc,pods -o wide
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        161m   <none>
service/whoami       NodePort    10.100.216.150   <none>        80:31203/TCP   73m    app=whoami

NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
pod/whoami-8657469579-6thrj   1/1     Running   0          68m   10.244.4.3   node3.liudehan.com   <none>           <none>
pod/whoami-8657469579-k2db4   1/1     Running   0          68m   10.244.4.2   node3.liudehan.com   <none>           <none>
pod/whoami-8657469579-rtsr4   1/1     Running   0          68m   10.244.2.3   node2.liudehan.com   <none>           <none>
pod/whoami-8657469579-vkv92   1/1     Running   0          68m   10.244.2.2   node2.liudehan.com   <none>           <none>
pod/whoami-8657469579-xqb7j   1/1     Running   0          74m   10.244.1.4   node1.liudehan.com   <none>           <none>

The output shows Whoami exposed on NodePort 31203; it can be reached via any node's IP (or hostname) plus that port.
Test that the Whoami service responds
curl node1.liudehan.com:31203
[mshk.top]I'm whoami-8657469579-6thrj
Scale the deployment
kubectl scale --replicas=5 deployment/whoami
deployment.extensions/whoami scaled

Check the scaled result; Whoami pods are now deployed across node1, node2, and node3:

kubectl get svc,pods -o wide
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE    SELECTOR
service/kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        165m   <none>
service/whoami       NodePort    10.100.216.150   <none>        80:31203/TCP   78m    app=whoami

NAME                          READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE   READINESS GATES
pod/whoami-8657469579-6thrj   1/1     Running   0          72m   10.244.4.3   node3.liudehan.com   <none>           <none>
pod/whoami-8657469579-k2db4   1/1     Running   0          72m   10.244.4.2   node3.liudehan.com   <none>           <none>
pod/whoami-8657469579-rtsr4   1/1     Running   0          72m   10.244.2.3   node2.liudehan.com   <none>           <none>
pod/whoami-8657469579-vkv92   1/1     Running   0          72m   10.244.2.2   node2.liudehan.com   <none>           <none>
pod/whoami-8657469579-xqb7j   1/1     Running   0          78m   10.244.1.4   node1.liudehan.com   <none>           <none>
Test the scaled result; successive requests are load-balanced across the replicas
[root@master _src]# curl node1.liudehan.com:31203
[mshk.top]I'm whoami-8657469579-k2db4
[root@master _src]# curl node1.liudehan.com:31203
[mshk.top]I'm whoami-8657469579-xqb7j
[root@master _src]# curl node1.liudehan.com:31203
[mshk.top]I'm whoami-8657469579-xqb7j
[root@master _src]# curl node1.liudehan.com:31203
[mshk.top]I'm whoami-8657469579-vkv92
[root@master _src]# curl node1.liudehan.com:31203
[mshk.top]I'm whoami-8657469579-xqb7j
[root@master _src]# curl node1.liudehan.com:31203
[mshk.top]I'm whoami-8657469579-vkv92
[root@master _src]# curl node1.liudehan.com:31203
[mshk.top]I'm whoami-8657469579-k2db4
Delete the Whoami deployment
[root@master _src]# kubectl delete deployment whoami
deployment.extensions "whoami" deleted
[root@master _src]# kubectl get deployments
No resources found.
Check the running services
[root@master _src]# kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        172m
whoami       NodePort    10.100.216.150   <none>        80:31203/TCP   84m
Delete the service
[root@master _src]# kubectl delete svc whoami
service "whoami" deleted
[root@master _src]# kubectl get service
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   175m