4. Kubernetes cluster setup
Lab environment:
Hostname    IP                Notes
k8s-80      192.168.188.80    2 CPU / 2 GB, master
k8s-81      192.168.188.81    1 CPU / 1 GB, node
k8s-82      192.168.188.82    1 CPU / 1 GB, node
There are two deployment approaches. The first is the binary method: customizable, but complex to deploy and error-prone. The second is installation with a tool (kubeadm): simple to deploy, but not customizable. This time we deploy with kubeadm.
Note: none of the yaml files or resources created in this lab specify a namespace, so the default namespace is used; in production, always specify the namespace, otherwise everything ends up in the default namespace.
Environment preparation
Unless stated otherwise, run the steps on all machines.
1.1. Disable SELinux and the firewall
The firewall is disabled for convenience in daily use so it does not get in the way; in a production environment it is recommended to keep it enabled.
# SELinux
# Disable permanently
sed -i 's#enforcing#disabled#g' /etc/sysconfig/selinux
# Disable temporarily
setenforce 0
# Firewall
systemctl disable firewalld
systemctl stop firewalld
systemctl status firewalld
1.2. Disable the swap partition
Once swap is used, system performance drops sharply, so K8S normally requires the swap partition to be disabled.
Method 1: disable the swap partition
swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab
Method 2: configure the kubelet to ignore the swap partition
echo 'KUBELET_EXTRA_ARGS="--fail-swap-on=false"' > /etc/sysconfig/kubelet
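To confirm swap is really off, a quick check (both commands should show no active swap):
free -h          # the Swap line should read 0B
swapon --show    # no output means no swap device is active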
1.3. Configure a domestic (Aliyun) yum source
cd /etc/yum.repos.d/
mkdir bak
mv ./* bak/
wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum makecache
1.4. Upgrade the kernel
Kubernetes relies on fairly new kernel features, such as ipvs, so in general a 4.x or newer kernel is required.
### Import the public key
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
### Install ELRepo
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
### Load the elrepo-kernel metadata
yum --disablerepo=* --enablerepo=elrepo-kernel repolist # about 34 packages
### List the available kernel rpm packages
yum --disablerepo=* --enablerepo=elrepo-kernel list kernel*
### Install the long-term-support kernel
yum --disablerepo=* --enablerepo=elrepo-kernel install -y kernel-lt.x86_64
### Remove the old kernel tool packages
yum remove kernel-tools-libs.x86_64 kernel-tools.x86_64 -y
### Install the new kernel tool packages
yum --disablerepo=* --enablerepo=elrepo-kernel install -y kernel-lt-tools.x86_64
### Check the default boot order
awk -F\' '$1=="menuentry " {print $2}' /etc/grub2.cfg
CentOS Linux (4.4.183-1.el7.elrepo.x86_64) 7 (Core)
CentOS Linux (3.10.0-327.10.1.el7.x86_64) 7 (Core)
CentOS Linux (0-rescue-c52097a1078c403da03b8eddeac5080b) 7 (Core)
#Boot entries are numbered from 0; the new kernel is inserted at the top (position 0) and the old kernel moves down to 1, so select 0.
grub2-set-default 0
#Reboot and check
reboot
Ubuntu 16.04
#Open http://kernel.ubuntu.com/~kernel-ppa/mainline/ and pick the version you need from the list (4.16.3 as an example).
#Then download the following .deb files for your architecture:
Build for amd64 succeeded (see BUILD.LOG.amd64):
linux-headers-4.16.3-041603_4.16.3-041603.201804190730_all.deb
linux-headers-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb
linux-image-4.16.3-041603-generic_4.16.3-041603.201804190730_amd64.deb
#Install them and reboot
sudo dpkg -i *.deb
1.5. Install ipvs
ipvs is a kernel module with very high network-forwarding performance, so it is normally the first choice.
# Install IPVS
yum install -y conntrack-tools ipvsadm ipset conntrack libseccomp
# Load the IPVS modules
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
ipvs_modules="ip_vs ip_vs_lc ip_vs_wlc ip_vs_rr ip_vs_wrr ip_vs_lblc ip_vs_lblcr ip_vs_dh ip_vs_sh ip_vs_fo ip_vs_nq ip_vs_sed ip_vs_ftp nf_conntrack"
for kernel_module in ${ipvs_modules}; do
/sbin/modinfo -F filename ${kernel_module} > /dev/null 2>&1
if [ $? -eq 0 ]; then
/sbin/modprobe ${kernel_module}
fi
done
EOF
# Verify
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
1.6. Kernel parameter tuning
The main purpose of tuning the kernel parameters is to make the system better suited to running Kubernetes smoothly.
cat > /etc/sysctl.d/k8s.conf << EOF
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
fs.may_detach_mounts = 1
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 3
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.tcp_max_tw_buckets = 36000
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_orphans = 327680
net.ipv4.tcp_orphan_retries = 3
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 16384
net.ipv4.ip_conntrack_max = 65536
net.ipv4.tcp_timestamps = 0
net.core.somaxconn = 16384
EOF
# Apply immediately
sysctl --system
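Note: the two bridge-nf-call keys above only take effect when the br_netfilter kernel module is loaded. If sysctl --system complains about them, a sketch of loading the module first (module name assumed to be br_netfilter, as on CentOS 7):
modprobe br_netfilter
echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf   # load automatically on boot
sysctl --system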
1.7. Install Docker
Docker is one of the common container runtimes managed by k8s.
# Step 1: install the required system tools
yum install -y yum-utils device-mapper-persistent-data lvm2
# Step 2: add the repository information
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Step 3: point the repo at the Aliyun mirror
sed -i 's+download.docker.com+mirrors.aliyun.com/docker-ce+' /etc/yum.repos.d/docker-ce.repo
# Step 4: refresh the cache and install Docker-CE
yum makecache fast
yum -y install docker-ce
# Step 5: start the Docker service and enable it on boot
service docker start
systemctl enable docker
# Step 6: configure a registry mirror (image acceleration)
mkdir -p /etc/docker
tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://niphmo8u.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload
systemctl restart docker
1.8. Synchronize the cluster time
[root@k8s-80 ~]# vim /etc/chrony.conf
[root@k8s-80 ~]# grep -Ev "#|^$" /etc/chrony.conf
server 3.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.0.0/16
logdir /var/log/chrony
Node nodes:
vim /etc/chrony.conf
grep -Ev "#|^$" /etc/chrony.conf
server 192.168.188.80 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
logdir /var/log/chrony
All nodes:
systemctl restart chronyd
# Verify
date
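If date alone is not convincing, chrony itself can report the sync source; on the node machines the master 192.168.188.80 should appear as the selected source (marked ^*):
chronyc sources -v
chronyc tracking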
1.9. Hosts mapping
[root@k8s-80 ~]# vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.188.80 k8s-80
192.168.188.81 k8s-81
192.168.188.82 k8s-82
[root@k8s-80 ~]# scp -p /etc/hosts 192.168.188.81:/etc/hosts
[root@k8s-80 ~]# scp -p /etc/hosts 192.168.188.82:/etc/hosts
1.10. Configure the Kubernetes yum repository
The Aliyun mirror is configured here; see its documentation for details.
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
#setenforce 0
#yum install -y kubelet kubeadm kubectl
#systemctl enable kubelet && systemctl start kubelet
# Note
# Because the upstream repo does not expose a sync method, the GPG index check may fail; in that case install with yum install -y --nogpgcheck kubelet kubeadm kubectl
# Version 1.22.3 is installed here
yum makecache --nogpgcheck
#kubeadm and kubectl are commands; kubelet is a service
yum install -y kubelet-1.22.3 kubeadm-1.22.3 kubectl-1.22.3
systemctl enable kubelet.service
1.11. Pull the images
The k8s.gcr.io images cannot be pulled directly from inside China, so they are built and pulled through a self-built repository on Alibaba Cloud Container Registry (ACR) instead.
# Print the list of images kubeadm will use. A config file can be passed to customize the images or the image repository
# Major versions must match; a minor-version mismatch does not matter
[root@k8s-80 ~]# kubeadm config images list
I0526 12:52:43.766362 3813 version.go:255] remote version is much newer: v1.24.1; falling back to: stable-1.22
k8s.gcr.io/kube-apiserver:v1.22.10
k8s.gcr.io/kube-controller-manager:v1.22.10
k8s.gcr.io/kube-scheduler:v1.22.10
k8s.gcr.io/kube-proxy:v1.22.10
k8s.gcr.io/pause:3.5
k8s.gcr.io/etcd:3.5.0-0
k8s.gcr.io/coredns/coredns:v1.8.4
Build workflow with the new code-source management
1. Create a repository on the code-hosting side, then a directory, then the build file
2. Go back to the Container Registry service and create an image repository
3. Bind the code source
4. Create a personal access token
5. Back in the Container Registry service, fill in the bound personal access token
6. Modify the build rule so it points at the file inside the branch directory
7. Pull the image
All nodes:
# Once the builds are done, pull the images
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/kube-apiserver:v1.22.10
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/kube-controller-manager:v1.22.10
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/kube-scheduler:v1.22.10
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/kube-proxy:v1.22.10
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/pause:3.5
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/etcd:3.5.0-0
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/coredns:v1.8.4
# Re-tag the images back to the names printed by kubeadm config images list
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/kube-apiserver:v1.22.10 k8s.gcr.io/kube-apiserver:v1.22.10
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/kube-controller-manager:v1.22.10 k8s.gcr.io/kube-controller-manager:v1.22.10
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/kube-scheduler:v1.22.10 k8s.gcr.io/kube-scheduler:v1.22.10
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/kube-proxy:v1.22.10 k8s.gcr.io/kube-proxy:v1.22.10
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/pause:3.5 k8s.gcr.io/pause:3.5
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/etcd:3.5.0-0 k8s.gcr.io/etcd:3.5.0-0
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/coredns:v1.8.4 k8s.gcr.io/coredns/coredns:v1.8.4
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/apiserver:v1.22.10
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/controller:v1.22.10
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/etcd:v3.5.0
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/pause:v3.5
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/proxy:v1.22.10
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/scheduler:v1.22.10
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/coredns:v1.8.4
1.12. Initialize the master node
[root@k8s-master ~]# kubeadm init --help
[root@k8s-80 ~]# kubeadm init --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.188.80
Note: the pod network CIDR must match the flannel network CIDR.
During installation, the following error appears:
[root@k8s-80 ~]# tail -100 /var/log/messages # the log shows
failed to create kubelet: misconfiguration: kubelet cgroup driver: "cgroupfs" is different from docker cgroup driver: "systemd"
The kubelet's default cgroup driver has been changed to systemd, while the Docker we installed uses cgroupfs; the mismatch keeps the containers from starting.
[root@k8s-master ~]# docker info |grep -i cgroup #check the driver
Cgroup Driver: cgroupfs
There are two ways to fix it: change Docker's cgroup driver, or change the kubelet's; here we use the first, see the referenced article for the second.
[root@k8s-80 ~]# vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"], # 添加这配置
"registry-mirrors": ["https://niphmo8u.mirror.aliyuncs.com"]
}
systemctl daemon-reload
systemctl restart docker
# Remove the files produced by the first initialization attempt (the init output lists them)
rm -rf XXX
# Then run the cleanup command
kubeadm reset
# Re-initialize
[root@k8s-80 ~]# kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=192.168.188.80
# When initialization finishes, the output ends with two follow-up steps: commands to create the kubeconfig directory on the master, and a join token valid for 24h that must be used within that time to add the nodes
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# The get nodes command shows the nodes' status, roles, and Kubernetes versions
# kubectl get no 或者 kubectl get nodes
[root@k8s-80 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-80 NotReady control-plane,master 6m9s v1.22.3
Node nodes:
[root@k8s02 ~]#kubeadm join 192.168.188.80:6443 --token cp36la.obg1332jj7wl11az
--discovery-token-ca-cert-hash sha256:ee5053647a18fc69b59b648c7e3f7a8f039d5553531d627793242d193879e0ba
This node has joined the cluster
# When the token expires, it can be regenerated with the following command
# New token
kubeadm token create --print-join-command
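Before generating a new token you can also check whether the existing ones are still valid (run on the master):
kubeadm token list    # the TTL / EXPIRES columns show how long each token remains valid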
The join command from my environment:
kubeadm join 192.168.91.133:6443 --token 3do9zb.7unh9enw8gv7j4za
--discovery-token-ca-cert-hash sha256:b75f52f8e2ab753c1d18b73073e74393c72a3f8dc64e934765b93a38e7389385
[root@k8s-80 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-80 NotReady control-plane,master 6m55s v1.22.3
k8s-81 NotReady <none> 18s v1.22.3
k8s-82 NotReady <none> 8s v1.22.3
# Every get command accepts --namespace or -n to select the namespace. This is especially useful for inspecting the Pods in kube-system, because those Pods are the services Kubernetes itself needs to run.
[root@k8s-80 ~]# kubectl get po -n kube-system # several services cannot work yet because the network plugin is missing
[root@k8s-master ~]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-n6mfw 0/1 Pending 0 164m
coredns-78fcd69978-xshwb 0/1 Pending 0 164m
etcd-k8s-master 1/1 Running 0 165m
kube-apiserver-k8s-master 1/1 Running 0 165m
kube-controller-manager-k8s-master 1/1 Running 1 165m
kube-proxy-g7z79 1/1 Running 0 144m
kube-proxy-wl4ct 1/1 Running 0 145m
kube-proxy-x59w9 1/1 Running 0 164m
kube-scheduler-k8s-master 1/1 Running 1 165m
1.13. Install the network plugin
Network model
Kubernetes needs a third-party network plugin to provide its network functionality, so installing one is a prerequisite. There are several third-party plugins; Flannel and Calico (and Canal, which combines the two) are commonly used. They all provide the basic networking that gives every Node its IP network.
Kubernetes defines the network model but leaves the implementation to the network plugins; the main job of a CNI plugin is to let Pod resources communicate across hosts. Common CNI plugins include Flannel, Calico, Canal, and NSX-T, among others.
Here we use flannel; save the flannel yml manifest and upload it to the server.
[root@k8s-80 ~]# ls
anaconda-ks.cfg flannel.yml
# Some commands need a configuration file, and the apply command applies a configuration file to resources inside the cluster. It can also read from standard input (STDIN), but apply with a file is preferable because it makes clear how the cluster is being used and which configuration file is applied.
# apply can apply almost any configuration, but always be explicit about what you are applying, otherwise it can have unintended consequences.
[root@k8s-80 ~]# kubectl apply -f flannel.yml
[root@k8s-master ~]# cat kube-flannel.yaml |grep Network
"Network": "10.244.0.0/16",
hostNetwork: true
[root@k8s-master ~]# cat kube-flannel.yaml |grep -w image |grep -v "#"
image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.1
When an image cannot be pulled, pull it from the self-built build in the Aliyun image repository instead; the earlier images were pulled the same way.
All nodes:
# Pull
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/mirrored-flannelcni-flannel:v0.17.0
docker pull registry.cn-shenzhen.aliyuncs.com/uplooking/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
# Re-tag
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/mirrored-flannelcni-flannel:v0.17.0 rancher/mirrored-flannelcni-flannel:v0.17.0
docker tag registry.cn-shenzhen.aliyuncs.com/uplooking/mirrored-flannelcni-flannel-cni-plugin:v1.0.1 rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
[root@k8s-80 ~]# kubectl get no # check the status; all Ready means the cluster is basically complete and healthy
NAME STATUS ROLES AGE VERSION
k8s-80 Ready control-plane,master 47m v1.22.3
k8s-81 Ready <none> 41m v1.22.3
k8s-82 Ready <none> 40m v1.22.3
1.14. Enable ipvs in kube-proxy
kube-proxy provides the Service forwarding layer on every node; it uses iptables mode by default, and here we switch it to ipvs.
When a resource has already been created and needs to be changed, the edit command is what you need.
It can edit any resource in the cluster and opens the default text editor.
# Modify the kube-proxy configuration
[root@k8s-80 ~]# kubectl edit configmap kube-proxy -n kube-system
Find the following part of the content:
minSyncPeriod: 0s
scheduler: ""
syncPeriod: 30s
kind: KubeProxyConfiguration
metricsBindAddress: 127.0.0.1:10249
mode: "ipvs" # 加上这个
nodePortAddresses: null
mode is empty by default, which means iptables mode; change it to ipvs.
scheduler is empty by default, meaning the round-robin load-balancing algorithm.
When you are done editing, save and exit (see the ConfigMap sketch below).
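For reference, a sketch of how the relevant part of the ConfigMap might look after the edit; the scheduler value here is an assumption (rr is the round-robin default, other ipvs algorithms such as wrr, lc or sh can be set instead):
    ipvs:
      minSyncPeriod: 0s
      scheduler: "rr"        # ipvs load-balancing algorithm; empty means rr
      syncPeriod: 30s
    mode: "ipvs"             # switched from the default iptables mode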
3. Delete all the kube-proxy pods
kubectl delete pod xxx -n kube-system
# kubectl delete po `kubectl get po -n kube-system | grep proxy | awk '{print $1}'` -n kube-system
4. Check the kube-proxy pod logs
kubectl logs kube-proxy-xxx -n kube-system
If the log contains "Using ipvs Proxier", it worked.
Or run ipvsadm -l
# Delete the corresponding kube-proxy pods so they get regenerated
# This deletes the kube-proxy pods in the given namespace
# kubectl delete ns xxxx would delete the whole namespace
[root@k8s-80 ~]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-d8cv5 1/1 Running 0 6m43s
coredns-78fcd69978-qp7f6 1/1 Running 0 6m43s
etcd-k8s-80 1/1 Running 0 6m57s
kube-apiserver-k8s-80 1/1 Running 0 6m59s
kube-controller-manager-k8s-80 1/1 Running 0 6m58s
kube-flannel-ds-88kmk 1/1 Running 0 2m58s
kube-flannel-ds-wfvst 1/1 Running 0 2m58s
kube-flannel-ds-wq2vz 1/1 Running 0 2m58s
kube-proxy-4fpm9 1/1 Running 0 6m28s
kube-proxy-hhb5s 1/1 Running 0 6m25s
kube-proxy-jr5kl 1/1 Running 0 6m43s
kube-scheduler-k8s-80 1/1 Running 0 6m57s
[root@k8s-80 ~]# kubectl delete pod kube-proxy-4fpm9 -n kube-system
pod "kube-proxy-4fpm9" deleted
[root@k8s-80 ~]# kubectl delete pod kube-proxy-hhb5s -n kube-system
pod "kube-proxy-hhb5s" deleted
[root@k8s-80 ~]# kubectl delete pod kube-proxy-jr5kl -n kube-system
pod "kube-proxy-jr5kl" deleted
# Check the cluster state
[root@k8s-80 ~]# kubectl get po -n kube-system # the kube-proxy pods have been regenerated by now
# Check ipvs
[root@k8s-80 ~]# ipvsadm -l
get
The get command lists the resources currently available in the cluster.
Every get command accepts --namespace or -n to select the namespace. This is especially useful for inspecting the Pods in kube-system, because those Pods are the services Kubernetes itself needs to run.
[root@k8s-80 ~]# kubectl get ns # list the namespaces
NAME STATUS AGE
default Active 23h # the default namespace, used when -n is omitted
kube-node-lease Active 23h # node heartbeat (lease) namespace
kube-public Active 23h # public namespace
kube-system Active 23h # system namespace
# Create a new namespace
[root@k8s-80 ~]# kubectl create ns dev
namespace/dev created
[root@k8s-80 ~]# kubectl get ns
NAME STATUS AGE
default Active 23h
dev Active 1s
kube-node-lease Active 23h
kube-public Active 23h
kube-system Active 23h
# Deleting namespaces casually is not recommended, since it deletes every resource inside; they can, however, be recovered through etcd
# View the service information of a given namespace
# Without a namespace flag, the default namespace is used
[root@k8s-80 ~]# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 23h
# Use kubectl get endpoints to verify that the DNS endpoints are exposed; this resolves inside k8s
# These IPs are virtual; they do not show up with ip a or ifconfig on the host
[root@k8s-80 ~]# kubectl get ep -n kube-system
NAME ENDPOINTS AGE
kube-dns 10.244.2.8:53,10.244.2.9:53,10.244.2.8:53 + 3 more... 23h
# Get the specified pods in a given namespace
[root@k8s-80 ~]# kubectl get po -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-78fcd69978-f9pcw 1/1 Running 3 (12h ago) 24h
coredns-78fcd69978-hprbh 1/1 Running 3 (12h ago) 24h
etcd-k8s-80 1/1 Running 4 (12h ago) 24h
kube-apiserver-k8s-80 1/1 Running 4 (12h ago) 24h
kube-controller-manager-k8s-80 1/1 Running 6 (98m ago) 24h
kube-flannel-ds-28w79 1/1 Running 4 (12h ago) 24h
kube-flannel-ds-bsw2t 1/1 Running 2 (12h ago) 24h
kube-flannel-ds-rj57q 1/1 Running 3 (12h ago) 24h
kube-proxy-d8hs2 1/1 Running 0 11h
kube-proxy-gms7v 1/1 Running 0 11h
kube-proxy-phbnk 1/1 Running 0 11h
kube-scheduler-k8s-80 1/1 Running 6 (98m ago) 24h
Creating pods
Kubernetes has many built-in controllers; each acts like a state machine that controls the concrete state and behavior of Pods.
Deployment provides a declarative definition for Pods and ReplicaSets, replacing the earlier ReplicationController and making applications easy to manage. Typical use cases include rolling out, updating, rolling back, and scaling applications.
1.1 Creating a pod from a yml file
We have already used yml files above; once again, pay attention to indentation!!!
1.1.1 Pod resource manifest explained
apiVersion: apps/v1 #required, the API version (apps/v1 for a Deployment)
kind: Deployment #required, the resource type
metadata: #required, metadata
  name: nginx #required, the name
  namespace: nginx #optional, namespace for the resource; default namespace if omitted
  labels: #optional, but usually set; labels are what tie the surrounding services together
    app: nginx
  annotations: #optional, annotations
    app: nginx
spec: #required, detailed Deployment attributes
  replicas: 3 #required, number of replicas to create, i.e. 3 pods
  selector: #replica selector
    matchLabels: #match the labels we wrote above
      app: nginx
  template: #pod template, carrying the same metadata as above
    metadata:
      labels:
        app: nginx
    spec: #required, detailed container attributes
      containers: #required, container list
      - name: nginx #container name
        image: nginx:1.20.2 #image and version used by the container
        ports:
        - containerPort: 80 #expose port 80 of the container
Write a simple pod
[root@k8s-80 ~]# mkdir /k8syaml
[root@k8s-80 ~]# cd /k8syaml
[root@k8s-80 k8syaml]# vim nginx.yaml # based on the official example
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.20.2
ports:
- containerPort: 80
# Apply the resources: this creates and runs them; create can also create resources, but apply can additionally update existing ones
[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
[root@k8s-80 k8syaml]# kubectl get po # the pods are created in the default namespace
NAME READY STATUS RESTARTS AGE
nginx-deployment-66b6c48dd5-hpxc6 0/1 ContainerCreating 0 18s
nginx-deployment-66b6c48dd5-nslqj 0/1 ContainerCreating 0 18s
nginx-deployment-66b6c48dd5-vwxlp 0/1 ContainerCreating 0 18s
[root@k8s-80 k8syaml]# kubectl describe po nginx-deployment-66b6c48dd5-hpxc6 # print the detailed information of one pod
# Ways to access it
# Enter a container started by the pod
# kubectl exec -it podName -n nsName /bin/sh #enter the container
# kubectl exec -it podName -n nsName /bin/bash #enter the container
[root@k8s-80 k8syaml]# kubectl exec -it nginx-deployment-66b6c48dd5-hpxc6 -- bash
# List the available API versions
[root@k8s-80 k8syaml]# kubectl api-versions # here you can find the apiVersion value used in nginx.yaml
[root@k8s-80 k8syaml]# kubectl get po # check the pod status in the namespace
NAME READY STATUS RESTARTS AGE
nginx-cc4b758d6-rrtcc 1/1 Running 0 2m14s
nginx-cc4b758d6-vmrw5 1/1 Running 0 2m14s
nginx-cc4b758d6-w84qb 1/1 Running 0 2m14s
# Check the details of a single pod
[root@k8s-80 k8syaml]# kubectl describe po nginx-deployment-66b6c48dd5-b82ck
# Print the information of every node
[root@k8s-80 k8syaml]# kubectl describe node # it is advisable to keep a node below 40 pods, 20-30 is best (Alibaba's recommendation)
Service definition
A Service in Kubernetes is a REST object, like a Pod. Like all REST objects, a Service definition can be POSTed to the API to create a new instance. The name of a Service object must be a valid DNS label name.
From the theory above we know that Pods sometimes need to be reached from outside, and that is where Service comes in. A Service is an abstract resource that acts as a proxy/load-balancing layer attached to the Pods: accessing the proxy layer reaches the Pods. The commonly used types are ClusterIP and NodePort.
A Service lets you specify the type you need; the default is ClusterIP.
The possible Type values and their behavior are as follows:
For the NodePort type, the control plane allocates a port from the range specified by the --service-node-port-range flag (default: 30000-32767). It is not really recommended in production: it works with a small number of services, but its limitations show once there are many. It is well suited for testing.
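For a kubeadm-built cluster like this one, that range can be changed by editing the kube-apiserver static pod manifest on the master (a sketch; 30000-40000 is just an example value):
vim /etc/kubernetes/manifests/kube-apiserver.yaml
# add under the command list:
#    - --service-node-port-range=30000-40000
# the kubelet recreates the apiserver pod automatically after the file is saved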
2.1 Service yml manifest explained
apiVersion: v1 #required, the API version
kind: Service #required, the type is Service
metadata: #required, metadata
  name: nginx #required, the Service name, usually the same as the app name
  namespace: nginx #optional, namespace for the Service; default namespace if omitted
spec: #required, detailed attributes
  type: #optional, the Service type, default ClusterIP
  selector: #replica selector
    app: nginx
  ports: #required, port details
  - port: 80 #port exposed by the svc
    targetPort: 80 #mapped to the port exposed by the pod
    nodePort: 30005 #optional, range: 30000-32767
2.2 ClusterIP
ClusterIP is the cluster-internal mode: the Service is only reachable from inside the cluster, it is the default service type in k8s, and it cannot be accessed from outside. It mainly provides a fixed access address for Pods inside the cluster; the address is auto-assigned by default, and a fixed IP can be specified with the clusterIP field.
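A minimal sketch of pinning the address with the clusterIP field (the IP below is a made-up example; it must fall inside the --service-cidr chosen at kubeadm init, 10.96.0.0/12 here):
spec:
  type: ClusterIP
  clusterIP: 10.96.100.100   # hypothetical fixed address
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80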
2.2.1 ClusterIP yml file
Here we usually put all the resources in one yml file, so we keep writing in nginx.yml; just remember to separate the two resources with ---
cd /opt/k8s
vim nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.20.1
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
selector:
app: nginx
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
2.2.2 Run it and check the IP
Note that apply also performs updates, so there is no need to delete and re-create.
kubectl apply -f nginx.yml
kubectl get svc
We can see an extra ClusterIP has appeared.
2.2.3 Verification
(1) Accessing this IP from inside the cluster works.
curl 10.101.187.222
(2) Accessing it from outside the cluster fails.
(3) In summary, this matches the ClusterIP behavior: reachable from inside, unreachable from outside.
2.3 NodePort type
NodePort exposes the Service on a static port (the NodePort) on every node's IP. Note that once the port is opened on the nodes, the service can be reached via that IP:port on every node. The drawback is that there are only 2768 ports, so when there are more services than ports this type no longer scales.
2.3.1 NodePort yml file
It only adds type and nodePort on top of the ClusterIP manifest; mind the allowed range.
cd /opt/k8s
vim nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.20.1
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: NodePort
selector:
app: nginx
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
nodePort: 30005
2.3.2 Run it and check the IP
kubectl apply -f nginx.yml
kubectl get svc
We can see the type has changed to NodePort and the port column now shows 80:30005.
2.3.3 Verification
(1) Accessing the cluster IP from inside the cluster works.
curl 10.101.187.222
(2) The important question is whether it can be reached from outside. Externally we access host-IP:30005 (the CLUSTER-IP is virtual, assigned by k8s, so it is reachable only from inside), and it works.
#Entering any of the three hosts' IPs plus :30005 in a browser works; because this is layer-4 proxying with session persistence, the round-robin effect is hard to observe
(3) In summary, this matches the NodePort behavior: reachable from inside and from outside.
2.4 LoadBalancer
LoadBalancer exposes the service externally through the cloud provider's load balancer, binding the Service directly to the SLB.
2.4.1 LoadBalancer yml file
cd /opt/k8s
vim nginx.yml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.20.1
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: LoadBalancer
selector:
app: nginx
ports:
- port: 80
targetPort: 80
2.4.2 Check the IP
2.5 ExternalName
ExternalName is a special case of Service: it has no selector and defines no ports or Endpoints. Its role is to return an external alias for a service outside the cluster. The external address is wrapped once more by the cluster (in practice the cluster DNS server answers with a CNAME to the external address), so it can be reached by ordinary in-cluster access.
For example, your company's image registry may first be reached by IP and later by a domain name once it is assigned. You cannot go and change every reference, but you can create an ExternalName Service that points to the IP first and to the domain name later.
2.5.1 ExternalName yml file
Here we verify that, from inside a container, accessing nginx redirects to Baidu.
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.20.1
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
type: ExternalName
externalName: www.baidu.com
2.5.2 Run it and check the status
kubectl apply -f nginx.yml
kubectl get svc
2.5.3 Verification
Enter the container:
[root@-01 k8s]# kubectl exec -it nginx--hj4gv -- bash
Use a tool such as curl to check whether the name redirects:
curl nginx
This shows that with the ExternalName type, accessing the service name inside the cluster redirects to the address we configured.
2.6 Ingress-NGINX
[root@k8s-80 k8syaml]# kubectl delete -f nginx.yaml # delete every resource (kind + name) defined in nginx.yaml
deployment.apps "nginx-deployment" deleted
service "nginx" deleted
This is the official yaml file; I pulled its images, pushed them to Aliyun, and modified the file accordingly; adjust it to match your own setup.
ingress-nginx.yml:
apiVersion: v1
kind: Namespace
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- endpoints
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resourceNames:
- ingress-controller-leader
resources:
- configmaps
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- coordination.k8s.io
resourceNames:
- ingress-controller-leader
resources:
- leases
verbs:
- get
- update
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- create
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission
namespace: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
- namespaces
verbs:
- list
- watch
- apiGroups:
- coordination.k8s.io
resources:
- leases
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- networking.k8s.io
resources:
- ingresses/status
verbs:
- update
- apiGroups:
- networking.k8s.io
resources:
- ingressclasses
verbs:
- get
- list
- watch
- apiGroups:
- discovery.k8s.io
resources:
- endpointslices
verbs:
- list
- watch
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission
rules:
- apiGroups:
- admissionregistration.k8s.io
resources:
- validatingwebhookconfigurations
verbs:
- get
- update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission
namespace: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx
subjects:
- kind: ServiceAccount
name: ingress-nginx
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
name: ingress-nginx-admission
namespace: ingress-nginx
---
apiVersion: v1
data:
allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-controller
namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
ipFamilies:
- IPv4
ipFamilyPolicy: SingleStack
ports:
- appProtocol: http
name: http
port: 80
protocol: TCP
targetPort: http
- appProtocol: https
name: https
port: 443
protocol: TCP
targetPort: https
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: NodePort
---
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-controller-admission
namespace: ingress-nginx
spec:
ports:
- appProtocol: https
name: https-webhook
port: 443
targetPort: webhook
selector:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-controller
namespace: ingress-nginx
spec:
minReadySeconds: 0
revisionHistoryLimit: 10
selector:
matchLabels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
spec:
containers:
- args:
- /nginx-ingress-controller
- --election-id=ingress-controller-leader
- --controller-class=k8s.io/ingress-nginx
- --ingress-class=nginx
- --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
- --validating-webhook=:8443
- --validating-webhook-certificate=/usr/local/certificates/cert
- --validating-webhook-key=/usr/local/certificates/key
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: LD_PRELOAD
value: /usr/local/lib/libmimalloc.so
image: registry.k8s.io/ingress-nginx/controller:v1.4.0@sha256:34ee929b111ffc7aa426ffd409af44da48e5a0eea1eb2207994d9e0c0882d143
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- /wait-shutdown
livenessProbe:
failureThreshold: 5
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
name: controller
ports:
- containerPort: 80
name: http
protocol: TCP
- containerPort: 443
name: https
protocol: TCP
- containerPort: 8443
name: webhook
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
resources:
requests:
cpu: 100m
memory: 90Mi
securityContext:
allowPrivilegeEscalation: true
capabilities:
add:
- NET_BIND_SERVICE
drop:
- ALL
runAsUser: 101
volumeMounts:
- mountPath: /usr/local/certificates/
name: webhook-cert
readOnly: true
dnsPolicy: ClusterFirst
nodeSelector:
kubernetes.io/os: linux
serviceAccountName: ingress-nginx
terminationGracePeriodSeconds: 300
volumes:
- name: webhook-cert
secret:
secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission-create
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission-create
spec:
containers:
- args:
- create
- --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
- --namespace=$(POD_NAMESPACE)
- --secret-name=ingress-nginx-admission
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f
imagePullPolicy: IfNotPresent
name: create
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission-patch
namespace: ingress-nginx
spec:
template:
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission-patch
spec:
containers:
- args:
- patch
- --webhook-name=ingress-nginx-admission
- --namespace=$(POD_NAMESPACE)
- --patch-mutating=false
- --secret-name=ingress-nginx-admission
- --patch-failure-policy=Fail
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20220916-gd32f8c343@sha256:39c5b2e3310dc4264d638ad28d9d1d96c4cbb2b2dcfb52368fe4e3c63f61e10f
imagePullPolicy: IfNotPresent
name: patch
securityContext:
allowPrivilegeEscalation: false
nodeSelector:
kubernetes.io/os: linux
restartPolicy: OnFailure
securityContext:
fsGroup: 2000
runAsNonRoot: true
runAsUser: 2000
serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
labels:
app.kubernetes.io/component: controller
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: nginx
spec:
controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
labels:
app.kubernetes.io/component: admission-webhook
app.kubernetes.io/instance: ingress-nginx
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
app.kubernetes.io/version: 1.4.0
name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
- v1
clientConfig:
service:
name: ingress-nginx-controller-admission
namespace: ingress-nginx
path: /networking/v1/ingresses
failurePolicy: Fail
matchPolicy: Equivalent
name: validate.nginx.ingress.kubernetes.io
rules:
- apiGroups:
- networking.k8s.io
apiVersions:
- v1
operations:
- CREATE
- UPDATE
resources:
- ingresses
sideEffects: None
[root@k8s-80 k8syaml]# kubectl apply -f ingress-nginx.yml
[root@k8s-80 k8syaml]# kubectl get po -n ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-wjb9d 0/1 Completed 0 12s
ingress-nginx-admission-patch-s9pc8 0/1 Completed 0 12s
ingress-nginx-controller-6b548d5677-t42qc 0/1 ContainerCreating 0 12s
Note: if the images cannot be pulled, use the Aliyun repository instead
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/webhook-certgen:v20220916
docker pull registry.cn-guangzhou.aliyuncs.com/testbydocker/controller_ingress:v1.4.0
It is advisable to change the image fields inside the yaml file accordingly
[root@k8s-80 k8syaml]# kubectl describe po ingress-nginx-admission-create-wjb9d -n ingress-nginx
[root@k8s-80 k8syaml]# kubectl get po -n ingress-nginx # the admission jobs only create the certificate secret, so Completed is fine
NAME READY STATUS RESTARTS AGE
ingress-nginx-admission-create-wjb9d 0/1 Completed 0 48s
ingress-nginx-admission-patch-s9pc8 0/1 Completed 0 48s
ingress-nginx-controller-6b548d5677-t42qc 1/1 Running 0 48s
[root@k8s-80 k8syaml]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.101.104.205 <none> 80:30799/TCP,443:31656/TCP 66s
ingress-nginx-controller-admission ClusterIP 10.107.116.128 <none> 443/TCP 66s
Testing
[root@k8s-80 k8syaml]# vim nginx.yaml # swap the image; here we use the image pulled down through Aliyun
registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3
[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-b65884ff7-24f4j 1/1 Running 0 49s
nginx-b65884ff7-8qss6 1/1 Running 0 49s
nginx-b65884ff7-vhnbt 1/1 Running 0 49s
[root@k8s-80 k8syaml]# kubectl exec -it nginx-b65884ff7-24f4j -- bash
[root@nginx-b65884ff7-24f4j /]# yum provides nslookup
[root@nginx-b65884ff7-24f4j /]# yum -y install bind-utils
[root@nginx-b65884ff7-24f4j /]# nslookup kubernetes
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: kubernetes.default.svc.cluster.local
Address: 10.96.0.1
[root@nginx-b65884ff7-24f4j /]# nslookup nginx
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: nginx.default.svc.cluster.local
Address: 10.111.201.41
[root@nginx-b65884ff7-24f4j /]# curl nginx.default.svc.cluster.local # the html is returned normally
[root@nginx-b65884ff7-24f4j /]# curl nginx # also works; what is really being accessed is the Service name
Enable cluster-internal (ClusterIP) mode
# Comment out type and nodePort inside the Service
[root@k8s-80 k8syaml]# vim nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
#type: NodePort
selector:
app: nginx
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
#nodePort: 30005
[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
[root@k8s-80 k8syaml]# kubectl exec -it nginx-b65884ff7-24f4j -- bash
[root@nginx-b65884ff7-24f4j /]# curl nginx # resolved from inside the cluster
# Open another terminal
[root@k8s-80 /]# kubectl get svc # you can see that what gets resolved is the name (nginx), which maps to the CLUSTER-IP; a headless Service would instead resolve straight to the pod IPs
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 30m
nginx ClusterIP 10.111.201.41 <none> 80/TCP 4m59s
[root@k8s-80 /]# kubectl get svc -n ingress-nginx # check the ports exposed by the ingress-nginx Service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.101.104.205 <none> 80:30799/TCP,443:31656/TCP 7m13s
ingress-nginx-controller-admission ClusterIP 10.107.116.128 <none> 443/TCP 7m13s
[root@nginx-b65884ff7-24f4j /]# exit
Enable Ingress
[root@k8s-80 /]# kubectl explain ing # check the VERSION (the apiVersion to use for Ingress)
[root@k8s-80 k8syaml]# vi nginx.yaml # a kind: Ingress section with its matching rules has been added
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3
ports:
- containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
name: nginx
spec:
#type: NodePort
selector:
app: nginx
ports:
- name: http
protocol: TCP
port: 80
targetPort: 80
#nodePort: 30005
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx
spec:
ingressClassName: nginx
rules:
- host: www.lin.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx
port:
number: 80
[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-b65884ff7-24f4j 1/1 Running 0 6m38s
nginx-b65884ff7-8qss6 1/1 Running 0 6m38s
nginx-b65884ff7-vhnbt 1/1 Running 0 6m38s
# Change the page into a php page
[root@k8s-80 k8syaml]# kubectl exec -it nginx-b65884ff7-24f4j -- bash
[root@nginx-b65884ff7-24f4j /]# mv /usr/local/nginx/html/index.html /usr/local/nginx/html/index.php
[root@nginx-b65884ff7-24f4j /]# >/usr/local/nginx/html/index.php
[root@nginx-b65884ff7-24f4j /]# vi /usr/local/nginx/html/index.php
<?
phpinfo();
?>
[root@nginx-b65884ff7-24f4j /]# /etc/init.d/php-fpm restart
Gracefully shutting down php-fpm . done
Starting php-fpm done
[root@nginx-b65884ff7-24f4j /]# exit
exit
# Do the same in the other two replicas
# Edit the hosts file on Windows and add the entry: 192.168.188.80 www.lin.com
# In the browser, access the domain name (it must be the domain name) plus the port ==> www.lin.com:30799, and the PHP info page appears
# If you don't know the port, run kubectl get svc -n ingress-nginx
Reverse proxy
Install nginx
We choose to install it on the master, since its load is still light at this point.
# A prepared script is used here
# Note: when the Kubernetes repo was configured from the Aliyun mirror we mentioned that the GPG index check may fail because upstream does not expose a sync method
# So run these two commands first
[root@k8s-80 ~]# sed -i "s#repo_gpgcheck=1#repo_gpgcheck=0#g" /etc/yum.repos.d/kubernetes.repo
[root@k8s-80 ~]# sed -i "s#gpgcheck=1#gpgcheck=0#g" /etc/yum.repos.d/kubernetes.repo
[root@k8s-80 ~]# sh nginx.sh
Modify the configuration file to do layer-4 (TCP) forwarding
[root@k8s-80 ~]# cp /usr/local/nginx/conf/nginx.conf /usr/local/nginx/conf/nginx.conf.bak
[root@k8s-80 /]# grep -Ev "#|^$" /usr/local/nginx/conf/nginx.conf
worker_processes 1;
events {
worker_connections 1024;
}
stream {
upstream tcp_proxy {
server 192.168.188.80:30799;
}
server {
listen 80;
proxy_pass tcp_proxy;
}
}
http {
include mime.types;
default_type application/octet-stream;
sendfile on;
keepalive_timeout 65;
}
[root@k8s-80 config]# nginx -t
nginx: the configuration file /usr/local/nginx/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx/conf/nginx.conf test is successful
[root@k8s-80 config]# nginx -s reload
# Make it start on boot
[root@k8s-80 config]# vim /etc/rc.local
Add one command:
nginx
[root@k8s-80 config]# chmod +x /etc/rc.d/rc.local
# Verify in the browser by entering www.lin.com
Labels — method 1: labeling a pod
# Get the pod information (default namespace) with extra details such as the pod IP and the node it runs on
[root@k8s-80 ~]# kubectl get po -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-b65884ff7-24f4j 1/1 Running 2 (142m ago) 6h10m 10.244.1.12 k8s-81 <none> <none>
nginx-b65884ff7-8qss6 1/1 Running 2 (142m ago) 6h10m 10.244.2.15 k8s-82 <none> <none>
nginx-b65884ff7-vhnbt 1/1 Running 2 (142m ago) 6h10m 10.244.1.13 k8s-81 <none> <none>
[root@k8s-80 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-b65884ff7-24f4j 1/1 Running 2 (142m ago) 6h10m
nginx-b65884ff7-8qss6 1/1 Running 2 (142m ago) 6h10m
nginx-b65884ff7-vhnbt 1/1 Running 2 (142m ago) 6h10m
# Show the labels of all pods
[root@k8s-80 ~]# kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-b65884ff7-24f4j 1/1 Running 2 (155m ago) 6h23m app=nginx,pod-template-hash=b65884ff7
nginx-b65884ff7-8qss6 1/1 Running 2 (155m ago) 6h23m app=nginx,pod-template-hash=b65884ff7
nginx-b65884ff7-vhnbt 1/1 Running 2 (155m ago) 6h23m app=nginx,pod-template-hash=b65884ff7
# Add a label
Method 1: use kubectl edit pod nginx-b65884ff7-24f4j
Method 2:
[root@k8s-80 ~]# kubectl label po nginx-b65884ff7-24f4j uplookingdev=shy
pod/nginx-b65884ff7-24f4j labeled
[root@k8s-80 ~]# kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-b65884ff7-24f4j 1/1 Running 2 (157m ago) 6h25m app=nginx,pod-template-hash=b65884ff7,uplookingdev=shy
nginx-b65884ff7-8qss6 1/1 Running 2 (157m ago) 6h25m app=nginx,pod-template-hash=b65884ff7
nginx-b65884ff7-vhnbt 1/1 Running 2 (157m ago) 6h25m app=nginx,pod-template-hash=b65884ff7
Delete a label
[root@k8s-80 ~]# kubectl label po nginx-b65884ff7-24f4j uplookingdev-
pod/nginx-b65884ff7-24f4j labeled
[root@k8s-80 ~]# kubectl get po --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx-b65884ff7-24f4j 1/1 Running 2 (158m ago) 6h26m app=nginx,pod-template-hash=b65884ff7
nginx-b65884ff7-8qss6 1/1 Running 2 (158m ago) 6h26m app=nginx,pod-template-hash=b65884ff7
nginx-b65884ff7-vhnbt 1/1 Running 2 (158m ago) 6h26m app=nginx,pod-template-hash=b65884ff7
Method 2: labeling a Service
[root@k8s-master ~]# kubectl get svc --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
kubernetes ClusterIP 10.96.0.1 443/TCP 22h component=apiserver,provider=kubernetes
[root@k8s-master ~]# kubectl label svc kubernetes today=happy
service/kubernetes labeled
[root@k8s-master ~]# kubectl get svc --show-labels
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE LABELS
kubernetes ClusterIP 10.96.0.1 443/TCP 22h component=apiserver,provider=kubernetes,today=happy
Method 3: labeling a namespace
[root@k8s-master ~]# kubectl label ns default today=happy
namespace/default labeled
[root@k8s-master ~]# kubectl get ns --show-labels
NAME STATUS AGE LABELS
default Active 22h kubernetes.io/metadata.name=default,today=happy
ingress-nginx Active 14m app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx,kubernetes.io/metadata.name=ingress-nginx
kube-flannel Active 19h kubernetes.io/metadata.name=kube-flannel,pod-security.kubernetes.io/enforce=privileged
kube-node-lease Active 22h kubernetes.io/metadata.name=kube-node-lease
kube-public Active 22h kubernetes.io/metadata.name=kube-public
kube-system Active 22h kubernetes.io/metadata.name=kube-system
Method 4: labeling a node
[root@k8s-80 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-80 Ready control-plane,master 6h52m v1.22.3
k8s-81 Ready <none> 6h52m v1.22.3
k8s-82 Ready <none> 6h52m v1.22.3
[root@k8s-80 ~]# kubectl label no k8s-81 node-role.kubernetes.io/php=true
node/k8s-81 labeled
[root@k8s-80 ~]# kubectl label no k8s-81 node-role.kubernetes.io/bus=true
node/k8s-81 labeled
[root@k8s-80 ~]# kubectl label no k8s-82 node-role.kubernetes.io/go=true
node/k8s-82 labeled
[root@k8s-80 ~]# kubectl label no k8s-82 node-role.kubernetes.io/bus=true
node/k8s-82 labeled
[root@k8s-80 ~]# kubectl get no
NAME STATUS ROLES AGE VERSION
k8s-80 Ready control-plane,master 6h56m v1.22.3
k8s-81 Ready bus,php 6h55m v1.22.3
k8s-82 Ready bus,go 6h55m v1.22.3
Label nodes so that certain pods run only on specific nodes:
kubectl label no $node node-role.kubernetes.io/$mark=true    # $node is the node name, $mark is the label you want to apply
To schedule particular pods onto nodes carrying a label, add a node selector to the yaml file (see the sketch below):
spec: (the second spec, i.e. the pod template spec)
  nodeSelector:
    node-role.kubernetes.io/$mark: "true"
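A fuller sketch of a Deployment pinned to the nodes labelled above; it reuses the php label created earlier, the name nginx-php is just for illustration, and the rest mirrors the earlier nginx example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-php
  labels:
    app: nginx-php
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-php
  template:
    metadata:
      labels:
        app: nginx-php
    spec:
      nodeSelector:
        node-role.kubernetes.io/php: "true"   # only nodes carrying this label (k8s-81 above) are eligible
      containers:
      - name: nginx
        image: nginx:1.20.2
        ports:
        - containerPort: 80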
# View previously rolled-out revisions (the history)
[root@k8s-80 k8syaml]# kubectl rollout history deployment
deployment.apps/nginx
REVISION CHANGE-CAUSE
1 <none>
[root@k8s-80 k8syaml]# kubectl rollout history deployment --revision=1 # show the details of deployment revision 1
deployment.apps/nginx with revision #1
Pod Template:
Labels: app=nginx
pod-template-hash=b65884ff7
Containers:
nginx:
Image: registry.cn-shenzhen.aliyuncs.com/adif0028/nginx_php:74v3
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts: <none>
Volumes: <none>
Rollback
# Roll back the previous rollout
# undo
[root@k8s-80 k8syaml]# kubectl rollout undo deployment nginx # rolls back to the previous revision by default
# Roll back to a specific revision
[root@k8s-80 k8syaml]# kubectl rollout undo deployment nginx --to-revision=<revision number>
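For instance, to return to revision 1 seen in the history above (check kubectl rollout history first, since revision numbers differ per cluster):
kubectl rollout undo deployment nginx --to-revision=1
kubectl rollout status deployment nginx   # watch until the rollback completes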
Updates
Deployment supports rolling (hot) updates
[root@k8s-80 k8syaml]# kubectl set image deployment nginx nginx=nginx:1.20.1 # the image must exist in the registry
deployment.apps/nginx image updated
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-b65884ff7-8qss6 1/1 Running 2 (3h5m ago) 6h53m
nginx-bc779cc7c-jxh87 0/1 ContainerCreating 0 9s
[root@k8s-80 k8syaml]# kubectl edit deployment nginx # inside you can see the image version is now 1.20.1
# Change it to 2 replicas
# Change the image to nginx:1.20.1
Scaling up
As the business gains more and more users, the current backend can no longer keep up with the requirements. The traditional answer is to add servers horizontally; K8S supports horizontal scaling as well.
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-58b9b8ff79-2nc2q 1/1 Running 0 4m59s
nginx-58b9b8ff79-rzx5c 1/1 Running 0 4m14s
[root@k8s-80 k8syaml]# kubectl scale deployment nginx --replicas=5 # scale out to 5 replicas
deployment.apps/nginx scaled
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-58b9b8ff79-2nc2q 1/1 Running 0 7m11s
nginx-58b9b8ff79-f6g6x 1/1 Running 0 2s
nginx-58b9b8ff79-m7n9b 1/1 Running 0 2s
nginx-58b9b8ff79-rzx5c 1/1 Running 0 6m26s
nginx-58b9b8ff79-s6qtx 1/1 Running 0 2s
# A second way to scale: patch (rarely used)
[root@k8s-80 k8syaml]# kubectl patch deployment nginx -p '{"spec":{"replicas":6}}'
Scaling down
[root@k8s-80 k8syaml]# kubectl scale deployment nginx --replicas=3
deployment.apps/nginx scaled
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-58b9b8ff79-2nc2q 1/1 Running 0 9m45s
nginx-58b9b8ff79-m7n9b 1/1 Running 0 2m36s
nginx-58b9b8ff79-rzx5c 1/1 Running 0 9m
# The count can even be lower than the replica number written in the yaml file, because the yaml value is only the desired state, not a hard requirement
Pause a rollout (rarely used)
[root@k8s-80 k8syaml]# kubectl rollout pause deployment nginx
Resume the paused rollout (rarely used)
[root@k8s-80 k8syaml]# kubectl rollout resume deployment nginx
Service probes (important)
For online services, keeping them stable is the top priority, and handling failed services promptly without affecting the business, then recovering quickly, has always been a hard problem for dev and ops. Kubernetes provides health checks: a service detected as faulty is taken offline automatically, and the service recovers itself by being restarted.
See the probes documentation for details.
Liveness probe (livenessProbe)
Used to decide whether the container is alive, i.e. whether the Pod is in the Running state. If the liveness probe finds the container unhealthy, the container is killed and restarted according to its restart policy; if a container defines no liveness probe, the probe result is always treated as successful.
Liveness probes support three methods: exec, httpGet, and tcpSocket.
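Only exec and tcpSocket are demonstrated below, so here is a minimal sketch of the httpGet variant in the same container spec (the path / and port 80 are assumptions matching the nginx example):
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 5
          timeoutSeconds: 1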
Exec (command)
This one is more reliable and is usually the preferred choice.
[root@k8s-80 k8syaml]# vim nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.18.0
livenessProbe:
exec:
command:
- cat
- /opt/a.txt
#it can also be written as:
#- cat /opt/a.txt
#or in script form:
#- /bin/sh
#- -c
#- touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600
initialDelaySeconds: 5
timeoutSeconds: 1
ports:
- containerPort: 80
# This is bound to fail, because the file /opt/a.txt does not exist
[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
deployment.apps/nginx configured
service/nginx configured
ingress.networking.k8s.io/nginx unchanged
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-58b9b8ff79-rzx5c 1/1 Terminating 0 21m
nginx-74cd54c6d8-6s9xq 1/1 Running 0 6s
nginx-74cd54c6d8-8kk86 1/1 Running 0 5s
nginx-74cd54c6d8-g6fx2 1/1 Running 0 3s
[root@k8s-80 k8syaml]# kubectl describe po nginx-58b9b8ff79-rzx5c # the events show the container being restarted
[root@k8s-80 k8syaml]# kubectl get po # you can see the RESTARTS count climbing
NAME READY STATUS RESTARTS AGE
nginx-74cd54c6d8-6s9xq 1/1 Running 2 (60s ago) 3m
nginx-74cd54c6d8-8kk86 1/1 Running 2 (58s ago) 2m59s
nginx-74cd54c6d8-g6fx2 1/1 Running 2 (57s ago) 2m57s
Using a condition that passes
# Change /opt/a.txt in nginx.yaml
# to /usr/share/nginx/html/index.html
[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
deployment.apps/nginx configured
service/nginx unchanged
ingress.networking.k8s.io/nginx unchanged
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-cf85cd887-cwv9t 1/1 Running 0 15s
nginx-cf85cd887-m4krx 1/1 Running 0 13s
nginx-cf85cd887-xgmhv 1/1 Running 0 12s
[root@k8s-80 k8syaml]# kubectl describe po nginx-cf85cd887-cwv9t # view the details
[root@k8s-80 k8syaml]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 7h54m
nginx ClusterIP 10.105.11.148 <none> 80/TCP 7h28m
[root@k8s-80 k8syaml]# kubectl get ing # view the ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
nginx nginx www.lin.com 192.168.188.81 80 7h22m
# 192.168.188.81 is the IP address the Ingress controller assigned in order to implement the Ingress; the RULE column means all traffic sent to that IP is forwarded to the Kubernetes Service listed in BACKEND
[root@k8s-80 k8syaml]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-controller NodePort 10.101.104.205 <none> 80:30799/TCP,443:31656/TCP 7h32m
ingress-nginx-controller-admission ClusterIP 10.107.116.128 <none> 443/TCP
# At this point the page opens normally in the browser
A second exec check
# Change this part of nginx.yaml:
- cat
- /usr/local/nginx/html/index.html
# into:
- /bin/sh
- -c
- nginx -t
[root@k8s-80 k8syaml]# kubectl apply -f nginx.yaml
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-fcf47cfcc-6txxf 1/1 Running 0 41s
nginx-fcf47cfcc-fd5qc 1/1 Running 0 43s
nginx-fcf47cfcc-x2qzg 1/1 Running 0 46s
[root@k8s-80 k8syaml]# kubectl describe po nginx-fcf47cfcc-6txxf # view the details
# The page renders normally in the browser
# Without a readiness probe plus health check, a pod can be Running yet unable to serve, so both kinds of probe are usually configured
[root@k8s-80 k8syaml]# cp nginx.yaml f-tcpsocket.yaml
[root@k8s-80 k8syaml]# vim f-tcpsocket.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
labels:
app: nginx
spec:
replicas: 3
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:1.18.0
livenessProbe:
tcpSocket:
port: 80
initialDelaySeconds: 5
timeoutSeconds: 1
ports:
- containerPort: 80
[root@k8s-80 k8syaml]# kubectl apply -f f-tcpsocket.yaml
[root@k8s-80 k8syaml]# kubectl get po
NAME READY STATUS RESTARTS AGE
nginx-5f48dd7bb9-6crdg 1/1 Running 0 48s
nginx-5f48dd7bb9-t7q9k 1/1 Running 0 47s
nginx-5f48dd7bb9-tcn2d 1/1 Running 0 49s
[root@k8s-80 k8syaml]# kubectl describe po nginx-5f48dd7bb9-6crdg # the details show everything is fine