kubernetes学习随笔

核心组件

主要组件

  • etcd: 保存整个集群的状态

  • apiserver: 提供资源操作的唯一入口,并提供访问控制、API注册、发现等机制

  • scheduler: 负责资源的调度,按照预定的调度策略将Pod调度到相应的机器上

  • controller manager: 负责维护集群的状态,比如故障检测、自动扩容、滚动更新等

  • kubelet: 负责维护容器的生命周期,同时也负责数据卷(CVI)和网络(CNI)的管理

  • kube-proxy: 负责为Service提供内部的服务发现和负载均衡

  • Container runtime: 负责镜像管理以及Pod和容器的真正运行(CRI)

扩展组件

  • kube-dns: 负责为整个集群提供DNS服务

  • Metrics: 提供资源监控

  • Dashboard: 提供GUI

  • Ingress Controller: 为服务提供外网入口

  • Federation: 提供跨可用区的集群

  • Fluentd-elasticsearch: 提供集群日志采集、存储与查询

基本概念

集群管理

  • Master: K8s集群的管理节点,负责整个集群的管理和控制

  • Node: K8s集群的工作节点,负责集群中的工作负载

  • Namespace: 为K8s集群提供虚拟的隔离作用

  • Label: 通过给指定资源捆绑一个或多个不同的资源标签,来实现多维度的资源分组管理

资源管理

  • Pause: Pod中的基础容器(infra容器),负责僵尸进程的回收管理;借助Pause容器,同一个Pod里的多个容器可以共享存储、网络、PID、IPC等

  • Pod: K8s集群中运行部署应用的最小单元,可以支持多容器

  • RC: K8s集群中最早保证Pod高可用的API对象,之后扩展匹配模式新增RS

  • Deployment: 一个应用模式更广的API对象,通过操作RS进行创建、更新、滚动升级服务

  • Statefulset: K8s提供的管理有状态应用的负载管理控制器API

  • DaemonSet: 确保集群中每一台(或指定的)Node上都运行一个其创建的Pod副本

  • Job: K8s用来控制批处理型任务的API对象,之后新增了基于时间调度的CronJob

  • Service: 定义一个服务的多个Pod逻辑合集和访问Pod的策略,实现服务发现和负载均衡

  • HPA: 实现基于CPU使用率(或自定义指标)的Pod自动伸缩功能

存储管理

  • Secret: 用来保存和传递密码、秘钥、认证凭证这些敏感信息的对象

  • ConfigMap: 将配置信息和镜像内容分离,以使容器化的应用程序具有可移植性

  • Volume: 是Pod中能够被多个容器访问的共享目录

  • PV: 持久化存储与之相关的持久化存储声明(PVC),使得K8S集群具备了存储的逻辑抽象能力

零宕机必备知识-Pod探针

Pod三种检测方式

  • StartupProbe 启动探针; 表示程序是否已启动; 检测成功后便不再执行,如果启用这个探针,其他探针会等它成功之后才开始检测;常用于启动时间较长的程序

  • LivenessProbe 存活探针; 表示程序是否正常运行; 例如容器内进程或应用程序在运行中因BUG导致异常,探测失败后会重启该容器

  • ReadinessProbe 就绪探针; 表示程序是否准备好对外提供服务; 例如程序已经启动并运行,但还有很多数据没有加载完成,设置这个探针后,在就绪检测通过之前暂时不对外提供服务,等加载完成后才开始接收流量

Pod探针的三种方式

  • ExecAction:在容器内执行指定命令。如果命令退出时返回码为 0 则认为诊断成功。

  • TCPSocketAction:对指定端口上的容器的 IP 地址进行 TCP 检查。如果端口打开,则诊断被认为是成功的。

  • HTTPGetAction:对指定的端口和路径上的容器的 IP 地址执行 HTTP Get 请求。如果响应的状态码大于等于200 且小于 400,则诊断被认为是成功的

Pod探针的三种结果

  • 成功:容器通过了诊断。

  • 失败:容器未通过诊断。

  • 未知:诊断失败,因此不会采取任何行动

启动检测-startupProbe
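
startupProbe 的写法与其他探针一致,下面是一个假设场景的示例草稿(镜像沿用本文其他示例,/index.html 路径及 failureThreshold、periodSeconds 等数值均为示意,按实际应用调整):

apiVersion: v1
kind: Pod
metadata:
  name: startup-probe-pod
  namespace: default
spec:
  containers:
  - name: startup-probe-container
    image: wangyanglinux/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
    startupProbe:
      httpGet:
        port: 80
        path: /index.html
      failureThreshold: 30      # 最多允许失败 30 次
      periodSeconds: 10         # 每 10 秒探测一次,即最长给应用 300 秒完成启动
    livenessProbe:              # startupProbe 成功之前,livenessProbe 不会执行
      httpGet:
        port: 80
        path: /index.html
      periodSeconds: 3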

存活检测-LivenessProbe

检测 /tmp/live 文件: 容器启动时创建该文件,60秒后将其删除;liveness 探针持续检测该文件,文件被删除后探测失败,容器被重启,如此陷入循环。

exec

apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-pod
  namespace: default
spec:
  containers:
  - name: liveness-exec-container
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["/bin/sh","-c","touch /tmp/live ; sleep 60; rm -rf /tmp/live; sleep
3600"]
    livenessProbe:
      exec:
        command: ["test","-e","/tmp/live"]
      initialDelaySeconds: 1
      periodSeconds: 3

httpget

apiVersion: v1
kind: Pod
metadata:
  name: liveness-httpget-pod
  namespace: default
spec:
  containers:
  - name: liveness-httpget-container
    image: wangyanglinux/myapp:v1
    imagePullPolicy: IfNotPresent
    ports:
    - name: http
      containerPort: 80
    livenessProbe:
      httpGet:
        port: http
        path: /index.html
      initialDelaySeconds: 1
      periodSeconds: 3
      timeoutSeconds: 10

tcp


apiVersion: v1
kind: Pod
metadata:
  name: probe-tcp
spec:
  containers:
  - name: nginx
    image: wangyanglinux/myapp:v1
    livenessProbe:
      initialDelaySeconds: 5
      timeoutSeconds: 1
      tcpSocket:
        port: 8080
      periodSeconds: 3

就绪检测-ReadinessProbe

readinessProbe-httpget

apiVersion: v1
kind: Pod
metadata:
  name: readiness-httpget-pod
  namespace: default
spec:
  containers:
  - name: readiness-httpget-container
    image: wangyanglinux/myapp:v1
    imagePullPolicy: IfNotPresent
    readinessProbe:
      httpGet:
        port: 80
        path: /index1.html
      initialDelaySeconds: 1
      periodSeconds: 3

Pod启动、退出动作事件

  • postStart 当一个主容器启动后,Kubernetes 将立即发送 postStart 事件

  • preStop 在主容器被终结之前,Kubernetes 将发送一个 preStop 事件

apiVersion: v1
kind: Pod
metadata:
  name: lifecycle-demo
spec:
  containers:
  - name: lifecycle-demo-container
    image: wangyanglinux/myapp:v1
    lifecycle:
      postStart:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the postStart handler > /usr/share/message"]
      preStop:
        exec:
          command: ["/bin/sh", "-c", "echo Hello from the poststop handler > /usr/share/message"]

配置 Probe 参数

Probe中有很多精确和详细的配置,通过它们你能准确的控制liveness和readiness检查:

  • initialDelaySeconds:容器启动后第一次执行探测是需要等待多少秒。

  • periodSeconds:执行探测的频率。默认是10秒,最小1秒。

  • timeoutSeconds:探测超时时间。默认1秒,最小1秒。

  • successThreshold:探测失败后,最少连续探测成功多少次才被认定为成功。默认是1。对于liveness必须是1。最小值是1。

  • failureThreshold:探测成功后,最少连续探测失败多少次才被认定为失败。默认是3。最小值是1。

HTTP probe中可以给 httpGet设置其他配置项:

  • host:连接的主机名,默认连接到pod的IP。你可能想在http header中设置”Host”而不是使用IP。

  • scheme:连接使用的schema,默认HTTP。

  • path: 访问的HTTP server的path。

  • httpHeaders:自定义请求的header。HTTP允许重复的header。

  • port:访问的容器的端口名字或者端口号。端口号必须介于1和65535之间。
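
下面是把上述参数组合在一起的一个 httpGet 探针片段草稿(数值与 header 名称仅作示意):

    livenessProbe:
      httpGet:
        scheme: HTTP              # 默认 HTTP
        path: /index.html
        port: 80
        httpHeaders:
        - name: Custom-Header     # 自定义 header,名称为示意
          value: Awesome
      initialDelaySeconds: 5      # 容器启动后等 5 秒再开始探测
      periodSeconds: 10           # 每 10 秒探测一次
      timeoutSeconds: 1           # 探测超时 1 秒
      successThreshold: 1         # 对 liveness 必须为 1
      failureThreshold: 3         # 连续失败 3 次判定为失败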

RC与Replicaset

ReplicationController(简称RC)是确保用户定义的Pod副本数保持不变。 ReplicaSet(RS)是Replication Controller(RC)的升级版本。 两者区别:对选择器的支持

介绍

ReplicaSet(RS)是Replication Controller(RC)的升级版本。 ReplicaSet 和 Replication Controller之间的唯一区别是对选择器的支持。 ReplicaSet支持labels user guide中描述的set-based选择器要求 Replication Controller仅支持equality-based的选择器要求。
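
下面用一个 ReplicaSet 片段示意 set-based 选择器(matchExpressions)的写法,这是 RC 不支持的(标签与取值仅作示意):

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:                  # equality-based 写法,RC 也支持等价形式
      app: frontend
    matchExpressions:             # set-based 写法,仅 RS/Deployment 等支持
    - key: tier
      operator: In
      values: ["frontend", "cache"]
  template:
    metadata:
      labels:
        app: frontend
        tier: frontend
    spec:
      containers:
      - name: nginx
        image: nginx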

RC 替代方法

(1)ReplicaSet: RC 升级版,主要被Deployment用来协调Pod的创建、删除和更新。请注意,除非需要自定义更新编排或根本不需要更新,否则建议使用Deployment而不是直接使用ReplicaSet。

(2)Deployment(推荐): ReplicationController和ReplicaSet这两种资源对象需要其他控制器进行配合才可以实现滚动升级,并且难度大,因此k8s提供了一种基于ReplicaSet的资源对象Deployment可以支持声明式地更新应用。

(3)对比

  • 大多数支持Replication Controller的kubectl命令也同样支持ReplicaSet。

  • ReplicaSet可以独立使用,但它主要被Deployment用作编排Pod创建、删除和更新的机制。

  • 使用Deployment时,你不必担心创建pod的ReplicaSets,因为可以通过Deployment实现管理ReplicaSets。ReplicaSet能确保运行指定数量的pod。

  • Deployment 是一个更高层次的概念,它能管理ReplicaSets,并提供对pod的更新等功能。

  • 建议使用Deployment来管理ReplicaSets,除非你需要自定义更新编排。

  • ReplicaSet也可以作为 Horizontal Pod Autoscalers (HPA)的目标 ,一个ReplicaSet可以由一个HPA来自动伸缩。

Deployment

介绍

用于部署应用程序并以声明的方式升级应用,从而更好地解决pod编排问题。常用于无状态应用

原理

Deployment在内部使用了ReplicaSet实现编排pod功能,当创建一个Deployment时,ReplicaSet资源会随之创建,ReplicaSet是新一代的ReplicationController,并推荐使用它替代ReplicationController来复制和管理Pod,在使用Deployment时,实际的Pod是由Deployment的ReplicaSet创建和管理的。

Deployment命令

若需要指定命名空间时,需要加上: -n

Deployment创建

kubectl create -f xxx.yaml

Deployment删除

# 基于 Yaml 模板文件删除
kubectl delete -f xxx.yaml

# 基于 Deployment 名称删除
kubectl delete deploy <deploy-name>

Deployment更新

两种更新方式,默认滚动更新

  • Recreate(重建) 设置spec.strategy.type=Recreate,更新方式为:Deployment在更新Pod时,会先杀掉所有正在运行的Pod,然后创建新的Pod。

  • RollingUpdate(滚动更新) 设置spec.strategy.type=RollingUpdate,更新方式为:Deployment会以滚动的方式来渐变性的更新Pod,即Pod新版本的递增,旧版本的递减的一个过程。

  • 命令方式 也可以直接用 kubectl 命令更新镜像(见下方命令示例)
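
前两种策略在 Yaml 中通过 spec.strategy 指定,下面是一个滚动更新参数的片段草稿(maxSurge/maxUnavailable 数值仅作示意):

spec:
  strategy:
    type: RollingUpdate          # 或 Recreate
    rollingUpdate:
      maxSurge: 25%              # 滚动更新时最多可超出期望副本数的比例
      maxUnavailable: 25%        # 滚动更新时最多允许不可用的副本比例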

原理

1.初始创建Deployment时,系统创建一个ReplicaSet,并按照用户的需求创建Pod副本(假设为3个);
2.更新Deployment时,系统创建一个新的ReplicaSet,并将其副本数量扩展到1,然后将旧的ReplicaSet缩减为2;
3.系统继续按照相同的更新策略对新旧两个ReplicaSet进行逐个调整;
4.最后,新的ReplicaSet运行3个新版本的Pod副本,旧的ReplicaSet副本数量缩减为0。

# 命令方式,例如原nginx版本: 1.6
kubectl --record deployment.apps/nginx-deployment set image deployment.v1.apps/nginx-deployment nginx=nginx:1.8

# 基于 Yaml 模板文件更新
kubectl apply -f xxx.yaml

# 基于 Deployment 名称更新
kubectl edit deploy/<deploy-name>

Deployment回滚

# 查看历史更新版本
kubectl rollout history deploy <deploy-name>

# 查看某个历史记录的详细信息
kubectl rollout history deploy <deploy-name> --revision=<version_num>

# 回滚到上一个版本
kubectl rollout undo deploy <deploy-name>

# 回滚到指定版本
kubectl rollout undo deploy <deploy-name> --to-revision=<version_num>

Deployment查看

# 基于 Yaml 模板文件查看
kubectl get deploy <deploy-name> -o yaml

# 基于 deploy 名称查看
kubectl describe deploy <deploy-name>

# 查看 deploy 列表
kubectl get deploy -A

# 查看 RS 列表
kubectl get rs -n <namespace>

# 查看 Pod 列表
kubectl get pod -n <namespace>

Deployment扩容、缩容

# 扩容
kubectl scale deploy <deploy-name> --replicas=<scale_num>

# 缩容
kubectl scale deploy <deploy-name> --replicas=<scale_num>

Deployment自动扩容、缩容

HPA 控制器,用于实现基于CPU使用率、内存、磁盘等指标的Pod自动扩容和缩容功能。 k8s 1.11版本后,需要安装 metrics-server 插件,用来收集及统计资源的利用率。

# 基于 Deployment 名称自动扩容,缩容
kubectl autoscale deployment <deploy-name> --min=1 --max=6 --cpu-percent=50


# 基于 Yaml 文件自动扩容,缩容
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: <hpa-name>
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: <deploy-name>
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50

# 上面的 autoscale 命令会对 <deploy-name> 这个 Deployment 创建 HPA 控制器: 当 CPU 使用率超过 50% 时自动扩容,副本数维持在 1 到 6 之间,使 Pod 的 CPU 使用率保持在 50% 以内;当 CPU 使用率下降后自动缩容

# 查看节点资源监控信息
kubectl top node

# 查看 HPA
kubectl get hpa

# 查看对应的对象创建情况,正在陆续创建
kubectl  get deployment 
kubectl  get rs
kubectl  get pod

Deployment+NFS+wordpress

[root@kuboard wp-data]# cat /etc/exports
/nfs/data *(rw,async,no_root_squash)
[root@k8s-master-1 lnmp]# cat mysql-pass.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: mysql-pass
type: Opaque
data:
  password: MTIzNDU2
# 定义 MYSQL 存储 PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
  labels:
    apps: mysql-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfs/data/wp-data
    server: 10.10.181.242
  # 最好使用 glusterfs
  #glusterfs:
  #  endpoints: "glusterfs-cluster"
  #  path: "gv1"
---
# 定义 MYSQL存储 PVC,使用上面 PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      apps: mysql-pv
---
apiVersion: v1
kind: Service
metadata:
  name: wp-mysql
  labels:
    app: wordpress
spec:
  ports:
    - port: 3306
  selector:
    app: wordpress
    tier: mysql
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp-mysql
  labels:
    app: wordpress
spec:
  replicas: 2
  selector:
    matchLabels:
      app: wordpress
      tier: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: wordpress
        tier: mysql
    spec:
      containers:
      - name: wp-mysql
        image: mysql:5.6
        imagePullPolicy: IfNotPresent
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-pass
              key: password
        ports:
        - containerPort: 3306
          name: wp-mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      #imagePullSecrets:
       # - name: registrypullsecret
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: mysql-claim
# 定义 PV
apiVersion: v1
kind: PersistentVolume
metadata:
  name: web-pv
  labels:
    apps: web-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /nfs/data/wp-data
    server: 10.10.181.242
---
# 定义 pvc 消费上面 web-pv
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-claim
  labels:
    app: wordpress
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      apps: web-pv
---
# 定义 svc ,做负载均衡,流量分发
apiVersion: v1
kind: Service
metadata:
  name: wp-web
  labels:
    app: wordpress
spec:
  ports:
    - port: 80
  selector:
    app: wordpress
    tier: nginx-php
  type: NodePort
  sessionAffinity: ClientIP
---
# 定义 Deployment 部署集
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wp-web
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
      tier: nginx-php
  template:
    metadata:
      labels:
        app: wordpress
        tier: nginx-php
    spec:
      containers:
      - image: 你自己的 nginx-php-fpm 的镜像
        name: wp-web
        ports:
        - containerPort: 9000
        - containerPort: 80
          name: wp-web
        volumeMounts:
        - name: web-persistent-storage
          mountPath: /var/www/html/
      # imagePullSecrets:
      # - name: my-secret
      volumes:
      - name: web-persistent-storage
        persistentVolumeClaim:
          claimName: web-claim

如果应用程序不需要任何稳定的标识符或有序的部署、删除或伸缩,则应该使用 由一组无状态的副本控制器提供的工作负载来部署应用程序,比如 Deployment 或者 ReplicaSet 可能更适用于你的无状态应用部署需要

StatefulSets

概述

用来管理有状态应用的工作负载API对象,用于具有持久化存储方面

使用场景

  • 稳定的、唯一的网络标识符

  • 稳定的、持久的存储

  • 有序的、优雅的部署和缩放

  • 有序的、自动的滚动和更新

使用条件

  • 需要 storage class 存储来提供 PV 驱动

  • 需要无头 service 服务,负责 Pod 的网络标识

  • 使用默认Pod管理策略(OrderedReady)进行滚动更新时,可能需要人工干预

创建 StatefulSet

apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    name: web
  clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  selector:
    matchLabels:
      app: nginx
  serviceName: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: www
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
# 查看服务
[root@localhost ~]# kubectl get service nginx
NAME    TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx   ClusterIP   None         <none>        80/TCP    130m

# 查看 StatefulSet
[root@localhost ~]# kubectl get sts web
NAME   READY   AGE
web    3/3     130m

# 查看pod, Pod 被部署时是按照 {0 …… N-1} 的序号顺序创建的, 在上一个pod处于 Running和Ready状态后,后面的pod才会被启动
[root@localhost ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          84m
web-1   1/1     Running   0          55m
web-2   1/1     Running   0          84m

使用稳定的网络身份标识

StatefulSet 中的 Pod 拥有一个具有黏性的、独一无二的身份标识,它基于 StatefulSet 控制器分配给每个 Pod 的唯一顺序索引。Pod 名称的形式为 <statefulset名称>-<序号>。

# 每个 Pod 都拥有一个基于其顺序索引的稳定的主机名。使用kubectl exec在每个 Pod 中执行hostname。
for i in 0 1;do kubectl exec web-$i -- sh -c 'hostname'; done
web-0
web-1


# 检查他们在集群内部的 DNS 地址
kubectl run -i --tty --image busybox:1.28 dns-test --restart=Never --rm
nslookup web-0.nginx
nslookup web-1.nginx

# 测试一下:
kubectl delete pod -l app=nginx

Pod 的序号、主机名、SRV 条目和记录名称没有改变,但和 Pod 相关联的 IP 地址可能发生了改变

写入稳定的存储

动态提供 PersistentVolume,所有的 PersistentVolume 都是自动创建和绑定的。

[root@localhost ~]# kubectl get pvc -l app=nginx
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    pvc-6d64a47d-5c78-4143-b53a-9221117125dc   1Gi        RWO            standard       148m
www-web-1   Bound    pvc-9b95549a-56b7-400a-b0da-75680471297d   1Gi        RWO            standard       147m

测试数据是否会丢失

# 写入数据
for i in 0 1; do kubectl exec "web-$i" -- sh -c 'echo "$(hostname)" > /usr/share/nginx/html/index.html'; done
for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
web-0
web-1


# 删除 pod
kubectl delete pod -l app=nginx
kubectl get pod -w -l app=nginx

for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
web-0
web-1

虽然 web-0 和 web-1 被重新调度了,但它们仍然继续监听各自的主机名,因为和它们的 PersistentVolumeClaim 相关联的 PersistentVolume 被重新挂载到了各自的 volumeMount 上。 不管 web-0 和 web-1 被调度到了哪个节点上,它们的 PersistentVolumes 将会被挂载到合适的挂载点上。

StatefulSet 扩容/缩容

扩容

# 扩容; 或者修改 web.yml 文件,并重新 apply 
kubectl scale sts web --replicas=5

# 观察: 扩容顺序; StatefulSet 按序号索引顺序的创建每个 Pod,并且会等待前一个 Pod 变为 Running 和 Ready 才会启动下一个 Pod。
kubectl get pods -w -l app=nginx

缩容

# 缩容;  kubectl patch 将 StatefulSet 缩容回三个副本。
kubectl patch sts web -p '{"spec":{"replicas":3}}'

# 观察: 缩容顺序; 控制器会按照与 Pod 序号索引相反的顺序每次删除一个 Pod。在删除下一个 Pod 前会等待上一个被完全关闭。
kubectl get pods -w -l app=nginx

注意: 当删除 StatefulSet 的 Pod 时,挂载到 StatefulSet 的 Pod 的 PersistentVolumes 不会被删除。

[root@localhost ~]# kubectl get pvc -l app=nginx
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
www-web-0   Bound    pvc-6d64a47d-5c78-4143-b53a-9221117125dc   1Gi        RWO            standard       148m
www-web-1   Bound    pvc-9b95549a-56b7-400a-b0da-75680471297d   1Gi        RWO            standard       147m
www-web-2   Bound    pvc-2b7172fe-72db-4ada-b0c8-ed14093c1f49   1Gi        RWO            standard       109m
www-web-3   Bound    pvc-0de17d66-2c52-4e5b-b0da-2db1ed90b9ec   1Gi        RWO            standard       109m
www-web-4   Bound    pvc-af46892f-7f44-4ed5-b026-ae5920cf7213   1Gi        RWO            standard       108m

StatefulSet 更新

Kubernetes 1.7 及以上版本的 StatefulSet 控制器支持自动更新。 更新策略由 StatefulSet API Object 的spec.updateStrategy 字段决定。这个特性能够用来更新一个 StatefulSet 中的 Pod 的 container images,resource requests,以及 limits,labels 和 annotations。 RollingUpdate滚动更新是 StatefulSet 的默认策略

kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'

StatefulSet 灰度发布
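
StatefulSet 可以利用 RollingUpdate 策略的 partition 字段实现灰度(金丝雀)发布: 序号大于等于 partition 的 Pod 会被更新为新版本,序号小于 partition 的 Pod 保持旧版本。下面是一个示意命令(partition 的取值按实际副本数调整):

# 只更新序号 >= 2 的 Pod,web-0、web-1 保持旧版本
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":2}}}}'

# 新版本验证无误后,把 partition 改回 0,完成全量更新
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":0}}}}'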

StatefulSet 级联||非级联删除

非级联方式删除,StatefulSet 的 Pod 不会被删除。 使用级联删除时,StatefulSet 和它的 Pod 都会被删除。 不管是级联或者非级联删除,存储卷是不会被删除

非级联删除

# --cascade=false 参数告诉k8s使用非级联删除方式
[root@localhost ~]# kubectl delete statefulset web --cascade=false
pod "web-0" deleted


# StatefulSet 已删除,但Pod还存在,如果手动删除Pod,由于 StatefulSet 被删除,不会重新启动pod
[root@localhost ~]# kubectl get sts     
No resources found in default namespace.
[root@localhost ~]# kubectl get po
NAME    READY   STATUS    RESTARTS   AGE
web-0   1/1     Running   0          3h13m
web-1   1/1     Running   0          165m
web-2   1/1     Running   0          3h14m

级联删除

# 级联删除会删除 StatefulSet 和 Pod,但是不会删除和 StatefulSet 关联的 service
kubectl delete statefulset web
kubectl delete svc nginx
# 重新部署
[root@localhost ~]# kubectl apply -f web.yaml 
service/nginx created
statefulset.apps/web created

[root@localhost ~]# for i in 0 1; do kubectl exec -i -t "web-$i" -- curl http://localhost/; done
web-0
web-1

删除了 StatefulSet 和 Pod,当重新部署这个 web.yaml 后,Pod 将会被重新创建并挂载它们的 PersistentVolumes,并且 web-0 和 web-1 将仍然使用它们的主机名提供服务。

StatefulSet 如何管理Pod

Pod管理策略

  • OrderedReady Pod 管理策略 -- 默认,按序号顺序依次启动和停止 Pod

  • Parallel Pod 管理策略 -- 并行,启动和停止会同时进行

kind: StatefulSet
....略
spec:
  podManagementPolicy: "Parallel"
... 

DaemonSet

工作原理

  • 每当向集群中添加一个节点时,指定的 Pod 副本也将添加到该节点上

  • 当节点从集群中移除时,Pod 也就被垃圾回收了

  • 删除一个 DaemonSet 可以清理所有由其创建的 Pod

使用场景

  • 日志采集agent,如fluentd或logstash

  • 监控采集agent,如Prometheus Node Exporter,Sysdig Agent,Ganglia gmond

  • 分布式集群组件,如Ceph MON,Ceph OSD,glusterd,Hadoop Yarn NodeManager等

  • k8s必要运行组件,如网络flannel,weave,calico,kube-proxy等

DaemonSet创建

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
  namespace: kube-system
  labels:
    k8s-app: fluentd-logging
spec:
  selector:
    matchLabels:
      name: fluentd-elasticsearch
  template:
    metadata:
      labels:
        name: fluentd-elasticsearch
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd-elasticsearch
        image: fluent/fluentd-kubernetes-daemonset:v1.7.1-debian-syslog-1.0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      terminationGracePeriodSeconds: 30
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
# 创建 DaemonSet
[root@localhost ~]# kubectl apply -f daemonset.yaml 
daemonset.apps/fluentd-elasticsearch created

# 查看 DaemonSet
[root@localhost ~]# kubectl get daemonset -n kube-system fluentd-elasticsearch
NAME                    DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
fluentd-elasticsearch   1         1         1       1            1           <none>          54m

DaemonSet更新

更新方式: 先删除Pod再创建

# 滚动更新
[root@node-1 ~]# kubectl set image daemonsets fluentd-elasticsearch fluentd-elasticsearch=quay.io/fluentd_elasticsearch/fluentd:latest -n kube-system
daemonset.extensions/fluentd-elasticsearch image updated

# 查看滚动更新状态
[root@localhost ~]# kubectl rollout status daemonset -n kube-system fluentd-elasticsearch 
Waiting for daemon set "fluentd-elasticsearch" rollout to finish: 0 of 1 updated pods are available...
daemon set "fluentd-elasticsearch" successfully rolled out

[root@localhost ~]# kubectl describe po -n kube-system fluentd-elasticsearch
''' 已更新成功
	Image:          quay.io/fluentd_elasticsearch/fluentd:latest
'''

DaemonSet回滚

回滚方式: 先删除Pod再创建

# 查看DaemonSet滚动更新版本,REVISION 1为初始的版本
[root@node-1 ~]# kubectl rollout history daemonset -n kube-system fluentd-elasticsearch 
daemonset.extensions/fluentd-elasticsearch 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>


# 更新回退,如果配置没有符合到预期可以回滚到原始的版本
[root@node-1 ~]# kubectl rollout undo daemonset -n kube-system fluentd-elasticsearch --to-revision=1
daemonset.extensions/fluentd-elasticsearch rolled back

# 查看回滚后的结果
[root@localhost ~]# kubectl describe daemonset -n kube-system fluentd-elasticsearch
'''
	 Image:      fluent/fluentd-kubernetes-daemonset:v1.7.1-debian-syslog-1.0
'''

DaemonSet删除

[root@localhost ~]# kubectl delete daemonsets -n kube-system fluentd-elasticsearch 
daemonset.extensions "fluentd-elasticsearch" deleted

[root@localhost ~]# kubectl get po -n kube-system |grep fluentd
fluentd-elasticsearch-d6f6f      0/1     Terminating   0          110m

DaemonSet调度

  • 指定 nodeName 节点运行

  • 通过 nodeSelector 标签运行

  • 通过 node Affinity和node Anti-affinity 亲和力运行

运行在指定标签

# 给 node 打上标签
[root@localhost ~]# kubectl label nodes localhost.localdomain app=web
node/localhost.localdomain labeled

[root@localhost ~]#  kubectl get nodes --show-labels 
NAME                    STATUS   ROLES    AGE    VERSION   LABELS
localhost.localdomain   Ready    master   3d2h   v1.17.3   app=web,......略
....
   spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:  #优先满足条件
          - weight: 1
            preference:
              matchExpressions:
              - key: app 
                operator: In
                values:
                - web 
          requiredDuringSchedulingIgnoredDuringExecution:  #要求满足条件
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - localhost.localdomain
                - node-1
....
# 重新生成 DaemonSet
kubectl delete ds -n kube-system fluentd-elasticsearch 
kubectl apply -f daemonset.yaml

# 查看校验Pod运行情况,是否调度pod到指定的标签节点上
kubectl get daemonsets -n kube-system fluentd-elasticsearch 
kubectl get pods -n kube-system -o wide 

Node Pod节点标签

1.增加节点标签(注: 末尾的 = 代表增加标签)

kubectl label nodes node3 node-role.kubernetes.io/node3=

2.减少节点标签(注: 末尾的 - 代表删除标签)

kubectl label nodes node3 node-role.kubernetes.io/node3-

Label添加删除和修改

kubectl label nodes <node-name> <label-key>=<label-value> 

添加label

# 查看现有node及label
[root@master ~]# kubectl get nodes --show-labels 
NAME     STATUS   ROLES    AGE     VERSION   LABELS
master   Ready    master   54d     v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node01   Ready    <none>   54d     v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node01
node02   Ready    <none>   6d19h   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02

# 添加一个key为disktype和value为ssd的label
[root@master ~]# kubectl label nodes node01 disktype=ssd
node/node01 labeled

# 查看是否被添加
[root@master ~]# kubectl get nodes --show-labels        
NAME     STATUS   ROLES    AGE     VERSION   LABELS
master   Ready    master   54d     v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node01   Ready    <none>   54d     v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/hostname=node01

删除Label

# 语法
kubectl label nodes <node-name> <label-key>-
# 删除key为disktype的label
[root@master ~]# kubectl label nodes node01 disktype-
node/node01 labeled

[root@master ~]# kubectl get nodes --show-labels     
NAME     STATUS   ROLES    AGE     VERSION   LABELS
master   Ready    master   54d     v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node01   Ready    <none>   54d     v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node01
node02   Ready    <none>   6d19h   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02

修改Label

#语法: 需要加上--overwrite参数:
kubectl label nodes <node-name> <label-key>=<label-value> --overwrite

[root@master ~]# kubectl label nodes node01 disktype=ssd
node/node01 labeled
[root@master ~]# kubectl label nodes node01 disktype=hdd --overwrite
node/node01 labeled
[root@master ~]# kubectl get nodes --show-labels 
NAME     STATUS   ROLES    AGE     VERSION   LABELS
master   Ready    master   54d     v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=master,node-role.kubernetes.io/master=
node01   Ready    <none>   54d     v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=hdd,kubernetes.io/hostname=node01
node02   Ready    <none>   6d19h   v1.13.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=node02

Pod选择Label

# 添加nodeSelector选项用来选择对应的node
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd

过滤 pod 标签 labels

1、查看pod标签:

查看所有pod的标签:

kubectl get pod --show-labels

查看单个pod的标签:

kubectl get pod redis --show-labels

查看namespaces下所有pod的标签:

kubectl get pod --all-namespaces --show-labels

2、列出标签key是db的Pod

格式:kubectl get pod -l 标签的key名

多标签过滤格式:kubectl get pod -l 标签的key,标签的key ###注 逗号是英文的

3、列出带key是db、值redis的标签

4、过滤出标签key不是app、值不是nginx的pod

5、过滤出带有key是db,并且值是redis或nginx的pod

6、过滤出标签key没有db、并且没有pod值与nginx值的pod
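
上面第 2~6 条对应的命令草稿如下(假设标签 key 为 db/app、值为 redis/nginx,请按实际标签替换;第 4、6 条按常见理解给出近似写法):

# 2. 列出标签key是db的Pod
kubectl get pod -l db

# 3. 列出key是db、值是redis的Pod
kubectl get pod -l db=redis

# 4. 过滤出app标签的值不是nginx(或没有app标签)的Pod
kubectl get pod -l app!=nginx

# 5. 过滤出key是db,并且值是redis或nginx的Pod
kubectl get pod -l 'db in (redis,nginx)'

# 6. 过滤出db标签的值不是pod与nginx(或没有db标签)的Pod
kubectl get pod -l 'db notin (pod,nginx)'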

Metric-server安装

metric-server简介

  • 提供基础资源如CPU、内存监控接口查询;

  • 接口通过 Kubernetes aggregator注册到kube-apiserver中;

  • 对外通过Metric API暴露给外部访问;

  • 自定义指标使用需要借助Prometheus实现

metric-server API

  • /node 获取所有节点的指标,指标名称为NodeMetrics

  • /node/<node_name> 特定节点指标

  • /namespaces/{namespace}/pods 获取命名空间下的所有pod指标

  • /namespaces/{namespace}/pods/{pod} 特定pod的指标,指标名称为PodMetrics

未来将能够支持指标聚合,如max最大值,min最小值,95th峰值,以及自定义时间窗口,如1h,1d,1w等

1、 核心监控实现

  • 通过kubelet收集资源估算+使用估算

  • metric-server负责数据收集,不负责数据存储

  • metric-server对外暴露Metric API接口

  • 核心监控指标可用于HPA,kubectl top,scheduler和dashboard

2、 自定义监控实现

  • 自定义监控指标包括监控指标和服务指标

  • 需要在每个node上部署一个agent上报至集群监控agent,如prometheus

  • 集群监控agent收集数据后需要将监控指标+服务指标通过API adaptor转换为apiserver能够处理的接口

  • HPA通过自定义指标实现更丰富的弹性扩展能力,需要通过HPA adaptor API做一次转换。

metric-server部署

方法一:

git clone https://github.com/kubernetes-sigs/metrics-server.git

# 我的环境是 1.8+,切换到指定 tag , 详情看 Github
[root@localhost metrics-server]# cd metrics-server
[root@localhost metrics-server]# git checkout v0.3.7
[root@localhost metrics-server]# vi deploy/1.8+/metrics-server-deployment.yaml
'''
args:
  ......
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  
'''
kubectl apply -f deploy/1.8+/

方法二:

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
''' 添加下面两行内容
args:
  ......
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
  
'''
kubectl apply -f components.yaml

查看

[root@localhost ~]# kubectl get deploy -n kube-system metrics-server
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
metrics-server   1/1     1            1           3m42s

[root@localhost ~]# kubectl get po -n kube-system  |grep metrics
metrics-server-7458c4478b-nj56w                 1/1     Running   0          4m33s

[root@localhost ~]# kubectl top no
NAME                    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
localhost.localdomain   78m          3%     1495Mi          40%     

[root@localhost ~]# kubectl top pod -A
NAMESPACE     NAME                                            CPU(cores)   MEMORY(bytes)           
kube-system   metrics-server-7458c4478b-nj56w                 1m           11Mi            
kube-system   storage-provisioner                             1m           14Mi 
.....略...


[root@localhost ~]# kubectl api-versions |grep metrics
metrics.k8s.io/v1beta1

通过API 获取监控资源

# 创建一个kube proxy代理,用于链接apiserver,默认将监听在127的8001端口
[root@node-1 ~]# kubectl proxy 
Starting to serve on 127.0.0.1:8001
# 查看node列表的监控数据,可以获取到所有node的资源监控数据,usage中包含cpu和memory
http://127.0.0.1:8001/apis/metrics.k8s.io/v1beta1/nodes 

# 指定某个具体的node访问到具体node的资源监控数据
- http://127.0.0.1:8001/apis/metrics.k8s.io/v1beta1/nodes/<node-name>

# 查看所有pod的列表信息
- http://127.0.0.1:8001/apis/metrics.k8s.io/v1beta1/pods

# 查看某个具体pod的监控数据
- http://127.0.0.1:8001/apis/metrics.k8s.io/v1beta1/namespaces/<namespace-name>/pods/<pod-name>

或者

其他近似的接口有:

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes   获取所有node的数据

kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/<node_name>  获取特定node数据

kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods    获取所有pod的数据

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/haproxy-ingress-demo-5d487d4fc-sr8tm 获取某个特定pod的数据

metric-server 部署问题

  1. kubectl top node命令提示如下:

# kubectl top node
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

查看metrics-server日志

# kubectl logs -n kube-system deploy/metrics-server
unable to fully collect metrics: [unable to fully scrape metrics from source kubelet_summary:k8s-node01.ljmict.com: unable to fetch metrics from Kubelet k8s-node01.ljmict.com (k8s-node01.ljmict.com): Get https://k8s-node01.ljmict.com:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-node01.ljmict.com on 10.96.0.10:53: no such host, unable to fully scrape metrics from source kubelet_summary:k8s-node02.ljmict.com: unable to fetch metrics from Kubelet k8s-node02.ljmict.com (k8s-node02.ljmict.com): Get https://k8s-node02.ljmict.com:10250/stats/summary?only_cpu_and_memory=true: dial tcp: lookup k8s-node02.ljmict.com on 10.96.0.10:53: no such host]

从上面错误提示信息来看就是Kubernetes集群中DNS:10.96.0.10是无法解析出k8s-node02.ljmict.com这个域名的。 解决方法:修改metrics-server-deployment.yaml文件,在metrics-server容器配置位置添加如下配置:

command:
- /metrics-server
- --kubelet-preferred-address-types=InternalIP
- --kubelet-insecure-tls

然后删除重新部署

  2. v1beta1.metrics.k8s.io failed with: failing or missing response from

当再次使用kubectl top node的时候发现还是有问题 查看metrics-server日志是正常的。

# kubectl logs -n kube-system deploy/metrics-server 
I0225 07:27:40.020739       1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0225 07:27:40.480958       1 secure_serving.go:116] Serving securely on [::]:4443

查看kube-apiserver日志,发现:

# systemctl status kube-apiserver -l 
v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.96.142.17:443/apis/metrics.k8s.io/v1beta1: Get https://10.96.142.17:443/apis/metrics.k8s.io/v1beta1: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

解决方法:在kube-apiserver选项中添加如下配置选项:

--enable-aggregator-routing=true

重启kube-apiserver,再次使用kubectl top node命令查看

# kubectl top node 
NAME                    CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-node01.ljmict.com   35m          1%     659Mi           38%       
k8s-node02.ljmict.com   46m          2%     800Mi           46%

HPA 水平横向动态扩展

工作原理

  • 根据应用分配资源使用情况,动态增加或者减少Pod副本数量,以实现集群资源的扩容

如何实现

  • 当CPU利用率超过requests分配的80%时即扩容。

实现机制

  • HPA需要依赖于监控组件,调用监控数据实现动态伸缩,如调用Metrics API接口

  • HPA是二级的副本控制器,建立在Deployments,ReplicaSet,StatefulSets等副本控制器基础之上

  • HPA根据获取资源指标不同支持两个版本:v1和v2alpha1

  • HPA V1获取核心资源指标,如CPU和内存利用率,通过调用Metric-server API接口实现

  • HPA V2获取自定义监控指标,通过Prometheus获取监控数据实现

  • HPA根据资源API周期性调整副本数,检测周期horizontal-pod-autoscaler-sync-period定义的值,默认15s

当前HPA V1扩展使用指标只能基于CPU分配使用率进行扩展,功能相对有限,更丰富的功能需要由HPA V2版来实现,其由不同的API来实现:

  • metrics.k8s.io 资源指标API,通过metric-server提供,提供node和pod的cpu,内存资源查询;

  • custom.metrics.k8s.io 自定义指标,通过adapter和kube-apiserver集成,如promethues;

  • external.metrics.k8s.io 外部指标,和自定义指标类似,需要通过adapter和k8s集成。

基于CPU和内存

# hpa-demo.yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-demo
spec:
  maxReplicas: 5
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-demo
---
apiVersion: v1
kind: Service
metadata:
  name: hpa-demo
  labels:
    run: hpa-demo
spec:
  selector:
    run: hpa-demo
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hpa-demo
spec:
  selector:
    matchLabels:
      run: hpa-demo
  replicas: 1
  template:
    metadata:
      labels:
        run: hpa-demo
    spec:
      containers:
      - name: hpa-demo
        image: nginx:1.7.9
        resources:
          limits:
            cpu: 100m
          requests:
            cpu: 50m
        ports:
        - containerPort: 80
          protocol: TCP

部署HPA

# 虽然 deploy 副本是1个,但是 HPA 会自动扩容到 hpa 最小值
[root@localhost ~]# kubectl apply -f hpa-demo.yaml
[root@localhost ~]# kubectl get hpa,svc,deploy,po
NAME                                           REFERENCE             TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-demo   Deployment/hpa-demo   0%/80%    2         5         2          3m24s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/hpa-demo     ClusterIP   10.101.20.116   <none>        80/TCP    3m24s
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   118m

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/hpa-demo   2/2     2            2           3m24s

NAME                           READY   STATUS    RESTARTS   AGE
pod/hpa-demo-688c79cfc-djhn6   1/1     Running   0          3m9s
pod/hpa-demo-688c79cfc-l59bl   1/1     Running   0          3m24s

测试压力

# 增加负载
[root@localhost ~]# kubectl exec -it pod/hpa-demo-688c79cfc-djhn6 -- /bin/sh -c 'dd if=/dev/zero of=/dev/null'



# 再次查看
[root@localhost ~]# kubectl get hpa,po
NAME                                           REFERENCE             TARGETS    MINPODS   MAXPODS   REPLICAS   AGE
horizontalpodautoscaler.autoscaling/hpa-demo   Deployment/hpa-demo   101%/80%   2         5         3          7m29s

NAME                           READY   STATUS    RESTARTS   AGE
pod/hpa-demo-688c79cfc-djhn6   1/1     Running   0          7m14s
pod/hpa-demo-688c79cfc-h78mj   1/1     Running   0          40s
pod/hpa-demo-688c79cfc-l59bl   1/1     Running   0          7m29s


# 关闭测试,过一会会自动变成最少的2个副本

基于自定义指标
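
自定义指标需要借助 Prometheus + prometheus-adapter 把指标暴露到 custom.metrics.k8s.io,然后在 autoscaling/v2beta2 版本的 HPA 中引用。下面是一个示意草稿(指标名 http_requests_per_second 为假设值,需要 adapter 中配置有对应规则):

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-custom-demo
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hpa-demo
  minReplicas: 2
  maxReplicas: 5
  metrics:
  - type: Pods                          # 使用 Pod 级别的自定义指标
    pods:
      metric:
        name: http_requests_per_second  # 假设的指标名,由 prometheus-adapter 提供
      target:
        type: AverageValue
        averageValue: "100"             # 每个 Pod 平均达到 100 req/s 时触发扩容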

标签和选择器

标签: 是附加在 kubernetes 对象上的一组键值对, 用来标识 kubernetes 对象,一个对象上可有多个标签,同一个对象标签的key必须唯一

语法

标签的 key 可以有两个部分:可选的前缀和标签名,通过 / 分隔。

  • 标签名:

    • 标签名部分是必须的

    • 不能多于 63 个字符

    • 必须由字母、数字开始和结尾

    • 可以包含字母、数字、减号-、下划线_、小数点.

  • 标签前缀:

    • 标签前缀部分是可选的

    • 如果指定,必须是一个DNS的子域名,例如:k8s.eip.work

    • 不能多于 253 个字符

    • 使用 / 和标签名分隔

  • 标签的 value 必须:

    • 不能多于 63 个字符

    • 可以为空字符串

    • 如果不为空,则

    • 必须由字母、数字开始和结尾

    • 可以包含字母、数字、减号-、下划线_、小数点.

基于等式的选择方式

可以使用三种操作符 =、==、!=。

# 选择了标签名为 `environment` 且 标签值为 `production` 的Kubernetes对象
environment = production

# 选择了标签名为 `tier` 且标签值不等于 `frontend` 的对象,以及不包含标签 `tier` 的对象
tier != frontend

例如 Pod 节点选择器

apiVersion: v1
kind: Pod
metadata:
  name: cuda-test
spec:
  containers:
    - name: cuda-test
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1
  nodeSelector:
    accelerator: nvidia-tesla-p100

基于集合的选择方式

Set-based 标签选择器可以根据标签名的一组值进行筛选。支持的操作符有三种:in、notin、exists。例如:

# 选择所有的包含 `environment` 标签且值为 `production` 或 `qa` 的对象
environment in (production, qa)

# 选择所有的 `tier` 标签不为 `frontend` 和 `backend`的对象,或不含 `tier` 标签的对象
tier notin (frontend, backend)

# 选择所有包含 `partition` 标签的对象
partition

# 选择所有不包含 `partition` 标签的对象
!partition

可以组合多个选择器,用 , 分隔,, 相当于 AND 操作符。例如: 选择包含 partition 标签(不检查标签值)且 environment 不是 qa 的对象

partition,environment notin (qa)

基于集合的选择方式是一个更宽泛的基于等式的选择方式,例如,environment=production 等价于 environment in (production);environment!=production 等价于 environment notin (production)。 基于集合的选择方式可以和基于等式的选择方式可以混合使用,例如: partition in (customerA, customerB),environment!=qa
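
这些选择器同样可以直接用在 kubectl 中,例如:

# 基于等式
kubectl get pods -l environment=production,tier=frontend

# 基于集合,并与等式混合使用
kubectl get pods -l 'partition in (customerA,customerB),environment!=qa'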

Service

什么是service?

为一组具有相同功能的容器应用提供一个统一的入口地址,通过标签选择器发现后端 Pod,并将请求负载分发到后端各个容器应用上的控制器

如何访问,类型有哪些?

访问请求来源有两种: k8s集群内部的程序(Pod)和 k8s集群外部的程序。

类型

  • ClusterIP: 提供一个集群内部的虚拟IP以供Pod访问(service默认类型)。

  • NodePort: 在每个Node上打开一个端口以供外部访问

  • LoadBalancer: 通过外部的负载均衡器来访问
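
下面是一个 NodePort 类型 Service 的示意片段(nodePort 端口号为假设值,需在集群允许的范围内,默认 30000-32767):

apiVersion: v1
kind: Service
metadata:
  name: nginx-nodeport
spec:
  type: NodePort              # 默认类型是 ClusterIP
  selector:
    app: nginx
  ports:
  - port: 80                  # Service 在集群内的端口
    targetPort: 80            # 后端 Pod 的端口
    nodePort: 30080           # 每个 Node 上暴露的端口(假设值)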

Service 和 Pod 如何建立关联?

  • Service 通过 selector 选择器和 Pod 建立关联

  • k8s 会根据 service 关联的 Pod 的 IP 地址信息组合成一个 endpoint

  • 若 service 定义中没有 selector 字段,service 被创建时,endpoint controller 不会自动创建 endpoint(此时可手动创建同名的 Endpoints,见下方示例)
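
下面是"无 selector 的 Service + 手动创建同名 Endpoints 指向集群外地址"的一个示意草稿(IP 为假设值):

apiVersion: v1
kind: Service
metadata:
  name: external-mysql
spec:
  ports:
  - port: 3306
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-mysql        # 必须与 Service 同名
subsets:
- addresses:
  - ip: 192.168.1.100         # 假设的集群外数据库地址
  ports:
  - port: 3306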

Service 负载分发策略

  • RoundRobin:轮询模式,即轮询将请求转发到后端的各个pod上(默认模式)

  • SessionAffinity:会话保持模式,基于客户端IP地址进行会话保持的模式,第一次客户端访问后端某个pod,之后的请求都转发到这个pod上。

如何发现 service 服务?

疑问: Service 解决了Pod的服务发现问题,但不提前知道Service的IP,怎么发现service服务呢?

k8s提供了两种方式进行服务发现:

  • 环境变量: 当创建一个Pod的时候,kubelet会在该Pod中注入集群内所有Service的相关环境变量。需要注意的是,要想在一个Pod中注入某个Service的环境变量,则该Service必须先于Pod创建。这一点几乎使得这种服务发现方式不可用(变量格式见下方示例)。

  • DNS: 可以通过cluster add-on的方式轻松的创建KubeDNS来对集群内的Service进行服务发现————这也是k8s官方强烈推荐的方式。为了让Pod中的容器可以使用kube-dns来解析域名,k8s会修改容器的/etc/resolv.conf配置。
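
以环境变量方式为例,kubelet 注入的变量大致形如下面这样(以本文的 nginx-svc 为例,Service 名中的 - 会被替换为 _ 并转为大写,ClusterIP 用占位符表示):

# 在 Pod 内执行 env 可以看到类似变量
NGINX_SVC_SERVICE_HOST=<cluster-ip>
NGINX_SVC_SERVICE_PORT=80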

内部service相互调用

格式: <service_name>.<namespace>.svc.cluster.local

# 例如: 在集群内可直接通过 service 名称(如 nginx)访问
# 如果无法解析,可先查询集群 DNS(kube-dns/CoreDNS)的 ClusterIP
kubectl get svc kube-dns -n kube-system

# 如果是在 nginx 中做转发,可在 nginx.conf 中增加配置: resolver <k8s-dns.ip>;
# 该配置放在 http、server、location 中均可,按需选择

在 Kubernetes 中,内部服务相互调用是通过 Service 对象实现的。Service 对象可以提供一个虚拟 IP 地址和端口,用于将请求转发到一组 Pods 上。这些 Pods 可以是实现同一个应用程序的多个实例,或者是实现不同服务的多个应用程序实例。

如果一个 Service 要访问另一个 Service,它可以通过使用另一个 Service 的虚拟 IP 地址和端口号来进行调用。对于请求,Kubernetes 的 DNS 服务可以将 Service 名称解析为对应的虚拟 IP 地址。

例如,如果 ServiceA 要访问 ServiceB,它可以通过向 ServiceB.namespace.svc.cluster.local 进行请求来实现。

在设计应用程序时,需要注意内部服务之间的相互调用可能会影响系统的整体性能和可用性。因此,在设计内部服务架构时需要认真考虑性能和可靠性的因素。

k8s服务发现原理

  • endpoint endpoint是k8s集群中的一个资源对象,存储在etcd中,用来记录一个service对应的所有pod的访问地址。

  • endpoint controller

endpoint controller是k8s集群控制器的其中一个组件,其功能如下:

  • 负责生成和维护所有endpoint对象

  • 负责监听service和对应pod的变化

  • 监听到service被删除,则删除和该service同名的endpoint对象

  • 监听到新的service被创建,则根据新建service信息获取相关pod列表,然后创建对应endpoint对象

  • 监听到service被更新,则根据更新后的service信息获取相关pod列表,然后更新对应endpoint对象

  • 监听到pod事件,则更新对应service的endpoint对象,将podIP记录到endpoint中
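
可以直接查看某个 service 对应的 endpoint 对象来验证上述行为,例如:

# 查看所有 endpoints
kubectl get endpoints

# 查看某个 service 对应的 endpoints(与 service 同名)
kubectl describe endpoints <service-name>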

Ingress helm安装

一、安装Helm

wget https://get.helm.sh/helm-v3.3.4-linux-amd64.tar.gz

tar -zxvf helm-v3.3.4-linux-amd64.tar.gz

mv linux-amd64/helm /usr/local/bin/helm

helm version
'''
    version.BuildInfo{Version:"v3.3.4", GitCommit:"a61ce5633af99708171414353ed49547cf05013d", GitTreeState:"clean", GoVersion:"go1.14.9"}
'''

二、下载ingress

# 存放目录
mkdir ingress && cd ingress

# 添加helm的ingress仓库
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx

# 下载ingress的helm包,如需指定版本可加: --version=3.7.1
helm pull ingress-nginx/ingress-nginx
tar -xf ingress-nginx-3.7.1.tgz && cd ingress-nginx

三、安装ingress

  1. 修改values.yaml

# 修改controller镜像地址, 我是国外服务器,所以可不用修改,国内地址可能已失效
repository: registry.cn-beijing.aliyuncs.com/dotbalo/controller

# dnsPolicy
dnsPolicy: ClusterFirstWithHostNet

# 使用hostNetwork,即使用宿主机上的端口80 443
hostNetwork: true

# 使用DaemonSet,将ingress部署在指定节点上
kind: DaemonSet

# 节点选择,将需要部署的节点打上ingress=true的label
  nodeSelector:
    kubernetes.io/os: linux
    ingress: "true"
     
# 修改type,改为ClusterIP。如果在云环境,有loadbanace可以使用loadbanace
type: ClusterIP

# 修改kube-webhook-certgen镜像地址,我是国外服务器,所以可不用修改,国内地址可能已失效
registry.cn-beijing.aliyuncs.com/dotbalo/kube-webhook-certgen

  2. 安装ingress

# 选择节点打label,尽量不在master
kubectl label node k8s-node01 ingress=true

# 创建一个ingress的namespace
kubectl create ns ingress-nginx

# 创建ingress
helm install ingress-nginx -n ingress-nginx .

kubectl get pods -n ingress-nginx -o wide
'''
	NAME                             READY   STATUS    RESTARTS   AGE   IP               NODE     NOMINATED NODE   READINESS GATES
	ingress-nginx-controller-mrb2t   1/1     Running   0          15h   192.168.10.243   k8s-node01   <none>           <none>
'''

四、使用ingress

  1. Ingress配置文件

# ingress-example.yaml
 
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths: # 相当于nginx的location配置,同一个host可以配置多个path /
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80
      # 多路径配置多个 service
      #- path: /
      #  backend:
      #    serviceName: nginx-svc2
      #    servicePort: 80

# 多域名续配置多个host
#  - host: foo2.bar.com
#    http:
#      paths: # 相当于nginx的location配合,同一个host可以配置多个path /
#      - path: /
#        backend:
#          serviceName: nginx-svc
#          servicePort: 80
kubectl create -f ingress-example.yaml

  • kubernetes.io/ingress.class: "nginx": 告诉 ingress controller 由哪个实现来处理该 Ingress

  • rules: 一个rules可以有多个host

  • host: 访问ingress的域名

  • path: 类似于nginx的location配置,同一个host可以配置多个path

  • backend: 描述Service和ServicePort的组合,对匹配主机和路径的HTTP与HTTPS请求将被转发到对应的后端Pod

  2. 创建一个nginx的Deployment

# nginx-deployment.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx-svc
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  labels:
    app: nginx
  name: nginx
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 2 #副本数
  revisionHistoryLimit: 10 # 历史记录保留的个数
  selector:
    matchLabels:
      app: nginx
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:1.15.2
        imagePullPolicy: IfNotPresent
        name: nginx
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
kubectl create -f nginx-deployment.yaml
kubectl get ingress

ConfigMap

一、介绍 ConfigMap

ConfigMap 是一种API对象,用来将非加密数据保存到键值对中。可以用作环境变量、命令行参数、存储卷中的配置文件

二、创建 ConfigMap

1. 命令行方式创建

echo hello > test1.txt
echo world > test2.txt
kubectl create configmap file-config --from-file=key1=test1.txt --from-file=key2=test2.txt
kubectl describe configmap file-config

'''
	Name:         file-config
	Namespace:    default
	Labels:       <none>
	Annotations:  <none>

	Data
	====
	key1:
	----
	hello

	key2:
	----
	world
'''

2. 文件夹方式创建

mkdir config
echo hello > config/test1
echo world > config/test2
kubectl create configmap dir-config --from-file=config/
kubectl describe configmap dir-config

'''
	Name:         dir-config
	Namespace:    default
	Labels:       <none>
	Annotations:  <none>

	Data
	====
	test1: # key为文件名
	----
	hello  # value为文件内容

	test2:
	----
	world

	Events:  <none>
'''

3. 键值对方式创建

kubectl create configmap literal-config --from-literal=key1=hello --from-literal=key2=world
kubectl describe configmap literal-config
'''
	Name:         literal-config
	Namespace:    default
	Labels:       <none>
	Annotations:  <none>

	Data
	====
	key1:
	----
	hello
	key2:
	----
	world
	Events:  <none>
'''

4. Yaml方式创建

#config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
 name: yml-config
data:
 key1: hello
 key2: world
kubectl create -f config.yaml
kubectl describe configmap yml-config

'''
	Name:         yml-config
	Namespace:    default
	Labels:       <none>
	Annotations:  <none>

	Data
	====
	key1:
	----
	hello
	key2:
	----
	world
	Events:  <none>
'''

三、使用 ConfigMap

Pod的使用方式:

    1. 将ConfigMap中的数据设置为容器的环境变量

    2. 将ConfigMap中的数据设置为命令行参数

    3. 使用Volume将ConfigMap作为文件或目录挂载

    4. 编写代码在Pod中运行,使用Kubernetes API 读取ConfigMap

1. 配置到容器的环境变量

# test-pod-configmap.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-configmap
spec:
  containers:
  - name: test-busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    args:
      - sleep 
      - "86400"
    env:
      - name: KEY1
        valueFrom:
          configMapKeyRef:
            name: file-config
            key: key1
      - name: KEY2
        valueFrom:
          configMapKeyRef:
            name: file-config
            key: key2
[root@localhost ~]# kubectl create -f test-pod-configmap.yaml 
pod/test-pod-configmap created

[root@localhost ~]# kubectl exec -it test-pod-configmap -- /bin/sh -c 'echo ${KEY1} ${KEY2}'     
hello world

2. 设置为命令行参数

# test-pod-configmap.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-configmap
spec:
  containers:
  - name: test-busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    command: [ "/bin/sh","-c","echo \"$(KEY1)$(KEY2)\"" ]
    env:
      - name: KEY1
        valueFrom:
          configMapKeyRef:
            name: literal-config
            key: key1
      - name: KEY2
        valueFrom:
          configMapKeyRef:
            name: literal-config
            key: key2
  restartPolicy: Never  # 从不重启Pod
[root@localhost ~]# kubectl create -f test-pod-configmap.yaml 
pod/test-pod-configmap created

[root@localhost ~]# kubectl logs -f test-pod-configmap
hello
world

3. 挂载方式

# test-pod-configmap.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-configmap
spec:
  containers:
  - name: test-busybox
    image: busybox
    imagePullPolicy: IfNotPresent
    args:
      - sleep
      - "86400"
    volumeMounts:
      - name: config-volume
        mountPath: "/config-volume"
        readOnly: true
  volumes:
    - name: config-volume
      projected:
        sources:
          - configMap:
              name: literal-config
[root@localhost ~]# kubectl create -f test-pod-configmap.yaml 
pod/test-pod-configmap created

[root@localhost ~]# kubectl exec -it test-pod-configmap -- /bin/sh -c 'ls /config-volume/ && more /config-volume/key1 /config-volume/key2'
key1  key2
hello
world

注意:

  • ConfigMap必须在Pod使用它之前创建

  • 使用envFrom时,将会自动忽略无效的键(envFrom 的写法见下方示例)

  • Pod只能使用同一个命名空间的ConfigMap
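
上面提到的 envFrom 可以把整个 ConfigMap 的键值一次性注入为环境变量,示意片段如下(沿用前文创建的 literal-config):

    envFrom:
    - configMapRef:
        name: literal-config    # key1、key2 会直接成为容器的环境变量名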

4. 热更新

  • 使用 ConfigMap 挂载的 Env 不会同步更新 (可以通过滚动更新 pod 的方式来强制重新挂载 ConfigMap,或者先将副本数设置为 0 再扩容,见下方示例)

  • 使用 ConfigMap 挂载的 Volume 中的数据需要一段时间(实测大概10秒)才能同步更新
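
强制让 Pod 重新加载 ConfigMap 的一种常用做法是触发一次滚动更新,示意命令如下(kubectl 1.15+ 提供 rollout restart;注解名 reload-time 为自定义示意):

# 方式一: 直接触发 Deployment 滚动重启
kubectl rollout restart deploy <deploy-name>

# 方式二: 给 Pod 模板打一个注解,强制产生新的 ReplicaSet
kubectl patch deployment <deploy-name> -p '{"spec":{"template":{"metadata":{"annotations":{"reload-time":"20210715"}}}}}'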

Secret

一、介绍 Secret

用户存储和管理一些敏感数据,比如密码、token、秘钥等敏感信息。 用户可以通过在Pod的容器里挂载 Volume 的方式或者 环境变量的方式访问到 Secret 保存的信息

二、三种类型 Secret

  • Opaque: base64编码,用于存储密码、密钥等;但数据可以通过base64 --decode解码得到原始数据,安全性很弱。

  • Service Account: 用来访问 Kubernetes API, 由Kubernetes自动创建,并且自动挂载到Pod的 /run/secrets/kubernetes.io/serviceaccount 目录中。

  • kubernetes.io/dockerconfigjson: 用来存储私有docker registry的认证信息。

1、 Opaque类型

Opaque 类型的数据是一个 map 类型,要求value是base64编码。

# 加密
[root@localhost ~]# echo -n 'admin' | base64       
YWRtaW4=

# 解密
[root@localhost ~]# echo "YWRtaW4=" |base64 --decode
admin

注意: 创建的 Secret 对象,它里面的内容仅仅是经过了转码,而并没有被加密。在真正的生产环境中,你需要在 Kubernetes 中开启 Secret 的加密插件,增强数据的安全性。
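
开启静态加密需要给 kube-apiserver 配置 --encryption-provider-config 指向一个 EncryptionConfiguration 文件,下面是一个最小示意(aescbc 只是其中一种 provider,密钥需自行生成并妥善保管):

apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <由 head -c 32 /dev/urandom | base64 生成的密钥>
      - identity: {}          # 兜底,允许读取尚未加密的旧数据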

2、 Service Account类型

Service Account 对象的作用,就是 Kubernetes 系统内置的一种“服务账户”,它是 Kubernetes 进行权限分配的对象。比如, Service Account A,可以只被允许对 Kubernetes API 进行 GET 操作,而 Service Account B,则可以有 Kubernetes API 的所有操作权限。
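
ServiceAccount 的创建以及在 Pod 中的使用大致如下(RBAC 授权部分这里省略,名称均为示意):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: demo-sa
---
apiVersion: v1
kind: Pod
metadata:
  name: sa-demo-pod
spec:
  serviceAccountName: demo-sa   # 指定 Pod 使用的 ServiceAccount
  containers:
  - name: app
    image: busybox
    args:
      - sleep
      - "86400"
    # 对应的 token 会被自动挂载到 /run/secrets/kubernetes.io/serviceaccount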

3、 kubernetes.io/dockerconfigjson类型

用来创建用户docker registry认证的Secret,直接使用kubectl create命令创建即可

kubectl create secret docker-registry myregistry --docker-server=DOCKER_SERVER --docker-username=DOCKER_USER --docker-password=DOCKER_PASSWORD --docker-email=DOCKER_EMAIL


kubectl get secret |grep myregistry
myregistry   kubernetes.io/dockerconfigjson   1      7d4h

如何使用?

apiVersion: v1
kind: Pod
metadata:
  name: foo
spec:
  containers:
  - name: foo
    image: <image>
  imagePullSecrets:
  - name: myregistry # 写上你的 secret 名字

三、创建 Secret

1. 文件方式创建

echo admin > username.txt
echo aaaaaaaaa > password.txt
kubectl create secret generic admin-access --from-file=./password.txt --from-file=./username.txt     
'''                      
    secret/admin-access created
'''


kubectl get secret admin-access -o yaml        
'''
	apiVersion: v1
	data:
	  password.txt: YWFhYWFhYWFhCg==  # key 默认是文件名
	  username.txt: YWRtaW4K          # key 默认是文件名
	  .....
'''

2. 文件夹方式创建

mkdir secret
echo admin > secret/username.txt
echo aaaaaaaaa > secret/password.txt
kubectl create secret generic admin-access --from-file=secret/

3. 键值对方式创建

# 自定义两个key 'username', 'password'
kubectl create secret generic admin-access --from-literal=username=admin --from-literal=password=aaaaaaaaa

4. Yaml方式创建

# secret.yaml, 首先base64加密

apiVersion: v1
kind: Secret
metadata:
  name: admin-access
type: Opaque
data:
  password: YWFhYWFhYWFh
  username: YWRtaW4=

5. 键值对+文件创建

echo admin > username.txt
kubectl create secret generic admin-access --from-file=./username.txt --from-literal=password=aaaaaaaaa

kubectl get secret admin-access -o yaml
'''
	apiVersion: v1
	data:
	  password: YWFhYWFhYWFh
	  username.txt: YWRtaW4K
'''

四、使用 Secret

1. 挂载方式

[root@localhost ~]# echo -n 'admin' | base64  
YWRtaW4=
[root@localhost ~]# echo -n 'aaaaaaaaa' | base64       
YWFhYWFhYWFh
# test-pod-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: admin-access
type: Opaque
data:
  password: YWFhYWFhYWFh
  username: YWRtaW4=
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-secret
spec:
  containers:
  - name: test-busybox
    image: busybox
    args:
      - "sleep"
      - "86400"
    volumeMounts:
      - name: admin-access
        mountPath: "/test-volume"
        readOnly: true
  volumes:
   - name: admin-access
     secret:
       secretName: admin-access
       defaultMode: 0440
[root@localhost ~]# kubectl create -f test-pod-secret.yaml
[root@localhost ~]# kubectl exec -it test-pod-secret -- /bin/sh -c 'ls /test-volume'
password      username

2. 配置变量方式

# test-pod-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: admin-access
type: Opaque
data:
  password: YWFhYWFhYWFh
  username: YWRtaW4=
---
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-secret
spec:
  containers:
  - name: test-busybox
    image: busybox
    command: [ "/bin/sh","-c","echo $(SECRET_USERNAME) $(SECRET_PASSWORD)" ]
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            key: username
            name: admin-access
      - name: SECRET_PASSWORD
        valueFrom:
          secretKeyRef:
            key: password
            name: admin-access
  restartPolicy: Never
[root@localhost ~]# kubectl logs test-pod-secret
admin aaaaaaaaa

3. 热更新

同 ConfigMap 的更新方式一样

  • 挂载 volume 方式支持热更新

  • 变量的方式除非强制更新 或者 副本变0,扩容

Volume

emptyDir

emptyDri介绍

当 Pod 被分配给节点时,首先创建 emptyDir 卷,并且只要该 Pod 在该节点上运行,该卷就会存在。正如卷的名 字所述,它最初是空的。Pod 中的容器可以读取和写入 emptyDir 卷中的相同文件,尽管该卷可以挂载到每个容 器中的相同或不同路径上。当出于任何原因从节点中删除 Pod 时, emptyDir 中的数据将被永久删除

emptyDir用法

  • 暂存空间,例如用于基于磁盘的合并排序

  • 用作长时间计算崩溃恢复时的检查点

  • Web服务器容器提供数据时,保存内容管理器容器提取的文件

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-emptydir
spec:
  containers:
  - name: test-pod-emptydir
    image: busybox
    imagePullPolicy: IfNotPresent
    args:
      - sleep
      - "86400"
    volumeMounts:
    - mountPath: /cache
      name: cache-volume
  volumes:
  - name: cache-volume
    emptyDir: {}
[root@localhost ~]# kubectl exec -it test-pod-emptydir -- /bin/sh -c 'ls  /&& ls /cache'
bin    cache  dev    etc    home   proc   root   sys    tmp    usr    var

hostPath

hostPath介绍

将主机节点的文件系统中的文件或目录挂载到集群中

hostPath用法

  • 运行需要访问 Docker 内部的容器;使用 /var/lib/docker 的 hostPath

  • 在容器中运行 cAdvisor;使用 /dev/cgroups 的 hostPath

  • 允许 pod 指定给定的 hostPath 是否应该在 pod 运行之前存在,是否应该创建,以及它应该以什么形式存在

hostPath类型

hostPath注意

  • 由于每个节点上的文件都不同,具有相同配置(例如从 podTemplate 创建)的 pod 在不同节点上的行为可能会有所不同

  • 当 Kubernetes 按照计划添加资源感知调度时,将无法考虑 hostPath 使用的资源

  • 在底层主机上创建的文件或目录只能由 root 写入。您需要在特权容器中以 root 身份运行进程,或修改主机上的文件权限以便写入 hostPath 卷

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-hostpath
spec:
  containers:
  - name: test-pod-hostpath
    imagePullPolicy: IfNotPresent
    image: busybox
    args:
      - sleep
      - "86400"
    volumeMounts:
    - mountPath: /data
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
[root@localhost ~]# kubectl exec -it test-pod-hostpath -- /bin/sh -c "date > /data/index.html"            
[root@localhost ~]# cat /data/index.html 
Thu Jul 15 05:54:47 UTC 2021

Cronjob

Cronjob从名字上可以看到,它就是一个计划任务,与Linux中的crontab无异,其格式基本上和crontab一样。

现在编写一个Cronjob资源对象来执行job: Cronjob 在Kubernetes1.8版本之前使用的API版本是batch/v2alpha1, 需要在API Server启动时启用此功能:

--runtime-config=batch/v2alpha1=true

在版本>1.8后,API版本已转为batch/v1beta1,并且默认启用。

[root@localhost ~]# cat test-cronjob.yml 
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: mycronjob
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        metadata:
          name: mycronjob
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["echo","hello k8s job"]
          restartPolicy: OnFailure

创建并查看任务状态:

[root@localhost ~]# kubectl get cronjob 
NAME        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
mycronjob   */1 * * * *   False     0         <none>            

# 刚创建还没有活跃的工作,也没有计划任何工作

然后,每隔一分钟执行kubectl get cronjob hello 查看任务状态,发现的确是每分钟调度了一次。

[root@localhost ~]# kubectl get cronjob mycronjob
NAME        SCHEDULE      SUSPEND   ACTIVE   LAST SCHEDULE   AGE
mycronjob   */1 * * * *   False     0        61s             19m

# 可以看到在指定的时间内已成功执行了一个job,在LAST-SCHEDULE,目前有0个活动作业,意味着作业已完成或失败。

找出由CronJob资源对象创建出来的Pod

[root@localhost ~]# kubectl get pods  |grep mycronjob
mycronjob-1626345180-zsf62   0/1     Completed   0          2m33s
mycronjob-1626345240-xn5f2   0/1     Completed   0          93s
mycronjob-1626345300-x2qr4   0/1     Completed   0          33s

找到对应的Pod后,查看它的日志:

[root@localhost ~]# kubectl logs mycronjob-1626345180-zsf62
hello k8s job

如果不需要这个CronJob,删除之:

kubectl delete cronjob mycronjob

污点与容忍

  • Taint(污点): 在一类服务器上打上污点,让不能容忍这个污点的 Pod 无法部署到打了污点的服务器上

  • Master 节点不应该部署系统 Pod 之外的任何 Pod

  • 每个节点可以打上多个污点

operator 可以定义为:

  • Equal: 表示key是否等于value(默认)

  • Exists: 表示key是否存在,此时无需定义value

taint 的 effect 定义对 Pod 的排斥效果:

  • NoSchedule: 仅影响调度过程,对现存的Pod对象不产生影响

  • NoExecute: 既影响调度过程,也影响现存的Pod对象;不容忍该污点的Pod对象将被驱逐

  • PreferNoSchedule: 表示尽量不调度

打上节点污点标记

# 给 master 节点打上 NoSchedule 污点(也可以自定义 key,例如 test=value1:NoSchedule)
kubectl taint nodes k8s-master-01 node-role.kubernetes.io/master:NoSchedule

Pod调度到污点

如果仍然希望某个 pod 调度到 taint 节点上,则必须在 Spec 中做出Toleration定义,才能调度到该节点

# 对于 tolerations 属性的写法,其中的 key、value、effect 与 Node 的 Taint 设置需保持一致,
tolerations:
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"
  • 如果 operator 的值是 Exists,则 value 属性可省略

  • 如果 operator 的值是 Equal,则表示其 key 与 value 之间的关系是 equal(等于)

  • 如果不指定 operator 属性,则默认值为 Equal

  • 空的 key 如果再配合 Exists 就能匹配所有的 key 与 value,也是是能容忍所有 node 的所有 Taints

  • 空的 effect 匹配所有的 effect
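
如果节点上打的是带 value 的污点(例如 kubectl taint nodes node-1 test=value1:NoSchedule),对应的 Equal 写法大致如下:

tolerations:
- key: "test"
  operator: "Equal"
  value: "value1"
  effect: "NoSchedule"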

取消节点污点标记

kubectl taint nodes k8s-master-01 node-role.kubernetes.io/master-

初始化容器 init containner

优先级最高,先于其他容器启动,主要做一些初始化配置,如下载配置文件、注册信息、证书等

例如:

在初始化容器中把 init container test 写入到/work_dir/index.html下,并把/work_dir挂载到/usr/share/nginx/html, 那么当访问Nginx首页时显示的内容为init container test

apiVersion: v1
kind: Service
metadata:
  name: init-demo
spec:
  selector:
    app: init-demo
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: init-demo
  name: init-demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: init-demo
  template:
    metadata:
      labels:
        app: init-demo
    spec:
      initContainers:
      - name: init-container
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh"]
        args:
          [
            "-c",
            "echo 'init container test' >/work_dir/index.html",
          ]
        volumeMounts:
        - name: workdir
          mountPath: "/work_dir"
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: web
        ports:
        - containerPort: 80
        volumeMounts:
        - name: workdir
          mountPath: /usr/share/nginx/html
      volumes:
      - name: workdir
        emptyDir: {}
[root@localhost ~]# kubectl get pod -l app=init-demo
NAME                         READY   STATUS    RESTARTS   AGE
init-demo-69b66879cd-26bnz   0/1     ContainerCreating       


[root@localhost ~]# kubectl get svc init-demo
NAME        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
init-demo   ClusterIP   10.111.69.3   <none>        80/TCP    77s


[root@localhost ~]# curl 10.111.69.3
init container test

Node 亲和性

    1. 硬亲和性: 必须满足条件。 matchExpressions: 匹配表达式,例如pod中定义的key为zone,operator为In(包含),values为foo和bar,则只会调度到带有zone=foo或zone=bar标签的node节点上。 matchFields: 匹配字段,含义与matchExpressions类似,但匹配的是节点字段(如 metadata.name)而不是标签

[root@localhost ~]# kubectl label nodes localhost.localdomain web=right 
node/localhost.localdomain labeled

[root@localhost ~]# kubectl get no --show-labels
NAME                    STATUS   ROLES    AGE   VERSION   LABELS
localhost.localdomain   Ready    master   9d    v1.17.3   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,ingress=true,kubernetes.io/arch=amd64,kubernetes.io/hostname=localhost.localdomain,kubernetes.io/os=linux,minikube.k8s.io/commit=a48abe2ac6951dcc3dec45fd546ba4fe0bf42494,minikube.k8s.io/name=minikube,minikube.k8s.io/updated_at=2021_07_07T14_18_41_0700,minikube.k8s.io/version=v1.8.0,node-role.kubernetes.io/master=,web=right

[root@localhost ~]# cat node-affinity-1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-hello-deployment
  namespace:
  labels:
    app: nginx-hello
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-hello
  template:
    metadata:
      labels:
        app: nginx-hello
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: web
                operator: In
                values:
                - right
                #- bbb
      containers:
      - name: nginx-hello
        image: nginx
        ports:
        - containerPort: 80

发现按匹配的标签分配到指定的 Node 上面

[root@localhost ~]# kubectl get  pods -l app=nginx-hello -o wide
NAME                                      READY   STATUS    RESTARTS   AGE    IP            NODE                    NOMINATED NODE   READINESS GATES
nginx-hello-deployment-58b564d75c-t9p2t   1/1     Running   0          118s   172.17.0.13   localhost.localdomain   <none>           <none>
nginx-hello-deployment-58b564d75c-xrhdb   1/1     Running   0          118s   172.17.0.15   localhost.localdomain   <none>           <none>

修改匹配 Node 的 label value 值

[root@localhost ~]# kubectl delete -f  node-affinity-1.yaml
[root@localhost ~]# vim node-affinity-1.yaml
.....
                values:
                - right-no
.....


[root@localhost ~]# kubectl apply -f node-affinity-1.yaml
deployment.apps/nginx-hello-deployment created

再次查看,匹配不上会一直 Pending

# web 这个标签的 value 值匹配不上,所以会Pending
[root@localhost ~]# kubectl get po -l app=nginx-hello
NAME                                      READY   STATUS    RESTARTS   AGE
nginx-hello-deployment-85cff9cfb9-jg4qg   0/1     Pending   0          22s
nginx-hello-deployment-85cff9cfb9-l6jv2   0/1     Pending   0          22s
    2. 软亲和性: 能满足最好,不满足也可以调度

修改 node-affinity-1.yaml

.......
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: zone
                operator: In
                values:
                - right-no
                #- left-no
            weight: 60 #匹配相应nodeSelectorTerm相关联的权重,1-100
........

部署后再次查看,就算匹配不上 Node label,还是会生成 Pod

[root@localhost ~]# kubectl create -f node-affinity-1.yaml
deployment.apps/nginx-hello-deployment created

[root@localhost ~]# kubectl get po -l app=nginx-hello
NAME                                      READY   STATUS    RESTARTS   AGE
nginx-hello-deployment-799b88ff47-f9ns2   1/1     Running   0          2m29s
nginx-hello-deployment-799b88ff47-hr8gv   1/1     Running   0          2m29s

Pod 亲和性

Pod亲和性场景,我们的k8s集群的节点分布在不同的区域或者不同的机房,当服务A和服务B要求部署在同一个区域或者同一机房的时候,我们就需要亲和性调度了。

labelSelector: 选择跟哪组 Pod 亲和; namespaces: 选择哪个命名空间; topologyKey: 指定节点上的哪个键作为拓扑域

亲和性

让两个带有指定标签的 Pod 调度到同一位置(同一 Node)

# cat pod-affinity.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-pod-affinity
  namespace:
  labels:
    app: nginx-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-hello
  template:
    metadata:
      labels:
        app: nginx-hello
    spec:
      affinity:
        podAffinity:
          #preferredDuringSchedulingIgnoredDuringExecution:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app #标签键名,上面pod定义
                operator: In  #In表示在
                values:
                - nginx #app标签的值
            topologyKey: kubernetes.io/hostname # kubernetes.io/hostname 的值相同代表 Pod 处于同一位置;亲和性表示此 Pod 应与 labelSelector 匹配到的 Pod 调度到同一拓扑域(这里即同一 Node)
      containers:
      - name: nginx-hello
        image: nginx
        ports:
        - containerPort: 80
[root@k8s-master ~]# kubectl get  pod -o wide| grep nginx
nginx-deployment-6f6d9b887f-5mvqs                1/1     Running   0          6s    10.254.2.92    k8s-node-2   <none>    <none>
nginx-deployment-pod-affinity-5566c6d4fd-2tnrq   1/1     Running   0          6s    10.254.2.93    k8s-node-2   <none>    <none>
[root@k8s-master ~]#

反亲和性

让pod和某个pod不处于同一node,和上面相反

# cat pod-affinity.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
        ports:
        - containerPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-pod-affinity
  namespace:
  labels:
    app: nginx-hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-hello
  template:
    metadata:
      labels:
        app: nginx-hello
    spec:
      affinity:
        #podAffinity:
        podAntiAffinity:  #就改了这里
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app #标签键名,上面pod定义
                operator: In  #In表示在
                values:
                - nginx #app 标签的值
            topologyKey: kubernetes.io/hostname # kubernetes.io/hostname 的值相同代表 Pod 处于同一位置;反亲和性表示此 Pod 不能与 labelSelector 匹配到的 Pod 调度到同一拓扑域(这里即同一 Node)
      containers:
      - name: nginx-hello
        image: nginx
        ports:
        - containerPort: 80
[root@k8s-master ~]# kubectl  apply -f  a.yaml
deployment.extensions/nginx-deployment unchanged
deployment.apps/nginx-deployment-pod-affinity configured

[root@k8s-master ~]# kubectl get  pod -o wide| grep nginx
nginx-deployment-6f6d9b887f-5mvqs                1/1     Running             0          68s   10.254.2.92    k8s-node-2   <none>           <none>
nginx-deployment-pod-affinity-5566c6d4fd-2tnrq   1/1     Running             0          68s   10.254.2.93    k8s-node-2   <none>           <none>
nginx-deployment-pod-affinity-86bdf6996b-fdb8f   0/1     ContainerCreating   0          4s    <none>         k8s-node-1   <none>           <none>

[root@k8s-master ~]# kubectl get  pod -o wide| grep nginx
nginx-deployment-6f6d9b887f-5mvqs                1/1     Running   0          73s   10.254.2.92    k8s-node-2   <none>    <none>
nginx-deployment-pod-affinity-86bdf6996b-fdb8f   1/1     Running   0          9s    10.254.1.56    k8s-node-1   <none>    <none>
[root@k8s-master ~]#

临时容器 debug
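这里先记一个基于 kubectl debug 的最小示意(需要集群启用 EphemeralContainers 特性,Pod 名 mypod 与容器名 mycontainer 均为假设):

# 向正在运行的 Pod 注入一个临时容器进行排查,--target 指定共享进程命名空间的目标容器
kubectl debug -it mypod --image=busybox:1.28 --target=mycontainer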

服务拓扑 Topology

服务拓扑(Service Topology)可以让一个服务基于集群的 Node 拓扑进行流量路由。 例如,一个服务可以指定流量被优先路由到和客户端在同一个 Node 或者同一可用区域的端点上,即拓扑感知的流量路由。

一个集群中,其 Node 的标签被打为其主机名,区域名和地区名。 那么就可以设置 Service 的 topologyKeys 的值

  • 只定向到同一个 Node 上的端点,Node 上没有端点存在时就失败: 配置 ["kubernetes.io/hostname"]。

  • 偏向定向到同一个 Node 上的端点,回退同一区域的端点上,然后是同一地区, 其它情况下就失败:配置 ["kubernetes.io/hostname", "topology.kubernetes.io/zone", "topology.kubernetes.io/region"]。 这或许很有用,例如,数据局部性很重要的情况下。

  • 偏向于同一区域,但如果此区域中没有可用的端点,则回退到任何可用的端点: 配置 ["topology.kubernetes.io/zone", "*"]。

前提条件

  • Kubernetes 1.17 或更新版本

  • Kube-proxy 以 iptables 或者 IPVS 模式运行

  • 启用端点切片

约束条件

  • 服务拓扑和 externalTrafficPolicy=Local 不兼容,但是在同一个集群的不同 Service 上是可以分别使用这两种特性的,只要不在同一个 Service 上就可以。

  • 有效的拓扑键目前只有:kubernetes.io/hostname、topology.kubernetes.io/zone 和 topology.kubernetes.io/region,但是未来会推广到其它的 Node 标签。

  • 拓扑键必须是有效的标签,并且最多指定16个

  • 通配符:"*",如果要用,则必须是拓扑键值的最后一个值

仅节点本地端点

仅路由到节点本地端点的一种服务。如果节点上不存在端点,流量则被丢弃

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"

首选节点本地端点

首选节点本地端点,如果节点本地端点不存在,则回退到集群范围端点的一种服务:

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
    - "*"

仅地域或区域端点

首选地域端点而不是区域端点的一种服务。 如果以上两种范围内均不存在端点, 流量则被丢弃。

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"

优先选择节点本地端点、地域端点,然后是区域端点

优先选择节点本地端点,地域端点,然后是区域端点,最后才是集群范围端点的 一种服务。

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
  topologyKeys:
    - "kubernetes.io/hostname"
    - "topology.kubernetes.io/zone"
    - "topology.kubernetes.io/region"
    - "*"

开启服务拓扑

要启用服务拓扑功能,需要为所有 Kubernetes 组件启用 ServiceTopology 和 EndpointSlice 特性门控:

--feature-gates="ServiceTopology=true,EndpointSlice=true"

RBAC权限管理

启用 RBAC

二进制安装的

cat /etc/kubernetes/manifests/kube-apiserver.yaml
'''
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=192.168.1.243
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC    # 添加参数 --authorization-mode=Node,RBAC
'''

kubeadm 安装的

1.6版本以上默认开启 RBAC

相关资源对象

  • Rule: 规则,是一组作用于不同 API Group 资源上的操作集合

  • Subject: 主题,对应在集群中尝试操作的对象,集群中定义了 3 种类型的主题资源:

    • User Account: 由外部独立服务进行管理,对于用户的管理,集群内部没有一个关联的资源对象,所以用户不能通过集群内部的 API 来进行管理

    • Group: 关联多个用户,集群中有一些默认创建的组,比如 cluster-admin

    • Service Account: 通过 Kubernetes API 来管理的一些用户帐号,和 namespace 进行关联,适用于集群内部运行的应用程序,需要通过 API 来完成权限认证

  • Role: 角色,用于定义某个命名空间的角色的权限

  • RoleBinding: 命名空间权限,将角色中定义的权限赋予一个或者一组用户,针对命名空间执行授权。

  • ClusterRole: 集群角色,用于定义整个集群的角色的权限。

  • ClusterRoleBinding: 集群权限,将集群角色中定义的权限赋予一个或者一组用户,针对集群范围内的命名空间执行授权

RBAC的使用

参考: https://cloud.tencent.com/developer/article/1684417
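下面是一个最小化的 Role + RoleBinding 示意(命名空间 test-ns 与用户 dev-user 均为假设),给某个用户授予读取 Pod 的权限:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: test-ns
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: test-ns
subjects:
- kind: User
  name: dev-user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io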

Ingress

单个Service + Ingress

YAML 内容

# [root@k8s-master-1 test]# cat ngdemo.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-nginx
  template:
    metadata:
      labels:
        app: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    app: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
    name: http
  selector:
    app: my-nginx
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: ngdemo.qikqiak.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-nginx
          servicePort: 80

部署

[root@k8s-master-1 test]# kubectl create -f ngdemo.yaml        
deployment.apps/my-nginx created
service/my-nginx created
ingress.extensions/my-nginx created

查看

[root@k8s-master-1 test]# kubectl get ingress,svc,deploy,po
NAME                          CLASS    HOSTS                ADDRESS        PORTS   AGE
ingress.extensions/my-nginx   <none>   ngdemo.qikqiak.com   10.96.55.190   80      70s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   21h
service/my-nginx     ClusterIP   10.102.201.52   <none>        80/TCP    70s

NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/my-nginx   2/2     2            2           70s

NAME                            READY   STATUS    RESTARTS   AGE
pod/my-nginx-69448bd7d9-b67ft   1/1     Running   0          70s
pod/my-nginx-69448bd7d9-gn6h4   1/1     Running   0          70s

客户端测试

[root@localhost ~]# echo "10.10.181.241 ngdemo.qikqiak.com" >>/etc/hosts   # 随便使用某一个node IP地址,<如果是高可用的话,使用vip地址>,作为域名解析地址

[root@localhost ~]# curl ngdemo.qikqiak.com
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

多个 Service + Ingress
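这一小节给一个示意:同一个 Ingress 里配置多条 rule,把不同域名转发到不同 Service(svc1、svc2 及域名均为假设):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: multi-svc-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: svc1.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc1
          servicePort: 80
  - host: svc2.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: svc2
          servicePort: 80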

Ingress-nginx 域名重定向

实现目的:通过访问一个域名重定向到指定域名或者链接,例如访问 xxxx.com 重定向到 www.baidu.com

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-rewrite-target-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: http://www.baidu.com
spec:
  rules:
  - host: xxxx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80

Ingress-nginx 黑白名单

  • Annotations:只对指定的ingress生效

  • ConfigMap:全局生效

  • 黑名单可以使用ConfigMap去配置,白名单建议使用Annotations去配置。

白名单

# annotations
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whitelist-ingress
  annotations:
 #   nginx.ingress.kubernetes.io/rewrite-target: http://www.baidu.com
    nginx.ingress.kubernetes.io/whitelist-source-range: 10.10.181.5
spec:
  rules:
  - host: test.com
    http:
      paths: # 相当于 nginx 的 location 配置,同一个 host 可以配置多个 path
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80
# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  whitelist-source-range: 10.1.10.0/24

黑名单

# ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  whitelist-source-range: 10.1.10.0/24
  block-cidrs: 10.1.10.100
# annotations
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whitelist-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/server-snippet: |-
      deny 192.168.0.1;
      deny 192.168.0.100;
      allow all;
spec:
  rules:
  - host: test.com
    http:
      paths: # 相当于 nginx 的 location 配置,同一个 host 可以配置多个 path
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80

Ingress-nginx 匹配请求头

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: whitelist-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/server-snippet: |-
      set $agentflag 0;

              if ($http_user_agent ~* "(iPhone)" ){
                set $agentflag 1;
              }

              if ( $agentflag = 1 ) {
                return 301 https://m.baidu.com;
              }
spec:
  rules:
  - host: test.com
    http:
      paths: # 相当于 nginx 的 location 配置,同一个 host 可以配置多个 path
      - path: /
        backend:
          serviceName: nginx-svc
          servicePort: 80

Ingress-nginx 速率限制

限速:以下这些注解(annotations)定义了对连接和传输速率的限制,可以用来减轻 DDoS 攻击。

nginx.ingress.kubernetes.io/limit-connections:单个IP地址允许的并发连接数。超出此限制时,将返回503错误。
nginx.ingress.kubernetes.io/limit-rps:每秒从给定IP接受的请求数。突发限制设置为此限制乘以突发乘数,默认乘数为5。当客户端超过此限制时,将 返回limit-req-status-code默认值: 503。
nginx.ingress.kubernetes.io/limit-rpm:每分钟从给定IP接受的请求数。突发限制设置为此限制乘以突发乘数,默认乘数为5。当客户端超过此限制时,将 返回limit-req-status-code默认值: 503。
nginx.ingress.kubernetes.io/limit-burst-multiplier:突发大小限制速率的倍数。默认的脉冲串乘数为5,此注释将覆盖默认的乘数。当客户端超过此限制时,将 返回limit-req-status-code默认值: 503。
nginx.ingress.kubernetes.io/limit-rate-after:最初的千字节数,在此之后,对给定连接的响应的进一步传输将受到速率的限制。必须在启用代理缓冲的情况下使用此功能。
nginx.ingress.kubernetes.io/limit-rate:每秒允许发送到给定连接的千字节数。零值禁用速率限制。必须在启用代理缓冲的情况下使用此功能。
nginx.ingress.kubernetes.io/limit-whitelist:客户端IP源范围要从速率限制中排除。该值是逗号分隔的CIDR列表。
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/limit-rate: 100K
    nginx.ingress.kubernetes.io/limit-whitelist: 10.1.10.100
    nginx.ingress.kubernetes.io/limit-rps: 1
    nginx.ingress.kubernetes.io/limit-rpm: 30
spec:
  rules:
  - host: iphone.coolops.cn 
    http:
      paths:
      - path: 
        backend:
          serviceName: ng-svc
          servicePort: 80



nginx.ingress.kubernetes.io/limit-rate:限制客户端每秒传输的千字节数
nginx.ingress.kubernetes.io/limit-whitelist:白名单中的IP不限速
nginx.ingress.kubernetes.io/limit-rps:单个IP每秒的请求数
nginx.ingress.kubernetes.io/limit-rpm:单个IP每分钟的请求数

Ingress-nginx 的基本认证

[root@k8s-master-1 ~]# htpasswd -b auth admin 1
Adding password for user admin

[root@k8s-master-1 ~]# more auth 
admin:$apr1$EQ60uyPr$sUhdle2tat35V6s61YIM1.

# 创建secret资源存储用户密码
[root@k8s-master-1 ~]# kubectl -n test-ns create secret generic basic-auth --from-file=auth
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/auth-secret: basic-auth
    nginx.ingress.kubernetes.io/auth-type: basic
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: 
        backend:
          serviceName: ng-svc
          servicePort: 80

Ingress-nginx SSL配置

Ingress-nginx配置了SSL,默认会自动跳转到https的网页

禁用https强制跳转

  annotations:
     nginx.ingress.kubernetes.io/ssl-redirect: "false"
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.cert -subj "/CN=test.com/O=test.com"
kubectl create secret tls ca-ceart --key tls.key --cert tls.cert -n test-ns
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    #nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: 
        backend:
          serviceName: ng-svc
          servicePort: 80
  tls:
    - hosts:
        - test.com
      secretName: ca-ceart

Ingress-nginx 自定义错误页面

github地址:https://github.com/kubernetes/ingress-nginx/blob/master/docs/examples/customization/custom-errors/custom-default-backend.yaml

kubectl apply -f custom-default-backend.yaml -n ingress-nginx

修改 ingress-nginx-controller 的 DaemonSet 配置文件,添加以下启动参数

        - --default-backend-service=ingress-nginx/nginx-errors          # ingress-nginx 名称空间, nginx-errors service名字
[root@k8s-master-1 ~]# kubectl get ds -n ingress-nginx     
NAME                       DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                         AGE
ingress-nginx-controller   2         2         2       2            2           ingress=true,kubernetes.io/os=linux   12d

[root@k8s-master-1 ~]# kubectl -n ingress-nginx edit ds ingress-nginx-controller
[root@k8s-master-1 ~]# kubectl get cm -n ingress-nginx
NAME                              DATA   AGE
ingress-controller-leader-nginx   0      12d
ingress-nginx-controller          0      12d


# 修改对应的configmap指定要关联到默认后端服务的服务状态码,意味着如果状态码是配置项中的值,那么返回给客户端浏览器的就是默认后端服务,也可不修改,使用默认
[root@k8s-master-1 ~]# kubectl edit cm  -n ingress-nginx ingress-nginx-controller
apiVersion: v1
data:
  custom-http-errors: 403,404,500,502,503,504 # 添加此行
[root@k8s-master-2 ~]# curl test.com/test.html       
<span>The page you're looking for could not be found.</span>

ingress-nginx 配置多host指向相同后端

spec:
  rules:
  - host: foobar.com
    http: &http_rules
      paths:
      - backend:
          serviceName: foobar
          servicePort: 80
  - host: api.foobar.com
    http: *http_rules
  - host: admin.foobar.com
    http: *http_rules
  - host: status.foobar.com
    http: *http_rules

Ingress-nginx 实现灰度金丝雀发布

https://www.cnblogs.com/heian99/p/14608416.html

https://www.cnblogs.com/ssgeek/p/14149920.html#1%E3%80%81%E9%94%99%E8%AF%AF%E9%A1%B5%E9%9D%A2%E7%8A%B6%E6%80%81%E7%A0%81
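核心思路是在新版本服务的 Ingress 上加 canary 注解,按权重或请求头分流。下面是一个按 10% 权重分流的示意(域名 test.com 与 Service 名 nginx-svc-canary 为假设):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-canary
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-svc-canary
          servicePort: 80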

Ratel 一键 K8s 资源管理

参考: https://github.com/dotbalo/ratel-doc

安装说明

# 集群安装配置需要两类文件: servers.yaml和集群管理的kubeconfig文件
servers.yaml是ratel的配置文件, 格式如下:
- serverName: 'xiqu'
  serverAddress: 'https://1.1.1.1:8443'
  #serverAdminUser: 'xxx'
  #serverAdminPassword: 'xxx#'
  serverAdminToken: 'null'
  serverDashboardUrl: "https://k8s.xxx.com.cn/#"
  production: 'false'
  kubeConfigPath: "/mnt/xxx.config"
  harborConfig: "HarborUrl, HarborUsername, HarborPassword, HarborEmail"
# 其中管理的方式有两种(Token暂不支持); 账号密码和kubeconfig形式, 只需配置一种即可, kubeconfig优先级高

参数解析:

  • serverName: 集群别名

  • serverAddress: Kubernetes APIServer地址

  • serverAdminUser: Kubernetes管理员账号(需要配置basic auth)

  • serverAdminPassword: Kubernetes管理员密码

  • serverAdminToken: Kubernetes管理员Token // 暂不支持

  • serverDashboardUrl: Kubernetes官方dashboard地址,1.x版本需要添加/#!,2.x需要添加/#

  • kubeConfigPath: Kubernetes kube.config路径(绝对路径)

  • harborConfig: 对于多集群管理的情况下,可能会存在不同的 harbor 仓库,配置此参数可以在拷贝资源的时候自动替换 harbor 配置

注意: kubeConfigPath 对应的 kubeconfig 文件需通过 secret 挂载到容器的 /mnt 目录或者其他目录

创建Secret

# 假设配置两个集群,对应的kubeconfig是test1.config和test2.config
# ratel配置文件servers.yaml内容如下:
- serverName: 'test1'
  serverAddress: 'https://1.1.1.1:8443'
  #serverAdminUser: 'xxx'
  #serverAdminPassword: 'xxx#'
  serverAdminToken: 'null'
  serverDashboardUrl: "https://k8s.test1.com.cn/#"
  production: 'false'
  kubeConfigPath: "/mnt/test1.config"
  harborConfig: "HarborUrl, HarborUsername, HarborPassword, HarborEmail"
- serverName: 'test2'
  serverAddress: 'https://1.1.1.2:8443'
  #serverAdminUser: 'xxx'
  #serverAdminPassword: 'xxx#'
  serverAdminToken: 'null'
  serverDashboardUrl: "https://k8s.test2.com.cn/#!"
  production: 'false'
  kubeConfigPath: "/mnt/test2.config"
  harborConfig: "HarborUrl, HarborUsername, HarborPassword, HarborEmail"
    
# 创建Secret: 
kubectl create secret generic ratel-config  --from-file=test1.config --from-file=test2.config --from-file=servers.yaml -n kube-system





# test1.config是 master 的权限配置

cp /root/.kube/config ratel.config

# 我的配置
- serverName: 'ratel'
  serverAddress: 'https://10.10.181.243:6443'
  #serverAdminUser: 'xxx'
  #serverAdminPassword: 'xxx#'
  serverAdminToken: 'null'
  serverDashboardUrl: "http://k8s.ratel.com/#"
  production: 'false'
  kubeConfigPath: "/mnt/ratel.config"

kubectl create secret generic ratel-config  --from-file=ratel.config  --from-file=servers.yaml -n kube-system

创建RBAC

[root@k8s-master-1 ratel]# cat ratel-rbac.yaml

apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    labels:
      kubernetes.io/bootstrapping: rbac-defaults
      rbac.authorization.k8s.io/aggregate-to-edit: "true"
    name: ratel-namespace-readonly
  rules:
  - apiGroups:
    - ""
    resources:
    - namespaces
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - metrics.k8s.io
    resources:
    - pods
    verbs:
    - get
    - list
    - watch
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-pod-delete
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    verbs:
    - get
    - list
    - delete
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-pod-exec
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - pods/log
    verbs:
    - get
    - list
  - apiGroups:
    - ""
    resources:
    - pods/exec
    verbs:
    - create
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    annotations:
      rbac.authorization.kubernetes.io/autoupdate: "true"
    name: ratel-resource-edit
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - persistentvolumeclaims
    - services
    - services/proxy
    verbs:
    - patch
    - update
  - apiGroups:
    - apps
    resources:
    - daemonsets
    - deployments
    - deployments/rollback
    - deployments/scale
    - statefulsets
    - statefulsets/scale
    verbs:
    - patch
    - update
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - patch
    - update
  - apiGroups:
    - batch
    resources:
    - cronjobs
    - jobs
    verbs:
    - patch
    - update
  - apiGroups:
    - extensions
    resources:
    - daemonsets
    - deployments
    - deployments/rollback
    - deployments/scale
    - ingresses
    - networkpolicies
    verbs:
    - patch
    - update
  - apiGroups:
    - networking.k8s.io
    resources:
    - ingresses
    - networkpolicies
    verbs:
    - patch
    - update
- apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: ratel-resource-readonly
  rules:
  - apiGroups:
    - ""
    resources:
    - configmaps
    - endpoints
    - persistentvolumeclaims
    - pods
    - replicationcontrollers
    - replicationcontrollers/scale
    - serviceaccounts
    - services
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - bindings
    - events
    - limitranges
    - namespaces/status
    - pods/log
    - pods/status
    - replicationcontrollers/status
    - resourcequotas
    - resourcequotas/status
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - namespaces
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - apps
    resources:
    - controllerrevisions
    - daemonsets
    - deployments
    - deployments/scale
    - replicasets
    - replicasets/scale
    - statefulsets
    - statefulsets/scale
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - autoscaling
    resources:
    - horizontalpodautoscalers
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - batch
    resources:
    - cronjobs
    - jobs
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - extensions
    resources:
    - daemonsets
    - deployments
    - deployments/scale
    - ingresses
    - networkpolicies
    - replicasets
    - replicasets/scale
    - replicationcontrollers/scale
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - policy
    resources:
    - poddisruptionbudgets
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - networking.k8s.io
    resources:
    - networkpolicies
    - ingresses
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - metrics.k8s.io
    resources:
    - pods
    verbs:
    - get
    - list
    - watch
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ratel-namespace-readonly-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ratel-namespace-readonly
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:kube-users
kubectl create -f ratel-rbac.yaml

部署 ratel

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ratel
  name: ratel
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ratel
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: ratel
    spec:
      containers:
        - command:
            - sh
            - -c
            - ./ratel -c /mnt/servers.yaml
          env:
            - name: TZ
              value: Asia/Shanghai
            - name: LANG
              value: C.UTF-8
            - name: ProRunMode
              value: prod
            - name: ADMIN_USERNAME
              value: admin
            - name: ADMIN_PASSWORD
              value: ratel_password
          image: dotbalo/ratel:latest
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 2
            initialDelaySeconds: 10
            periodSeconds: 60
            successThreshold: 1
            tcpSocket:
              port: 8888
            timeoutSeconds: 2
          name: ratel
          ports:
            - containerPort: 8888
              name: web
              protocol: TCP
          readinessProbe:
            failureThreshold: 2
            initialDelaySeconds: 10
            periodSeconds: 60
            successThreshold: 1
            tcpSocket:
              port: 8888
            timeoutSeconds: 2
          resources:
            limits:
              cpu: 500m
              memory: 512Mi
            requests:
              cpu: 500m
              memory: 512Mi
          volumeMounts:
            - mountPath: /mnt
              name: ratel-config
      dnsPolicy: ClusterFirst
      #imagePullSecrets:
      #  - name: docker-registry
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: ratel-config
          secret:
            defaultMode: 420
            secretName: ratel-config

# 需要更改的内容如下:
    ProRunMode: 区别在于dev模式打印的是debug日志, 其他模式是info级别的日志, 实际使用时应该配置为非dev
    ADMIN_USERNAME: ratel自己的管理员账号
    ADMIN_PASSWORD: ratel自己的管理员密码
    dotbalo/ratel:latest 镜像需要登录 hub.docker.com 才能下载,我这里提前下载好了,所以 imagePullSecrets 注释了
    实际使用时账号密码应满足复杂性要求,因为ratel可以直接操作所有配置的资源。
    其他无需配置, 端口配置暂不支持。

Service和Ingress配置

[root@k8s-master-1 ratel]# cat ratel-ingress.yaml

# 创建ratel Service的文件如下:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ratel
  name: ratel
  namespace: kube-system
spec:
  ports:
    - name: container-1-web-1
      port: 8888
      protocol: TCP
      targetPort: 8888
  selector:
    app: ratel
  type: ClusterIP
---
# 创建ratel Ingress: 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ratel
  namespace: kube-system
spec:
  rules:
  - host: k8s.ratel.com
    http:
      paths:
      - backend:
          serviceName: ratel
          servicePort: 8888
        path: /

访问ratel

注意:如果没有安装 ingress controller,需要把type: ClusterIP改成type: NodePort,然后通过主机IP+Port进行访问

通过 Ingress 配置的 k8s.ratel.com/ratel 访问,即可看到 ratel 登录页。

Ceph

官方网站

https://rook.io/

Rook安装要求(三选一)

  • 原始设备(无分区或格式化文件系统)

  • 原始分区(无格式化文件系统)

  • block模式下存储类可用的 PV

  • rook已经使用csi方式挂载ceph,flex方式已逐渐被淘汰。但是使用csi方式有一个大坑,就是kernel>4.10才能正常使用,否则会报很多奇怪的错误

  • ceph要求3+节点,所以我为了使用master节点,把master的污点取消了。

查看磁盘是否满足要求(新盘无分区、无文件系统)

[root@k8s-node-1 ~]# lsblk -f
NAME                          FSTYPE      LABEL UUID                                   MOUNTPOINT
sda                                                                                    
├─sda1                        xfs               fab93a75-415d-4b6c-a15c-309b2e231581   /boot
└─sda2                        LVM2_member       t3rXjh-g0rk-AgXK-SbLP-xCA3-vTi7-a7ucrD 
  ├─centos_k8s--node--01-root xfs               c76590d9-433e-4e0a-9ea1-27e5c34749a4   /
  └─centos_k8s--node--01-swap swap              9cb5c6cc-39a2-447a-9778-95838d3e1c05   
sdb     <新添加的硬盘>                                                                               
sr0 

Ceph部署

当前版本 v1.6.7

git clone --single-branch --branch v1.6.7 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f crds.yaml -f common.yaml -f operator.yaml
kubectl create -f cluster.yaml

查看

[root@k8s-master-1 ceph]# kubectl -n rook-ceph get pod
NAME                                                     READY   STATUS      RESTARTS   AGE
csi-cephfsplugin-999f5                                   3/3     Running     0          83m
csi-cephfsplugin-dpvvg                                   3/3     Running     0          80m
csi-cephfsplugin-fps9f                                   3/3     Running     0          83m
csi-cephfsplugin-provisioner-54bf4f5679-cdf95            6/6     Running     12         83m
csi-cephfsplugin-provisioner-54bf4f5679-pddkb            6/6     Running     0          83m
csi-rbdplugin-857w4                                      3/3     Running     0          83m
csi-rbdplugin-npdsk                                      3/3     Running     0          83m
csi-rbdplugin-provisioner-85c58fcfb4-2kqjf               6/6     Running     11         83m
csi-rbdplugin-provisioner-85c58fcfb4-8p9ss               6/6     Running     0          83m
csi-rbdplugin-th4pl                                      3/3     Running     0          80m
rook-ceph-crashcollector-k8s-master-1-74cd47fc5b-bj5tl   1/1     Running     0          61m
rook-ceph-crashcollector-k8s-node-1-5bf5bfffc4-xnqsb     1/1     Running     0          61m
rook-ceph-crashcollector-k8s-node-2-58b7675749-wp5nd     1/1     Running     0          61m
rook-ceph-mgr-a-68fd8f9f5f-fjdx6                         1/1     Running     0          61m
rook-ceph-mon-a-86b85bf688-vq8xk                         1/1     Running     0          80m
rook-ceph-mon-b-545687f449-qgz4v                         1/1     Running     0          62m
rook-ceph-mon-c-65874c6cb6-89t5t                         1/1     Running     0          62m
rook-ceph-operator-87fb6f5f4-zbsp5                       1/1     Running     0          110m
rook-ceph-osd-0-674f4bd6cd-wg4cp                         1/1     Running     0          61m
rook-ceph-osd-1-5f95df8c8d-vfzm2                         1/1     Running     0          61m
rook-ceph-osd-2-6dc765f868-4h4qb                         1/1     Running     0          61m
rook-ceph-osd-prepare-k8s-master-1-nqkbl                 0/1     Completed   0          60m
rook-ceph-osd-prepare-k8s-node-1-bttkh                   0/1     Completed   0          60m
rook-ceph-osd-prepare-k8s-node-2-mvcwk                   0/1     Completed   0          60m

Rook 工具箱

要验证集群是否处于健康状态,需使用Rook 工具箱

[root@k8s-master-1 ceph]# kubectl create -f toolbox.yaml 
deployment.apps/rook-ceph-tools created

[root@k8s-master-1 ceph]# kubectl get po -n rook-ceph -l app=rook-ceph-tools 
NAME                              READY   STATUS    RESTARTS   AGE
rook-ceph-tools-5b5bfc786-gvlzk   1/1     Running   0          4m22s

Ceph验证健康

[root@k8s-master-1 ceph]# kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash
[root@rook-ceph-tools-5b5bfc786-gvlzk /]# ceph status
  cluster:
    id:     1c155f76-174a-4a31-bdcf-537aaccb121e
    health: HEALTH_OK
 
  services:
    mon: 3 daemons, quorum a,b,c (age 66m)
    mgr: a(active, since 64m)
    osd: 3 osds: 3 up (since 65m), 3 in (since 65m)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 0 objects, 0 B
    usage:   3.0 GiB used, 45 GiB / 48 GiB avail
    pgs:     1 active+clean
 

创建块存储

[root@k8s-master-1 rbd]# pwd
/root/rook/cluster/examples/kubernetes/ceph/csi/rbd

[root@k8s-master-1 rbd]# kubectl create -f storageclass.yaml   # 默认回收策略 Delete
cephblockpool.ceph.rook.io/replicapool created
storageclass.storage.k8s.io/rook-ceph-block created

[root@k8s-master-1 rbd]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   2d

[root@k8s-master-1 rbd]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   5s
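有了 StorageClass 之后,就可以通过 PVC 动态申请块存储,下面是一个示意(PVC 名称与容量为假设):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-rbd-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block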

测试wordpress

[root@k8s-master-1 kubernetes]# pwd
/root/rook/cluster/examples/kubernetes


# 修改 wordpress 访问方式,默认 LoadBalancer, 若有 ingress, 修改为 type: ClusterIP, 若无 ingress,修改为 type: NodePort
# 我本地环境已有 ingress-nginx, 所以使用 type: ClusterIP
vi wordpress.yaml
'''
  type: ClusterIP
'''

# 自己创建文件添加: ingress.yaml
vi ingress.yaml
'''
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: wordpress
spec:
  rules:
  - host: k8s.wordpress.com
    http:
      paths:
      - backend:
          serviceName: wordpress
          servicePort: 80
        path: /
'''


kubectl create -f wordpress.yaml 
kubectl create -f mysql.yaml
kubectl create -f ingress.yaml 


[root@k8s-master-1 kubernetes]# ls
cassandra  ceph  ingress.yaml  mysql.yaml  nfs  README.md  wordpress.yaml

修改本机 hosts 文件做域名解析,然后访问域名: k8s.wordpress.com
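和前面 ngdemo 的做法一样,把域名解析到部署了 ingress controller 的 node IP 即可(IP 为示意):

echo "10.10.181.241 k8s.wordpress.com" >> /etc/hosts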

operator 部署 redis-cluster 集群

参考: https://github.com/ucloud/redis-cluster-operator

Helm 部署 Redis-cluster

安装 redis-cluster-operator

helm repo add ucloud-operator https://ucloud.github.io/redis-cluster-operator/
helm repo update

helm install --generate-name ucloud-operator/redis-cluster-operator
# helm pull ucloud-operator/redis-cluster-operator


[root@k8s-master-1 redis-cluster-operator]# kubectl get deploy -n redis-ns
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
redis-cluster-operator   1/1     1            1           138m

查看存储

[root@k8s-master-1 redis-cluster-operator]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   23h

部署 redis 持久化存储,动态自动分配

[root@k8s-master-1 redis-cluster-operator]# more deploy/example/persistent.yaml 
apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  annotations:
    # if your operator run as cluster-scoped, add this annotations
    redis.kun/scope: cluster-scoped
  name: example-distributedrediscluster
spec:
  image: redis:5.0.4-alpine
  # master节点数量
  masterSize: 3
  # 每个master节点的从节点数量
  clusterReplicas: 1
  storage:
    type: persistent-claim
    size: 1Gi
    # class: csi-rbd-sc
    class: rook-ceph-block
    deleteClaim: true

查看部署

[root@k8s-master-1 redis-cluster-operator]# kubectl get po,pvc,svc -n redis-ns
NAME                                          READY   STATUS    RESTARTS   AGE
pod/drc-example-distributedrediscluster-0-0   1/1     Running   0          45m
pod/drc-example-distributedrediscluster-0-1   1/1     Running   0          44m
pod/drc-example-distributedrediscluster-1-0   1/1     Running   0          45m
pod/drc-example-distributedrediscluster-1-1   1/1     Running   0          44m
pod/drc-example-distributedrediscluster-2-0   1/1     Running   0          45m
pod/drc-example-distributedrediscluster-2-1   1/1     Running   0          44m
pod/redis-cluster-operator-76596577bd-j8zth   1/1     Running   0          122m

NAME                                                                       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/redis-data-drc-example-distributedrediscluster-0-0   Bound    pvc-888553ae-11e1-492a-829c-097bda010a5c   1Gi        RWO            rook-ceph-block   45m
persistentvolumeclaim/redis-data-drc-example-distributedrediscluster-0-1   Bound    pvc-9b4ba86d-f52f-47e4-998f-95c7e51b008d   1Gi        RWO            rook-ceph-block   44m
persistentvolumeclaim/redis-data-drc-example-distributedrediscluster-1-0   Bound    pvc-3b9ccdd3-39f3-4117-97ec-4f48ad62ac35   1Gi        RWO            rook-ceph-block   45m
persistentvolumeclaim/redis-data-drc-example-distributedrediscluster-1-1   Bound    pvc-b72a6905-e9f2-4ed7-ad0c-e47c6602774b   1Gi        RWO            rook-ceph-block   44m
persistentvolumeclaim/redis-data-drc-example-distributedrediscluster-2-0   Bound    pvc-48257418-5770-4d73-8516-3392e2ad7725   1Gi        RWO            rook-ceph-block   45m
persistentvolumeclaim/redis-data-drc-example-distributedrediscluster-2-1   Bound    pvc-2a6049cb-95c3-493e-b9d8-b055f5908f3e   1Gi        RWO            rook-ceph-block   44m

NAME                                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)              AGE
service/example-distributedrediscluster     ClusterIP   10.105.230.155   <none>        6379/TCP,16379/TCP   45m
service/example-distributedrediscluster-0   ClusterIP   None             <none>        6379/TCP,16379/TCP   45m
service/example-distributedrediscluster-1   ClusterIP   None             <none>        6379/TCP,16379/TCP   45m
service/example-distributedrediscluster-2   ClusterIP   None             <none>        6379/TCP,16379/TCP   45m
service/redis-cluster-operator-metrics      ClusterIP   10.109.73.251    <none>        8383/TCP,8686/TCP    117m

测试数据

进入 pod 写入一条测试数据,并模拟关闭一个 redis 节点服务器

> service/example-distributedrediscluster   是 redis 入口

[root@k8s-master-1 redis-cluster-operator]# kubectl exec -it pod/drc-example-distributedrediscluster-0-0 -n redis-ns -- /bin/sh
/data # redis-cli -c -h example-distributedrediscluster
example-distributedrediscluster:6379> CLUSTER info
cluster_state:ok
cluster_slots_assigned:16384
cluster_slots_ok:16384
cluster_slots_pfail:0
cluster_slots_fail:0
cluster_known_nodes:6
cluster_size:3
cluster_current_epoch:4
cluster_my_epoch:1
cluster_stats_messages_ping_sent:2454
cluster_stats_messages_pong_sent:2419
cluster_stats_messages_meet_sent:2
cluster_stats_messages_sent:4875
cluster_stats_messages_ping_received:2415
cluster_stats_messages_pong_received:2456
cluster_stats_messages_meet_received:4
cluster_stats_messages_received:4875
example-distributedrediscluster:6379> set age 20
-> Redirected to slot [741] located at 10.244.0.25:6379
OK
10.244.0.25:6379> exit
/data # exit

进入另一个 pod ,查看是否存在

[root@k8s-master-1 redis-cluster-operator]# kubectl exec -it pod/drc-example-distributedrediscluster-0-1 -n redis-ns -- /bin/sh
/data # redis-cli -c -h example-distributedrediscluster
example-distributedrediscluster:6379> get age
-> Redirected to slot [741] located at 10.244.0.25:6379
"20"

扩容缩容

扩容

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  annotations:
    # if your operator run as cluster-scoped, add this annotations
    redis.kun/scope: cluster-scoped
  name: example-distributedrediscluster
spec:
  # 增加 masterSize 以触发放大。
  masterSize: 4
  clusterReplicas: 1
  image: redis:5.0.4-alpine

缩容

apiVersion: redis.kun/v1alpha1
kind: DistributedRedisCluster
metadata:
  annotations:
    # if your operator run as cluster-scoped, add this annotations
    redis.kun/scope: cluster-scoped
  name: example-distributedrediscluster
spec:
  # 减小 masterSize 以触发缩小
  masterSize: 3
  clusterReplicas: 1
  image: redis:5.0.4-alpine
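修改 masterSize 后重新 apply 这份 CR 即可触发扩缩容(文件名为示意,替换成保存上面 CR 的文件):

kubectl apply -f redis-cluster.yaml -n redis-ns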

Helm 部署 RabbitMQ

添加仓库

[root@k8s-master-1 ~]# helm repo add aliyuncs https://apphub.aliyuncs.com
"aliyuncs" has been added to your repositories

[root@k8s-master-1 ~]# helm repo update

[root@k8s-master-1 ~]# helm repo list
NAME            URL                                                   
ingress-nginx   https://kubernetes.github.io/ingress-nginx            
ucloud-operator https://ucloud.github.io/redis-cluster-operator/      
aliyuncs        https://apphub.aliyuncs.com       < RabbitMQ 需要使用这个仓库>

查看RabbitMQ版本

[root@k8s-master-1 ~]# helm search repo rabbitmq-ha          
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
aliyuncs/rabbitmq-ha    1.39.0          3.8.0           Highly available RabbitMQ cluster, the open sou...

# 若需要安装指定版本
[root@k8s-master-1 ~]# helm search repo rabbitmq-ha --versions
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
aliyuncs/rabbitmq-ha    1.39.0          3.8.0           Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.38.2          3.8.0           Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.38.1          3.8.0           Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.36.4          3.8.0           Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.36.3          3.8.0           Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.36.0          3.8.0           Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.34.1          3.7.19          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.34.0          3.7.19          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.33.0          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.32.4          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.32.3          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.32.2          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.32.0          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.31.0          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.30.0          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.29.1          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.29.0          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.28.0          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.27.2          3.7.15          Highly available RabbitMQ cluster, the open sou...
aliyuncs/rabbitmq-ha    1.27.1          3.7.12          Highly available RabbitMQ cluster, the open sou...

安装指定 RabbitMQ 版本

[root@k8s-master-1 ~]# helm pull aliyuncs/rabbitmq-ha --version=1.39.0
[root@k8s-master-1 ~]# ls
rabbitmq-ha-1.39.0.tgz
[root@k8s-master-1 ~]# tar fx rabbitmq-ha-1.39.0.tgz
[root@k8s-master-1 ~]# cd rabbitmq-ha && tree
.
├── Chart.yaml                   # 这个chart的版本信息
├── ci
│   ├── prometheus-exporter-values.yaml
│   └── prometheus-plugin-values.yaml
├── OWNERS
├── README.md
├── templates                    # 模板
│   ├── alerts.yaml
│   ├── configmap.yaml
│   ├── _helpers.tpl             # 自定义的模板或者函数
│   ├── ingress.yaml
│   ├── NOTES.txt                # 这个chart的信息
│   ├── pdb.yaml
│   ├── rolebinding.yaml
│   ├── role.yaml
│   ├── secret.yaml
│   ├── serviceaccount.yaml
│   ├── service-discovery.yaml
│   ├── servicemonitor.yaml
│   ├── service.yaml
│   └── statefulset.yaml
└── values.yaml                  # 配置全局变量或者一些参数

2 directories, 20 files
[root@k8s-master-1 rabbitmq-ha]# kubectl create ns rabbitmq-cluster

#[root@k8s-master-1 rabbitmq-ha]# helm create rabbitmq-cluster

# 开启 ingress,指定域名,指定用户,指定密码
[root@k8s-master-1 rabbitmq-ha]# helm install rabbitmq --namespace rabbitmq-cluster \
--set ingress.enabled=true,ingress.hostName=rabbitmq.akiraka.net \
--set rabbitmqUsername=aka,rabbitmqPassword=rabbitmq,managementPassword=rabbitmq,rabbitmqErlangCookie=secretcookie .


'''
	NAME: rabbitmq
	LAST DEPLOYED: Fri Jul 23 16:49:58 2021
	NAMESPACE: rabbitmq-cluster
	STATUS: deployed
	REVISION: 1
	TEST SUITE: None
	NOTES:
	** Please be patient while the chart is being deployed **

	  Credentials:

	    Username            : aka
	    Password            : $(kubectl get secret --namespace rabbitmq-cluster rabbitmq-rabbitmq-ha -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
	    Management username : management
	    Management password : $(kubectl get secret --namespace rabbitmq-cluster rabbitmq-rabbitmq-ha -o jsonpath="{.data.rabbitmq-management-password}" | base64 --decode)
	    ErLang Cookie       : $(kubectl get secret --namespace rabbitmq-cluster rabbitmq-rabbitmq-ha -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)

	  RabbitMQ can be accessed within the cluster on port 5672 at rabbitmq-rabbitmq-ha.rabbitmq-cluster.svc.cluster.local

	  To access the cluster externally execute the following commands:

	    export POD_NAME=$(kubectl get pods --namespace rabbitmq-cluster -l "app=rabbitmq-ha" -o jsonpath="{.items[0].metadata.name}")
	    kubectl port-forward $POD_NAME --namespace rabbitmq-cluster 5672:5672 15672:15672

	  To Access the RabbitMQ AMQP port:

	    amqp://127.0.0.1:5672/ 

	  To Access the RabbitMQ Management interface:

	    URL : http://127.0.0.1:15672
'''

[root@k8s-master-1 rabbitmq-ha]# kubectl get svc,pod,ingress -n rabbitmq-cluster
NAME                                     TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                       AGE
service/rabbitmq-rabbitmq-ha             ClusterIP   10.110.25.210   <none>        15672/TCP,5672/TCP,4369/TCP   151m
service/rabbitmq-rabbitmq-ha-discovery   ClusterIP   None            <none>        15672/TCP,5672/TCP,4369/TCP   151m

NAME                         READY   STATUS    RESTARTS   AGE
pod/rabbitmq-rabbitmq-ha-0   1/1     Running   0          151m
pod/rabbitmq-rabbitmq-ha-1   1/1     Running   0          151m
pod/rabbitmq-rabbitmq-ha-2   1/1     Running   0          142m

NAME                                      CLASS    HOSTS                  ADDRESS        PORTS   AGE
ingress.extensions/rabbitmq-rabbitmq-ha   <none>   rabbitmq.akiraka.net   10.96.55.190   80      151m

浏览器访问

卸载保留历史记录

[root@k8s-master-1 rabbitmq-ha]# helm  uninstall rabbitmq -n rabbitmq-cluster --keep-history
release "rabbitmq" uninstalled

[root@k8s-master-1 rabbitmq-ha]# helm list
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

[root@k8s-master-1 rabbitmq-ha]# helm  ls
NAME    NAMESPACE       REVISION        UPDATED STATUS  CHART   APP VERSION

[root@k8s-master-1 rabbitmq-ha]# helm status rabbitmq -n rabbitmq-cluster
NAME: rabbitmq
LAST DEPLOYED: Fri Jul 23 16:49:58 2021
NAMESPACE: rabbitmq-cluster
STATUS: uninstalled
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

  Credentials:

    Username            : aka
    Password            : $(kubectl get secret --namespace rabbitmq-cluster rabbitmq-rabbitmq-ha -o jsonpath="{.data.rabbitmq-password}" | base64 --decode)
    Management username : management
    Management password : $(kubectl get secret --namespace rabbitmq-cluster rabbitmq-rabbitmq-ha -o jsonpath="{.data.rabbitmq-management-password}" | base64 --decode)
    ErLang Cookie       : $(kubectl get secret --namespace rabbitmq-cluster rabbitmq-rabbitmq-ha -o jsonpath="{.data.rabbitmq-erlang-cookie}" | base64 --decode)

  RabbitMQ can be accessed within the cluster on port 5672 at rabbitmq-rabbitmq-ha.rabbitmq-cluster.svc.cluster.local

  To access the cluster externally execute the following commands:

    export POD_NAME=$(kubectl get pods --namespace rabbitmq-cluster -l "app=rabbitmq-ha" -o jsonpath="{.items[0].metadata.name}")
    kubectl port-forward $POD_NAME --namespace rabbitmq-cluster 5672:5672 15672:15672

  To Access the RabbitMQ AMQP port:

    amqp://127.0.0.1:5672/ 

  To Access the RabbitMQ Management interface:

    URL : http://127.0.0.1:15672

升级

# 升级(改完yaml文件之后重新应用)
[root@k8s-master-1 rabbitmq-ha]# helm upgrade rabbitmq .

扩容

[root@k8s-master-1 rabbitmq-ha]# helm upgrade rabbitmq --set replicas=3 .

删除

[root@k8s-master-1 rabbitmq-ha]# helm delete rabbitmq

模拟测试

# 测试自己模板是否正常
[root@k8s-master-1 rabbitmq-ha]# helm install --dry-run rabbitmq .

Helm 部署 Zookeeper + Kafka

参考: https://docs.bitnami.com/tutorials/deploy-scalable-kafka-zookeeper-cluster-kubernetes/

准备工作

添加仓库

[root@k8s-master-1 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

[root@k8s-master-1 ~]# helm repo list
NAME            URL                                                 
bitnami         https://charts.bitnami.com/bitnami        

动态PV (Rook-ceph)

[root@k8s-master-1 zk]# kubectl get sc
NAME              PROVISIONER                  RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
rook-ceph-block   rook-ceph.rbd.csi.ceph.com   Delete          Immediate           true                   2d20h

选择版本

> --versions   查看详细版本
[root@k8s-master-1 zk]# helm search repo zookeeper
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/zookeeper               7.0.9           3.7.0           A centralized service for maintaining configura...
bitnami/kafka                   13.0.3          2.8.0           Apache Kafka is a distributed streaming platform. 

下载helm包到本地,在线安装可不用下载

# 若其他版本 --version=x.x.x
[root@k8s-master-1 zk]# helm pull bitnami/zookeeper
[root@k8s-master-1 zk]# helm pull bitnami/kafka
[root@k8s-master-1 zk]# ls
kafka-13.0.3.tgz  zookeeper-7.0.9.tgz

安装zookeeper + kafka

安装方法参考bitnami官网:https://docs.bitnami.com/tutorials/deploy-scalable-kafka-zookeeper-cluster-kubernetes/

在线安装

# 安装zookeeper,可通过 -n namespace 添加名称空间,因不暴露在公网,关闭了认证(--set auth.enabled=false),并允许匿名访问,设置zookeeper副本为3
helm install zookeeper bitnami/zookeeper  \
--set replicaCount=3   \
--set auth.enabled=false   \
--set allowAnonymousLogin=true

# 安装kafka,取消自动创建zookeeper,使用刚刚创建的zookeeper,指定zookeeper的服务名称
helm install kafka bitnami/kafka   \
--set zookeeper.enabled=false   \
--set replicaCount=3  \
--set externalZookeeper.servers=zookeeper

kafka3.x 版本在线安装

这里有个坑,kafka 最新版本可能不兼容zookeeper,一定要使用kafka与zookeeper对应的版本

# 搜索仓库
[root@node1 dynamic]# helm search repo zookeeper
NAME                            CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/zookeeper               12.3.3          3.9.1           Apache ZooKeeper provides a reliable, centraliz...
bitnami/dataplatform-bp2        12.0.5          1.0.1           DEPRECATED This Helm chart can be used for the ...
bitnami/kafka                   26.4.3          3.6.0           Apache Kafka is a distributed streaming platfor...
bitnami/schema-registry         16.2.4          7.5.2           Confluent Schema Registry provides a RESTful in...
bitnami/solr                    8.3.2           9.4.0           Apache Solr is an extremely powerful, open sour...

# 搜索指定仓库里的历史版本
[root@node1 dynamic]# helm search repo bitnami/zookeeper -l |head -n 3
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                                       
bitnami/zookeeper       12.3.3          3.9.1           Apache ZooKeeper provides a reliable, centraliz...
bitnami/zookeeper       12.3.2          3.9.1           Apache ZooKeeper provides a reliable, centraliz...


helm install zookeeper bitnami/zookeeper --version 12.3.3   \
--set replicaCount=1   \
--set auth.enabled=false   \
--set allowAnonymousLogin=true -n younamespace



helm install kafka bitnami/kafka --version 17.2.3  \
--set replicaCount=3 \
--set zookeeper.enabled=false \
--set externalZookeeper.servers=zookeeper -n younamespace

离线安装

[root@k8s-master-1 zk]# tar fx kafka-13.0.3.tgz 
[root@k8s-master-1 zk]# tar fx zookeeper-7.0.9.tgz 

zookeeper安装

[root@k8s-master-1 zk]# helm install zookeeper -n zk --set replicaCount=3  --set auth.enabled=false --set allowAnonymousLogin=true ./zookeeper
NAME: zookeeper
LAST DEPLOYED: Sat Jul 24 15:32:13 2021
NAMESPACE: zk
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

ZooKeeper can be accessed via port 2181 on the following DNS name from within your cluster:

    zookeeper.zk.svc.cluster.local

To connect to your ZooKeeper server run the following commands:

    export POD_NAME=$(kubectl get pods --namespace zk -l "app.kubernetes.io/name=zookeeper,app.kubernetes.io/instance=zookeeper,app.kubernetes.io/component=zookeeper" -o jsonpath="{.items[0].metadata.name}")
    kubectl exec -it $POD_NAME -- zkCli.sh

To connect to your ZooKeeper server from outside the cluster execute the following commands:

    kubectl port-forward --namespace zk svc/zookeeper 2181:2181 &
    zkCli.sh 127.0.0.1:2181

kafka安装

[root@k8s-master-1 zk]# helm install kafka -n zk --set zookeeper.enabled=false --set replicaCount=3  --set externalZookeeper.servers=zookeeper ./kafka       
NAME: kafka
LAST DEPLOYED: Sat Jul 24 15:32:57 2021
NAMESPACE: zk
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Kafka can be accessed by consumers via port 9092 on the following DNS name from within your cluster:

    kafka.zk.svc.cluster.local

Each Kafka broker can be accessed by producers via port 9092 on the following DNS name(s) from within your cluster:

    kafka-0.kafka-headless.zk.svc.cluster.local:9092
    kafka-1.kafka-headless.zk.svc.cluster.local:9092
    kafka-2.kafka-headless.zk.svc.cluster.local:9092

To create a pod that you can use as a Kafka client run the following commands:

    kubectl run kafka-client --restart='Never' --image docker.io/bitnami/kafka:2.8.0-debian-10-r43 --namespace zk --command -- sleep infinity
    kubectl exec --tty -i kafka-client --namespace zk -- bash

    PRODUCER:
        kafka-console-producer.sh \
            
            --broker-list kafka-0.kafka-headless.zk.svc.cluster.local:9092,kafka-1.kafka-headless.zk.svc.cluster.local:9092,kafka-2.kafka-headless.zk.svc.cluster.local:9092 \
            --topic test

    CONSUMER:
        kafka-console-consumer.sh \
            
            --bootstrap-server kafka.zk.svc.cluster.local:9092 \
            --topic test \
            --from-beginning

检查

[root@k8s-master-1 zk]# kubectl logs -f -n zk kafka-2
'''
	....
	[2021-07-24 07:46:54,934] INFO Opening socket connection to server zookeeper/10.111.58.56:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn)
	[2021-07-24 07:46:54,942] INFO Socket connection established, initiating session, client: /10.244.0.31:51900, server: zookeeper/10.111.58.56:2181 (org.apache.zookeeper.ClientCnxn)
	[2021-07-24 07:46:55,004] INFO Session establishment complete on server zookeeper/10.111.58.56:2181, sessionid = 0x3000a3dba9d0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn)
	....
'''

测试集群

# 创建 topic
[root@k8s-master-1 zk]# kubectl exec -it -n zk kafka-0 bash
I have no name!@kafka-0:/$ kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 --partitions 1 --topic mytopics
Created topic mytopics.

# 启动消费者
I have no name!@kafka-0:/$ kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic mytopics 

新开一个窗口,进入kafka的pod,启动一个生产者,输入消息;在消费者端可以收到消息

[root@k8s-master-1 ~]# kubectl exec -it -n zk kafka-2 bash
I have no name!@kafka-2:/$ kafka-console-producer.sh --bootstrap-server kafka:9092 --topic mytopics
>name admin
>age 20
>my name is admin and age 20 old years

查看启动消费者窗口

I have no name!@kafka-0:/$ kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic mytopics 
name admin
age 20
my name is admin and age 20 old years
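
补充:还可以查看 topic 的分区与副本分布,确认各个 broker 都正常参与工作(示例命令,在任意 kafka pod 内执行,topic 名以实际创建的为准):

# 查看 topic 详情,可以看到每个分区的 Leader/Replicas/Isr
kafka-topics.sh --describe --bootstrap-server kafka:9092 --topic mytopics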

卸载应用

[root@k8s-master-1 zk]# helm uninstall kafka -n zk   
release "kafka" uninstalled

[root@k8s-master-1 zk]# helm uninstall zookeeper -n zk      
release "zookeeper" uninstalled

[root@k8s-master-1 zk]# kubectl delete pvc,pv -n zk --all  # 尽量不要使用 --all
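
更稳妥的做法是先列出 PVC,再按名字逐个删除(下面的 PVC 名字是按 bitnami chart 的默认命名推测的示例,以 kubectl get pvc 的实际输出为准):

kubectl get pvc -n zk
kubectl delete pvc data-kafka-0 data-kafka-1 data-kafka-2 -n zk
kubectl delete pvc data-zookeeper-0 data-zookeeper-1 data-zookeeper-2 -n zk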

[root@k8s-master-1 ~]# kubectl delete ns zk
namespace "zk" deleted

扩容|缩容

扩容 Apache Kafka 到 7 个节点;缩容同理,调小 replicaCount 即可

helm upgrade kafka bitnami/kafka \
  --set zookeeper.enabled=false \
  --set replicaCount=7 \
  --set externalZookeeper.servers=ZOOKEEPER-SERVICE-NAME

扩容 Apache ZooKeeper 到 5 个节点;缩容同理,调小 replicaCount 即可

helm upgrade zookeeper bitnami/zookeeper \
  --set replicaCount=5 \
  --set auth.enabled=false \
  --set allowAnonymousLogin=true
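
扩缩容之后可以用下面的命令观察副本逐个变为 Ready(标签选择器按 bitnami chart 的惯例推测,命名空间按实际填写):

kubectl get po -n <namespace> -l app.kubernetes.io/name=kafka -w
kubectl get po -n <namespace> -l app.kubernetes.io/name=zookeeper -w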

Helm 部署 ELK

前提条件

  • Helm

  • Persistent Volumes (需要存在默认的动态存储,因为 ES chart 里没有指定 storageClassName;检查和设置默认 StorageClass 的方法见下方示例)
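
下面是检查/设置默认 StorageClass 的示例,rook-ceph-block 只是假设的名字,换成自己环境里的 sc:

kubectl get storageclass
# 把某个 sc 标记为默认
kubectl patch storageclass rook-ceph-block -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'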

安装ES

[root@k8s-master-1 ~]# helm repo add elastic https://helm.elastic.co
[root@k8s-master-1 ~]# helm install elasticsearch elastic/elasticsearch -n elk    # 与后文一致,安装到 elk 命名空间

如果没有指定默认存储的话,下载修改


# [root@k8s-master-1 ~]# helm pull elastic/elasticsearch   # 默认最新版,其他版本 --versions 查看
# [root@k8s-master-1 ~]# tar fx elasticsearch-7.13.4.tgz
# [root@k8s-master-1 ~]# vim elasticsearch/values.yaml 
# volumeClaimTemplate:
#   storageClassName: "rook-ceph-block" # 修改为你的 sc 名字
#   accessModes: [ "ReadWriteOnce" ]
#   resources:
#     requests:
#       storage: 30Gi                     # 存储大小自定义

# [root@k8s-master-1 ~]#  helm install  elasticsearch elastic/elasticsearch -f elasticsearch/values.yaml 

安装 kibana

[root@k8s-master-1 ~]# helm install kibana elastic/kibana -n elk
NAME: kibana
LAST DEPLOYED: Tue Jul 27 18:37:55 2021
NAMESPACE: elk
STATUS: deployed
REVISION: 1
TEST SUITE: None

[root@k8s-master-1 ~]# kubectl get po -l app=kibana -n elk
NAME                             READY   STATUS    RESTARTS   AGE
kibana-kibana-6f6f4f475d-c9ccl   1/1     Running   0          32s


# 查看容器内的配置
[root@k8s-master-1 elk]# kubectl -n elk  exec kibana-kibana-6f6f4f475d-c9ccl -c kibana -- cat /usr/share/kibana/config/kibana.yml 
#
# ** THIS IS AN AUTO-GENERATED FILE **
#

# Default Kibana configuration for docker target
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
monitoring.ui.container.elasticsearch.enabled: true

配置ingress

也可以把 chart 下载下来,修改 values.yaml 开启 ingress(参考下面的片段);或者像后面那样手动创建 Ingress
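
values.yaml 里 ingress 部分大致长这样(不同 chart 版本字段略有差异,仅作参考,以实际下载的 values.yaml 为准):

ingress:
  enabled: true
  hosts:
    - host: kibana.com
      paths:
        - path: /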

[root@k8s-master-1 ~]# more kibana-ingress.yaml 
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
spec:
  rules:
  - host: kibana.com
    http:
      paths:
      - backend:
          serviceName: kibana-kibana
          servicePort: 5601
        path: /

[root@k8s-master-1 ~]# kubectl apply -f kibana-ingress.yaml -n elk
ingress.extensions/kibana configured

安装filebeat

[root@k8s-master-1 ~]# helm install filebeat elastic/filebeat -n elk
[root@k8s-master-1 ~]# kubectl get po,pvc -n elk  
NAME                                 READY   STATUS    RESTARTS   AGE
pod/elasticsearch-master-0           1/1     Running   0          16m
pod/elasticsearch-master-1           1/1     Running   0          16m
pod/elasticsearch-master-2           1/1     Running   0          16m
pod/filebeat-filebeat-gmqtl          1/1     Running   1          3m32s
pod/filebeat-filebeat-s4db2          1/1     Running   1          3m32s
pod/filebeat-filebeat-sqhvs          1/1     Running   1          3m32s
pod/kibana-kibana-6f6f4f475d-c9ccl   1/1     Running   0          13m

NAME                                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      AGE
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0   Bound    pvc-d3c5e6d6-99fc-418a-8aed-19e9fed0ff11   20Gi       RWO            rook-ceph-block   16m
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1   Bound    pvc-520a9805-afaa-4dad-b160-137efb40a001   20Gi       RWO            rook-ceph-block   16m
persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2   Bound    pvc-7b5f4534-db4b-4158-a384-36d1a519abe7   20Gi       RWO            rook-ceph-block   16m
[root@k8s-master-1 elk]# 

Helm 自定义构建 Chart

1. 创建 Helm-chart

[root@k8s-master-1 ~]# helm create mychart
Creating mychart
[root@k8s-master-1 ~]# cd mychart/
[root@k8s-master-1 mychart]# tree
.
├── charts
├── Chart.yaml                     # 存放mychart 应用描述信息
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml                     # 存放应用部署信息<变量>

3 directories, 10 files

2.编写 mychart 应用描述信息

[root@k8s-master-1 mychart]# sed -i -e '/^#.*/d' Chart.yaml    
[root@k8s-master-1 mychart]# more Chart.yaml                  
apiVersion: v2
name: mychart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: 1.16.0

3.编写应用部署信息<变量>

[root@k8s-master-1 mychart]# vi  values.yaml
''' 这里只修改了镜像仓库和标签,其他没动
image:
  repository: myapp
  tag: "v1"
'''

4.检查依赖和模板配置是否正确

[root@k8s-master-1 mychart]# helm lint .
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
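
lint 通过后,还可以把模板渲染出来人工检查,或者模拟一次安装,两者都不会真正部署:

# 渲染模板,输出最终会提交给 K8s 的资源
helm template .
# 模拟安装并输出调试信息
helm install mychart . --dry-run --debug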

Helm 的 Harbor 仓库

  • harbor 安装的时候默认没有启用 helm charts 仓库

  • harbor 版本 v1.9.3, helm 版本 3.3.4

# harbor 是单独部署的,带上 chartmuseum 参数重新执行安装脚本即可
docker-compose stop
./install.sh  --with-chartmuseum

安装完成之后,登录 Harbor 页面就能看到 Helm Charts 仓库了,页面上也可以直接上传 charts

1、安装插件

  • 在线安装

[root@k8s-master-1 ~]# helm plugin install https://github.com/chartmuseum/helm-push
[root@k8s-master-1 ~]# helm plugin list
NAME    VERSION DESCRIPTION                      
push    0.9.0   Push chart package to ChartMuseum

  • 离线安装

[root@k8s-master-1 ~]# helm env		# 获取插件目录
HELM_BIN="helm"
HELM_DEBUG="false"
HELM_KUBEAPISERVER=""
HELM_KUBECONTEXT=""
HELM_KUBETOKEN=""
HELM_NAMESPACE="default"
HELM_PLUGINS="/root/.local/share/helm/plugins"		#插件目录
HELM_REGISTRY_CONFIG="/root/.config/helm/registry.json"
HELM_REPOSITORY_CACHE="/root/.cache/helm/repository"
HELM_REPOSITORY_CONFIG="/root/.config/helm/repositories.yaml"
 
[root@k8s-master-1 ~]# mkdir -p /root/.local/share/helm/plugins/helm-push				#创建插件目录
[root@k8s-master-1 ~]# tar zxf helm-push_0.9.0_linux_amd64.tar.gz -C /root/.local/share/helm/plugins/helm-push
[root@k8s-master-1 ~]# cd /root/.local/share/helm/plugins/helm-push
[root@k8s-master-1 helm-push]# ls
bin  LICENSE  plugin.yaml
[root@k8s-master-1 helm-push]# helm push --help		#测试插件是否安装成功

2、创建 Helm-charts

[root@k8s-master-1 ~]# helm create mynginx
Creating mynginx

3、打包 Helm-charts

[root@k8s-master-1 ~]# helm package mynginx/
Successfully packaged chart and saved it to: /root/mynginx-0.1.0.tgz

4、添加 Harbor 仓库

[root@k8s-master-1 ~]# helm repo add --username=admin --password=Harbor12345 mycharts http://10.10.181.244/chartrepo/charts    
"mycharts" has been added to your repositories

[root@k8s-master-1 ~]# helm repo list
NAME            URL             
mycharts        http://10.10.181.244/chartrepo/charts  

5、上传到 Harbor 仓库

[root@k8s-master-1 ~]# helm push mynginx-0.1.0.tgz mycharts -u admin -p Harbor12345
Pushing mynginx-0.1.0.tgz to mycharts...
Done.

6、部署 Harbor 仓库中 Helm-charts

加 --dry-run 表示只做调试、不真正部署,--debug 表示输出详细的部署过程,示例见下
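
例如(仅渲染和校验,不会真正创建资源):

helm install test-nginx mycharts/mynginx --dry-run --debug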

[root@k8s-master-1 ~]# helm repo update               # 最好更新后搜索

[root@k8s-master-1 ~]# helm search repo mynginx 
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                
mycharts/mynginx        0.1.0           1.16.0          A Helm chart for Kubernetes

# Install Helm-chart
[root@k8s-master-1 ~]# helm install test-nginx mycharts/mynginx
NAME: test-nginx
LAST DEPLOYED: Fri Jul 30 19:06:40 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mynginx,app.kubernetes.io/instance=test-nginx" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80
  
[root@k8s-master-1 ~]# helm list
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
test-nginx      default         1               2021-07-30 19:06:40.259802501 +0800 CST deployed        mynginx-0.1.0   1.16.0   

[root@k8s-master-1 ~]# kubectl get svc,po
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
service/test-nginx-mynginx   ClusterIP   10.111.171.195   <none>        80/TCP    5m31s

NAME                                  READY   STATUS    RESTARTS   AGE
pod/test-nginx-mynginx-7857b68dbc-mrxzv   1/1     Running   0          5m32s

# Default version is 1.16.0
[root@k8s-master-1 ~]# kubectl get po test-nginx-mynginx-7857b68dbc-mrxzv -o yaml |grep "\- image"
  - image: nginx:1.16.0
  
[root@k8s-master-1 ~]# helm list
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
test-nginx      default         1               2021-07-30 19:06:40.259802501 +0800 CST deployed        mynginx-0.1.0   1.16.0  

7、滚动升级

例如想把镜像版本从 1.16.0 升级到 1.16.1,但仓库中只有 1.16.0 的 chart,那就再打包一个新版本上传进去
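
顺带一提:如果只是想临时换一下镜像 tag,也可以不重新打包,升级时直接用 --set 覆盖(前提是 chart 的 values 暴露了 image.tag,helm create 的默认脚手架是有的);正式流程还是按下面重新打包上传:

helm upgrade test-nginx mycharts/mynginx --set image.tag=1.16.1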

# 注意: 如果原来的源码已经删除了,可以从 Harbor 仓库中下载下来,修改之后,再打包上传
[root@k8s-master-1 ~]# vim mynginx/Chart.yaml
apiVersion: v2
name: mynginx
description: A Helm chart for Kubernetes
type: application
version: 0.2.0          # 原来是 0.1.0,修改这个是为了上传不会覆盖原来的
appVersion: 1.16.1      # 原来是 1.16.0
[root@k8s-master-1 mynginx]# helm lint
==> Linting .
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed       # If right,then Package
[root@k8s-master-1 ~]# helm package mynginx
Successfully packaged chart and saved it to: /root/mynginx-0.2.0.tgz

[root@k8s-master-1 ~]# helm push mynginx-0.2.0.tgz mycharts -u admin -p Harbor12345
Pushing mynginx-0.2.0.tgz to mycharts...
Done.
[root@k8s-master-1 ~]# helm repo update
[root@k8s-master-1 ~]# helm  search repo mynginx -l
NAME                    CHART VERSION   APP VERSION     DESCRIPTION                
mycharts/mynginx        0.2.0           1.16.1          A Helm chart for Kubernetes
mycharts/mynginx        0.1.0           1.16.0          A Helm chart for Kubernetes

# Rolling update
[root@k8s-master-1 ~]# helm upgrade test-nginx mycharts/mynginx
Release "test-nginx" has been upgraded. Happy Helming!
NAME: test-nginx
LAST DEPLOYED: Fri Jul 30 19:35:06 2021
NAMESPACE: default
STATUS: deployed
REVISION: 2
NOTES:
1. Get the application URL by running these commands:
  export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=mynginx,app.kubernetes.io/instance=test-nginx" -o jsonpath="{.items[0].metadata.name}")
  echo "Visit http://127.0.0.1:8080 to use your application"
  kubectl --namespace default port-forward $POD_NAME 8080:80


[root@k8s-master-1 ~]# helm list     
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
test-nginx      default         3               2021-07-30 19:42:21.501958146 +0800 CST deployed        mynginx-0.2.0   1.16.1  

这里我们发现新 Pod 镜像拉取失败,而原来的 Pod 仍然在运行,说明在新的 Pod 正常运行之前,原来的 Pod 不会被删除

[root@k8s-master-1 ~]# kubectl get po
NAME                                  READY   STATUS             RESTARTS   AGE
test-nginx-mynginx-7857b68dbc-mrxzv   1/1     Running            0          38m
test-nginx-mynginx-85d5959d4d-c4q9f   0/1     ImagePullBackOff   0          2m57s

8、版本回滚

# Show history update version
[root@k8s-master-1 ~]# helm  history test-nginx
REVISION        UPDATED                         STATUS          CHART           APP VERSION     DESCRIPTION     
1               Fri Jul 30 19:06:40 2021        superseded      mynginx-0.1.0   1.16.0          Install complete
2               Fri Jul 30 19:35:06 2021        superseded      mynginx-0.1.0   1.16.0          Upgrade complete
3               Fri Jul 30 19:42:21 2021        deployed        mynginx-0.2.0   1.16.1          Upgrade complete

# Rollback
[root@k8s-master-1 ~]# helm  rollback test-nginx 1
Rollback was a success! Happy Helming!
[root@k8s-master-1 ~]# kubectl get po
NAME                                  READY   STATUS    RESTARTS   AGE
test-nginx-mynginx-7857b68dbc-22fbv   1/1     Running   0          14s

[root@k8s-master-1 ~]# helm list     # 版本从原来的 1.16.1 回退到 1.16.0
NAME            NAMESPACE       REVISION        UPDATED                                 STATUS          CHART           APP VERSION
test-nginx      default         4               2021-07-30 19:48:04.584453134 +0800 CST deployed        mynginx-0.1.0   1.16.0   

9、删除

[root@k8s-master-1 ~]# helm uninstall test-nginx      # 默认 default 命名空间,其他命名空间需加 -n <namespace>

Helm 的 kubeapps UI

为 Helm 提供 Web UI 管理界面

1. 部署kubeapps

[root@k8s-master-1 ~]# kubectl create namespace kubeapps
namespace/kubeapps created

[root@k8s-master-1 ~]# helm repo add bitnami https://charts.bitnami.com/bitnami
"bitnami" has been added to your repositories

[root@k8s-master-1 ~]# helm install kubeapps bitnami/kubeapps -n kubeapps
NAME: kubeapps
LAST DEPLOYED: Sat Jul 31 14:02:31 2021
NAMESPACE: kubeapps
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
** Please be patient while the chart is being deployed **

Tip:

  Watch the deployment status using the command: kubectl get pods -w --namespace kubeapps

Kubeapps can be accessed via port 80 on the following DNS name from within your cluster:

   kubeapps.kubeapps.svc.cluster.local

To access Kubeapps from outside your K8s cluster, follow the steps below:

1. Get the Kubeapps URL by running these commands:
   echo "Kubeapps URL: http://127.0.0.1:8080"
   kubectl port-forward --namespace kubeapps service/kubeapps 8080:80

2. Open a browser and access Kubeapps using the obtained URL.

##########################################################################################################
### WARNING: You did not provide a value for the postgresqlPassword so one has been generated randomly ###
##########################################################################################################

# Show
[root@k8s-master-1 ~]# kubectl get po,svc,sa -n kubeapps
NAME                                                              READY   STATUS    RESTARTS   AGE
pod/apprepo-kubeapps-sync-bitnami-zr4w8-jcbvr                     1/1     Running   0          76s
pod/kubeapps-55f7655468-q8vb9                                     1/1     Running   0          79s
pod/kubeapps-55f7655468-rjzgx                                     1/1     Running   0          78s
pod/kubeapps-internal-apprepository-controller-5cddb49cb5-5fvcd   1/1     Running   0          79s
pod/kubeapps-internal-assetsvc-7576d74fd8-tpbr2                   1/1     Running   0          79s
pod/kubeapps-internal-assetsvc-7576d74fd8-tv5fc                   1/1     Running   0          78s
pod/kubeapps-internal-dashboard-b59b5678c-75vb5                   1/1     Running   0          79s
pod/kubeapps-internal-dashboard-b59b5678c-lbj7h                   1/1     Running   0          78s
pod/kubeapps-internal-kubeops-6fbc776bc6-4x2rd                    1/1     Running   0          79s
pod/kubeapps-internal-kubeops-6fbc776bc6-jns7f                    1/1     Running   0          78s
pod/kubeapps-postgresql-primary-0                                 1/1     Running   0          78s
pod/kubeapps-postgresql-read-0                                    1/1     Running   0          78s

NAME                                   TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
service/kubeapps                       ClusterIP   10.100.249.243   <none>        80/TCP     79s
service/kubeapps-internal-assetsvc     ClusterIP   10.99.73.117     <none>        8080/TCP   79s
service/kubeapps-internal-dashboard    ClusterIP   10.108.86.222    <none>        8080/TCP   79s
service/kubeapps-internal-kubeops      ClusterIP   10.104.147.41    <none>        8080/TCP   79s
service/kubeapps-postgresql            ClusterIP   10.102.21.161    <none>        5432/TCP   79s
service/kubeapps-postgresql-headless   ClusterIP   None             <none>        5432/TCP   79s
service/kubeapps-postgresql-read       ClusterIP   10.107.177.77    <none>        5432/TCP   79s

NAME                                                        SECRETS   AGE
serviceaccount/default                                      1         2m22s
serviceaccount/kubeapps-internal-apprepository-controller   1         79s
serviceaccount/kubeapps-internal-kubeops                    1         79s

2.ingress-nginx对外提供服务

# kubeapps-ingress.yaml
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kubeapps-ingress
  namespace: kubeapps
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: kubeapps.com
    http:
      paths: # 相当于 nginx 的 location 配置,同一个 host 可以配置多个 path
      - path: /
        backend:
          serviceName: kubeapps
          servicePort: 80
[root@k8s-master-1 kubeapps]# kubectl apply -f kubeapps-ingress.yaml
ingress.networking.k8s.io/kubeapps-ingress created
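
上面用的是 networking.k8s.io/v1beta1;如果集群版本是 1.19+(1.22+ 则必须),可以参考下面的 v1 写法示例:

# kubeapps-ingress-v1.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubeapps-ingress
  namespace: kubeapps
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: kubeapps.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubeapps
            port:
              number: 80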

访问 kubeapps.com

需要 token 登录,因此需要创建 ServiceAccount 并为其绑定 cluster-admin 权限

kubectl create serviceaccount kubeapps-operator -n kubeapps 
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=kubeapps:kubeapps-operator

或者

也可以用命令行加 -o yaml --dry-run 生成 YAML 内容,示例如下
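
对应的命令示例(新版 kubectl 用 --dry-run=client,老版本直接 --dry-run):

kubectl create serviceaccount kubeapps-operator -n kubeapps --dry-run=client -o yaml
kubectl create clusterrolebinding kubeapps-operator --clusterrole=cluster-admin --serviceaccount=kubeapps:kubeapps-operator --dry-run=client -o yaml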

# rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeapps-operator
  namespace: kubeapps
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubeapps-operator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubeapps-operator
  namespace: kubeapps
  
[root@k8s-master-1 ~]# kubectl apply -f rbac.yaml 
serviceaccount/kubeapps-operator created
clusterrolebinding.rbac.authorization.k8s.io/kubeapps-operator configured

查看 token令牌

[root@k8s-master-1 kubeapps]# kubectl -n kubeapps describe sa kubeapps-operator 
Name:                kubeapps-operator
Namespace:           kubeapps
Labels:              <none>
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   kubeapps-operator-token-6j5gn
Tokens:              kubeapps-operator-token-6j5gn
Events:              <none>

[root@k8s-master-1 kubeapps]# kubectl -n kubeapps describe secrets kubeapps-operator-token-6j5gn 
Name:         kubeapps-operator-token-6j5gn
Namespace:    kubeapps
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: kubeapps-operator
              kubernetes.io/service-account.uid: db6c537b-1bdc-4c4d-b99d-6a57cf8b8d82

Type:  kubernetes.io/service-account-token

Data
====
namespace:  8 bytes
token:     <这个就是登录令牌> eyJhbGciOiJSUzI1NiIsImtpZCI6IjAxU2wtNUMwQUtva2s1di00WGF0Q3dDZ0EyTlVyeURWX3FnNXhKTEtsNGMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlYXBwcyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlYXBwcy1vcGVyYXRvci10b2tlbi02ajVnbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlYXBwcy1vcGVyYXRvciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImRiNmM1MzdiLTFiZGMtNGM0ZC1iOTlkLTZhNTdjZjhiOGQ4MiIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlYXBwczprdWJlYXBwcy1vcGVyYXRvciJ9.qM4-W1asb8ERk-m8qE3Pk3ifZquHfN-GQw6A5yBdSqe_uJ8fBIv95luvfas02zYt7aiOORiwrqaiqT7fhpijPjJshcAMcK8GVwyQj5juRf_zNDHghAE77Yi7fTqPsfBzZLzTcbH95ZkWBFs0ElRVIqmcncFdtGCypdMCEVbRwyB_gvEBCLDLM1KZNW-DqYuJaigauzs2nkiIbW9VKLP-DTIhTo2btkm-d6k7yp4VcOjVl46kWU3i5JlcFbzKtSLFoSufyMotk9deI6YpOhlEaq_FYonOh6K4bR1smSQPW1E_CWC2udG46FShnjS512VLFt5r2ITMq5kPXah1xyTrnw
ca.crt:     1025 bytes
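
也可以用一条命令直接取出 token(适用于会自动生成 SA token secret 的版本,即 K8s 1.24 之前;secret 名字以实际为准):

kubectl -n kubeapps get secret $(kubectl -n kubeapps get sa kubeapps-operator -o jsonpath='{.secrets[0].name}') -o jsonpath='{.data.token}' | base64 -d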

添加自己的 helm charts 仓库

3.自行探索,点点点

  • 支持创建命名空间

  • 支持自动更新|回滚

  • 管理 Helm charts 很方便

  • 其他....

Jenkins

环境信息

  • 系统版本: CentOS 7

  • 推荐的硬件配置: 1 GB+ 可用内存,50 GB+ 可用磁盘空间

  • 软件配置: Java 8—无论是Java运行时环境(JRE)还是Java开发工具包(JDK)都可以。

Java环境

官网下载地址: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html (需注册登录才能下载);也可以用 rpm 方式安装,我这里采用二进制方式

# 查找已安装的版本,若是没有结果,就表示没安装
rpm -qa|grep jdk
rpm -qa|grep java


# 有的话卸载 --nodeps卸载相关依赖
rpm -e --nodeps <包名>      # 按 rpm -qa 查到的完整包名卸载
tar fx jdk-8u301-linux-x64.tar.gz 
mv jdk1.8.0_301 /usr/local/jdk


# vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export JRE_HOME=/usr/local/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$PATH


source /etc/profile

[root@k8s-master-2 local]# java -version
java version "1.8.0_301"
Java(TM) SE Runtime Environment (build 1.8.0_301-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.301-b09, mixed mode)

  • yum 安装方式: CentOS 7

yum -y update
yum install java-1.8.0-openjdk java-1.8.0-openjdk-devel.x86_64

java -version

update-alternatives --config java
'''
	Selection    Command
	-----------------------------------------------
	*+ 1           java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64/jre/bin/java)
'''

vim /etc/profile
'''
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
'''

source /etc/profile
echo $JAVA_HOME
'''
/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.191.b12-1.el7_6.x86_64
'''

Jenkins 安装方式

官方文档: https://www.jenkins.io/zh/doc/ ;其他 war 包版本下载地址: http://mirrors.jenkins.io/war-stable/

docker启动

# 修改下目录权限,因为当映射到本地数据卷时, /home/jenkins-home-docker 目录拥有者为root用户,而容器中 jenkins user 的 uid 为 1000
chown -R 1000:1000  /home/jenkins-home-docker

docker run --name jenkins -p 8080:8080  -d  -v /home/jenkins-home-docker:/var/jenkins_home jenkins/jenkins:latest    

# 打开ip:8080,输入token,安装常用插件
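
一个稍微完整一点的启动示例(加上 agent 用的 50000 端口和自动重启策略,镜像用 lts 标签,仅供参考),以及查看初始密码的方式:

docker run --name jenkins -d --restart=unless-stopped \
  -p 8080:8080 -p 50000:50000 \
  -v /home/jenkins-home-docker:/var/jenkins_home \
  jenkins/jenkins:lts

# 查看初始管理员密码(也就是页面上要输入的 token)
docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword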

war 方式启动

wget http://mirrors.jenkins.io/war-stable/latest/jenkins.war
java -jar jenkins.war --httpPort=8080

[root@k8s ~]# find / -name plugins (安装的插件都在这个目录下)
/root/.jenkins/plugins
/root/.jenkins/plugins/jquery3-api/plugins

rpm 方式启动

清华源下载( RPM 推荐):https://mirrors.tuna.tsinghua.edu.cn/jenkins/redhat/

wget https://mirrors.tuna.tsinghua.edu.cn/jenkins/redhat/jenkins-2.304-1.1.noarch.rpm
rpm -ivh jenkins-2.304-1.1.noarch.rpm

配置jenkins

# 修改jenkins服务的用户为root
sed -i 's@JENKINS_USER="jenkins"@JENKINS_USER="root"@g' /etc/sysconfig/jenkins

# 启动jenkins服务
systemctl start jenkins

# 启动失败: Starting Jenkins bash: /usr/bin/java: No such file or directory
vi /etc/init.d/jenkins 
candidates="
/etc/alternatives/java
/usr/lib/jvm/java-1.8.0/bin/java
/usr/lib/jvm/jre-1.8.0/bin/java
/usr/lib/jvm/java-11.0/bin/java
/usr/lib/jvm/jre-11.0/bin/java
/usr/lib/jvm/java-11-openjdk-amd64
/usr/local/jdk/bin/java        # 添加 java 运行目录,因为是二进制安装,目录是自定义的
/usr/bin/java
"


systemctl daemon-reload
systemctl start jenkins

# 检查服务是否启动,默认8080端口
# netstat -lntp | grep 8080
tcp6       0      0 :::8080                 :::*                    LISTEN      3118/java 

Jenkins目录介绍

#jenkins家目录,存放所有数据
/var/lib/jenkins    

#jenkins的安装目录,war包存放在这里
/usr/lib/jenkins

#主配置文件
/etc/sysconfig/jenkins  

#日志文件
/var/log/jenkins/jenkins.log
