CKA Exam Questions

  1. Preparation
  2. Question 1
    1. 1. Get etcd runtime information: private key location, certificate expiration date, whether client certificate authentication is enabled
    2. 2. Create an etcd snapshot backup at /etc/etcd-snapshot.db and print the snapshot status
  3. Question 2
    1. 3. Create a Pod with two containers, one using image nginx:1.21.3-alpine and the other busybox:1.31; make sure the busybox container keeps running for some time
    2. 4. Create a Service p2-service that forwards port 3000 to port 80 of the Pod above
    3. 5. Find the kube-proxy container on all nodes cluster1-controlplane1, cluster1-node1 and cluster1-node2 and make sure it is using iptables; use the crictl command for this
    4. 6. Write the iptables rules of all nodes belonging to the created Service p2-service into the file /opt/course/p2/iptables.txt
    5. 7. Finally, delete the Service and confirm the iptables rules are gone from all nodes
  4. Question 3
    1. 8. In the default namespace, create a pod named check-ip using image httpd:2.4.41-alpine, expose port 80 through a ClusterIP Service named check-ip-service, and print the cluster IP
    2. 9. Change the cluster's Service CIDR to 11.96.0.0/12
  5. Question 4
    1. 10. Create a ClusterRole that only allows creating deployment, daemonset and statefulset resources
    2. 11. Create a new ServiceAccount named cicd-token in the existing namespace app-team1
    3. 12. Limited to namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token
  6. Question 5
    1. 13. Mark the node named ek8s-node-1 as unschedulable and reschedule all Pods running on it
  7. Question 6
    1. 14. The existing Kubernetes cluster is running version 1.20.0; upgrade all control plane and node components on the master node only to version 1.20.1
  8. Question 7
    1. 15. First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save it to /data/backup/etcd-snapshot.db
    2. 16. Then restore the existing previous snapshot located at /data/backup/etcd-snapshot-previous.db (TLS certificate and key paths are provided for connecting to the etcd server)
  9. Question 8
    1. 17. In the existing namespace my-app, create a new NetworkPolicy named allow-port-from-namespace
  10. Question 9
    1. 18. Adjust the existing deployment front-end, adding a port named http that exposes TCP port 80 of the nginx container, then create a NodePort Service named front-end-svc to expose that port
  11. Question 10
    1. 19. In namespace ing-internal, create an nginx ingress named pong that exposes port 5678 of the Service named hello on the path /hello
  12. Question 11
    1. 20. Scale the deployment loadbalancer to 5 pods
  13. Question 12
    1. 21. Schedule a Pod as follows: name: nginx-kusc00401, image: nginx, Node selector: disk=ssd
  14. Question 13
    1. 22. Check how many worker nodes are Ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt
  15. Question 14
    1. 23. Create a pod named kucc4 that runs one app container per image (there may be 1-4 images): nginx + redis + memcached
  16. Question 15
    1. 24. Create a persistent volume named app-data with 2Gi capacity, access mode ReadWriteOnce, volume type hostPath located at /srv/app-data
  17. Question 16
    1. 25. Create a new PersistentVolumeClaim
  18. Question 17
    1. 26. Monitor the logs of pod bar, extract the log lines corresponding to the error file-not-found, and write them to /opt/KUTR00101/bar
  19. Question 18
    1. 27. Using the busybox image, add a sidecar container named sidecar to the existing Pod legacy-app; the new sidecar container must run the following command:
  20. Question 19
    1. 28. Using the pod label app=hostnames, find the pod consuming the most CPU and write the name of the pod with the highest CPU usage to the existing file /opt/KUTR00401/KUTR00401.txt
  21. Question 20
    1. 29. The Kubernetes worker node named wk8s-node-0 is in NotReady state. Investigate why, take appropriate action to bring the node back to Ready, and make sure any changes are permanent

Preparation

alias k='kubectl'
export do="-o yaml --dry-run=client"
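The $do shortcut saves typing when generating manifests; for example:

k run nginx --image=nginx $do > pod.yaml   # writes the pod manifest to a file without creating anything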

Question 1

Use context: kubectl config use-context k8s-c2-AC
The cluster admin asked you to find out the following information about etcd running on cluster2-controlplane1:
Server private key location
Server certificate expiration date
Is client certificate authentication enabled
Write this information into /opt/course/p1/etcd-info.txt
Finally you’re asked to save an etcd snapshot at /etc/etcd-snapshot.db on cluster2-controlplane1 and display its status.

1. Get etcd runtime information: private key location, certificate expiration date, whether client certificate authentication is enabled

k get nodes
k get po -n kube-system -owide
# note which node the etcd-xxx pod is running on

ssh <node>

cd /etc/kubernetes/manifests
cat etcd.yaml

# private key location (on a systemd-managed etcd, check the unit file)
# cat /etc/systemd/system/etcd.service | grep key-file
Server private key location: /etc/kubernetes/ssl/etcd-key.pem

# expiration date, taken from the cert-file
# cat /etc/systemd/system/etcd.service | grep cert-file
/etc/kubernetes/ssl/etcd.pem
openssl x509 -noout -text -in /etc/kubernetes/ssl/etcd.pem

Server certificate expiration date: Jun 17 10:22:00 2073 GMT

# is client certificate authentication enabled
cat etcd.yaml | grep client-cert-auth
Is client certificate authentication enabled:

2. Create an etcd snapshot backup at /etc/etcd-snapshot.db and print the snapshot status

# create the snapshot
ETCDCTL_API=3 etcdctl snapshot save /etc/etcd-snapshot.db
# print the snapshot status
ETCDCTL_API=3 etcdctl snapshot status /etc/etcd-snapshot.db
c277e58f, 5503127, 1628, 24 MB
# hash, revision, total keys, total size
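If the etcd instance requires client TLS (as it does in Question 7 below), the save command also needs the certificate flags; a sketch assuming the certificate paths found in step 1:

ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/etcd.pem \
  --key=/etc/kubernetes/ssl/etcd-key.pem \
  snapshot save /etc/etcd-snapshot.db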

Question 2

Use context: kubectl config use-context k8s-c1-H
You’re asked to confirm that kube-proxy is running correctly on all nodes. For this perform the following in Namespace project-hamster:
Create a new Pod named p2-pod with two containers, one of image nginx:1.21.3-alpine and one of image busybox:1.31. Make sure the busybox container keeps running for some time.
Create a new Service named p2-service which exposes that Pod internally in the cluster on port 3000->80.
Find the kube-proxy container on all nodes cluster1-controlplane1, cluster1-node1 and cluster1-node2 and make sure that it’s using iptables. Use command crictl for this.
Write the iptables rules of all nodes belonging to the created Service p2-service into file /opt/course/p2/iptables.txt.
Finally delete the Service and confirm that the iptables rules are gone from all nodes.

3. Create a Pod with two containers, one using image nginx:1.21.3-alpine and the other busybox:1.31; make sure the busybox container keeps running for some time

# generate a base template first, then edit it
k run p2-pod --namespace=project-hamster --image=nginx:1.21.3-alpine -oyaml --dry-run=client > p2.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: p2-pod
  name: p2-pod
  namespace: project-hamster
spec:
  containers:
  - image: nginx:1.21.3-alpine
    name: p2-pod
    resources: {}
  - image: busybox:1.31
    name: c2
    command:
      - "sleep"
      - "1d"
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
k create -f p2.yaml

4. Create a Service p2-service that forwards port 3000 to port 80 of the Pod above

k -n project-hamster expose pod p2-pod --name p2-service --port 3000 --target-port 80
k get svc -n project-hamster p2-service -oyaml
k get po,svc,ep -n project-hamster -owide

5. Find the kube-proxy container on all nodes cluster1-controlplane1, cluster1-node1 and cluster1-node2, and make sure it is using iptables. Use the crictl command for this.

# if kube-proxy runs as a systemd service; in practice ssh to each node one by one
for i in $(k get nodes | grep -v NAME | awk '{print $1}'); do
    ssh $i "systemctl status kube-proxy; journalctl -u kube-proxy"
    ssh $i "systemctl cat kube-proxy"
    # check whether the config file sets iptables or ipvs mode
    ssh $i "cat /var/lib/kube-proxy/kube-proxy-config.yaml | grep mode"
done

# if kube-proxy runs as a container (kubeadm + containerd); in practice ssh to each node one by one
# the escaped \$ keeps the command substitution and awk running on the remote node
for i in $(k get nodes | grep -v NAME | awk '{print $1}'); do
    ssh $i "crictl ps | grep kube-proxy; crictl logs \$(crictl ps | grep kube-proxy | awk '{print \$1}')"
done
# the container log prints "Using iptables Proxier"
#...
#I0913 12:53:03.096620       1 server_others.go:212] Using iptables Proxier.
#...

6. Write the iptables rules of all nodes belonging to the created Service p2-service into the file /opt/course/p2/iptables.txt

for i in $(k get nodes | grep -v NAME | awk '{print $1}'); do
    ssh $i iptables-save | grep p2-service >> /opt/course/p2/iptables.txt
done

7. Finally, delete the Service and confirm that the iptables rules are gone from all nodes.

k delete svc -n project-hamster p2-service
for i in $(k get nodes | grep -v NAME | awk '{print $1}'); do
    ssh $i "iptables-save | grep p2-service"
done

Question 3

Use context: kubectl config use-context k8s-c2-AC
Create a Pod named check-ip in Namespace default using image httpd:2.4.41-alpine. Expose it on port 80 as a ClusterIP Service named check-ip-service. Remember/output the IP of that Service.
Change the Service CIDR to 11.96.0.0/12 for the cluster.
Then create a second Service named check-ip-service2 pointing to the same Pod to check if your settings did take effect. Finally check if the IP of the first Service has changed.

8. In the default namespace, create a pod named check-ip using image httpd:2.4.41-alpine, expose port 80 through a ClusterIP Service named check-ip-service, and print the cluster IP

# create the pod
k run check-ip --image=httpd:2.4.41-alpine

# expose it as a ClusterIP service
k expose --name check-ip-service pod check-ip --port=80 --type=ClusterIP

# print the cluster IP
k get svc check-ip-service

9. Change the cluster's Service CIDR to 11.96.0.0/12.

You need to change the --service-cluster-ip-range flag in both the kube-apiserver and kube-controller-manager configurations.

ssh master
# kube-apiserver
## if it runs as a systemd service, change the flag in the unit file
systemctl cat kube-apiserver | grep service-cluster-ip-range
### change --service-cluster-ip-range=10.68.0.0/16 to --service-cluster-ip-range=11.96.0.0/12
### reload and restart after editing the unit
systemctl daemon-reload
systemctl restart kube-apiserver

## if it runs as a static pod, edit the kube-apiserver manifest and let the pod restart
vim /etc/kubernetes/manifests/kube-apiserver.yaml
### change to --service-cluster-ip-range=11.96.0.0/12
### wait for the pod to restart
k -n kube-system get pod | grep api

# kube-controller-manager
systemctl cat kube-controller-manager
### change --service-cluster-ip-range=10.68.0.0/16 to --service-cluster-ip-range=11.96.0.0/12
### reload and restart after editing the unit
systemctl daemon-reload
systemctl restart kube-controller-manager

## if it runs as a static pod, edit the kube-controller-manager manifest and let the pod restart
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
### change --service-cluster-ip-range=10.68.0.0/16 to --service-cluster-ip-range=11.96.0.0/12
k get po -n kube-system | grep controller

# create a second Service check-ip-service2 to verify the new CIDR takes effect
k expose --name check-ip-service2 pod check-ip --port=80 --type=ClusterIP
k get svc check-ip-service2

# finally, check that the IP of the first Service has not changed
k get svc check-ip-service

Question 4

RBAC authorization

10. Create a ClusterRole that only allows creating deployment, daemonset and statefulset resources

# name it deployment-clusterrole so it matches the binding in step 12
k create clusterrole deployment-clusterrole --verb=create --resource=deployments,daemonsets,statefulsets

11. Create a new ServiceAccount named cicd-token in the existing namespace app-team1

k create ns app-team1   # only needed if the namespace does not already exist
k create serviceaccount cicd-token -n app-team1

12. Limited to namespace app-team1, bind the new ClusterRole deployment-clusterrole to the new ServiceAccount cicd-token

# "limited to namespace app-team1" means a RoleBinding, not a ClusterRoleBinding; the ServiceAccount needs its namespace prefix
k create rolebinding cicd-token-binding -n app-team1 --clusterrole=deployment-clusterrole --serviceaccount=app-team1:cicd-token
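To verify the binding, kubectl auth can-i can impersonate the ServiceAccount:

k auth can-i create deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # yes
k auth can-i delete deployments --as=system:serviceaccount:app-team1:cicd-token -n app-team1   # no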

Question 5

13. Mark the node named ek8s-node-1 as unschedulable and reschedule all Pods running on it

# cordon: mark the node unschedulable
k cordon ek8s-node-1
# drain: evict the pods; DaemonSet pods cannot be evicted, and --delete-emptydir-data / --force may be needed for pods with emptyDir data or without a controller
k drain ek8s-node-1 --ignore-daemonsets

Question 6

Upgrade the Kubernetes version

14. The existing Kubernetes cluster is running version 1.20.0; upgrade all control plane and node components on the master node only to version 1.20.1

Make sure to drain the master node before upgrading and uncordon it afterwards.

ssh ek8s-master # log in to the master node
k drain ek8s-master --ignore-daemonsets # drain it
yum search kubeadm # find the kubeadm package name
yum install kubeadm-1.20.1-0 # install kubeadm 1.20.1
kubeadm upgrade plan # verify the upgrade plan
kubeadm upgrade apply v1.20.1 --etcd-upgrade=false # apply the upgrade; etcd is excluded because the task does not ask for it
yum install kubelet-1.20.1-0 kubectl-1.20.1-0 # install kubelet and kubectl 1.20.1
systemctl daemon-reload && systemctl restart kubelet # restart kubelet
kubectl uncordon ek8s-master # uncordon the master node
k get nodes # confirm the node version

Question 7

etcd backup and restore

15. First, create a snapshot of the existing etcd instance running at https://127.0.0.1:2379 and save it to /data/backup/etcd-snapshot.db

mkdir -p /data/backup
systemctl cat etcd # find the CA cert (--trusted-ca-file), client cert (--cert-file) and client key (--key-file) locations
ETCDCTL_API=3 etcdctl --endpoints="https://127.0.0.1:2379" snapshot save /data/backup/etcd-snapshot.db --cacert="/etc/kubernetes/ssl/ca.pem" --cert="/etc/kubernetes/ssl/etcd.pem" --key="/etc/kubernetes/ssl/etcd-key.pem"

16. Then restore the existing previous snapshot located at /data/backup/etcd-snapshot-previous.db (TLS certificate and key paths are provided for connecting to the etcd server)

systemctl cat etcd | grep data # confirm the data directory location
# --data-dir=/var/lib/etcd
systemctl stop etcd # stop etcd
mv /var/lib/etcd /var/lib/etcd-bak # back up the old data directory
ETCDCTL_API=3 etcdctl snapshot restore /data/backup/etcd-snapshot-previous.db --data-dir=/var/lib/etcd # restore the data
chown -R etcd:etcd /var/lib/etcd # on some distros (e.g. Ubuntu) the data directory must be owned by the etcd user; systemctl cat etcd shows the user it runs as
systemctl start etcd

Question 8

The machine-translated wording of this task is unclear.
Network policy
Create a new NetworkPolicy named allow-port-from-namespace in the existing namespace my-app
Ensure that the new NetworkPolicy allows Pods in namespace my-app to connect to port 8080 in namespace big-corp
Further ensure that the new NetworkPolicy:
does not allow access to Pods that are not listening on port 8080
does not allow access from Pods that are not in namespace my-app

Interpreted task:

17. In the existing namespace my-app, create a new NetworkPolicy named allow-port-from-namespace

Only allow Pods from the big-corp namespace to access port 8080 of Pods in this namespace

# k api-resources|grep networkpo
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-port-from-namespace
  namespace: my-app
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
      - namespaceSelector:
          matchLabels:
            name: big-corp
      ports:
      - protocol: TCP
        port: 8080
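The namespaceSelector above matches on a namespace label name=big-corp. If that label does not exist the policy selects nothing, so you may need to add it yourself (or, on clusters 1.21+, match the automatic kubernetes.io/metadata.name label instead):

k get ns big-corp --show-labels        # check the existing labels
k label ns big-corp name=big-corp      # add the label the selector expects, if missing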

Question 9

The machine-translated wording of this task is unclear.
Expose an application with a Service
Reconfigure the existing deployment front-end, adding a port specification named http to expose port 80/tcp of the existing nginx container
Create a new service named front-end-svc to expose the container port http
Configure the service to expose the individual Pods via a NodePort on the nodes they are scheduled on

Interpreted task:

18. Adjust the existing deployment front-end, adding a port named http that exposes TCP port 80 of the nginx container, then create a NodePort Service named front-end-svc to expose that port

# k edit deployment front-end   # add the port to the nginx container spec
    containers:
    - name: nginx
      ports:
      - name: http
        protocol: TCP
        containerPort: 80

k expose deployment front-end --name=front-end-svc --type=NodePort --port=80 --target-port=80
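A quick check that the Service got a NodePort and has endpoints:

k get svc front-end-svc -owide
k get ep front-end-svc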

Question 10

Ingress

19. In namespace ing-internal, create an nginx ingress named pong that exposes port 5678 of the Service named hello on the path /hello

k create ingress pong --namespace=ing-internal --rule="/hello*=hello:5678" --annotation="nginx.ingress.kubernetes.io/rewrite-target=/" -oyaml --dry-run=client >p19.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
  name: pong
  namespace: ing-internal
spec:
  rules:
  - http:
      paths:
      - backend:
          service:
            name: hello
            port:
              number: 5678
        path: /hello
        pathType: Prefix
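To verify, list the ingress and, if you know the ingress controller's address (environment-specific, so the curl target below is only a placeholder), request the path:

k get ingress pong -n ing-internal
# curl http://<ingress-controller-address>/hello   # should reach the hello service on port 5678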

Question 11

Scale the number of Pods

20. Scale the deployment loadbalancer to 5 pods

# edit the replicas field
k edit deployment loadbalancer

# or scale from the command line
k scale deployment loadbalancer --replicas=5

Question 12

nodeSelector

21. Schedule a Pod as follows: name: nginx-kusc00401, image: nginx, Node selector: disk=ssd

# k run nginx-kusc00401 --image=nginx --dry-run=client -oyaml > p21.yaml
# vim p21.yaml # add the nodeSelector field
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: nginx-kusc00401
  name: nginx-kusc00401
spec:
  containers:
  - image: nginx
    name: nginx-kusc00401
    resources: {}
  nodeSelector:
    disk: ssd
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Question 13

Count the ready worker nodes

22. Check how many worker nodes are Ready (excluding nodes tainted NoSchedule) and write the number to /opt/KUSC00402/kusc00402.txt

# grep -w keeps NotReady nodes from matching "Ready"
k describe nodes $(k get nodes | grep -w "Ready" | awk '{print $1}') | grep Taints | grep -vc NoSchedule > /opt/KUSC00402/kusc00402.txt
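The same count can also be done step by step, which makes the one-liner easier to reason about (the number written at the end is only an example):

k get nodes | grep -w "Ready" | wc -l                                                                    # number of Ready nodes
k describe nodes $(k get nodes | grep -w "Ready" | awk '{print $1}') | grep Taints | grep -c NoSchedule  # Ready nodes carrying a NoSchedule taint
echo 2 > /opt/KUSC00402/kusc00402.txt                                                                    # write the difference of the two counts (2 is illustrative)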

Question 14

Pod with multiple containers

23. Create a pod named kucc4 that runs one app container per image (there may be 1-4 images): nginx + redis + memcached

# k run kucc4 --image nginx --dry-run=client -oyaml > p23.yaml
# vim p23.yaml # add the extra containers
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: kucc4
  name: kucc4
spec:
  containers:
  - image: nginx
    name: nginx
    resources: {}
  - image: redis
    name: redis
    resources: {}
  - image: memcached
    name: memcached
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}

Question 15

Create a PV

24. Create a persistent volume named app-data with 2Gi capacity, access mode ReadWriteOnce, volume type hostPath located at /srv/app-data

apiVersion: v1
kind: PersistentVolume
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 2Gi
  hostPath:
    path: "/srv/appdata"

Question 16

Pod using a PVC

25. Create a new PersistentVolumeClaim

Name: pv-volume
Class: csi-hostpath-sc
Capacity: 10Mi

Create a new Pod that mounts the PersistentVolumeClaim as a volume
Name: web-server
image: nginx
Mount path: /usr/share/nginx/html

Configure the new Pod so the volume has ReadWriteOnce access
Finally, use kubectl edit or kubectl patch to expand the PersistentVolumeClaim to 70Mi and record the change.

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-volume
spec:
  storageClassName: csi-hostpath-sc
  resources:
    requests:
      storage: 10Mi
  accessModes:
    - ReadWriteOnce
---
apiVersion: v1
kind: Pod
metadata:
  name: web-server
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: pv-volume

# expand the PVC to 70Mi and record the change
kubectl edit pvc pv-volume --record
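The same expansion can be done with kubectl patch instead of kubectl edit (it only succeeds if the StorageClass allows volume expansion):

kubectl patch pvc pv-volume -p '{"spec":{"resources":{"requests":{"storage":"70Mi"}}}}'
kubectl get pvc pv-volume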

Question 17

Extract a Pod's error logs

26. Monitor the logs of pod bar, extract the log lines corresponding to the error file-not-found, and write them to /opt/KUTR00101/bar

k logs bar | grep file-not-found > /opt/KUTR00101/bar

Question 18

Add a sidecar container to an existing Pod

27. Using the busybox image, add a sidecar container named sidecar to the existing Pod legacy-app. The new sidecar container must run the following command:

/bin/sh -c tail -n+1 -f /var/log/legacy-app.log
Use a Volume mounted at /var/log to make the log file legacy-app.log available to the sidecar container

# k run legacy-app --image=busybox --dry-run=client -oyaml > p27.yaml # then edit
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: legacy-app
  name: legacy-app
spec:
  containers:
  - image: busybox
    name: legacy-app
    command:
      - "/bin/sh"
      - "-c"
      - >
        while true;do echo $(date) >> /var/log/legacy-app.log;sleep 1;done
    volumeMounts:
    - name: logdir
      mountPath: /var/log
    resources: {}
  - image: busybox
    name: sidecar
    command:
      - "/bin/sh"
      - "-c"
      - "tail -n+1 -f /var/log/legacy-app.log"
    volumeMounts:
    - name: logdir
      mountPath: /var/log
    resources: {}
  volumes:
    - name: logdir
      emptyDir: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
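In the exam legacy-app already exists, and containers cannot be added to a running Pod, so the usual workflow is to export, edit, and recreate it:

k get pod legacy-app -oyaml > p27.yaml   # export the existing pod
# edit p27.yaml: add the sidecar container and the shared /var/log volume as above,
# and strip runtime-only fields (status, nodeName, uid, resourceVersion, ...)
k delete pod legacy-app
k apply -f p27.yaml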

Question 19

Find the Pod with the highest CPU usage

Requires metrics-server to be installed in the cluster

28. Using the pod label app=hostnames, find the pod consuming the most CPU and write the name of the pod with the highest CPU usage to the existing file /opt/KUTR00401/KUTR00401.txt

k top pods -l app=hostnames --sort-by=cpu -A |head -2 | tail -1 | awk '{print $2}' > /opt/KUTR00401/KUTR00401.txt

Question 20

Fix a NotReady node

29. The Kubernetes worker node named wk8s-node-0 is in NotReady state. Investigate why, take appropriate action to bring the node back to Ready, and make sure any changes are permanent

k get nodes
k describe node wk8s-node-0
ssh wk8s-node-0
systemctl status kubelet          # kubelet is typically stopped or disabled in this scenario
systemctl enable --now kubelet    # start it and enable it so the fix survives a reboot
