
Mock QS CKA




Q-3

Solve this question on: ssh cka3962


There are two Pods named o3db-* in Namespace project-h800. The Project H800 management asked you to scale these down to one replica to save resources.


Goal: Scale down the o3db-* pods in namespace project-h800 to 1 replica.



Solution:


Check what’s running


kubectl get pods -n project-h800


You should see two pods with names like:

o3db-xxxxxxx
o3db-yyyyyyy

These are usually controlled by a Deployment, StatefulSet, or ReplicaSet.


Find the controller


kubectl get deploy,statefulset,replicaset -n project-h800


Look for one with a name starting with o3db.
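
You can also read the controller directly from one of the Pods via its ownerReferences (replace the placeholder with one of the actual o3db Pod names):

kubectl -n project-h800 get pod <o3db-pod-name> \
    -o jsonpath='{.metadata.ownerReferences[0].kind}{"/"}{.metadata.ownerReferences[0].name}{"\n"}'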


Scale down to 1 replica


If it’s a Deployment:


kubectl scale deployment o3db --replicas=1 -n project-h800


If it's a StatefulSet:


kubectl scale statefulset o3db --replicas=1 -n project-h800


Verify


kubectl get pods -n project-h800



Q-4

Solve this question on: ssh cka2556


Check all available Pods in the Namespace project-c13 and find the names of those that would probably be terminated first if the nodes run out of resources (cpu or memory).

Write the Pod names into /opt/course/4/pods-terminated-first.txt.



Steps to solve on cka2556


1. Check all pods in the namespace


kubectl get pods -n project-c13 -o wide


2. Inspect the QoS class of each Pod


kubectl get pod -n project-c13 -o custom-columns=NAME:.metadata.name,QOS:.status.qosClass


Example output:


NAME QOS

pod1 BestEffort

pod2 Guaranteed

pod3 Burstable


3. Identify the Pods with BestEffort QoS

These will be terminated first.
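
As a cross-check: BestEffort simply means no container in the Pod defines any CPU or memory requests or limits. A rough way to eyeball the requests (a sketch using custom-columns; it does not show limits) is:

kubectl get pods -n project-c13 \
    -o custom-columns=NAME:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory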


4. Save them into the file:


kubectl get pods -n project-c13 -o jsonpath='{range .items[?(@.status.qosClass=="BestEffort")]}{.metadata.name}{"\n"}{end}' > /opt/course/4/pods-terminated-first.txt

(status.qosClass is not a supported field selector, so the filtering is done with a JSONPath expression instead.)


5. Verify the file


cat /opt/course/4/pods-terminated-first.txt




Q-5

Solve this question on: ssh cka5774

Previously the application api-gateway used some external autoscaler which should now be replaced with a HorizontalPodAutoscaler (HPA). The application has been deployed to Namespaces api-gateway-staging and api-gateway-prod like this:

kubectl kustomize /opt/course/5/api-gateway/staging | kubectl apply -f -

kubectl kustomize /opt/course/5/api-gateway/prod | kubectl apply -f -

Using the Kustomize config at /opt/course/5/api-gateway do the following:

1. Remove the ConfigMap horizontal-scaling-config completely

2. Add HPA named api-gateway for the Deployment api-gateway with min 2 and max 4 replicas. It should scale at 50% average CPU utilisation

3. In prod the HPA should have max 6 replicas

4. Apply your changes for staging and prod so they're reflected in the cluster




1. Understand the Kustomize structure


First, check the directory:


ls -R /opt/course/5/api-gateway

You'll probably see something like:

base/  staging/  prod/

or a base plus overlays layout; the exact structure may differ.


2. Remove the ConfigMap horizontal-scaling-config


Find where it’s declared (likely in base/kustomization.yaml or base/configmap.yaml):


grep -R "horizontal-scaling-config" /opt/course/5/api-gateway

Remove it from:

  • The resources: list in kustomization.yaml

  • Delete the referenced manifest file itself if it exists only for this ConfigMap.


3. Create HPA manifest in base

We’ll create /opt/course/5/api-gateway/base/hpa.yaml:


apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-gateway
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-gateway
  minReplicas: 2
  maxReplicas: 4
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50


4. Add HPA to base/kustomization.yaml


resources:
- deployment.yaml
- service.yaml
- hpa.yaml

(Ensure hpa.yaml is listed under resources:)
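
Before applying, you can preview the rendered output to confirm the HPA is now included and horizontal-scaling-config is gone, for example:

kubectl kustomize /opt/course/5/api-gateway/staging | grep -E "kind:|name:"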



5. Override for prod (maxReplicas = 6)

In /opt/course/5/api-gateway/prod/kustomization.yaml, add a patch:


patches:
- target:
    kind: HorizontalPodAutoscaler
    name: api-gateway
  patch: |-
    - op: replace
      path: /spec/maxReplicas
      value: 6

Alternatively, create a strategic merge patch file prod/hpa-patch.yaml:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-gateway
spec:
  maxReplicas: 6

and reference it from prod/kustomization.yaml:

patches:
- path: hpa-patch.yaml

(The older patchesStrategicMerge: field still works but is deprecated in current Kustomize versions; prefer patches:.)

6. Apply for staging and prod


kubectl kustomize /opt/course/5/api-gateway/staging | kubectl apply -f -
kubectl kustomize /opt/course/5/api-gateway/prod | kubectl apply -f -

7. Verify


kubectl get hpa -n api-gateway-staging
kubectl get hpa -n api-gateway-prod

Check that staging has maxReplicas=4 and prod has maxReplicas=6.
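
A quick way to read the values directly (assuming the HPA is named api-gateway in both Namespaces, as the task requires):

kubectl -n api-gateway-staging get hpa api-gateway -o jsonpath='{.spec.maxReplicas}{"\n"}'
kubectl -n api-gateway-prod get hpa api-gateway -o jsonpath='{.spec.maxReplicas}{"\n"}'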




Q-6

Solve this question on: ssh cka7968

Create a new PersistentVolume named safari-pv. It should have a capacity of 2Gi, accessMode ReadWriteOnce, hostPath /Volumes/Data and no storageClassName defined.

Next create a new PersistentVolumeClaim in Namespace project-t230 named safari-pvc. It should request 2Gi storage, accessMode ReadWriteOnce and should not define a storageClassName. The PVC should be bound to the PV correctly.

Finally create a new Deployment safari in Namespace project-t230 which mounts that volume at /tmp/safari-data. The Pods of that Deployment should be of image httpd:2-alpine.



1. Create the PersistentVolume

File: safari-pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: safari-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /Volumes/Data
  persistentVolumeReclaimPolicy: Retain
  storageClassName: ""   # no storage class

Apply it:

kubectl apply -f safari-pv.yaml

2. Create the PersistentVolumeClaim

File: safari-pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: safari-pvc
  namespace: project-t230
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: ""   # no storage class

Apply it:



kubectl apply -f safari-pvc.yaml

Check it bound:

kubectl get pvc -n project-t230

Status should be Bound to safari-pv.

3. Create the Deployment

File: safari-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: safari
  namespace: project-t230
spec:
  replicas: 1
  selector:
    matchLabels:
      app: safari
  template:
    metadata:
      labels:
        app: safari
    spec:
      containers:
      - name: safari
        image: httpd:2-alpine
        volumeMounts:
        - name: safari-storage
          mountPath: /tmp/safari-data
      volumes:
      - name: safari-storage
        persistentVolumeClaim:
          claimName: safari-pvc

Apply it:

kubectl apply -f safari-deploy.yaml

4. Verify everything

kubectl get pv safari-pv
kubectl get pvc safari-pvc -n project-t230
kubectl describe pod -n project-t230 -l app=safari

Check that:

  • PV is Bound

  • PVC is Bound

  • Pod mounts /tmp/safari-data
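
To confirm the last point from inside a running Pod (a sketch; deploy/safari resolves to one of the Deployment's Pods):

kubectl -n project-t230 exec deploy/safari -- ls -ld /tmp/safari-data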



Q-7

Solve this question on: ssh cka5774

The metrics-server has been installed in the cluster. Write two bash scripts which use kubectl:

1. Script /opt/course/7/node.sh should show resource usage of Nodes

2. Script /opt/course/7/pod.sh should show resource usage of Pods and their containers



1. Script for Node resource usage

Path: /opt/course/7/node.sh

#!/bin/bash
kubectl top nodes

2. Script for Pod & container resource usage

Path: /opt/course/7/pod.sh

#!/bin/bash
kubectl top pods --containers --all-namespaces

3. Make them executable


chmod +x /opt/course/7/node.sh /opt/course/7/pod.sh

4. Test

/opt/course/7/node.sh
/opt/course/7/pod.sh

  • First script should display CPU & Memory usage for each node.

  • Second script should display CPU & Memory usage for each pod and its containers.
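
If either script returns an error, it's worth confirming that metrics-server is actually running (assuming the usual deployment name in kube-system):

kubectl -n kube-system get deployment metrics-server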



Q-8

Solve this question on: ssh cka3962

Your coworker notified you that node cka3962-node1 is running an older Kubernetes version and is not even part of the cluster yet.

1. Update the node's Kubernetes to the exact version of the controlplane

2. Add the node to the cluster using kubeadm

ℹ️ You can connect to the worker node using ssh cka3962-node1 from cka3962




1. Check control plane Kubernetes version

On the control plane node (cka3962):

kubectl get nodes
kubectl version

The Server Version reported by kubectl version is the exact version you must match (the old --short flag has been removed from recent kubectl). Example:


Server Version: v1.29.2

2. SSH into the worker node


ssh cka3962-node1

3. Upgrade / install correct kubeadm, kubelet, kubectl version

On the worker node:

# Example: if the control plane version is v1.29.2
VERSION=1.29.2-1.1
sudo apt-get update
sudo apt-get install -y kubeadm=$VERSION kubelet=$VERSION kubectl=$VERSION
sudo apt-mark hold kubeadm kubelet kubectl

Note: On CentOS/RHEL use yum install -y kubeadm-$VERSION kubelet-$VERSION kubectl-$VERSION with a version format like 1.29.2-0.
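
On Debian/Ubuntu you can first check which exact package version strings are available (assuming the apt repository for the matching Kubernetes minor version is already configured):

apt-cache madison kubeadm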

Restart kubelet:

sudo systemctl daemon-reload
sudo systemctl restart kubelet

4. Get the join command from control plane

Back on the control plane (cka3962):


kubeadm token create --print-join-command

Example output:

kubeadm join 10.0.0.10:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:1234567890abcdef...

5. Run the join command on worker

On cka3962-node1:


sudo kubeadm join <API_SERVER>:6443 --token <TOKEN> --discovery-token-ca-cert-hash sha256:<HASH>

6. Verify the node joined

On control plane:


kubectl get nodes -o wide

You should now see cka3962-node1 in Ready state with the same Kubernetes version.


Q-9

Solve this question on: ssh cka9412

There is ServiceAccount secret-reader in Namespace project-swan. Create a Pod of image nginx:1-alpine named api-contact which uses this ServiceAccount.

Exec into the Pod and use curl to manually query all Secrets from the Kubernetes Api.

Write the result into file /opt/course/9/result.json.



1. Create the Pod with the given ServiceAccount

kubectl run api-contact \
    --image=nginx:1-alpine \
    --restart=Never \
    -n project-swan \
    --overrides='{"spec":{"serviceAccountName":"secret-reader"}}'

(In recent kubectl versions the old --serviceaccount flag has been removed, so the ServiceAccount is set via --overrides; alternatively generate YAML with --dry-run=client -o yaml and add serviceAccountName: secret-reader.)

Verify:


kubectl get pod api-contact -n project-swan

Wait until it’s Running.

2. Exec into the Pod


kubectl exec -it api-contact -n project-swan -- sh

3. Get API server address & token inside the Pod

Inside the Pod:

APISERVER="https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT"
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

4. Query all Secrets via API

Still inside the Pod:

apk add --no-cache curl
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
    $APISERVER/api/v1/secrets

5. Save the output to file on control plane

Option A — Save inside pod, then kubectl cp:

# still inside the Pod:
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
    $APISERVER/api/v1/secrets > /tmp/result.json
exit

# back on cka9412:
kubectl cp project-swan/api-contact:/tmp/result.json /opt/course/9/result.json

Option B — Directly output to stdout and redirect on control plane:

kubectl exec api-contact -n project-swan -- sh -c \
  'apk add --no-cache curl >/dev/null && \
   curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
     https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/api/v1/secrets' \
  > /opt/course/9/result.json

6. Verify


cat /opt/course/9/result.json | jq .

Should show JSON list of Secrets from all namespaces your ServiceAccount can read.
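
As an extra sanity check (assuming jq is installed on cka9412), the file should contain a SecretList with items:

jq '.kind, (.items | length)' /opt/course/9/result.json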



Q-10

Solve this question on: ssh cka3962

Create a new ServiceAccount processor in Namespace project-hamster. Create a Role and RoleBinding, both named processor as well. These should allow the new SA to only create Secrets and ConfigMaps in that Namespace.



1. Create the ServiceAccount


kubectl create serviceaccount processor -n project-hamster

2. Create the Role (only create Secrets & ConfigMaps)

processor-role.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: processor
  namespace: project-hamster
rules:
- apiGroups: [""]
  resources: ["secrets", "configmaps"]
  verbs: ["create"]

Apply it:


kubectl apply -f processor-role.yaml

3. Create the RoleBinding

processor-rolebinding.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: processor
  namespace: project-hamster
subjects:
- kind: ServiceAccount
  name: processor
  namespace: project-hamster
roleRef:
  kind: Role
  name: processor
  apiGroup: rbac.authorization.k8s.io

Apply it:


kubectl apply -f processor-rolebinding.yaml

4. Verify

kubectl describe role processor -n project-hamster
kubectl describe rolebinding processor -n project-hamster

And test by impersonating:

kubectl auth can-i create secrets --as=system:serviceaccount:project-hamster:processor -n project-hamster
kubectl auth can-i create configmaps --as=system:serviceaccount:project-hamster:processor -n project-hamster
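
Both should answer yes. It can also be worth confirming that verbs outside the Role are denied (expected answer: no):

kubectl auth can-i delete secrets --as=system:serviceaccount:project-hamster:processor -n project-hamster
kubectl auth can-i get configmaps --as=system:serviceaccount:project-hamster:processor -n project-hamster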



Q-11

Solve this question on: ssh cka2556

Use Namespace project-tiger for the following. Create a DaemonSet named ds-important with image httpd:2-alpine and labels id=ds-important and uuid=18426a0b-5f59-4e10-923f-c0e078e82462. The Pods it creates should request 10 millicore cpu and 10 mebibyte memory. The Pods of that DaemonSet should run on all nodes, also controlplanes.




1. Create the DaemonSet manifest

ds-important.yaml

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ds-important
  namespace: project-tiger
  labels:
    id: ds-important
    uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
spec:
  selector:
    matchLabels:
      id: ds-important
  template:
    metadata:
      labels:
        id: ds-important
        uuid: 18426a0b-5f59-4e10-923f-c0e078e82462
    spec:
      tolerations:
      - key: "node-role.kubernetes.io/control-plane"
        operator: "Exists"
        effect: "NoSchedule"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: httpd
        image: httpd:2-alpine
        resources:
          requests:
            cpu: "10m"
            memory: "10Mi"

2. Apply it


kubectl apply -f ds-important.yaml

3. Verify it runs on all nodes

kubectl get pods -n project-tiger -o wide
kubectl get nodes

You should see one pod per node, including control plane nodes.
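
The DaemonSet status gives the same information at a glance; DESIRED, CURRENT and READY should all equal the total number of nodes:

kubectl -n project-tiger get ds ds-important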

Why tolerations are needed

Normally, DaemonSet Pods skip control plane nodes because those nodes carry NoSchedule taints. By adding:

tolerations:
- key: "node-role.kubernetes.io/control-plane"
  operator: "Exists"
  effect: "NoSchedule"
- key: "node-role.kubernetes.io/master"
  operator: "Exists"
  effect: "NoSchedule"

you allow scheduling on them.


Q-12

Solve this question on: ssh cka2556

Implement the following in Namespace project-tiger:

• Create a Deployment named deploy-important with 3 replicas

• The Deployment and its Pods should have label id=very-important

• First container named container1 with image nginx:1-alpine

• Second container named container2 with image google/pause

• There should only ever be one Pod of that Deployment running on one worker node, use topologyKey: kubernetes.io/hostname for this

ℹ️ Because there are two worker nodes and the Deployment has three replicas the result should be that the third Pod won't be scheduled. In a way this scenario simulates the behaviour of a DaemonSet, but using a Deployment with a fixed number of replicas


1. Create the Deployment manifest

deploy-important.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploy-important
  namespace: project-tiger
  labels:
    id: very-important
spec:
  replicas: 3
  selector:
    matchLabels:
      id: very-important
  template:
    metadata:
      labels:
        id: very-important
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: id
                operator: In
                values:
                - very-important
            topologyKey: kubernetes.io/hostname
      containers:
      - name: container1
        image: nginx:1-alpine
      - name: container2
        image: google/pause

2. Apply it


kubectl apply -f deploy-important.yaml

3. Verify scheduling behaviour


kubectl get pods -n project-tiger -o wide

Expected:

  • 2 Pods scheduled (one per worker node)

  • 1 Pod stuck in Pending because of the anti-affinity rule.

Why this works

  • podAntiAffinity with topologyKey: kubernetes.io/hostname prevents more than one matching Pod (same label id=very-important) from being placed on the same node.

  • Since there are only two worker nodes, and we asked for 3 replicas, the third can’t be scheduled.
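
To see this in the cluster, the Pending Pod and its FailedScheduling events can be listed like so (a sketch):

kubectl -n project-tiger get pods -l id=very-important --field-selector=status.phase=Pending
kubectl -n project-tiger get events --field-selector=reason=FailedScheduling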




Q-13

Solve this question on: ssh cka7968

The team from Project r500 wants to replace their Ingress (networking.k8s.io) with a Gateway Api (gateway.networking.k8s.io) solution. The old Ingress is available at /opt/course/13/ingress.yaml.

Perform the following in Namespace project-r500 and for the already existing Gateway:

1. Create a new HTTPRoute named traffic-director which replicates the routes from the old Ingress

2. Extend the new HTTPRoute with path /auto which redirects to mobile if the User-Agent is exactly mobile and to desktop otherwise

The existing Gateway is reachable at http://r500.gateway:30080 which means your implementation should work for these commands:

curl r500.gateway:30080/desktop

curl r500.gateway:30080/mobile

curl r500.gateway:30080/auto -H "User-Agent: mobile"

curl r500.gateway:30080/auto



1. Understand the old Ingress

First, inspect the given Ingress:


cat /opt/course/13/ingress.yaml

This will show you the old HTTP rules (paths, services, ports). You'll replicate those in the new HTTPRoute.

2. Create the base HTTPRoute

Example (adjust based on the Ingress paths & services you saw):

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: traffic-director
  namespace: project-r500
spec:
  parentRefs:
  - name: <existing-gateway-name>
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /desktop
    backendRefs:
    - name: desktop-service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /mobile
    backendRefs:
    - name: mobile-service
      port: 80

Replace desktop-service and mobile-service with actual service names from the Ingress.
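
To fill in those names, the existing Gateway and the backend Services can be looked up first:

kubectl -n project-r500 get gateway
kubectl -n project-r500 get svc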

3. Add the /auto rule with User-Agent logic

Gateway API supports header-based matching, so we can route /auto requests differently depending on the User-Agent header. The default header match type is Exact, which satisfies the "exactly mobile" requirement.

Add two extra rules:

  - matches:
    - path:
        type: PathPrefix
        value: /auto
      headers:
      - name: User-Agent
        value: mobile
    backendRefs:
    - name: mobile-service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /auto
    backendRefs:
    - name: desktop-service
      port: 80

4. Full example manifest

traffic-director.yaml:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: traffic-director
  namespace: project-r500
spec:
  parentRefs:
  - name: r500-gateway   # use the actual Gateway name
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /desktop
    backendRefs:
    - name: desktop-service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /mobile
    backendRefs:
    - name: mobile-service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /auto
      headers:
      - name: User-Agent
        value: mobile
    backendRefs:
    - name: mobile-service
      port: 80
  - matches:
    - path:
        type: PathPrefix
        value: /auto
    backendRefs:
    - name: desktop-service
      port: 80

5. Apply


kubectl apply -f traffic-director.yaml

6. Test

curl r500.gateway:30080/desktop
curl r500.gateway:30080/mobile
curl r500.gateway:30080/auto -H "User-Agent: mobile"
curl r500.gateway:30080/auto

The /auto endpoint should serve mobile content when the User-Agent is exactly mobile, and desktop otherwise.


Q-14

Solve this question on: ssh cka9412

Perform some tasks on cluster certificates:

1. Check how long the kube-apiserver server certificate is valid using openssl or cfssl. Write the expiration date into /opt/course/14/expiration. Run the kubeadm command to list the expiration dates and confirm both methods show the same one

2. Write the kubeadm command that would renew the kube-apiserver certificate into /opt/course/14/kubeadm-renew-certs.sh



1. Find kube-apiserver server certificate

On control plane:


ls -l /etc/kubernetes/pki/apiserver.crt

That’s the cert file we’ll inspect.

2. Check expiration with openssl


openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate

Example output:


notAfter=Mar 10 12:34:56 2026 GMT

Write just the date to the file:


openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -enddate | cut -d= -f2 > /opt/course/14/expiration
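
And confirm the file content:

cat /opt/course/14/expiration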

3. Confirm with kubeadm


kubeadm certs check-expiration | grep 'apiserver '

The expiration date shown here should match what you wrote in /opt/course/14/expiration.

4. Write kubeadm renew command

The renewal command for only the kube-apiserver cert:

echo "kubeadm certs renew apiserver" > /opt/course/14/kubeadm-renew-certs.sh
chmod +x /opt/course/14/kubeadm-renew-certs.sh

5. (Optional) Test renewal command

If you were to run it:


sudo kubeadm certs renew apiserver

Afterwards the kube-apiserver needs to be restarted so it picks up the new certificate; you can verify with crictl ps that its container was recreated.


Q-15

Solve this question on: ssh cka7968

There was a security incident where an intruder was able to access the whole cluster from a single hacked backend Pod.

To prevent this create a NetworkPolicy called np-backend in Namespace project-snake. It should allow the backend-* Pods only to:

• Connect to db1-* Pods on port 1111

• Connect to db2-* Pods on port 2222

Use the app Pod labels in your policy.

ℹ️ All Pods in the Namespace run plain Nginx images. This allows simple connectivity tests like: k -n project-snake exec POD_NAME -- curl POD_IP:PORT

ℹ️ For example, connections from backend-* Pods to vault-* Pods on port 3333 should no longer work



1. Understand requirements

We need a NetworkPolicy np-backend in namespace project-snake that:

  • Selects only the backend-* Pods (likely via the label app=backend)

  • Allows outbound (egress) traffic only to:

    • db1-* Pods on port 1111

    • db2-* Pods on port 2222

  • Everything else should be denied.

2. Check Pod labels


kubectl -n project-snake get pods --show-labels

You should see something like:

backend-1   app=backend
backend-2   app=backend
db1-main    app=db1
db2-main    app=db2
vault-1     app=vault

3. Create the NetworkPolicy manifest

np-backend.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: np-backend
  namespace: project-snake
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: db1
    ports:
    - protocol: TCP
      port: 1111
  - to:
    - podSelector:
        matchLabels:
          app: db2
    ports:
    - protocol: TCP
      port: 2222

4. Apply the policy


kubectl apply -f np-backend.yaml

5. Test

First, get backend pod name:


BACKEND=$(kubectl -n project-snake get pods -l app=backend -o name | head -n 1)

Get target IPs:

IP_DB1=$(kubectl -n project-snake get pod -l app=db1 -o jsonpath='{.items[0].status.podIP}')
IP_DB2=$(kubectl -n project-snake get pod -l app=db2 -o jsonpath='{.items[0].status.podIP}')
IP_VAULT=$(kubectl -n project-snake get pod -l app=vault -o jsonpath='{.items[0].status.podIP}')

Check allowed:

kubectl -n project-snake exec $BACKEND -- curl -s $IP_DB1:1111
kubectl -n project-snake exec $BACKEND -- curl -s $IP_DB2:2222

Check blocked:


kubectl -n project-snake exec $BACKEND -- curl -s --max-time 3 $IP_VAULT:3333

The connection to vault should fail (time out).
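
You can also review the policy as stored in the cluster:

kubectl -n project-snake describe networkpolicy np-backend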



Q-16

The task (as implied by the steps below): back up the CoreDNS configuration, then make Service DNS resolution work for the zone custom-domain in the same way it works for cluster.local.

1. Backup the existing CoreDNS config

Check which ConfigMap CoreDNS uses (usually coredns in kube-system):


kubectl -n kube-system get configmap coredns -o yaml > /opt/course/16/coredns_backup.yaml

This way you can restore later with:


kubectl -n kube-system apply -f /opt/course/16/coredns_backup.yaml

2. Edit the CoreDNS config

Open the ConfigMap for editing:


kubectl -n kube-system edit configmap coredns

Find the Corefile section — it will look something like:

.:53 {
    errors
    health
    ready
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    ...
}

3. Add the custom-domain zone

We need DNS resolution for .custom-domain to work exactly like .cluster.local. That means we extend the kubernetes plugin's zone list to cover both domains:

kubernetes cluster.local custom-domain in-addr.arpa ip6.arpa {
    pods insecure
    fallthrough in-addr.arpa ip6.arpa
}

Important: The custom-domain must be placed in the same kubernetes block, not as a separate server block, so both domains are handled identically.

4. Save and reload CoreDNS

After saving the ConfigMap, restart the CoreDNS pods so they reload the config:


kubectl -n kube-system rollout restart deployment coredns

Wait until pods are ready:


kubectl -n kube-system get pods -l k8s-app=kube-dns

5. Test

Run a busybox pod for testing:


kubectl run test-dns --image=busybox:1 --restart=Never --command -- sleep 3600

Check resolution:

kubectl exec test-dns -- nslookup kubernetes.default.svc.cluster.local
kubectl exec test-dns -- nslookup kubernetes.default.svc.custom-domain

Both should return an IP.
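
Afterwards the test Pod can be removed again:

kubectl delete pod test-dns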


Q-17

Solve this question on: ssh cka2556

In Namespace project-tiger create a Pod named tigers-reunite of image httpd:2-alpine with labels pod=container and container=pod. Find out on which node the Pod is scheduled. Ssh into that node and find the containerd container belonging to that Pod.


Using command crictl:

1. Write the ID of the container and the info.runtimeType into /opt/course/17/pod-container.txt

2. Write the logs of the container into /opt/course/17/pod-container.log

ℹ️ You can connect to a worker node using ssh cka2556-node1 or ssh cka2556-node2 from cka2556



1. Create the Pod

The Namespace project-tiger already exists, so just create the Pod:

kubectl run tigers-reunite \
    -n project-tiger \
    --image=httpd:2-alpine \
    --labels=pod=container,container=pod

2. Find the node where the Pod is running


kubectl -n project-tiger get pod tigers-reunite -o wide

Example output:

NAME             READY   STATUS    RESTARTS   AGE   IP          NODE
tigers-reunite   1/1     Running   0          15s   10.42.0.5   cka2556-node1

Let’s assume it’s on cka2556-node1.

3. SSH into the node


ssh cka2556-node1

4. Find the containerd container ID

First, list all containers with crictl:


sudo crictl ps --name tigers-reunite

Example:

CONTAINER         IMAGE            CREATED   STATE     NAME             ATTEMPT   POD ID
abcd1234ef56...   httpd:2-alpine   20s ago   Running   tigers-reunite   0         789xyz...

Here, the container ID is the first column (abcd1234ef56...).
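
If you prefer to capture just the ID (crictl supports -q/--quiet), something like this should work:

CID=$(sudo crictl ps -q --name tigers-reunite)
echo $CID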

5. Get runtime type


sudo crictl inspect abcd1234ef56 | grep runtimeType

Example output:


"runtimeType": "io.containerd.runc.v2"

6. Save container ID and runtime type


echo "abcd1234ef56 io.containerd.runc.v2" | sudo tee /opt/course/17/pod-container.txt

7. Save container logs


sudo crictl logs abcd1234ef56 | sudo tee /opt/course/17/pod-container.log
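
Note: if the /opt/course/17 directory only exists on the main terminal (cka2556) and not on the node, run the commands over ssh from cka2556 instead and redirect locally, for example (abcd1234ef56 being the example ID from above):

echo "abcd1234ef56 io.containerd.runc.v2" > /opt/course/17/pod-container.txt
ssh cka2556-node1 "sudo crictl logs abcd1234ef56" > /opt/course/17/pod-container.log 2>&1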

8. Exit back to control plane


exit












