Recent Study/udemy - CKA with Practice Tests

KodeKloud - Test Record (section 3 & 4)


1. Manual Scheduling

[ 1 ] (Delete and recreate a Pod in a single command)

 

 > kubectl replace --force -f nginx.yaml
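For manual scheduling, the pod spec bypasses the scheduler by setting nodeName directly; nginx.yaml would look roughly like this (the node name node01 is an assumption, not given in the lab text):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  nodeName: node01     # manual scheduling: bind the pod directly to this node
  containers:
  - name: nginx
    image: nginx
```

Since nodeName cannot be changed on a running pod, editing it is exactly the case where kubectl replace --force is needed.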

 

 

2. Label & Selector

[ 1 ] We have deployed a number of PODs. They are labelled with tier, env and bu. How many PODs exist in the dev environment (env)? Use selectors to filter the output

 

> kubectl get pods --selector env=dev | wc -l
> kubectl get all --selector env=prod

(Piping to wc -l prints the matching line count; note that it includes the header line, so subtract one for the actual Pod count. The second command lists objects of every type carrying env=prod.)
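These selectors match labels declared under metadata.labels on each object; an illustrative fragment (the label values here are examples, not the lab's actual data):

```yaml
metadata:
  name: app-1
  labels:        # --selector filters on these key=value pairs
    env: dev
    tier: frontend
    bu: finance
```

Multiple selectors can also be combined with commas, e.g. kubectl get pods --selector env=prod,bu=finance,tier=frontend.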

 

 

 

3. Taint & Toleration

[ 1 ] Create another pod named bee with the nginx image, which has a toleration set to the taint mortein.

 

Image name : nginx

Key : spray

Value : mortein

Effect : NoSchedule

Status : Running

 

> kubectl run bee --image=nginx --dry-run=client -o yaml > bee.yaml

> vi bee.yaml (add a tolerations field at the same indentation level as containers)

> kubectl create -f bee.yaml
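Built from the spec above, bee.yaml with the toleration added would look like this (operator: Equal is the choice that matches both the taint's key and value):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bee
spec:
  containers:
  - name: bee
    image: nginx
  tolerations:          # same indentation level as containers
  - key: spray
    value: mortein
    operator: Equal     # match the taint's key and value exactly
    effect: NoSchedule
```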

 

 

 

4. Node Affinity

[ 1 ] Which nodes can the pods for the blue deployment be placed on? Check the taints on each node.

 

> kubectl describe node node01 | grep Taints
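To then steer the blue deployment's pods onto a labelled node, a nodeAffinity rule goes in the pod template; a minimal sketch, assuming the target node carries a color=blue label (the label key and value are assumptions):

```yaml
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:  # hard requirement at scheduling time
        nodeSelectorTerms:
        - matchExpressions:
          - key: color
            operator: In
            values:
            - blue
```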

 

 

5. Resource Limits

 

[ 1 ] The elephant pod runs a process that consumes 15Mi of memory. Increase the limit of the elephant pod to 20Mi.

Delete and recreate the pod if required. Do not modify anything other than the required fields.

 

> kubectl edit pod elephant (the edit is rejected because a running pod's resource limits cannot be changed in place; kubectl saves the modified spec to a temporary file)

> kubectl replace --force -f /tmp/kubectl-edit-2226462188.yaml
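After the edit, the container's resources section should end up like this (the container name and image are placeholders, not taken from the lab):

```yaml
spec:
  containers:
  - name: mem-stress        # placeholder name
    image: polinux/stress   # placeholder image
    resources:
      limits:
        memory: 20Mi        # raised limit, per the question
```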

 

 

 

6. DaemonSets

[ 1 ] On how many nodes are the pods scheduled by the DaemonSet kube-proxy?

 

> kubectl describe daemonset kube-proxy --namespace=kube-system

 

 

 

[ 2 ] Deploy a DaemonSet for FluentD Logging. Use the given specifications.

Name: elasticsearch

Namespace: kube-system

Image: registry.k8s.io/fluentd-elasticsearch:1.20

 

> kubectl create deployment elasticsearch -n kube-system --image=registry.k8s.io/fluentd-elasticsearch:1.20 --dry-run=client -o yaml > fluentd.yaml

> vi fluentd.yaml (change kind from Deployment to DaemonSet, remove the replicas and strategy fields, then apply)
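After editing, fluentd.yaml should look roughly like this (the selector and labels are assumptions; the essential changes are kind: DaemonSet and dropping replicas/strategy, which a DaemonSet does not accept):

```yaml
apiVersion: apps/v1
kind: DaemonSet            # was Deployment in the generated file
metadata:
  name: elasticsearch
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
      - name: fluentd-elasticsearch
        image: registry.k8s.io/fluentd-elasticsearch:1.20
```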

 

 

 

7. Static Pods

[ 1 ] Create a static pod named static-busybox that uses the busybox image and the command sleep 1000

 

> kubectl run static-busybox --image=busybox --restart=Never --dry-run=client -o yaml --command -- sleep 1000 > static-busybox.yaml

 

> cp static-busybox.yaml /etc/kubernetes/manifests/

 

[ 2 ] We just created a new static pod named static-greenbox. Find it and delete it. This question is a bit tricky. But if you use the knowledge you gained in the previous questions in this lab, you should be able to find the answer to it.

 

> kubectl get pods

> ls /etc/kubernetes/manifests/

> kubectl get nodes -o wide (look up the node's internal IP address)

> ssh <node-internal-IP> (answer yes at the host-key prompt to log in to the node)

> ls /etc/kubernetes/manifests/

> cat /var/lib/kubelet/config.yaml (this file contains the staticPodPath setting)

> cd /etc/just-to-mess-with-you (change into that directory)

> rm greenbox.yaml (delete the manifest, then exit)
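The line in the kubelet config that reveals where static pod manifests live looks like this (path taken from the lab above):

```yaml
# excerpt from /var/lib/kubelet/config.yaml on the worker node
staticPodPath: /etc/just-to-mess-with-you
```

The kubelet watches this directory and deletes the static pod as soon as its manifest is removed.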

 

 

8. Multiple Schedulers

[ 1 ] Let's create a configmap that the new scheduler will employ using the concept of ConfigMap as a volume.
We have already given a configMap definition file called my-scheduler-configmap.yaml at /root/ path that will create a configmap with name my-scheduler-config using the content of file /root/my-scheduler-config.yaml.

 

> kubectl create configmap my-scheduler-config --from-file=/root/my-scheduler-config.yaml -n kube-system
configmap/my-scheduler-config created

> kubectl get configmap my-scheduler-config -n kube-system

 

 

[ 2 ] Deploy an additional scheduler to the cluster following the given specification. Use the manifest file provided at /root/my-scheduler.yaml. Use the same image as used by the default kubernetes scheduler.

 

> cat /root/my-scheduler.yaml

> kubectl get pods -A

> kubectl describe pod kube-scheduler-controlplane -n kube-system | grep Image

> vi my-scheduler.yaml

> kubectl create -f my-scheduler.yaml
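A rough sketch of what /root/my-scheduler.yaml ends up containing, tying together the ConfigMap from [ 1 ] as a volume (the container name, image tag, and mount path are assumptions based on the usual lab setup):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-scheduler
  namespace: kube-system
spec:
  containers:
  - name: kube-second-scheduler
    # same image as the default scheduler, found via the describe command above
    image: registry.k8s.io/kube-scheduler:v1.29.0   # version tag is an assumption
    command:
    - kube-scheduler
    - --config=/etc/kubernetes/my-scheduler/my-scheduler-config.yaml
    volumeMounts:
    - name: config-volume
      mountPath: /etc/kubernetes/my-scheduler
  volumes:
  - name: config-volume
    configMap:
      name: my-scheduler-config   # the ConfigMap created in [ 1 ]
```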

 

 

[ 3 ] A POD definition file is given. Use it to create a POD with the new custom scheduler.

File is located at /root/nginx-pod.yaml

 

> vi /root/nginx-pod.yaml (add schedulerName: my-scheduler under spec)

> kubectl create -f nginx-pod.yaml
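/root/nginx-pod.yaml after the edit would look like this (only schedulerName is added; everything else stays as given):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  schedulerName: my-scheduler   # route this pod to the custom scheduler
  containers:
  - name: nginx
    image: nginx
```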

 

 

 

9. Monitoring & Logging

[ 1 ] For monitoring, I cloned a repository from Git and created the objects from the .yaml files inside it. The commands below then report CPU and memory metrics:

> kubectl top node

> kubectl top pod

 

[ 2 ] To check a specific Pod's logs:

 

> kubectl logs webapp-1
