
Useful Commands

What?

Various commands I keep forgetting 🙂

Deployments

Restart deployments / pods

I keep seeing this question on various forums, and I also had to look up how to restart a pod, since there is no "restart" command. OK, OK, there are rolling restarts, but a rolling restart first creates a new pod in the deployment and only then terminates the old one. Maybe that's what you need, but what if your deployment has one pod, and that pod uses persistent storage that can be mounted by only one pod at a time? Then your rolling restart will get stuck creating the new pod, because the storage is not available...
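For the record, the rolling restart mentioned above is a single command in reasonably recent kubectl versions (1.15+):

kubectl rollout restart deployment <deployment_name> -n <namespace>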

Recently my docker-registry started returning errors that it couldn't reach its storage. I don't know what happened, but a restart seemed like the go-to option.

# Check
root@control01:~# kubectl get pods -n docker-registry
NAME                        READY   STATUS    RESTARTS   AGE
registry-7895c5bf6d-nkhk2   1/1     Running   0          3h11m

root@control01:~# kubectl scale --replicas=0 deployment registry -n docker-registry
# wait until the pod is terminated
root@control01:~# kubectl scale --replicas=1 deployment registry -n docker-registry

In essence, we told Kubernetes to scale the deployment down to 0 pods and then back up to 1. That's how you restart a pod in Kubernetes.
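If you want to watch the pod terminate and come back while you do this, kubectl get has a watch flag:

kubectl get pods -n docker-registry -w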

How do I delete what I created?

Simple: if you did not change the names / labels in the config, you can use the same file you passed to kubectl create, just with:

kubectl delete -f <file>
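If you no longer have the file, you can also delete the resource by name; for example, the registry deployment from earlier:

kubectl delete deployment registry -n docker-registry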

How do I change the configuration?

Change the YAML file (keep the labels the same, or it will create another instance) and do:

kubectl apply -f <filename>

This will apply the new settings. In most cases it creates a new instance of the service, waits until it's ready, and then kills the old instance... but it depends on the deployment strategy.
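You can watch the rollout finish with kubectl rollout status. A small sketch, reusing the registry deployment from above (the file name registry.yaml is just an example):

kubectl apply -f registry.yaml
kubectl rollout status deployment registry -n docker-registry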

Pods

How do I copy stuff in and out of a container?

Well, it's quite easy, but keep in mind that if you copy data into a pod and the target path is not on persistent storage, the changes will disappear anytime the pod moves.

Copy files from the container

On your client machine:

kubectl cp <pod_name>:<path_to_file> <local_destination>
# For example
kubectl cp magento-poc-5699f5f968-fn8jm:/etc/nginx/nginx.conf /home/vlado/

Copy files to the container

Same as above, just in reverse:

kubectl cp <local_file> <pod_name>:<destination_directory>/
# For example:
kubectl cp /home/vlado/magento_www.gzip magento-poc-5699f5f968-fn8jm:/var/www/html/
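If the pod lives outside your current namespace, prefix the pod name with the namespace:

kubectl cp <namespace>/<pod_name>:<path_to_file> <local_destination>

One caveat: kubectl cp relies on the tar binary being present inside the container, so it will fail on images that do not ship it.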

Evicted pods

There is a mechanism inside Kubernetes that will kick out (evict) pods when the node is running out of resources, typically RAM or disk. You can see something like this in kubectl get pods:

root@control01:~# kubectl get pods
NAME                                        READY   STATUS              RESTARTS   AGE
echo1-68949fd997-f9hxp                      1/1     Running             0          25h
echo1-68949fd997-qj2nv                      1/1     Running             0          25h
echo2-84fcb44c98-hjccw                      1/1     Running             0          25h
ingress-nginx-controller-569cfbd456-brfgd   1/1     Running             0          24h
magento-poc-6c8d8b8545-8vmtw                1/1     Running             0          25m
mysql-9fd744889-5vjc4                       0/1     ContainerCreating   0          3m16s
mysql-9fd744889-6hxgb                       0/1     Evicted             0          3m17s
mysql-9fd744889-d4n8f                       0/1     Evicted             0          5m47s
mysql-9fd744889-fdwzz                       0/1     Evicted             0          3m20s
mysql-9fd744889-fljzk                       0/1     Evicted             0          3m19s
mysql-9fd744889-kbcfl                       0/1     Evicted             0          3m16s
mysql-9fd744889-lcpb2                       0/1     Evicted             0          3m17s
mysql-9fd744889-lx6z6                       0/1     Evicted             0          3m20s
mysql-9fd744889-m242j                       0/1     Evicted             0          3m19s
mysql-9fd744889-m9dxr                       0/1     Evicted             0          3m20s
mysql-9fd744889-q2wkv                       0/1     Evicted             0          3m20s
mysql-9fd744889-tq9nv                       0/1     Evicted             0          3m18s
mysql-9fd744889-wmbrg                       0/1     Evicted             0          25m
mysql-client                                0/1     Completed           0          142m
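By the way, evicted pods end up in the Failed phase, so instead of scanning the whole list you can also filter them with a field selector:

kubectl get pods --field-selector=status.phase=Failed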

In most cases it's a RAM issue and the node kicked out (evicted) the pod. You can see something like this if you describe the evicted pod:

root@control01:~# kubectl describe pods mysql-d64dbb968-jlc8n
Name:           mysql-d64dbb968-jlc8n
.
.
.
Events:
  Type     Reason            Age    From               Message
  ----     ------            ----   ----               -------
  Warning  FailedScheduling  3m37s  default-scheduler  0/2 nodes are available: 2 Insufficient memory.
  Warning  FailedScheduling  3m37s  default-scheduler  0/2 nodes are available: 2 Insufficient memory.

This will also taint the node where it happened, which prevents other pods from running there. Manual action is needed!
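To check whether a node got the taint:

kubectl describe node <name_of_node> | grep Taints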

Clear taint

Removing the taint is easy, just do:

kubectl taint nodes <name_of_node> node.kubernetes.io/memory-pressure:NoSchedule-

This will allow pods to run on the node again.

Remove evicted pods

kubectl get pod | grep Evicted | awk '{print $1}' | xargs kubectl delete pod
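The one-liner above only cleans up the current namespace. A sketch for all namespaces (with kubectl get pods -A, the namespace is the first column and the pod name the second):

kubectl get pods -A | grep Evicted | awk '{print $2 " -n " $1}' | xargs -L1 kubectl delete pod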
