What happens when a Kubernetes pod is OOM killed


What do you see if a pod is killed because memory on the node is insufficient?

I read that Kubernetes will kill some pods if the memory on a node is not enough for all the workloads. What will you see in that case?

If the killed pod belongs to a Deployment, will there be a new pod listed with a more recent “Age” compared to the other pods? Or will it be the same pod with its “Restarts” count incremented?

In the first case, will I also see the killed pod in a “Terminated” (or similar) status, or does it disappear completely?


If a Kubernetes worker node comes under memory pressure, the kubelet may start evicting pods from that node.

Which pods are evicted first is determined by specific rules: based on their QoS class, Kubernetes first evicts Best Effort pods, then Burstable pods (Guaranteed pods are evicted last).
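For reference, the QoS class is derived from the pod’s resource requests and limits. A minimal sketch of the three classes (pod names, images, and values here are hypothetical):

```yaml
# Guaranteed: every container has limits equal to requests -> evicted last
apiVersion: v1
kind: Pod
metadata:
  name: guaranteed-pod        # hypothetical name
spec:
  containers:
  - name: app
    image: nginx              # example image
    resources:
      requests:
        memory: "256Mi"
        cpu: "250m"
      limits:
        memory: "256Mi"
        cpu: "250m"
---
# Burstable: at least one request set, limits higher or unset
apiVersion: v1
kind: Pod
metadata:
  name: burstable-pod
spec:
  containers:
  - name: app
    image: nginx
    resources:
      requests:
        memory: "128Mi"
---
# BestEffort: no requests or limits at all -> evicted first
apiVersion: v1
kind: Pod
metadata:
  name: besteffort-pod
spec:
  containers:
  - name: app
    image: nginx
```

You can check the class assigned to a running pod with `kubectl get pod <name> -o jsonpath='{.status.qosClass}'`.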

When a pod is evicted, you will see that it is terminated with the reason “Evicted”. The pod object is still present, so you can inspect it for debugging, but it is no longer running.
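As an illustration, assuming a cluster where an eviction has occurred (pod name is a placeholder), you could inspect it like this:

```shell
# List pods; an evicted pod typically shows STATUS "Evicted"
# with 0/1 ready containers.
kubectl get pods

# Inspect the recorded reason and message, e.g.
# "The node was low on resource: memory."
kubectl describe pod <pod-name>

# Evicted pods are not garbage-collected immediately,
# so clean up manually once you are done debugging.
kubectl delete pod <pod-name>
```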

Also, if the pod was managed by a Deployment, ReplicaSet, or similar controller, a replacement pod will be created, and you will see it with a more recent “Age”.

Note that in any case it is a best practice to always set memory limits on your containers. If a container exceeds the limit you set, it gets OOM-killed by the kernel. In that case the pod remains and the container is restarted in place: you won’t see a new pod, just an incremented “Restarts” count.
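A minimal sketch of such a spec (name, image, and values are hypothetical; adjust to your workload):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: limited-pod           # hypothetical name
spec:
  containers:
  - name: app
    image: myapp:latest       # example image
    resources:
      requests:
        memory: "128Mi"       # what the scheduler reserves on the node
      limits:
        memory: "256Mi"       # hard cap: exceeding this gets the
                              # container OOM-killed by the kernel
```

After such an OOM kill, `kubectl get pods` shows the same pod with its `RESTARTS` count incremented, and `kubectl describe pod` shows the container’s last state as `Terminated` with reason `OOMKilled`.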