How to Restart a Deployment in Kubernetes

In Kubernetes, restarting a deployment is a common way to refresh application pods, apply updated configurations, or recover from a stuck pod without changing the deployment definition itself. Here's everything you need to know about restarting deployments in Kubernetes.

Why Restart a Deployment?

There are several scenarios where restarting a deployment is useful:

  • Apply updated ConfigMaps or Secrets: When a ConfigMap or Secret changes, running pods don't automatically pick up the new values, so a restart is needed to load them.
  • Recover a pod that is stuck or in an unstable state: If pods are failing, stuck in a crash loop, or otherwise misbehaving, restarting can bring them back to a healthy state.
  • Force application refresh: In cases where you need to reload app code, even if the image tag hasn’t changed.
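The ConfigMap scenario above can be sketched as follows. This is a hedged example, not a fixed recipe: the names `app-config`, `LOG_LEVEL`, and `my-app` are placeholders for your own resources.

```shell
# Update the ConfigMap in place (create-or-update pattern)
kubectl create configmap app-config \
  --from-literal=LOG_LEVEL=debug \
  --dry-run=client -o yaml | kubectl apply -f -

# Running pods keep the old values until they are restarted
kubectl rollout restart deployment my-app
```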

How to Restart a Deployment

Chances are you don't run bare Pods by themselves; instead, a Deployment and its ReplicaSet make sure your pods stay running. To restart all the pods that a deployment manages using kubectl, run:

kubectl rollout restart deployment <deployment-name>

Example:

kubectl rollout restart deployment my-app

This command triggers a rolling restart, terminating old pods and gradually spinning up new ones while maintaining availability.

What Happens Behind the Scenes?

The rollout restart command updates the deployment's spec.template.metadata.annotations field with a new timestamp (the kubectl.kubernetes.io/restartedAt annotation). This forces Kubernetes to treat the deployment as updated, triggering the rolling update strategy defined in your deployment.
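After issuing the restart, you can watch it complete and inspect the annotation it added. A small sketch, assuming a deployment named `my-app`:

```shell
# Block until the rolling restart finishes (or fails)
kubectl rollout status deployment my-app

# Inspect the pod-template annotations; after a restart you should
# see kubectl.kubernetes.io/restartedAt with a fresh timestamp
kubectl get deployment my-app \
  -o jsonpath='{.spec.template.metadata.annotations}'
```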

Other Ways to Trigger a Restart

While kubectl rollout restart is the cleanest approach, here are some alternatives:

1. Patch the Deployment with a Dummy Change

kubectl patch deployment <deployment-name> -p \
"{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$(date +%s)\"}}}}}"

This manually simulates what rollout restart does under the hood.
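Because the inline escaping above is easy to get wrong, one variant (a sketch; `my-app` is a placeholder deployment name) is to build the payload in a variable first:

```shell
# Build the patch payload separately to avoid fragile inline escaping;
# printf substitutes the current Unix timestamp into the annotation value
payload=$(printf '{"spec":{"template":{"metadata":{"annotations":{"kubectl.kubernetes.io/restartedAt":"%s"}}}}}' "$(date +%s)")

kubectl patch deployment my-app -p "$payload"
```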

2. Update an Environment Variable

Changing an environment variable alters the pod template, which triggers a rollout. Note that shell substitutions like $(date +%s) are not expanded inside a YAML manifest, so the value must be a new literal each time:

env:
  - name: FORCE_RESTART
    value: "restart-1"  # bump this value whenever you want a restart

3. Delete Pods (Not Recommended: May Cause Downtime)

kubectl delete pod -l app=<app-label>

This will force the deployment to recreate the pods, but it's not a rolling restart and may cause downtime.
