
fix: rollout restart to avoid stale pod spec on redeploying with manifests#141

Merged
vigneshrajsb merged 1 commit into main from fix-k8s-manifest-apply
Mar 24, 2026

Conversation


@vigneshrajsb vigneshrajsb commented Mar 24, 2026

What

Adds a kubectl rollout restart after kubectl apply in the manifest deploy job.

Problem

When redeploying a manifest service (e.g. after nuking and recreating a dependency like a database), kubectl apply is a no-op at the pod level if the pod template spec hasn't changed — same image, same env vars. Kubernetes sees the existing ReplicaSet already satisfies the desired state and skips pod replacement entirely.

This is a problem when the pod template is unchanged but a dependency has changed — specifically, init containers that seed or migrate a database never re-run because no new pod is ever created.

Fix

After kubectl apply, if the deployment exists, unconditionally run kubectl rollout restart before waiting on kubectl rollout status. The restart patches the kubectl.kubernetes.io/restartedAt annotation on the pod template, guaranteeing that new pods are always created and init containers always execute on every manifest deploy.

If the pod template spec did change (new image, new env), this results in a second rolling update — harmless, just slightly redundant.
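A minimal sketch of the deploy step this PR describes; the manifest path, deployment name, and namespace below are illustrative placeholders, not the actual values used in the job:

```shell
#!/usr/bin/env sh
set -eu

# Placeholders -- the real job supplies these from its own config.
NAMESPACE="my-namespace"
DEPLOYMENT="my-service"

# Apply the manifests. This is a no-op at the pod level when the
# pod template spec (image, env vars, etc.) is unchanged.
kubectl apply -n "$NAMESPACE" -f manifests/

# If the deployment exists, force a restart so new pods are created
# and init containers re-run even with an unchanged template.
if kubectl get deployment "$DEPLOYMENT" -n "$NAMESPACE" >/dev/null 2>&1; then
  kubectl rollout restart deployment "$DEPLOYMENT" -n "$NAMESPACE"
fi

# Wait for the (possibly restarted) rollout to complete.
kubectl rollout status deployment "$DEPLOYMENT" -n "$NAMESPACE" --timeout=5m
```

The `kubectl get` guard keeps the restart from failing the job on a first-time deploy, where the deployment may not exist yet when `apply` runs against other resource kinds.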

Test plan

  • Redeploy a manifest service where the pod template is unchanged — verify new pods are created and init container runs
  • Redeploy a manifest service where the image changed — verify rollout still completes successfully (no double-failure)

@vigneshrajsb vigneshrajsb requested a review from a team as a code owner March 24, 2026 01:25
@vigneshrajsb vigneshrajsb merged commit 9e9c384 into main Mar 24, 2026
1 check passed