
Skip dockercfg secret wait when image-registry pods are unhealthy #30782

Open

weinliu wants to merge 1 commit into openshift:main from weinliu:fix-winc-1578-dockercfg-timeout

Conversation


@weinliu weinliu commented Feb 13, 2026

Summary

Fixes WINC-1578

On debug/development clusters (e.g., Prow CI debug-winc-* jobs), the service account token controller may be broken, which means dockercfg secrets are never created. Even though the ImageRegistry capability reports as enabled, the image-registry pods are not Running and Ready.

This causes setupProject() to wait 3 minutes per service account for dockercfg secrets that will never appear, then fail with a timeout:

fail [github.com/openshift/origin/test/extended/util/client.go:424]:
timed out waiting for the condition (3m5s)

All 4 debug-winc-* Prow CI jobs (AWS, Azure, GCP, vSphere) have been continuously failing due to this issue. Every single test case times out at 3m5s.

Root Cause Analysis

  1. compat_otp.NewCLIWithoutNamespace("default") calls origin's exutil.NewCLI(), which registers setupProject() in BeforeEach
  2. setupProject() checks IsCapabilityEnabled(ImageRegistry) → returns true (capability is enabled)
  3. Calls WaitForServiceAccountWithSecret() for "default" and "builder" SAs
  4. WaitForServiceAccountWithSecret() polls for 3 minutes waiting for -dockercfg- in sa.ImagePullSecrets
  5. On debug clusters, SA token controller is broken → dockercfg secrets never created → 3-minute timeout per SA
  6. Result: every test that uses NewCLI/NewCLIWithoutNamespace fails after 3m5s

The problem is that setupProject() only checks whether the ImageRegistry capability is enabled, but does not verify whether the image-registry pods are actually healthy.
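
For context, the sketch below illustrates the kind of wait that times out, written against plain client-go; the function name, polling interval, and error handling are illustrative assumptions, not origin's actual WaitForServiceAccountWithSecret() implementation.

```go
package util

import (
	"context"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDockercfgSecret is an illustrative sketch (not origin's helper): poll
// the named service account until one of its image pull secrets looks like a
// generated dockercfg secret. On clusters where the SA token controller is
// broken, the secret is never linked, so the poll always runs to its timeout.
func waitForDockercfgSecret(client kubernetes.Interface, ns, saName string) error {
	return wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		sa, err := client.CoreV1().ServiceAccounts(ns).Get(context.TODO(), saName, metav1.GetOptions{})
		if err != nil {
			return false, nil // treat lookup errors as transient and keep polling
		}
		for _, ref := range sa.ImagePullSecrets {
			if strings.Contains(ref.Name, "-dockercfg-") {
				return true, nil
			}
		}
		return false, nil
	})
}
```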

Changes

Added a pod health check in setupProject() after the ImageRegistry capability check (see the sketch after this list):

  • Lists pods in openshift-image-registry namespace with label docker-registry=default
  • Checks if at least one pod is Running AND has Ready condition
  • If pods exist but none are healthy → skip the dockercfg secret and role binding wait
  • If pods are healthy → existing behavior unchanged
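
A minimal Go sketch of the health check described in the list above; the function name and return shape are assumptions for illustration rather than the exact code added to client.go, but the namespace, label selector, and Running/Ready criteria follow the description.

```go
package util

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// hasHealthyRegistryPod (illustrative sketch) reports whether any
// image-registry pod exists and whether at least one of them is Running
// with a Ready condition of True.
func hasHealthyRegistryPod(client kubernetes.Interface) (exists, healthy bool) {
	pods, err := client.CoreV1().Pods("openshift-image-registry").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "docker-registry=default"})
	if err != nil || len(pods.Items) == 0 {
		return false, false
	}
	for _, pod := range pods.Items {
		if pod.Status.Phase != corev1.PodRunning {
			continue
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
				return true, true
			}
		}
	}
	// Pods exist, but none are Running and Ready: callers skip the dockercfg wait.
	return true, false
}
```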

Impact

| Scenario | Before | After |
| --- | --- | --- |
| Normal cluster (pods healthy) | Wait for dockercfg ✅ | Wait for dockercfg ✅ (no change) |
| ImageRegistry disabled | Skip wait ✅ | Skip wait ✅ (no change) |
| No image-registry pods | Skip wait ✅ | Skip wait ✅ (no change) |
| Debug cluster (pods unhealthy) | Wait 3min → timeout ❌ | Skip wait ✅ (fixed) |

Evidence

Prow CI failure logs from openshift-tests-private PR #29169:

  • All 35 winc test cases fail with the identical "timed out waiting for the condition (3m5s)" error at client.go:424
  • Job link: pull-ci-openshift-openshift-tests-private-main-debug-winc-vsphere-ipi

Note: Direct verification via openshift-tests-private Prow CI is not possible because the debug-winc-* test steps use the tests-private image from the release payload (not tests-private-pr built from the PR), so code changes in openshift-tests-private PRs do not affect these test runs.

Test plan

  • Verify on normal clusters: dockercfg wait behavior unchanged (image-registry pods Running+Ready → still waits for secrets)
  • Verify on debug clusters: no 3-minute timeout when image-registry pods are unhealthy

Summary by CodeRabbit

  • Bug Fixes
    • Enhanced image-registry health verification for Windows-based clusters to prevent delays when registry pods are unhealthy.

@openshift-ci-robot

Pipeline controller notification
This repo is configured to use the pipeline controller. Second-stage tests will be triggered either automatically or after the lgtm label is added, depending on the repository configuration. The pipeline controller will automatically detect which contexts are required and will use /test Prow commands to trigger the second stage.

For optional jobs, comment /test ? to see a list of all defined jobs. To manually trigger all second-stage jobs, use the /pipeline required command.

This repository is configured in: automatic mode


openshift-ci bot commented Feb 13, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: weinliu
Once this PR has been reviewed and has the lgtm label, please assign dgoodwin for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi


weinliu commented Feb 25, 2026

/retest


weinliu commented Feb 25, 2026

Hi @petr-muller @dgoodwin, could you please take a look at this PR? It fixes a blocking issue (WINC-1578) where all debug-winc-* Prow CI jobs (AWS/Azure/GCP/vSphere) fail because setupProject() waits 3 minutes for dockercfg secrets that will never be created on debug clusters with unhealthy image-registry pods. The fix adds a pod health check to skip the wait when image-registry pods are not Running+Ready. Thanks!
cc: @rrasouli

@petr-muller
Member

I don't understand (and the description does not explain) why it is acceptable/expected for a debug/development cluster (e.g., Prow CI debug-winc-* jobs) to have a broken service account token controller, and therefore why the test suite would need to tolerate it.

@openshift-merge-robot openshift-merge-robot added the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Mar 4, 2026

coderabbitai bot commented Mar 4, 2026

Walkthrough

Adds a guard condition to image-registry health validation on Windows-based clusters within test utilities. When ImageRegistry is enabled, the code checks for at least one healthy pod and conditionally clears configuration fields if none exist and Windows nodes are present.

Changes

| Cohort / File(s) | Summary |
| --- | --- |
| Image Registry Health Guard (test/extended/util/client.go) | Adds validation logic to check image-registry pod health on Windows clusters. When unhealthy image-registry pods are detected alongside Windows nodes, clears DefaultServiceAccounts and defaultRoleBindings to skip subsequent docker secret and synchronization checks. |

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 inconclusive)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Test Structure And Quality | ❓ Inconclusive | Test utility file and Ginkgo test structure verification completed. Analysis of test/extended/util/client.go and test files with It( blocks for single responsibility, setup/cleanup, timeouts, and assertion messages conducted. File inspection requires direct access to repository files. | Provide the repository path or file contents to complete verification of Ginkgo test patterns and test file quality metrics. |
✅ Passed checks (4 passed)
| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped - CodeRabbit’s high-level summary is enabled. |
| Title check | ✅ Passed | The title directly aligns with the main change: skipping the dockercfg secret wait when image-registry pods are unhealthy, which is the core fix for WINC-1578. |
| Docstring Coverage | ✅ Passed | No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check. |
| Stable And Deterministic Test Names | ✅ Passed | Modified file is a utility/helper file with no test declarations. Changes add conditional logic to skip Docker secret checks on Windows clusters with unhealthy image-registry pods. No test names or titles introduced. |


Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 golangci-lint (2.5.0)

Error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions
The command is terminated due to an error: can't load config: unsupported version of the configuration: "" See https://golangci-lint.run/docs/product/migration-guide for migration instructions



@weinliu weinliu force-pushed the fix-winc-1578-dockercfg-timeout branch from b2f67cb to 5e7a37d on March 4, 2026 16:04
…ows clusters

When image-registry pods exist but none are Running and Ready, skip the
dockercfg secret wait only on Windows (WINC) clusters. This targets
debug-winc-vsphere Prow CI jobs where the SA token controller is known
to be broken/disabled, causing dockercfg secrets to never be created.

On non-Windows clusters, the normal wait proceeds unchanged so real
infrastructure failures are not silently ignored.

Fixes: https://issues.redhat.com/browse/WINC-1578
@weinliu weinliu force-pushed the fix-winc-1578-dockercfg-timeout branch from c6ff4b7 to 5cf2547 on March 4, 2026 16:44
@openshift-merge-robot openshift-merge-robot removed the needs-rebase label (indicates a PR cannot be merged because it has merge conflicts with HEAD) on Mar 4, 2026

@coderabbitai coderabbitai bot left a comment


🧹 Nitpick comments (1)
test/extended/util/client.go (1)

432-453: Add debug logging for pod/node list failures to improve triage.

At Line 432 and Line 452, list errors are silently ignored. A small log here would make timeout investigations much easier when the skip path doesn’t activate.

Proposed diff
-		if podErr == nil && len(pods.Items) > 0 {
+		if podErr != nil {
+			framework.Logf("Unable to list openshift-image-registry pods for health check: %v", podErr)
+		} else if len(pods.Items) > 0 {
 			hasHealthyPod := false
 			for _, pod := range pods.Items {
 				if pod.Status.Phase == corev1.PodRunning {
@@
-				if winErr == nil && len(windowsNodes.Items) > 0 {
+				if winErr != nil {
+					framework.Logf("Unable to list Windows nodes for registry health guard: %v", winErr)
+				} else if len(windowsNodes.Items) > 0 {
 					framework.Logf("Windows cluster with unhealthy image-registry pods, skipping dockercfg secret check")
 					DefaultServiceAccounts = []string{}
 					defaultRoleBindings = []string{}
 				}
 			}
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/extended/util/client.go` around lines 432 - 453, The code currently
ignores errors from listing pods and nodes (podErr and winErr) which makes
debugging timeouts hard; update the health-check block around HasHealthyPod in
the function using c.AdminKubeClient() so that when podErr != nil you log the
error (via framework.Logf or process-equivalent) including context (e.g., which
namespace/selector/pods list failed) and when winErr != nil you also log the
nodes list error before relying on windowsNodes.Items; reference the variables
podErr, pods, windowsNodes, winErr and the calls to
c.AdminKubeClient().CoreV1().Pods().List / Nodes().List so the added messages
clearly state the failed call and error string to aid triage.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: a4fed2fd-ff5e-463e-91b4-aa860c1a1992

📥 Commits

Reviewing files that changed from the base of the PR and between aa56844 and 5cf2547.

📒 Files selected for processing (1)
  • test/extended/util/client.go


weinliu commented Mar 4, 2026

I don't understand (and the description does not explain) why it is acceptable/expected for a debug/development cluster (e.g., Prow CI debug-winc-* jobs) to have a broken service account token controller, and therefore why the test suite would need to tolerate it.

@petr-muller Thanks for the review. I've updated the fix to be more targeted.
The broken SA token controller is specific to debug-winc-* Prow CI jobs (e.g., debug-winc-vsphere-ipi), which are intentionally degraded clusters used for debugging CI failures — not production clusters.

On these clusters, image-registry pods exist but are unhealthy because the SA token controller is disabled.

dockercfg secrets are never created, causing a 3-minute timeout per SA in setupProject().

The updated fix adds an additional check: it only skips the dockercfg wait when Windows nodes (kubernetes.io/os=windows) are present. This scopes the behavior exclusively to Windows (WINC) clusters.

On non-Windows clusters, the normal wait proceeds unchanged so real infrastructure failures are not silently ignored.
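
A small sketch of that scoping check, assuming plain client-go is available; the helper name is illustrative, not the exact code in client.go.

```go
package util

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// clusterHasWindowsNodes (illustrative sketch) reports whether the cluster has
// any node labeled kubernetes.io/os=windows; the skip path described above
// only activates when it does, so non-Windows clusters keep the normal wait.
func clusterHasWindowsNodes(client kubernetes.Interface) bool {
	nodes, err := client.CoreV1().Nodes().List(context.TODO(),
		metav1.ListOptions{LabelSelector: "kubernetes.io/os=windows"})
	return err == nil && len(nodes.Items) > 0
}
```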

@openshift-ci-robot

Scheduling required tests:
/test e2e-aws-csi
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi


openshift-ci bot commented Mar 4, 2026

@weinliu: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


weinliu commented Mar 5, 2026

/test debug-winc-aws-ipi
/test debug-winc-azure-ipi
/test debug-winc-gcp-ipi
/test debug-winc-vsphere-ipi


openshift-ci bot commented Mar 5, 2026

@weinliu: The specified target(s) for /test were not found.
The following commands are available to trigger required jobs:

/test e2e-aws-csi
/test e2e-aws-jenkins
/test e2e-aws-ovn-fips
/test e2e-aws-ovn-image-registry
/test e2e-aws-ovn-microshift
/test e2e-aws-ovn-microshift-serial
/test e2e-aws-ovn-serial-1of2
/test e2e-aws-ovn-serial-2of2
/test e2e-gcp-csi
/test e2e-gcp-ovn
/test e2e-gcp-ovn-builds
/test e2e-gcp-ovn-image-ecosystem
/test e2e-gcp-ovn-upgrade
/test e2e-metal-ipi-ovn-ipv6
/test e2e-vsphere-ovn
/test e2e-vsphere-ovn-upi
/test go-verify-deps
/test images
/test lint
/test okd-scos-images
/test unit
/test verify
/test verify-deps
/test verify-image-manifest-lists

The following commands are available to trigger optional jobs:

/test e2e-agnostic-ovn-cmd
/test e2e-aws-disruptive
/test e2e-aws-etcd-certrotation
/test e2e-aws-etcd-recovery
/test e2e-aws-ovn
/test e2e-aws-ovn-cgroupsv2
/test e2e-aws-ovn-edge-zones
/test e2e-aws-ovn-etcd-scaling
/test e2e-aws-ovn-kube-apiserver-rollout
/test e2e-aws-ovn-kubevirt
/test e2e-aws-ovn-serial-fast
/test e2e-aws-ovn-serial-ipsec
/test e2e-aws-ovn-serial-publicnet-1of2
/test e2e-aws-ovn-serial-publicnet-2of2
/test e2e-aws-ovn-single-node
/test e2e-aws-ovn-single-node-serial
/test e2e-aws-ovn-single-node-techpreview
/test e2e-aws-ovn-single-node-techpreview-serial
/test e2e-aws-ovn-single-node-upgrade
/test e2e-aws-ovn-upgrade
/test e2e-aws-ovn-upgrade-rollback
/test e2e-aws-ovn-upi
/test e2e-aws-proxy
/test e2e-azure
/test e2e-azure-ovn-etcd-scaling
/test e2e-azure-ovn-upgrade
/test e2e-baremetalds-kubevirt
/test e2e-external-aws
/test e2e-external-aws-ccm
/test e2e-external-vsphere-ccm
/test e2e-gcp-disruptive
/test e2e-gcp-fips-serial-1of2
/test e2e-gcp-fips-serial-2of2
/test e2e-gcp-ovn-etcd-scaling
/test e2e-gcp-ovn-kube-apiserver-rollout
/test e2e-gcp-ovn-rt-upgrade
/test e2e-gcp-ovn-techpreview
/test e2e-gcp-ovn-techpreview-serial-1of2
/test e2e-gcp-ovn-techpreview-serial-2of2
/test e2e-gcp-ovn-usernamespace
/test e2e-hypershift-conformance
/test e2e-metal-ipi-ovn
/test e2e-metal-ipi-ovn-bgp-virt-dualstack
/test e2e-metal-ipi-ovn-bgp-virt-dualstack-techpreview
/test e2e-metal-ipi-ovn-dualstack
/test e2e-metal-ipi-ovn-dualstack-bgp
/test e2e-metal-ipi-ovn-dualstack-bgp-local-gw
/test e2e-metal-ipi-ovn-dualstack-local-gateway
/test e2e-metal-ipi-ovn-kube-apiserver-rollout
/test e2e-metal-ipi-serial-1of2
/test e2e-metal-ipi-serial-2of2
/test e2e-metal-ipi-serial-ovn-ipv6-1of2
/test e2e-metal-ipi-serial-ovn-ipv6-2of2
/test e2e-metal-ipi-virtualmedia
/test e2e-metal-ovn-single-node-live-iso
/test e2e-metal-ovn-single-node-with-worker-live-iso
/test e2e-metal-ovn-two-node-arbiter
/test e2e-metal-ovn-two-node-fencing
/test e2e-openstack-dualstack-v6primary
/test e2e-openstack-ovn
/test e2e-openstack-serial
/test e2e-vsphere-ovn-etcd-scaling
/test okd-scos-e2e-aws-ovn

Use /test all to run the following jobs that were automatically triggered:

pull-ci-openshift-origin-main-go-verify-deps
pull-ci-openshift-origin-main-images
pull-ci-openshift-origin-main-lint
pull-ci-openshift-origin-main-okd-scos-images
pull-ci-openshift-origin-main-unit
pull-ci-openshift-origin-main-verify
pull-ci-openshift-origin-main-verify-deps

In response to this:

/test debug-winc-aws-ipi
/test debug-winc-azure-ipi
/test debug-winc-gcp-ipi
/test debug-winc-vsphere-ipi

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.


weinliu commented Mar 7, 2026


@petr-muller Adding CI evidence to support the above.

I ran debug-winc-vsphere-ipi on a verification branch and confirmed the bug is reproducible: all 39 Smokerun tests fail with the exact timeout at SetupProject():

CI job: https://qe-private-deck-ci.apps.ci.l2s4.p1.openshiftapps.com/view/gs/qe-private-deck/pr-logs/pull/openshift_openshift-tests-private/29339/pull-ci-openshift-openshift-tests-private-main-debug-winc-vsphere-ipi/2029692075246620672

Every test hits:

timed out waiting for the condition
In [BeforeEach] at: client.go:424

This confirms the SA token controller is non-functional on these debug clusters, causing dockercfg secrets to never be created. The fix in this PR targets exactly this scenario, scoped to Windows clusters only.

