| periodic-ci-openshift-release-main-ci-4.12-upgrade-from-stable-4.11-e2e-azure-ovn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact | |||
| #2031474817374359552 | junit | 27 minutes ago | |
Mar 10 23:39:02.623 E clusterversion/version changed Failing to True: MultipleErrors: Multiple errors are preventing progress:\n* Cluster operator openshift-apiserver is updating versions\n* Cluster operator openshift-controller-manager is not available
Mar 10 23:40:07.372 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 23:40:07.372 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 23:41:00.434 E ns/openshift-cluster-csi-drivers pod/azure-file-csi-driver-node-tf6pw node/ci-op-31hf0hbw-69edd-5zl5k-worker-eastus22-59mvc uid/6cfa7651-04a1-4dc7-a066-4306997d53e2 container/csi-driver reason/ContainerExit code/137 cause/ContainerStatusUnknown The container could not be located when the pod was deleted. The container used to be Running
| |||
| #2031474817374359552 | junit | 27 minutes ago | |
Mar 10 23:40:07.372 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
1 tests failed during this blip (2026-03-10 23:40:07.372565211 +0000 UTC to 2026-03-10 23:40:07.372565211 +0000 UTC): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial]
| |||
| periodic-ci-openshift-release-main-ci-4.12-upgrade-from-stable-4.11-e2e-aws-ovn-upgrade (all) - 5 runs, 60% failed, 167% of failures match = 100% impact | |||
| #2031475211219505152 | junit | About an hour ago | |
Mar 10 22:45:06.423 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-67b9dfb668-p7swt node/ip-10-0-177-164.us-west-2.compute.internal uid/b394cf64-899c-41ec-b174-2118f6392345 container/aws-ebs-csi-driver-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 10 22:46:20.179 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 22:46:20.179 - 4s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 22:47:15.512 E ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-223-149.us-west-2.compute.internal uid/885c5bd2-e5cf-4165-acf9-bb42565427e8 container/alertmanager-proxy reason/ContainerExit code/2 cause/Error 2026/03/10 22:42:00 provider.go:129: Defaulting client-id to system:serviceaccount:openshift-monitoring:alertmanager-main\n2026/03/10 22:42:00 provider.go:134: Defaulting client-secret to service account token /var/run/secrets/kubernetes.io/serviceaccount/token\n2026/03/10 22:42:00 provider.go:358: Delegation of authentication and authorization to OpenShift is enabled for bearer tokens and client certificates.\n2026/03/10 22:42:00 oauthproxy.go:203: mapping path "/" => upstream "http://localhost:9093/"\n2026/03/10 22:42:00 oauthproxy.go:230: OAuthProxy configured for Client ID: system:serviceaccount:openshift-monitoring:alertmanager-main\n2026/03/10 22:42:00 oauthproxy.go:240: Cookie settings: name:_oauth_proxy secure(https):true httponly:true expiry:168h0m0s domain:<default> samesite: refresh:disabled\n2026/03/10 22:42:00 http.go:107: HTTPS: listening on [::]:9095\nI0310 22:42:00.393140 1 dynamic_serving_content.go:130] Starting serving::/etc/tls/private/tls.crt::/etc/tls/private/tls.key\n
| |||
| #2031475211219505152 | junit | About an hour ago | |
Mar 10 22:46:20.179 - 4s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout | |||
| #2031385728725815296 | junit | 7 hours ago | |
Mar 10 16:58:27.540 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-67b9dfb668-5zcv2 node/ip-10-0-201-247.us-west-1.compute.internal uid/4c0fbb13-02df-4f30-b3ed-40cfa2cbbe11 container/aws-ebs-csi-driver-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 10 16:59:44.247 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 16:59:44.247 - 4s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 17:00:39.898 E ns/openshift-cloud-network-config-controller pod/cloud-network-config-controller-f6566469d-tgpn2 node/ip-10-0-137-84.us-west-1.compute.internal uid/9a63d687-68ad-4db4-8641-03864d54c791 container/controller reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
| |||
| #2031385728725815296 | junit | 7 hours ago | |
Mar 10 16:59:44.247 - 4s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout | |||
| #2031287763445223424 | junit | 14 hours ago | |
Mar 10 09:48:32.189 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-controller-6bb858ddd6-g75rz node/ip-10-0-167-159.ec2.internal uid/cd166e80-6501-4271-b15b-c3110773aedc container/csi-snapshotter reason/ContainerExit code/2 cause/Error
Mar 10 09:48:37.336 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 09:48:37.336 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 09:48:39.452 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-node-njvhw node/ip-10-0-224-120.ec2.internal uid/8887bab2-9668-40d7-aa61-76be595c4623 container/csi-driver reason/ContainerExit code/2 cause/Error
| |||
| #2031287763445223424 | junit | 14 hours ago | |
Mar 10 10:26:49.421 E ns/openshift-dns pod/dns-default-r7v6v node/ip-10-0-174-81.ec2.internal uid/1da08d99-efb1-4ce0-941d-b1dcd1a9c52b container/kube-rbac-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 10 10:26:51.352 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1"
Mar 10 10:26:51.352 - 48ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1"
Mar 10 10:26:51.458 E ns/openshift-e2e-loki pod/loki-promtail-49kgh node/ip-10-0-174-81.ec2.internal uid/1a8642b3-497b-4f56-b900-e65255ce79f5 container/oauth-proxy reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
| |||
| #2031287763445223424 | junit | 14 hours ago | |
Mar 10 09:48:37.336 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 10:26:51.352 - 48ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1"
| |||
| #2031232638257205248 | junit | 18 hours ago | |
Mar 10 06:14:46.189 E ns/openshift-cluster-csi-drivers pod/aws-ebs-csi-driver-operator-67b9dfb668-s8g5n node/ip-10-0-177-119.us-west-1.compute.internal uid/94eed146-2b22-4ad8-a2f9-bb46bcf65428 container/aws-ebs-csi-driver-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 10 06:16:21.352 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 06:16:21.352 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 06:17:03.262 E ns/openshift-network-operator pod/network-operator-5c88dc4978-swhbq node/ip-10-0-165-61.us-west-1.compute.internal uid/cc2bf053-44f6-4add-b6d2-fc2a33fab2c0 container/network-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
| |||
| #2031232638257205248 | junit | 18 hours ago | |
Mar 10 06:16:21.352 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
1 tests failed during this blip (2026-03-10 06:16:21.352367518 +0000 UTC to 2026-03-10 06:16:21.352367518 +0000 UTC): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial]
| |||
| #2031183816151797760 | junit | 21 hours ago | |
Mar 10 02:57:31.668 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-db4695c87-bdf67 node/ip-10-0-171-212.us-west-2.compute.internal uid/98f30c40-0f22-47ba-8a4a-9af4fb587ece container/snapshot-controller reason/ContainerExit code/2 cause/Error
Mar 10 02:58:40.472 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 02:58:40.472 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 02:58:44.086 E ns/openshift-cloud-network-config-controller pod/cloud-network-config-controller-6f755cc65-9pxdw node/ip-10-0-136-252.us-west-2.compute.internal uid/8f80dd79-0a48-4184-a083-89a9ebac4421 container/controller reason/ContainerExit code/1 cause/Error node workqueue\nI0310 02:08:54.812877 1 node_controller.go:146] Setting annotation: 'cloud.network.openshift.io/egress-ipconfig: [{"interface":"eni-09681a01b0273890b","ifaddr":{"ipv4":"10.0.192.0/18"},"capacity":{"ipv4":14,"ipv6":15}}]' on node: ip-10-0-203-71.us-west-2.compute.internal\nI0310 02:08:54.846962 1 controller.go:160] Dropping key 'ip-10-0-203-71.us-west-2.compute.internal' from the node workqueue\nI0310 02:08:54.902420 1 controller.go:182] Assigning key: ip-10-0-203-71.us-west-2.compute.internal to node workqueue\nI0310 02:08:54.902582 1 controller.go:160] Dropping key 'ip-10-0-203-71.us-west-2.compute.internal' from the node workqueue\nI0310 02:08:59.579502 1 controller.go:182] Assigning key: ip-10-0-203-71.us-west-2.compute.internal to node workqueue\nI0310 02:08:59.579677 1 controller.go:160] Dropping key 'ip-10-0-203-71.us-west-2.compute.internal' from the node workqueue\nI0310 02:09:17.084227 1 controller.go:182] Assigning key: ip-10-0-203-71.us-west-2.compute.internal to node workqueue\nI0310 02:09:17.084263 1 controller.go:160] Dropping key 'ip-10-0-203-71.us-west-2.compute.internal' from the node workqueue\nI0310 02:10:00.964544 1 controller.go:182] Assigning key: ip-10-0-203-71.us-west-2.compute.internal to node workqueue\nI0310 02:10:00.964683 1 controller.go:160] Dropping key 'ip-10-0-203-71.us-west-2.compute.internal' from the node workqueue\nI0310 02:10:01.501468 1 controller.go:182] Assigning key: ip-10-0-203-71.us-west-2.compute.internal to node workqueue\nI0310 02:10:01.501598 1 controller.go:160] Dropping key 'ip-10-0-203-71.us-west-2.compute.internal' from the node workqueue\nI0310 02:58:42.789219 1 controller.go:104] Shutting down node workers\nI0310 02:58:42.789221 1 
controller.go:104] Shutting down cloud-private-ip-config workers\nI0310 02:58:42.789226 1 controller.go:104] Shutting down secret workers\nI0310 02:58:42.794584 1 main.go:161] Stopped leading, sending SIGTERM and shutting down controller\n
| |||
| #2031183816151797760 | junit | 21 hours ago | |
Mar 10 03:30:38.017 - 3s E clusteroperator/authentication condition/Available status/False reason/APIServerDeploymentAvailable: deployment/openshift-oauth-apiserver: could not be retrieved
Mar 10 03:30:39.050 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: an error on the server ("Internal Server Error: \"/apis/packages.operators.coreos.com/v1\": Post \"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 172.30.0.1:443: connect: connection refused") has prevented the request from succeeding
Mar 10 03:30:39.050 - 80ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: an error on the server ("Internal Server Error: \"/apis/packages.operators.coreos.com/v1\": Post \"https://172.30.0.1:443/apis/authorization.k8s.io/v1/subjectaccessreviews?timeout=10s\": dial tcp 172.30.0.1:443: connect: connection refused") has prevented the request from succeeding
Mar 10 03:30:52.109 E clusteroperator/etcd condition/Degraded status/True reason/ClusterMemberController_SyncError::EtcdEndpoints_ErrorUpdatingEtcdEndpoints::EtcdMembers_UnhealthyMembers::NodeController_MasterNodesReady changed: ClusterMemberControllerDegraded: unhealthy members found during reconciling members\nEtcdEndpointsDegraded: EtcdEndpointsController can't evaluate whether quorum is safe: etcd cluster has quorum of 2 and 2 healthy members which is not fault tolerant: [{Member:ID:4074424128636643037 name:"ip-10-0-220-31.us-west-2.compute.internal" peerURLs:"https://10.0.220.31:2380" clientURLs:"https://10.0.220.31:2379" Healthy:true Took:2.540959ms Error:<nil>} {Member:ID:5353059919024207858 name:"ip-10-0-136-252.us-west-2.compute.internal" peerURLs:"https://10.0.136.252:2380" clientURLs:"https://10.0.136.252:2379" Healthy:true Took:2.707353ms Error:<nil>} {Member:ID:10857471464443629969 name:"ip-10-0-171-212.us-west-2.compute.internal" peerURLs:"https://10.0.171.212:2380" clientURLs:"https://10.0.171.212:2379" Healthy:false Took: Error:create client failure: failed to make etcd client for endpoints [https://10.0.171.212:2379]: context deadline exceeded}]\nEtcdMembersDegraded: 2 of 3 members are available, ip-10-0-171-212.us-west-2.compute.internal is unhealthy\nNodeControllerDegraded: The master nodes not ready: node "ip-10-0-171-212.us-west-2.compute.internal" not ready since 2026-03-10 03:29:02 +0000 UTC because NodeStatusUnknown (Kubelet stopped posting node status.)
| |||
| #2031183816151797760 | junit | 21 hours ago | |
Mar 10 03:32:46.000 - 1s E ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new reason/DisruptionBegan ns/openshift-image-registry route/test-disruption-new disruption/image-registry connection/new stopped responding to GET requests over new connections: Get "https://test-disruption-new-openshift-image-registry.apps.ci-op-gqdl5xlh-0667b.XXXXXXXXXXXXXXXXXXXXXX/healthz": read tcp 10.128.125.56:48068->34.208.213.200:443: read: connection reset by peer
Mar 10 03:32:46.498 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1"
Mar 10 03:32:46.498 - 82ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1"
Mar 10 03:32:47.000 - 1s E disruption/service-load-balancer-with-pdb connection/new reason/DisruptionBegan disruption/service-load-balancer-with-pdb connection/new stopped responding to GET requests over new connections: Get "http://ac58e0d6c122d437e9ea78de46b97fad-2121401200.us-west-2.elb.amazonaws.com:80/echo?msg=Hello": read tcp 10.128.125.56:48736->34.212.189.226:80: read: connection reset by peer
| |||
| periodic-ci-openshift-release-main-ci-4.12-upgrade-from-stable-4.11-e2e-azure-sdn-upgrade (all) - 8 runs, 88% failed, 71% of failures match = 63% impact | |||
| #2031475211278225408 | junit | 2 hours ago | |
Mar 10 22:19:08.326 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-k93qnywc-a0b82-lxbk6-worker-eastus3-rqm6l uid/cb360e7c-429c-4f5c-b9f0-3fd50dd01dca container/alertmanager reason/ContainerExit code/1 cause/Error ts=2026-03-10T22:18:49.746Z caller=main.go:231 level=info msg="Starting Alertmanager" version="(version=0.24.0, branch=release-4.12, revision=ff0bec9fc4cc6b0309546f37b810f57504a303b0)"\nts=2026-03-10T22:18:49.747Z caller=main.go:232 level=info build_context="(go=go1.19.13 X:strictfipsruntime, user=root@5d9c1349fa19, date=20260309-08:54:29)"\nts=2026-03-10T22:18:49.889Z caller=cluster.go:680 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s\nts=2026-03-10T22:18:49.984Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/config_out/alertmanager.env.yaml\nts=2026-03-10T22:18:49.985Z caller=coordinator.go:118 level=error component=configuration msg="Loading configuration file failed" file=/etc/alertmanager/config_out/alertmanager.env.yaml err="open /etc/alertmanager/config_out/alertmanager.env.yaml: no such file or directory"\nts=2026-03-10T22:18:49.985Z caller=cluster.go:689 level=info component=cluster msg="gossip not settled but continuing anyway" polls=0 elapsed=95.431548ms\n
Mar 10 22:20:41.278 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 22:20:41.278 - 6s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 22:21:19.613 E ns/openshift-cluster-csi-drivers pod/azure-disk-csi-driver-node-v9b9h node/ci-op-k93qnywc-a0b82-lxbk6-worker-eastus2-4ttp6 uid/262bac64-f62d-47a5-9707-6f244c96b032 container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
| |||
| #2031475211278225408 | junit | 2 hours ago | |
Mar 10 22:20:41.278 - 6s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
1 tests failed during this blip (2026-03-10 22:20:41.278409622 +0000 UTC to 2026-03-10 22:20:41.278409622 +0000 UTC): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial]
| |||
| #2031414414934020096 | junit | 6 hours ago | |
Mar 10 18:23:30.369 E ns/openshift-kube-storage-version-migrator pod/migrator-6f85b765fb-4n27f node/ci-op-676q07dk-a0b82-shjqh-master-1 uid/9ad7afdf-0755-4f8d-b60d-69076b62d806 container/migrator reason/ContainerExit code/2 cause/Error owcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+; use flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration\nW0310 17:22:44.527933 1 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+; use flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration\nW0310 17:22:44.533170 1 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+; use flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration\nW0310 17:22:44.547996 1 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+; use flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration\nW0310 17:22:44.559721 1 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+; use flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration\nW0310 17:22:44.570156 1 warnings.go:70] flowcontrol.apiserver.k8s.io/v1beta1 PriorityLevelConfiguration is deprecated in v1.23+, unavailable in v1.26+; use flowcontrol.apiserver.k8s.io/v1beta2 PriorityLevelConfiguration\nI0310 17:22:44.603536 1 kubemigrator.go:127] flowcontrol-prioritylevel-storage-version-migration: migration succeeded\nW0310 17:28:49.146464 1 reflector.go:436] k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167: watch of *v1alpha1.StorageVersionMigration ended with: very short watch: k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167: Unexpected watch close - watch lasted less than a second and no items received\nW0310 17:35:03.552190 1 reflector.go:436] 
k8s.io/client-go@v0.21.0/tools/cache/reflector.go:167: watch of *v1alpha1.StorageVersionMigration ended with: an error on the server ("unable to decode an event from the watch stream: http2: client connection lost") has prevented the request from succeeding\n
Mar 10 18:24:20.413 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 18:24:20.413 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 18:24:36.486 E ns/openshift-cluster-csi-drivers pod/azure-disk-csi-driver-node-576zh node/ci-op-676q07dk-a0b82-shjqh-worker-westus-rf2kv uid/fad2355c-5c3d-4ae0-826d-db845d972c0a container/csi-driver reason/ContainerExit code/2 cause/Error
| |||
| #2031414414934020096 | junit | 6 hours ago | |
Mar 10 18:24:20.413 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout | |||
| #2031312388568911872 | junit | 13 hours ago | |
Mar 10 11:31:11.186 E ns/openshift-cluster-storage-operator pod/csi-snapshot-controller-db4695c87-ct25h node/ci-op-8rwwhxi7-a0b82-2hb4d-master-0 uid/ac295351-6013-4515-9748-2de22499391b container/snapshot-controller reason/ContainerExit code/2 cause/Error
Mar 10 11:33:07.385 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 11:33:07.385 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 11:33:39.297 E ns/openshift-cluster-csi-drivers pod/azure-disk-csi-driver-node-d84gb node/ci-op-8rwwhxi7-a0b82-2hb4d-master-0 uid/a9e6602d-1d35-4bb5-b281-e2ee0d7a4cb1 container/csi-liveness-probe reason/ContainerExit code/2 cause/Error
| |||
| #2031312388568911872 | junit | 13 hours ago | |
Mar 10 11:33:07.385 - 5s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
1 tests failed during this blip (2026-03-10 11:33:07.385120764 +0000 UTC to 2026-03-10 11:33:07.385120764 +0000 UTC): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial]
| |||
| #2031236151414624256 | junit | 18 hours ago | |
Mar 10 06:30:07.122 E ns/openshift-cluster-csi-drivers pod/azure-disk-csi-driver-operator-d8958c6d9-kz89p node/ci-op-hh1315lh-a0b82-g4qbj-master-0 uid/3e11843d-7817-4145-a2d6-ac63f73c121c container/azure-disk-csi-driver-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Mar 10 06:32:16.262 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 06:32:16.262 - 13s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 06:32:23.620 E ns/openshift-cluster-csi-drivers pod/azure-file-csi-driver-node-cmp48 node/ci-op-hh1315lh-a0b82-g4qbj-worker-eastus2-rwwtz uid/33d32b82-03e8-41e9-b806-0ad6ed049314 container/csi-driver reason/ContainerExit code/2 cause/Error
| |||
| #2031236151414624256 | junit | 18 hours ago | |
Mar 10 06:32:16.262 - 13s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
1 tests failed during this blip (2026-03-10 06:32:16.262988895 +0000 UTC to 2026-03-10 06:32:16.262988895 +0000 UTC): [sig-arch][Feature:ClusterUpgrade] Cluster should remain functional during upgrade [Disruptive] [Serial]
| |||
| #2031183816197935104 | junit | 21 hours ago | |
Mar 10 03:04:24.789 E ns/openshift-monitoring pod/node-exporter-mgtcv node/ci-op-k1idf811-a0b82-6qmxx-master-0 uid/11ecdce2-b9bf-4e5c-8c37-dc388fbbc157 container/node-exporter reason/ContainerExit code/143 cause/Error 21:17.524Z caller=node_exporter.go:115 level=info collector=netstat\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=nfs\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=nfsd\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=nvme\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=os\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=powersupplyclass\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=pressure\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=rapl\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=schedstat\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=sockstat\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=softnet\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=stat\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=tapestats\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=textfile\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=thermal_zone\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=time\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=timex\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=udp_queues\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=uname\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=vmstat\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=xfs\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:115 level=info collector=zfs\nts=2026-03-10T02:21:17.524Z caller=node_exporter.go:199 level=info msg="Listening on" address=127.0.0.1:9100\nts=2026-03-10T02:21:17.524Z caller=tls_config.go:195 level=info msg="TLS is disabled." http2=false\n
Mar 10 03:04:45.344 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 03:04:45.344 - 26s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout
Mar 10 03:04:55.962 E ns/openshift-monitoring pod/alertmanager-main-0 node/ci-op-k1idf811-a0b82-6qmxx-worker-westus-qh2pl uid/93f44e46-19f3-4b31-b6ae-70f7d0e67457 container/alertmanager reason/ContainerExit code/1 cause/Error ts=2026-03-10T03:04:26.749Z caller=main.go:231 level=info msg="Starting Alertmanager" version="(version=0.24.0, branch=release-4.12, revision=ff0bec9fc4cc6b0309546f37b810f57504a303b0)"\nts=2026-03-10T03:04:26.749Z caller=main.go:232 level=info build_context="(go=go1.19.13 X:strictfipsruntime, user=root@5d9c1349fa19, date=20260309-08:54:29)"\nts=2026-03-10T03:04:26.830Z caller=cluster.go:680 level=info component=cluster msg="Waiting for gossip to settle..." interval=2s\nts=2026-03-10T03:04:26.932Z caller=coordinator.go:113 level=info component=configuration msg="Loading configuration file" file=/etc/alertmanager/config_out/alertmanager.env.yaml\nts=2026-03-10T03:04:26.932Z caller=coordinator.go:118 level=error component=configuration msg="Loading configuration file failed" file=/etc/alertmanager/config_out/alertmanager.env.yaml err="open /etc/alertmanager/config_out/alertmanager.env.yaml: no such file or directory"\nts=2026-03-10T03:04:26.932Z caller=cluster.go:689 level=info component=cluster msg="gossip not settled but continuing anyway" polls=0 elapsed=102.554822ms\n
| |||
| #2031183816197935104 | junit | 21 hours ago | |
Mar 10 03:04:45.344 - 26s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallCheckFailed, message: install timeout | |||
| periodic-ci-openshift-release-main-ci-4.14-upgrade-from-stable-4.13-e2e-azure-sdn-upgrade (all) - 6 runs, 100% failed, 17% of failures match = 17% impact | |||
| #2031469205366247424 | junit | 2 hours ago | |
Mar 10 22:06:36.976 - 115ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: InstallComponentFailed, message: install strategy failed: could not create service packageserver-service: services "packageserver-service" already exists | |||
| periodic-ci-openshift-release-main-ci-4.14-upgrade-from-stable-4.13-e2e-aws-ovn-upgrade (all) - 3 runs, 67% failed, 50% of failures match = 33% impact | |||
| #2031448219178766336 | junit | 3 hours ago | |
Mar 10 21:11:05.994 - 83ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1" | |||
| periodic-ci-openshift-release-main-ci-4.13-upgrade-from-stable-4.12-e2e-aws-ovn-upgrade (all) - 2 runs, 50% failed, 100% of failures match = 50% impact | |||
| #2031363359089102848 | junit | 9 hours ago | |
Mar 10 15:37:56.686 E ns/openshift-multus pod/cni-sysctl-allowlist-ds-bzdmj node/ip-10-0-158-61.us-west-1.compute.internal uid/c8ae2ed6-cc14-4bc8-acff-4d5be4ef8d15 container/kube-multus-additional-cni-plugins reason/ContainerExit code/137 cause/Error Mar 10 15:39:02.616 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1" Mar 10 15:39:02.616 - 41ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1" Mar 10 15:39:48.740 E clusteroperator/machine-config condition/Degraded status/True reason/RequiredPoolsFailed changed: Unable to apply 4.13.0-0.ci-2026-03-10-133224: error during syncRequiredMachineConfigPools: [timed out waiting for the condition, pool master has not progressed to latest configuration: controller version mismatch for rendered-master-3297a981f6feb8a1ad69a04af5b5b961 expected a57ca291235422f8466ac53496e9d192eecfe1e4 has ec8a839096d23e263bc4612b0c93ebb3543db33f: 2 (ready 2) out of 3 nodes are updating to latest configuration rendered-master-268090af89fd9581c567f3a1aede685c, retrying] | |||
| #2031363359089102848 | junit | 9 hours ago | |
Mar 10 15:39:02.616 - 41ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1" | |||
| periodic-ci-openshift-release-main-ci-4.13-upgrade-from-stable-4.12-e2e-azure-sdn-upgrade (all) - 6 runs, 83% failed, 20% of failures match = 17% impact | |||
| #2031310077515796480 | junit | 13 hours ago | |
Mar 10 12:10:44.344 E ns/openshift-machine-config-operator pod/machine-config-controller-59b8c495f6-wzjf7 node/ci-op-4krdsmqb-41e3b-j8x58-master-0 uid/d9a37a63-573d-4de3-9930-33de8130cd74 container/oauth-proxy reason/ContainerExit code/2 cause/Error
Mar 10 12:10:47.214 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1"
Mar 10 12:10:47.214 - 98ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1"
Mar 10 12:10:54.613 E ns/openshift-kube-storage-version-migrator-operator pod/kube-storage-version-migrator-operator-64bcb579c4-kz456 node/ci-op-4krdsmqb-41e3b-j8x58-master-0 uid/c31fb0a1-777e-4c93-b61f-9727f72fda84 container/kube-storage-version-migrator-operator reason/ContainerExit code/1 cause/Error atus":"False","type":"Progressing"},{"lastTransitionTime":"2026-03-10T12:03:00Z","message":"All is well","reason":"AsExpected","status":"True","type":"Available"},{"lastTransitionTime":"2026-03-10T10:35:55Z","message":"All is well","reason":"AsExpected","status":"True","type":"Upgradeable"}]}}\nI0310 12:03:00.933364 1 event.go:285] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"openshift-kube-storage-version-migrator-operator", Name:"kube-storage-version-migrator-operator", UID:"e4b52bbe-cbb4-48e0-b17f-1b7d5bd0b85f", APIVersion:"apps/v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'OperatorStatusChanged' Status for clusteroperator/kube-storage-version-migrator changed: Progressing changed from True to False ("All is well"),Available changed from False to True ("All is well")\nI0310 12:10:53.447562 1 cmd.go:100] Received SIGTERM or SIGINT signal, shutting down controller.\nI0310 12:10:53.447658 1 base_controller.go:167] Shutting down StatusSyncer_kube-storage-version-migrator ...\nI0310 12:10:53.447668 1 base_controller.go:167] Shutting down KubeStorageVersionMigratorStaticResources ...\nI0310 12:10:53.447672 1 base_controller.go:145] All StatusSyncer_kube-storage-version-migrator post start hooks have been terminated\nI0310 12:10:53.447688 1 base_controller.go:167] Shutting down StaticConditionsController ...\nI0310 12:10:53.447724 1 base_controller.go:167] Shutting down RemoveStaleConditionsController ...\nI0310 12:10:53.447745 1 base_controller.go:167] Shutting down KubeStorageVersionMigrator ...\nI0310 12:10:53.447764 1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0310 12:10:53.447760 1 genericapiserver.go:597] 
"[graceful-termination] pre-shutdown hooks completed" name="PreShutdownHooksStopped"\nI0310 12:10:53.447793 1 genericapiserver.go:497] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nW0310 12:10:53.447816 1 builder.go:109] graceful termination failed, controllers failed with error: stopped\n
| #2031310077515796480 | junit | 13 hours ago | |
Mar 10 12:10:47.214 - 98ms E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: APIServiceInstallFailed, message: APIService install failed: forbidden: User "system:anonymous" cannot get path "/apis/packages.operators.coreos.com/v1" | |||
| periodic-ci-openshift-release-main-ci-4.12-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 0% failed, 100% of runs match | |||
| #2031157977443995648 | junit | 23 hours ago | |
Mar 10 01:29:58.471 E ns/e2e-k8s-sig-apps-job-upgrade-7499 pod/foo-hqx8c node/ip-10-0-182-160.ec2.internal uid/42148847-ed9e-4c5f-ab3e-e22df2f55f3f container/c reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) Mar 10 01:29:58.524 E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersionNotSucceeded changed: ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: ComponentUnhealthy, message: apiServices not installed Mar 10 01:29:58.524 - 1s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: ComponentUnhealthy, message: apiServices not installed Mar 10 01:29:58.582 E ns/openshift-authentication-operator pod/authentication-operator-84649b8649-mgfzd node/ip-10-0-182-160.ec2.internal uid/bb8fca49-0972-443a-a2e1-5368b4258694 container/authentication-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar) | |||
| #2031157977443995648 | junit | 23 hours ago | |
Mar 10 01:29:58.524 - 1s E clusteroperator/operator-lifecycle-manager-packageserver condition/Available status/False reason/ClusterServiceVersion openshift-operator-lifecycle-manager/packageserver observed in phase Failed with reason: ComponentUnhealthy, message: apiServices not installed | |||
Found in 0.05% of runs (0.24% of failures) across 32109 total runs and 7390 jobs (21.04% failed)
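The monitor-event lines above follow a consistent shape: a timestamp, an optional interval duration ("- 5s"), a severity letter, a run of slash-separated key/value locator tokens (clusteroperator/…, condition/…, status/…, reason/…), and a free-text message. A minimal sketch of parsing that shape follows; the field labels (timestamp, duration, level, attrs, message) are my own naming, not an official schema of the origin monitor output.

```python
import re

# Header: "Mar 10 23:40:07.372 [- 5s] E <rest>"
# The "- 5s" duration only appears on interval lines (condition held for that long).
HEADER_RE = re.compile(
    r"^(?P<timestamp>[A-Z][a-z]{2} \d{2} \d{2}:\d{2}:\d{2}\.\d{3})"
    r"(?: - (?P<duration>\S+))?"
    r" (?P<level>[EWI]) "
    r"(?P<rest>.*)$"
)

def parse_event(line: str):
    """Parse one monitor-event line into a dict, or return None if it doesn't match."""
    m = HEADER_RE.match(line)
    if not m:
        return None
    event = {
        "timestamp": m.group("timestamp"),
        "duration": m.group("duration"),  # None for point-in-time events
        "level": m.group("level"),
    }
    attrs = {}
    tokens = m.group("rest").split(" ")
    i = 0
    # Leading tokens are key/value pairs like "condition/Available" or
    # "status/False"; the free-text message starts at the first token
    # that is not slash-shaped.
    while i < len(tokens) and re.fullmatch(r"[\w-]+/\S+", tokens[i]):
        key, value = tokens[i].split("/", 1)
        attrs[key] = value
        i += 1
    event["attrs"] = attrs
    event["message"] = " ".join(tokens[i:])
    return event
```

For example, feeding in the 4.14 aws-ovn line above yields attrs of clusteroperator=operator-lifecycle-manager-packageserver, condition=Available, status=False, with the "APIService install failed" text left as the message.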