periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-from-stable-4.8-e2e-aws-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1816428011545694208 build-log 41 hours ago
Jul 25 12:16:38.579 I ns/openshift-marketplace pod/certified-operators-94wxd node/ip-10-0-183-83.ec2.internal container/registry-server reason/Ready
Jul 25 12:16:38.590 I ns/openshift-marketplace pod/certified-operators-94wxd node/ip-10-0-183-83.ec2.internal reason/GracefulDelete duration/1s
Jul 25 12:16:39.000 I ns/openshift-marketplace pod/certified-operators-94wxd node/ip-10-0-183-83.ec2.internal container/registry-server reason/Killing
Jul 25 12:16:40.507 I ns/openshift-marketplace pod/certified-operators-94wxd node/ip-10-0-183-83.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Jul 25 12:16:48.289 I ns/openshift-marketplace pod/certified-operators-94wxd node/ip-10-0-183-83.ec2.internal reason/Deleted
Jul 25 12:17:15.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-225.ec2.internal node/ip-10-0-137-225 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 25 12:17:15.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-137-225.ec2.internal node/ip-10-0-137-225 reason/TerminationStoppedServing Server has stopped listening
Jul 25 12:17:59.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-225.ec2.internal node/ip-10-0-137-225.ec2.internal container/kube-controller-manager-recovery-controller reason/Created
Jul 25 12:17:59.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-225.ec2.internal node/ip-10-0-137-225.ec2.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe82481094d454493a7738877f477b790d683b41edb1124fe4c4a53c51ae1252
Jul 25 12:17:59.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-225.ec2.internal node/ip-10-0-137-225.ec2.internal container/kube-controller-manager-recovery-controller reason/Started
Jul 25 12:17:59.760 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-137-225.ec2.internal node/ip-10-0-137-225.ec2.internal container/kube-controller-manager-recovery-controller reason/ContainerExit code/0 cause/Completed
#1816428011545694208 build-log 41 hours ago
Jul 25 12:21:49.198 I ns/openshift-marketplace pod/redhat-operators-2r6br node/ip-10-0-183-83.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Jul 25 12:21:57.656 I ns/openshift-marketplace pod/redhat-marketplace-884ds node/ip-10-0-131-246.ec2.internal reason/Deleted
Jul 25 12:21:58.289 I ns/openshift-marketplace pod/redhat-operators-2r6br node/ip-10-0-183-83.ec2.internal reason/Deleted
Jul 25 12:22:47.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c (18 times)
Jul 25 12:22:50.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c (19 times)
Jul 25 12:23:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-61.ec2.internal node/ip-10-0-133-61 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 25 12:23:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-61.ec2.internal node/ip-10-0-133-61 reason/TerminationStoppedServing Server has stopped listening
Jul 25 12:23:45.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c (20 times)
Jul 25 12:24:11.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-61.ec2.internal node/ip-10-0-133-61 reason/TerminationGracefulTerminationFinished All pending requests processed
Jul 25 12:24:23.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-133-61.ec2.internal node/ip-10-0-133-61.ec2.internal container/setup reason/Pulling image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22
Jul 25 12:24:23.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c (21 times)
#1816428011545694208 build-log 41 hours ago
Jul 25 12:27:53.000 W ns/openshift-network-diagnostics node/ip-10-0-183-83.ec2.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.156.52:443: dial tcp 172.30.156.52:443: connect: connection refused
Jul 25 12:27:53.000 I ns/openshift-network-diagnostics node/ip-10-0-183-83.ec2.internal reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000510535s: kubernetes-apiserver-service-cluster: tcp connection to 172.30.156.52:443 succeeded
Jul 25 12:27:54.000 W ns/openshift-network-diagnostics node/ip-10-0-183-83.ec2.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-133-61: failed to establish a TCP connection to 10.0.133.61:6443: dial tcp 10.0.133.61:6443: connect: connection refused
Jul 25 12:27:54.000 I ns/openshift-network-diagnostics node/ip-10-0-183-83.ec2.internal reason/ConnectivityRestored roles/worker Connectivity restored after 59.999438363s: kubernetes-apiserver-endpoint-ip-10-0-133-61: tcp connection to 10.0.133.61:6443 succeeded
Jul 25 12:28:45.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c (39 times)
Jul 25 12:29:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-209.ec2.internal node/ip-10-0-216-209 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 25 12:29:13.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-216-209.ec2.internal node/ip-10-0-216-209 reason/TerminationStoppedServing Server has stopped listening
Jul 25 12:29:45.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c (40 times)
Jul 25 12:29:53.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-216-209.ec2.internal node/ip-10-0-216-209.ec2.internal container/kube-scheduler-recovery-controller reason/Created
Jul 25 12:29:53.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-216-209.ec2.internal node/ip-10-0-216-209.ec2.internal container/kube-scheduler-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:258478726c54a7e6ea106d65b04dbd5b25a6d0c5427dad81ad2bcf16eb4d6a82
Jul 25 12:29:53.000 I ns/openshift-kube-scheduler pod/openshift-kube-scheduler-ip-10-0-216-209.ec2.internal node/ip-10-0-216-209.ec2.internal container/kube-scheduler-recovery-controller reason/Started
#1816428011545694208 build-log 41 hours ago
Jul 25 12:37:08.378 W ns/openshift-apiserver pod/apiserver-7b96d77789-95gxr reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-7b96d77789-95gxr
Jul 25 12:37:08.410 I ns/openshift-apiserver pod/apiserver-7b96d77789-95gxr node/ reason/DeletedBeforeScheduling
Jul 25 12:37:08.412 W ns/openshift-apiserver pod/apiserver-5bddc48d84-gbppf reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 25 12:37:08.418 I ns/openshift-apiserver pod/apiserver-5bddc48d84-gbppf node/ reason/Created
Jul 25 12:37:12.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Jul 25 12:37:15.000 I ns/openshift-apiserver pod/apiserver-c77669589-gfcbf node/apiserver-c77669589-gfcbf reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 25 12:37:15.000 I ns/openshift-apiserver pod/apiserver-c77669589-gfcbf node/apiserver-c77669589-gfcbf reason/TerminationStoppedServing Server has stopped listening
Jul 25 12:37:16.000 W ns/openshift-apiserver pod/apiserver-c77669589-gfcbf node/ip-10-0-216-209.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.28:8443/healthz": dial tcp 10.130.0.28:8443: connect: connection refused\nbody: \n
Jul 25 12:37:16.000 W ns/openshift-apiserver pod/apiserver-c77669589-gfcbf node/ip-10-0-216-209.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.28:8443/healthz": dial tcp 10.130.0.28:8443: connect: connection refused\nbody: \n
Jul 25 12:37:16.000 W ns/openshift-apiserver pod/apiserver-c77669589-gfcbf node/ip-10-0-216-209.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.28:8443/healthz": dial tcp 10.130.0.28:8443: connect: connection refused
Jul 25 12:37:16.000 W ns/openshift-apiserver pod/apiserver-c77669589-gfcbf node/ip-10-0-216-209.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.28:8443/healthz": dial tcp 10.130.0.28:8443: connect: connection refused
#1816428011545694208 build-log 41 hours ago
Jul 25 12:38:32.169 I ns/openshift-apiserver pod/apiserver-5bddc48d84-gbppf node/ip-10-0-216-209.ec2.internal container/openshift-apiserver reason/Ready
Jul 25 12:38:32.207 I ns/openshift-apiserver pod/apiserver-c77669589-b7vg4 node/ip-10-0-137-225.ec2.internal reason/GracefulDelete duration/70s
Jul 25 12:38:32.277 W ns/openshift-apiserver pod/apiserver-5bddc48d84-kc59j reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 25 12:38:32.280 I ns/openshift-apiserver pod/apiserver-5bddc48d84-kc59j node/ reason/Created
Jul 25 12:38:33.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Jul 25 12:38:42.000 I ns/openshift-apiserver pod/apiserver-c77669589-b7vg4 node/apiserver-c77669589-b7vg4 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 25 12:38:42.000 I ns/openshift-apiserver pod/apiserver-c77669589-b7vg4 node/apiserver-c77669589-b7vg4 reason/TerminationStoppedServing Server has stopped listening
Jul 25 12:38:44.000 W ns/openshift-apiserver pod/apiserver-c77669589-b7vg4 node/ip-10-0-137-225.ec2.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused\nbody: \n
Jul 25 12:38:44.000 W ns/openshift-apiserver pod/apiserver-c77669589-b7vg4 node/ip-10-0-137-225.ec2.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused\nbody: \n
Jul 25 12:38:44.000 W ns/openshift-apiserver pod/apiserver-c77669589-b7vg4 node/ip-10-0-137-225.ec2.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused
Jul 25 12:38:44.000 W ns/openshift-apiserver pod/apiserver-c77669589-b7vg4 node/ip-10-0-137-225.ec2.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.52:8443/healthz": dial tcp 10.128.0.52:8443: connect: connection refused
#1816428011545694208 build-log 41 hours ago
Jul 25 12:40:05.767 W clusteroperator/machine-api condition/Progressing status/False changed:
Jul 25 12:40:05.767 I clusteroperator/machine-api versions: operator 4.8.57 -> 4.9.59
Jul 25 12:40:06.434 W ns/openshift-apiserver pod/apiserver-5bddc48d84-ppwd7 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 25 12:40:06.457 I ns/openshift-machine-api pod/machine-api-controllers-77dc7b5c7f-fkwjw node/ip-10-0-133-61.ec2.internal reason/Deleted
Jul 25 12:40:13.000 I ns/openshift-apiserver pod/apiserver-c77669589-n86xb node/apiserver-c77669589-n86xb reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 25 12:40:13.000 I ns/openshift-apiserver pod/apiserver-c77669589-n86xb node/apiserver-c77669589-n86xb reason/TerminationStoppedServing Server has stopped listening
Jul 25 12:40:16.855 - 59s   I alert/KubeContainerWaiting ns/openshift-machine-api pod/machine-api-controllers-bdb5d4fb8-vc4cm container/kube-rbac-proxy-machine-mtrc ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="kube-rbac-proxy-machine-mtrc", namespace="openshift-machine-api", pod="machine-api-controllers-bdb5d4fb8-vc4cm", prometheus="openshift-monitoring/k8s", severity="warning"}
Jul 25 12:40:16.855 - 59s   I alert/KubeContainerWaiting ns/openshift-machine-api pod/machine-api-controllers-bdb5d4fb8-vc4cm container/kube-rbac-proxy-machineset-mtrc ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="kube-rbac-proxy-machineset-mtrc", namespace="openshift-machine-api", pod="machine-api-controllers-bdb5d4fb8-vc4cm", prometheus="openshift-monitoring/k8s", severity="warning"}
Jul 25 12:40:16.855 - 59s   I alert/KubeContainerWaiting ns/openshift-machine-api pod/machine-api-controllers-bdb5d4fb8-vc4cm container/kube-rbac-proxy-mhc-mtrc ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="kube-rbac-proxy-mhc-mtrc", namespace="openshift-machine-api", pod="machine-api-controllers-bdb5d4fb8-vc4cm", prometheus="openshift-monitoring/k8s", severity="warning"}
Jul 25 12:40:16.855 - 59s   I alert/KubeContainerWaiting ns/openshift-machine-api pod/machine-api-controllers-bdb5d4fb8-vc4cm container/machine-controller ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="machine-controller", namespace="openshift-machine-api", pod="machine-api-controllers-bdb5d4fb8-vc4cm", prometheus="openshift-monitoring/k8s", severity="warning"}
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-ovirt-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1815955997131280384 build-log 3 days ago
Jul 24 04:15:44.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9fd240107948c0eb6cdff3955f45d26328a1cc1e630b58d43fb548830536102,registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:aaf64581465fcab92dd584a85f6bd826af868e9f60a628be91dcf53688e1793c (16 times)
Jul 24 04:15:44.374 I ns/openshift-etcd pod/installer-7-ovirt12-fkrdf-master-2 node/ovirt12-fkrdf-master-2 container/installer reason/ContainerExit code/0 cause/Completed
Jul 24 04:15:45.000 W ns/openshift-etcd pod/etcd-quorum-guard-c4b6885f-65gjg node/ovirt12-fkrdf-master-2 reason/Unhealthy Readiness probe failed:
Jul 24 04:15:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-1 node/ovirt12-fkrdf-master-1 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Jul 24 04:15:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-1 node/ovirt12-fkrdf-master-1 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 24 04:15:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-1 node/ovirt12-fkrdf-master-1 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 24 04:15:46.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9fd240107948c0eb6cdff3955f45d26328a1cc1e630b58d43fb548830536102,registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:aaf64581465fcab92dd584a85f6bd826af868e9f60a628be91dcf53688e1793c (17 times)
#1815955997131280384 build-log 3 days ago
Jul 24 04:18:46.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/OperatorStatusChanged Status for clusteroperator/etcd changed: Progressing changed from True to False ("NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found"),Available message changed from "StaticPodsAvailable: 3 nodes are active; 1 nodes are at revision 6; 2 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ovirt12-fkrdf-master-1 is unhealthy" to "StaticPodsAvailable: 3 nodes are active; 3 nodes are at revision 7\nEtcdMembersAvailable: 2 of 3 members are available, ovirt12-fkrdf-master-1 is unhealthy"
Jul 24 04:18:46.494 - 59s   I alert/PodDisruptionBudgetAtLimit ns/openshift-etcd ALERTS{alertname="PodDisruptionBudgetAtLimit", alertstate="pending", namespace="openshift-etcd", poddisruptionbudget="etcd-quorum-guard", prometheus="openshift-monitoring/k8s", severity="warning"}
Jul 24 04:18:46.494 - 59s   I alert/PodDisruptionBudgetAtLimit ns/openshift-kube-apiserver ALERTS{alertname="PodDisruptionBudgetAtLimit", alertstate="pending", namespace="openshift-kube-apiserver", poddisruptionbudget="kube-apiserver-guard-pdb", prometheus="openshift-monitoring/k8s", severity="warning"}
Jul 24 04:18:46.729 W clusteroperator/etcd condition/Progressing status/False reason/AsExpected changed: NodeInstallerProgressing: 3 nodes are at revision 7\nEtcdMembersProgressing: No unstarted etcd members found
Jul 24 04:18:50.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (18 times)
Jul 24 04:18:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-0 node/ovirt12-fkrdf-master-0 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Jul 24 04:18:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-0 node/ovirt12-fkrdf-master-0 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 24 04:18:51.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-0 node/ovirt12-fkrdf-master-0 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 24 04:18:51.983 I ns/openshift-etcd pod/installer-2-ovirt12-fkrdf-master-0 node/ovirt12-fkrdf-master-0 reason/DeletedAfterCompletion
Jul 24 04:18:53.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-0 node/ovirt12-fkrdf-master-0 reason/TerminationGracefulTerminationFinished All pending requests processed
Jul 24 04:18:53.494 - 59s   I alert/KubeDeploymentReplicasMismatch ns/openshift-etcd container/kube-rbac-proxy-main ALERTS{alertname="KubeDeploymentReplicasMismatch", alertstate="pending", container="kube-rbac-proxy-main", deployment="etcd-quorum-guard", endpoint="https-main", job="kube-state-metrics", namespace="openshift-etcd", prometheus="openshift-monitoring/k8s", service="kube-state-metrics", severity="warning"}
#1815955997131280384 build-log 3 days ago
Jul 24 04:21:20.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ovirt12-fkrdf-master-2 node/ovirt12-fkrdf-master-2 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-deprecated-api-requests-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (11 times)
Jul 24 04:21:20.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ovirt12-fkrdf-master-2 node/ovirt12-fkrdf-master-2 reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (11 times)
Jul 24 04:21:25.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ovirt12-fkrdf-master-2 node/ovirt12-fkrdf-master-2 reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/openshift.io-deprecated-api-requests-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (12 times)
Jul 24 04:21:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-2 node/ovirt12-fkrdf-master-2 reason/AfterShutdownDelayDuration The minimal shutdown duration of 1m10s finished
Jul 24 04:21:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-2 node/ovirt12-fkrdf-master-2 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 24 04:21:43.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-2 node/ovirt12-fkrdf-master-2 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 24 04:21:45.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ovirt12-fkrdf-master-2 node/ovirt12-fkrdf-master-2 reason/TerminationGracefulTerminationFinished All pending requests processed
#1815955997131280384 build-log 3 days ago
Jul 24 04:37:30.777 - 17s   W clusteroperator/csi-snapshot-controller condition/Progressing status/True reason/CSISnapshotWebhookControllerProgressing: desired generation 2, current generation 1
Jul 24 04:37:30.800 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-79587966b6-zcfxn node/ovirt12-fkrdf-master-0 reason/GracefulDelete duration/30s
Jul 24 04:37:30.817 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-7ff74969-5z2kb node/ovirt12-fkrdf-master-2 reason/Scheduled
Jul 24 04:37:30.824 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-7ff74969-5z2kb node/ reason/Created
Jul 24 04:37:31.000 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-79587966b6-zcfxn node/ovirt12-fkrdf-master-0 container/webhook reason/Killing
Jul 24 04:37:31.000 I ns/openshift-apiserver pod/apiserver-746c86cc45-4zqvg node/apiserver-746c86cc45-4zqvg reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Jul 24 04:37:31.000 I ns/openshift-apiserver pod/apiserver-746c86cc45-4zqvg node/apiserver-746c86cc45-4zqvg reason/TerminationStoppedServing Server has stopped listening
Jul 24 04:37:32.000 I ns/openshift-ingress-canary pod/ingress-canary-qvbm8 node/ovirt12-fkrdf-worker-msmdm container/serve-healthcheck-canary reason/Killing
Jul 24 04:37:32.000 I ns/openshift-cluster-storage-operator pod/csi-snapshot-webhook-7ff74969-5z2kb reason/AddedInterface Add eth0 [10.130.0.72/23] from openshift-sdn
Jul 24 04:37:32.000 I ns/openshift-cluster-storage-operator deployment/csi-snapshot-controller-operator reason/CustomResourceDefinitionUpdated Updated CustomResourceDefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io because it changed
Jul 24 04:37:32.000 W ns/openshift-apiserver pod/apiserver-746c86cc45-4zqvg node/ovirt12-fkrdf-master-2 reason/ProbeError Readiness probe error: Get "https://10.130.0.37:8443/readyz": dial tcp 10.130.0.37:8443: connect: connection refused\nbody: \n
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-aws-upgrade-infra (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1815925546459074560 build-log 3 days ago
Jul 24 02:39:56.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9fd240107948c0eb6cdff3955f45d26328a1cc1e630b58d43fb548830536102,registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:aaf64581465fcab92dd584a85f6bd826af868e9f60a628be91dcf53688e1793c (42 times)
Jul 24 02:39:57.000 W ns/openshift-etcd pod/etcd-quorum-guard-c4b6885f-lhtgh node/ip-10-0-177-113.us-west-2.compute.internal reason/Unhealthy Readiness probe failed:  (7 times)
Jul 24 02:39:59.000 I ns/openshift-etcd-operator deployment/etcd-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9fd240107948c0eb6cdff3955f45d26328a1cc1e630b58d43fb548830536102,registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:aaf64581465fcab92dd584a85f6bd826af868e9f60a628be91dcf53688e1793c (43 times)
Jul 24 02:40:02.000 W ns/openshift-etcd pod/etcd-quorum-guard-c4b6885f-lhtgh node/ip-10-0-177-113.us-west-2.compute.internal reason/Unhealthy Readiness probe failed:  (8 times)
Jul 24 02:40:03.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/UnhealthyEtcdMember unhealthy members: ip-10-0-177-113.us-west-2.compute.internal (2 times)
Jul 24 02:40:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-113.us-west-2.compute.internal node/ip-10-0-177-113 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Jul 24 02:40:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-113.us-west-2.compute.internal node/ip-10-0-177-113 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 24 02:40:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-113.us-west-2.compute.internal node/ip-10-0-177-113 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 24 02:40:07.000 W ns/openshift-etcd pod/etcd-quorum-guard-c4b6885f-lhtgh node/ip-10-0-177-113.us-west-2.compute.internal reason/Unhealthy Readiness probe failed:  (9 times)
Jul 24 02:40:09.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-177-113.us-west-2.compute.internal node/ip-10-0-177-113 reason/TerminationGracefulTerminationFinished All pending requests processed
Jul 24 02:40:10.000 I ns/openshift-etcd pod/etcd-ip-10-0-177-113.us-west-2.compute.internal node/ip-10-0-177-113.us-west-2.compute.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:aaf64581465fcab92dd584a85f6bd826af868e9f60a628be91dcf53688e1793c
#1815925546459074560 build-log 3 days ago
Jul 24 02:45:07.000 I ns/openshift-operator-lifecycle-manager cronjob/collect-profiles reason/SawCompletedJob Saw completed job: collect-profiles-28696485, status: Complete
Jul 24 02:45:07.000 I ns/openshift-operator-lifecycle-manager cronjob/collect-profiles reason/SuccessfulDelete Deleted job collect-profiles-28696440
Jul 24 02:45:07.149 I ns/openshift-operator-lifecycle-manager pod/collect-profiles-28696440-2kpsw node/ip-10-0-164-201.us-west-2.compute.internal reason/DeletedAfterCompletion
Jul 24 02:45:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-245-64.us-west-2.compute.internal node/ip-10-0-245-64 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Jul 24 02:45:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-245-64.us-west-2.compute.internal node/ip-10-0-245-64 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 24 02:45:39.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-245-64.us-west-2.compute.internal node/ip-10-0-245-64 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 24 02:45:41.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-245-64.us-west-2.compute.internal node/ip-10-0-245-64 reason/TerminationGracefulTerminationFinished All pending requests processed
#1815925546459074560 build-log 3 days ago
Jul 24 02:49:42.609 I ns/openshift-marketplace pod/community-operators-b5f64 node/ip-10-0-191-254.us-west-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Jul 24 02:49:42.727 I ns/openshift-marketplace pod/community-operators-b5f64 node/ip-10-0-191-254.us-west-2.compute.internal reason/Deleted
Jul 24 02:49:44.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (41 times)
Jul 24 02:50:06.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-139-158.us-west-2.compute.internal node/ip-10-0-139-158.us-west-2.compute.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-deprecated-api-requests-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (31 times)
Jul 24 02:50:45.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (42 times)
Jul 24 02:51:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-158.us-west-2.compute.internal node/ip-10-0-139-158 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Jul 24 02:51:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-158.us-west-2.compute.internal node/ip-10-0-139-158 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 24 02:51:16.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-158.us-west-2.compute.internal node/ip-10-0-139-158 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 24 02:51:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-158.us-west-2.compute.internal node/ip-10-0-139-158 reason/TerminationGracefulTerminationFinished All pending requests processed
Jul 24 02:51:20.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-158.us-west-2.compute.internal node/ip-10-0-139-158.us-west-2.compute.internal container/kube-apiserver reason/Killing
Jul 24 02:51:27.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-139-158.us-west-2.compute.internal node/ip-10-0-139-158.us-west-2.compute.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8
#1815925546459074560 build-log 3 days ago
Jul 24 03:07:01.000 I ns/openshift-monitoring statefulset/alertmanager-main reason/SuccessfulCreate create Pod alertmanager-main-0 in StatefulSet alertmanager-main successful
Jul 24 03:07:01.000 I ns/openshift-monitoring statefulset/alertmanager-main reason/SuccessfulCreate create Pod alertmanager-main-1 in StatefulSet alertmanager-main successful
Jul 24 03:07:01.000 I ns/openshift-monitoring statefulset/alertmanager-main reason/SuccessfulCreate create Pod alertmanager-main-2 in StatefulSet alertmanager-main successful
Jul 24 03:07:01.000 I ns/openshift-monitoring statefulset/prometheus-k8s reason/SuccessfulCreate create Pod prometheus-k8s-0 in StatefulSet prometheus-k8s successful
Jul 24 03:07:01.000 I ns/openshift-operator-lifecycle-manager replicaset/olm-operator-7b98c589bb reason/SuccessfulDelete Deleted pod: olm-operator-7b98c589bb-5jmfp
Jul 24 03:07:01.043 E ns/openshift-console-operator pod/console-operator-6f97f996df-mktrk node/ip-10-0-139-158.us-west-2.compute.internal container/console-operator reason/ContainerExit code/1 cause/Error eShutdownHooksFinished' All pre-shutdown hooks have been finished\nI0724 03:06:55.123956       1 genericapiserver.go:355] "[graceful-termination] shutdown event" name="ShutdownInitiated"\nI0724 03:06:55.123968       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-mktrk", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStart' Received signal to terminate, becoming unready, but keeping serving\nI0724 03:06:55.123983       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-mktrk", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0724 03:06:55.124014       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0724 03:06:55.124021       1 base_controller.go:167] Shutting down UnsupportedConfigOverridesController ...\nI0724 03:06:55.124027       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-mktrk", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0724 03:06:55.124031       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0724 03:06:55.124038       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0724 03:06:55.124042       1 base_controller.go:167] Shutting down LoggingSyncer ...\nI0724 03:06:55.124050       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0724 03:06:55.124060       1 base_controller.go:167] Shutting down ResourceSyncController ...\nW0724 03:06:55.124071       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jul 24 03:07:01.189 I ns/openshift-operator-lifecycle-manager pod/olm-operator-7b98c589bb-5jmfp node/ip-10-0-245-64.us-west-2.compute.internal reason/GracefulDelete duration/30s
Jul 24 03:07:01.385 I ns/openshift-machine-api pod/cluster-autoscaler-operator-8554968d85-wfnvn node/ip-10-0-245-64.us-west-2.compute.internal container/kube-rbac-proxy reason/ContainerExit code/0 cause/Completed
Jul 24 03:07:01.385 I ns/openshift-machine-api pod/cluster-autoscaler-operator-8554968d85-wfnvn node/ip-10-0-245-64.us-west-2.compute.internal container/cluster-autoscaler-operator reason/ContainerExit code/0 cause/Completed
Jul 24 03:07:01.430 I ns/openshift-console-operator pod/console-operator-6f97f996df-mktrk node/ip-10-0-139-158.us-west-2.compute.internal reason/Deleted
Jul 24 03:07:01.456 I ns/openshift-machine-api pod/cluster-autoscaler-operator-8554968d85-wfnvn node/ip-10-0-245-64.us-west-2.compute.internal reason/Deleted
#1815925546459074560 build-log 3 days ago
Jul 24 03:07:17.000 I ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-243-214.us-west-2.compute.internal container/prom-label-proxy reason/Created
Jul 24 03:07:17.000 I ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-243-214.us-west-2.compute.internal container/prom-label-proxy reason/Killing
Jul 24 03:07:17.000 I ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-243-214.us-west-2.compute.internal container/prom-label-proxy reason/Started
Jul 24 03:07:17.000 I ns/openshift-monitoring pod/prometheus-k8s-1 node/ip-10-0-243-214.us-west-2.compute.internal container/prometheus reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-07-23-030300@sha256:3609ad787a542819ea75b7988220414acde6ef3e49497c0fd75ada8eaf5f51b6
Jul 24 03:07:17.000 I ns/openshift-monitoring pod/prometheus-k8s-1 reason/AddedInterface Add eth0 [10.130.2.24/23] from openshift-sdn
Jul 24 03:07:17.000 I ns/openshift-apiserver pod/apiserver-645c65f657-vpwvh node/apiserver-645c65f657-vpwvh reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Jul 24 03:07:17.000 I ns/openshift-apiserver pod/apiserver-645c65f657-vpwvh node/apiserver-645c65f657-vpwvh reason/TerminationStoppedServing Server has stopped listening
Jul 24 03:07:17.965 I ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-243-214.us-west-2.compute.internal container/alertmanager reason/Ready
Jul 24 03:07:17.965 I ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-243-214.us-west-2.compute.internal container/prom-label-proxy reason/Ready
Jul 24 03:07:17.965 I ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-243-214.us-west-2.compute.internal container/alertmanager-proxy reason/Ready
Jul 24 03:07:17.965 I ns/openshift-monitoring pod/alertmanager-main-1 node/ip-10-0-243-214.us-west-2.compute.internal container/config-reloader reason/Ready
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-e2e-aws-upgrade (all) - 4 runs, 100% failed, 50% of failures match = 50% impact
#1815533853792538624 build-log.txt.gz 4 days ago
Jul 23 00:37:58.000 I ns/openshift-kube-apiserver pod/installer-8-ip-10-0-158-71.us-west-2.compute.internal reason/StaticPodInstallerCompleted Successfully installed revision 8
Jul 23 00:37:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Jul 23 00:37:59.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Jul 23 00:37:59.721 I ns/openshift-kube-apiserver pod/installer-8-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71.us-west-2.compute.internal container/installer reason/ContainerExit code/0 cause/Completed
Jul 23 00:38:10.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.3360450055762665 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-158-71.us-west-2.compute.internal=0.0063840000000000025,etcd-ip-10-0-165-182.us-west-2.compute.internal=0.007952222222222218,etcd-ip-10-0-197-226.us-west-2.compute.internal=0.003838333333333332. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Jul 23 00:41:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 23 00:41:29.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71 reason/TerminationStoppedServing Server has stopped listening
Jul 23 00:42:09.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Jul 23 00:42:09.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe82481094d454493a7738877f477b790d683b41edb1124fe4c4a53c51ae1252
Jul 23 00:42:09.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
Jul 23 00:42:09.576 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-158-71.us-west-2.compute.internal node/ip-10-0-158-71.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/ContainerExit code/0 cause/Completed
#1815533853792538624 build-log.txt.gz 4 days ago
Jul 23 00:47:04.978 I ns/openshift-marketplace pod/community-operators-q9sw2 node/ip-10-0-179-155.us-west-2.compute.internal container/registry-server reason/Ready
Jul 23 00:47:04.978 I ns/openshift-marketplace pod/community-operators-q9sw2 node/ip-10-0-179-155.us-west-2.compute.internal reason/GracefulDelete duration/1s
Jul 23 00:47:05.000 I ns/openshift-marketplace pod/community-operators-q9sw2 node/ip-10-0-179-155.us-west-2.compute.internal container/registry-server reason/Killing
Jul 23 00:47:06.435 I ns/openshift-marketplace pod/community-operators-q9sw2 node/ip-10-0-179-155.us-west-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Jul 23 00:47:07.508 I ns/openshift-marketplace pod/community-operators-q9sw2 node/ip-10-0-179-155.us-west-2.compute.internal reason/Deleted
Jul 23 00:47:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-165-182.us-west-2.compute.internal node/ip-10-0-165-182 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 23 00:47:12.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-165-182.us-west-2.compute.internal node/ip-10-0-165-182 reason/TerminationStoppedServing Server has stopped listening
Jul 23 00:47:48.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-165-182.us-west-2.compute.internal node/ip-10-0-165-182.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Jul 23 00:47:48.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-165-182.us-west-2.compute.internal node/ip-10-0-165-182.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe82481094d454493a7738877f477b790d683b41edb1124fe4c4a53c51ae1252
Jul 23 00:47:48.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-165-182.us-west-2.compute.internal node/ip-10-0-165-182.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
Jul 23 00:47:48.000 I ns/openshift-kube-controller-manager-operator namespace/openshift-kube-controller-manager-operator reason/OperatorStatusChanged Status for clusteroperator/kube-controller-manager changed: Degraded message changed from "NodeControllerDegraded: All master nodes are ready" to "StaticPodsDegraded: pod/kube-controller-manager-ip-10-0-165-182.us-west-2.compute.internal container \"kube-controller-manager-recovery-controller\" is terminated: Completed: \nNodeControllerDegraded: All master nodes are ready"
#1815533853792538624 build-log.txt.gz 4 days ago
Jul 23 00:51:48.319 I ns/openshift-marketplace pod/redhat-marketplace-jjnwp node/ip-10-0-179-155.us-west-2.compute.internal container/registry-server reason/Ready
Jul 23 00:51:48.389 I ns/openshift-marketplace pod/redhat-marketplace-jjnwp node/ip-10-0-179-155.us-west-2.compute.internal reason/GracefulDelete duration/1s
Jul 23 00:51:50.050 I ns/openshift-marketplace pod/redhat-marketplace-jjnwp node/ip-10-0-179-155.us-west-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Jul 23 00:51:51.054 I ns/openshift-marketplace pod/redhat-marketplace-jjnwp node/ip-10-0-179-155.us-west-2.compute.internal reason/Deleted
Jul 23 00:52:00.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-234457@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (36 times)
Jul 23 00:52:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-226.us-west-2.compute.internal node/ip-10-0-197-226 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 23 00:52:56.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-197-226.us-west-2.compute.internal node/ip-10-0-197-226 reason/TerminationStoppedServing Server has stopped listening
Jul 23 00:52:58.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-234457@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (37 times)
Jul 23 00:53:40.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-197-226.us-west-2.compute.internal node/ip-10-0-197-226.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Jul 23 00:53:40.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-197-226.us-west-2.compute.internal node/ip-10-0-197-226.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe82481094d454493a7738877f477b790d683b41edb1124fe4c4a53c51ae1252
Jul 23 00:53:40.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-197-226.us-west-2.compute.internal node/ip-10-0-197-226.us-west-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
#1815533853792538624 build-log.txt.gz 4 days ago
Jul 23 01:01:54.004 W ns/openshift-apiserver pod/apiserver-54d95c4786-4lsrg reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-54d95c4786-4lsrg
Jul 23 01:01:54.045 W ns/openshift-apiserver pod/apiserver-85fcdf9cfc-w49dl reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 23 01:01:54.167 I ns/openshift-apiserver pod/apiserver-54d95c4786-4lsrg node/ reason/DeletedBeforeScheduling
Jul 23 01:01:54.221 I ns/openshift-apiserver pod/apiserver-85fcdf9cfc-w49dl node/ reason/Created
Jul 23 01:01:57.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 5, desired generation is 6." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Jul 23 01:02:00.000 I ns/openshift-apiserver pod/apiserver-7d56bf7cdb-24v96 node/apiserver-7d56bf7cdb-24v96 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 23 01:02:00.000 I ns/openshift-apiserver pod/apiserver-7d56bf7cdb-24v96 node/apiserver-7d56bf7cdb-24v96 reason/TerminationStoppedServing Server has stopped listening
Jul 23 01:02:06.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-24v96 node/ip-10-0-165-182.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.29:8443/healthz": dial tcp 10.129.0.29:8443: connect: connection refused\nbody: \n
Jul 23 01:02:06.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-24v96 node/ip-10-0-165-182.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.29:8443/healthz": dial tcp 10.129.0.29:8443: connect: connection refused\nbody: \n
Jul 23 01:02:06.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-24v96 node/ip-10-0-165-182.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.29:8443/healthz": dial tcp 10.129.0.29:8443: connect: connection refused
Jul 23 01:02:06.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-24v96 node/ip-10-0-165-182.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.29:8443/healthz": dial tcp 10.129.0.29:8443: connect: connection refused
#1815533853792538624 build-log.txt.gz (4 days ago)
Jul 23 01:03:25.213 I ns/openshift-apiserver pod/apiserver-7d56bf7cdb-2tlrd node/ip-10-0-158-71.us-west-2.compute.internal reason/GracefulDelete duration/70s
Jul 23 01:03:25.283 I ns/openshift-apiserver pod/apiserver-85fcdf9cfc-hh27x node/ reason/Created
Jul 23 01:03:26.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Jul 23 01:03:35.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-2tlrd node/ip-10-0-158-71.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.61:8443/healthz": dial tcp 10.128.0.61:8443: connect: connection refused\nbody: \n
Jul 23 01:03:35.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-2tlrd node/ip-10-0-158-71.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.61:8443/healthz": dial tcp 10.128.0.61:8443: connect: connection refused\nbody: \n
Jul 23 01:03:35.000 I ns/openshift-apiserver pod/apiserver-7d56bf7cdb-2tlrd node/apiserver-7d56bf7cdb-2tlrd reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 23 01:03:35.000 I ns/openshift-apiserver pod/apiserver-7d56bf7cdb-2tlrd node/apiserver-7d56bf7cdb-2tlrd reason/TerminationStoppedServing Server has stopped listening
Jul 23 01:03:35.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-2tlrd node/ip-10-0-158-71.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.61:8443/healthz": dial tcp 10.128.0.61:8443: connect: connection refused
Jul 23 01:03:35.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-2tlrd node/ip-10-0-158-71.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.61:8443/healthz": dial tcp 10.128.0.61:8443: connect: connection refused
Jul 23 01:03:37.000 W ns/openshift-network-diagnostics node/ip-10-0-179-155.us-west-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: openshift-apiserver-endpoint-ip-10-0-165-182: failed to establish a TCP connection to 10.129.0.29:8443: dial tcp 10.129.0.29:8443: connect: connection refused
Jul 23 01:03:45.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-2tlrd node/ip-10-0-158-71.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.61:8443/healthz": dial tcp 10.128.0.61:8443: connect: connection refused\nbody: \n (2 times)
#1815533853792538624 build-log.txt.gz (4 days ago)
Jul 23 01:05:01.818 W ns/openshift-apiserver pod/apiserver-85fcdf9cfc-5m5jv reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 23 01:05:01.822 I ns/openshift-apiserver pod/apiserver-7d56bf7cdb-crt4h node/ip-10-0-197-226.us-west-2.compute.internal reason/GracefulDelete duration/70s
Jul 23 01:05:01.892 I ns/openshift-apiserver pod/apiserver-85fcdf9cfc-5m5jv node/ reason/Created
Jul 23 01:05:04.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Jul 23 01:05:04.558 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Jul 23 01:05:11.000 I ns/openshift-apiserver pod/apiserver-7d56bf7cdb-crt4h node/apiserver-7d56bf7cdb-crt4h reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 23 01:05:11.000 I ns/openshift-apiserver pod/apiserver-7d56bf7cdb-crt4h node/apiserver-7d56bf7cdb-crt4h reason/TerminationStoppedServing Server has stopped listening
Jul 23 01:05:16.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-crt4h node/ip-10-0-197-226.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.33:8443/healthz": dial tcp 10.130.0.33:8443: connect: connection refused\nbody: \n
Jul 23 01:05:16.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-crt4h node/ip-10-0-197-226.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.33:8443/healthz": dial tcp 10.130.0.33:8443: connect: connection refused\nbody: \n
Jul 23 01:05:16.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-crt4h node/ip-10-0-197-226.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.33:8443/healthz": dial tcp 10.130.0.33:8443: connect: connection refused
Jul 23 01:05:16.000 W ns/openshift-apiserver pod/apiserver-7d56bf7cdb-crt4h node/ip-10-0-197-226.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.33:8443/healthz": dial tcp 10.130.0.33:8443: connect: connection refused
#1815483626326855680 build-log.txt.gz (4 days ago)
Jul 22 21:41:18.000 I ns/openshift-kube-apiserver pod/installer-11-ip-10-0-143-174.us-west-2.compute.internal reason/StaticPodInstallerCompleted Successfully installed revision 11
Jul 22 21:41:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-174.us-west-2.compute.internal node/ip-10-0-143-174 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Jul 22 21:41:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-174.us-west-2.compute.internal node/ip-10-0-143-174 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Jul 22 21:41:19.419 I ns/openshift-kube-apiserver pod/installer-11-ip-10-0-143-174.us-west-2.compute.internal node/ip-10-0-143-174.us-west-2.compute.internal container/installer reason/ContainerExit code/0 cause/Completed
Jul 22 21:42:04.000 W ns/openshift-etcd-operator deployment/etcd-operator reason/EtcdLeaderChangeMetrics Detected leader change increase of 2.22223045270538 over 5 minutes on "AWS"; disk metrics are: etcd-ip-10-0-143-174.us-west-2.compute.internal=0.006431111111111101,etcd-ip-10-0-183-159.us-west-2.compute.internal=0.022319999999999965,etcd-ip-10-0-247-98.us-west-2.compute.internal=0.056239999999999915. Most often this is as a result of inadequate storage or sometimes due to networking issues.
Jul 22 21:44:48.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-174.us-west-2.compute.internal node/ip-10-0-143-174 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 22 21:44:48.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-143-174.us-west-2.compute.internal node/ip-10-0-143-174 reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:45:00.000 I ns/openshift-multus pod/ip-reconciler-28694745-bw77q node/ip-10-0-171-160.us-west-2.compute.internal container/whereabouts reason/Created
Jul 22 21:45:00.000 I ns/openshift-multus pod/ip-reconciler-28694745-bw77q node/ip-10-0-171-160.us-west-2.compute.internal container/whereabouts reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd5df8d240e617470c8c7477bed01714cc83e453f53d0897fd493807c868cc53
Jul 22 21:45:00.000 I ns/openshift-multus pod/ip-reconciler-28694745-bw77q node/ip-10-0-171-160.us-west-2.compute.internal container/whereabouts reason/Started
Jul 22 21:45:00.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulCreate Created job ip-reconciler-28694745
#1815483626326855680 build-log.txt.gz (4 days ago)
Jul 22 21:47:09.000 I ns/openshift-kube-apiserver configmap/cert-regeneration-controller-lock reason/LeaderElection ip-10-0-183-159_bf475f15-12ec-4427-9597-76d88f384035 became leader
Jul 22 21:47:54.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (13 times)
Jul 22 21:48:54.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (14 times)
Jul 22 21:49:57.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (15 times)
Jul 22 21:50:00.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (16 times)
Jul 22 21:50:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-247-98.us-west-2.compute.internal node/ip-10-0-247-98 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 22 21:50:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-247-98.us-west-2.compute.internal node/ip-10-0-247-98 reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:50:54.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (17 times)
Jul 22 21:51:02.704 I ns/openshift-marketplace pod/certified-operators-8vwzp node/ip-10-0-171-160.us-west-2.compute.internal reason/Scheduled
Jul 22 21:51:02.729 I ns/openshift-marketplace pod/certified-operators-8vwzp node/ reason/Created
Jul 22 21:51:04.000 I ns/openshift-marketplace pod/certified-operators-8vwzp reason/AddedInterface Add eth0 [10.129.2.27/23] from openshift-sdn
#1815483626326855680 build-log.txt.gz (4 days ago)
Jul 22 21:53:54.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (30 times)
Jul 22 21:54:54.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (31 times)
Jul 22 21:55:38.000 W ns/openshift-machine-api machineset/ci-op-8ptqpk9c-1d119-hvjqp-worker-us-west-2d reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (7 times)
Jul 22 21:55:38.000 W ns/openshift-machine-api machineset/ci-op-8ptqpk9c-1d119-hvjqp-worker-us-west-2a reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (8 times)
Jul 22 21:55:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (32 times)
Jul 22 21:56:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-183-159.us-west-2.compute.internal node/ip-10-0-183-159 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 22 21:56:33.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-183-159.us-west-2.compute.internal node/ip-10-0-183-159 reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:56:36.574 I ns/openshift-marketplace pod/community-operators-2r8dz node/ip-10-0-171-160.us-west-2.compute.internal reason/Scheduled
Jul 22 21:56:36.596 I ns/openshift-marketplace pod/community-operators-2r8dz node/ reason/Created
Jul 22 21:56:38.000 I ns/openshift-marketplace pod/community-operators-2r8dz node/ip-10-0-171-160.us-west-2.compute.internal container/registry-server reason/Pulling image/registry.redhat.io/redhat/community-operator-index:v4.8
Jul 22 21:56:38.000 I ns/openshift-marketplace pod/community-operators-2r8dz reason/AddedInterface Add eth0 [10.129.2.29/23] from openshift-sdn
#1815483626326855680 build-log.txt.gz (4 days ago)
Jul 22 22:03:50.642 W ns/openshift-apiserver pod/apiserver-7c9fd7bbb6-fzvc9 reason/FailedScheduling skip schedule deleting pod: openshift-apiserver/apiserver-7c9fd7bbb6-fzvc9
Jul 22 22:03:50.678 W ns/openshift-apiserver pod/apiserver-69cd6c4cb5-h545s reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 22 22:03:50.772 I ns/openshift-apiserver pod/apiserver-7c9fd7bbb6-fzvc9 node/ reason/DeletedBeforeScheduling
Jul 22 22:03:50.824 I ns/openshift-apiserver pod/apiserver-69cd6c4cb5-h545s node/ reason/Created
Jul 22 22:03:54.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Jul 22 22:03:56.000 I ns/openshift-apiserver pod/apiserver-5c59bdc7fd-2bnzd node/apiserver-5c59bdc7fd-2bnzd reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 22 22:03:56.000 I ns/openshift-apiserver pod/apiserver-5c59bdc7fd-2bnzd node/apiserver-5c59bdc7fd-2bnzd reason/TerminationStoppedServing Server has stopped listening
Jul 22 22:04:06.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-2bnzd node/ip-10-0-143-174.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused\nbody: \n
Jul 22 22:04:06.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-2bnzd node/ip-10-0-143-174.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused\nbody: \n
Jul 22 22:04:06.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-2bnzd node/ip-10-0-143-174.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused
Jul 22 22:04:06.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-2bnzd node/ip-10-0-143-174.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.128.0.42:8443/healthz": dial tcp 10.128.0.42:8443: connect: connection refused
#1815483626326855680 build-log.txt.gz (4 days ago)
Jul 22 22:05:26.292 W ns/openshift-apiserver pod/apiserver-69cd6c4cb5-qvdwm reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 22 22:05:26.328 I ns/openshift-apiserver pod/apiserver-69cd6c4cb5-qvdwm node/ reason/Created
Jul 22 22:05:27.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Jul 22 22:05:36.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-xp4nj node/ip-10-0-247-98.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.52:8443/healthz": dial tcp 10.129.0.52:8443: connect: connection refused\nbody: \n
Jul 22 22:05:36.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-xp4nj node/ip-10-0-247-98.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.52:8443/healthz": dial tcp 10.129.0.52:8443: connect: connection refused\nbody: \n
Jul 22 22:05:36.000 I ns/openshift-apiserver pod/apiserver-5c59bdc7fd-xp4nj node/apiserver-5c59bdc7fd-xp4nj reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 22 22:05:36.000 I ns/openshift-apiserver pod/apiserver-5c59bdc7fd-xp4nj node/apiserver-5c59bdc7fd-xp4nj reason/TerminationStoppedServing Server has stopped listening
Jul 22 22:05:36.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-xp4nj node/ip-10-0-247-98.us-west-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.52:8443/healthz": dial tcp 10.129.0.52:8443: connect: connection refused
Jul 22 22:05:36.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-xp4nj node/ip-10-0-247-98.us-west-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.52:8443/healthz": dial tcp 10.129.0.52:8443: connect: connection refused
Jul 22 22:05:46.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-xp4nj node/ip-10-0-247-98.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.52:8443/healthz": dial tcp 10.129.0.52:8443: connect: connection refused\nbody: \n (2 times)
Jul 22 22:05:46.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-xp4nj node/ip-10-0-247-98.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.52:8443/healthz": dial tcp 10.129.0.52:8443: connect: connection refused\nbody: \n (2 times)
#1815483626326855680 build-log.txt.gz (4 days ago)
Jul 22 22:07:07.113 I ns/openshift-apiserver pod/apiserver-5c59bdc7fd-sxtr4 node/ip-10-0-183-159.us-west-2.compute.internal reason/GracefulDelete duration/70s
Jul 22 22:07:07.138 W ns/openshift-apiserver pod/apiserver-69cd6c4cb5-rk2dn reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 22 22:07:07.172 I ns/openshift-apiserver pod/apiserver-69cd6c4cb5-rk2dn node/ reason/Created
Jul 22 22:07:10.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Degraded message changed from "All is well" to "APIServerDeploymentDegraded: 1 of 3 requested instances are unavailable for apiserver.openshift-apiserver ()",Progressing changed from True to False ("All is well")
Jul 22 22:07:10.244 W clusteroperator/openshift-apiserver condition/Progressing status/False reason/AsExpected changed: All is well
Jul 22 22:07:17.000 I ns/openshift-apiserver pod/apiserver-5c59bdc7fd-sxtr4 node/apiserver-5c59bdc7fd-sxtr4 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 22 22:07:17.000 I ns/openshift-apiserver pod/apiserver-5c59bdc7fd-sxtr4 node/apiserver-5c59bdc7fd-sxtr4 reason/TerminationStoppedServing Server has stopped listening
Jul 22 22:07:17.611 I ns/openshift-marketplace pod/redhat-marketplace-ldtpb node/ip-10-0-171-160.us-west-2.compute.internal reason/Scheduled
Jul 22 22:07:17.634 I ns/openshift-marketplace pod/redhat-marketplace-ldtpb node/ reason/Created
Jul 22 22:07:18.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-sxtr4 node/ip-10-0-183-159.us-west-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused\nbody: \n
Jul 22 22:07:18.000 W ns/openshift-apiserver pod/apiserver-5c59bdc7fd-sxtr4 node/ip-10-0-183-159.us-west-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused\nbody: \n
release-openshift-origin-installer-e2e-aws-disruptive-4.9 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1815568087370436608 build-log (4 days ago)
Jul 23 02:56:27.000 I ns/openshift-oauth-apiserver replicaset/apiserver-74c47bf859 reason/SuccessfulCreate Created pod: apiserver-74c47bf859-5qbss
Jul 23 02:56:27.000 I ns/openshift-oauth-apiserver replicaset/apiserver-84b474ccd5 reason/SuccessfulDelete Deleted pod: apiserver-84b474ccd5-h5hw5
Jul 23 02:56:27.000 I ns/openshift-oauth-apiserver replicaset/apiserver-84b474ccd5 reason/SuccessfulDelete Deleted pod: apiserver-84b474ccd5-h5hw5
Jul 23 02:56:27.000 I ns/openshift-oauth-apiserver replicaset/apiserver-84b474ccd5 reason/SuccessfulDelete Deleted pod: apiserver-84b474ccd5-h5hw5
Jul 23 02:56:27.000 I ns/openshift-oauth-apiserver replicaset/apiserver-84b474ccd5 reason/SuccessfulDelete Deleted pod: apiserver-84b474ccd5-h5hw5
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Jul 23 02:56:27.000 I ns/default namespace/kube-system node/apiserver-84b474ccd5-h5hw5 reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
#1815568087370436608 build-log (4 days ago)
Jul 23 02:56:59.986 I kube-apiserver received an error while watching events: Internal error occurred: etcdserver: no leader
Jul 23 02:56:59.992 I kube-apiserver received an error while watching events: Internal error occurred: etcdserver: no leader
Jul 23 02:56:59.994 I kube-apiserver received an error while watching events: Internal error occurred: etcdserver: no leader
Jul 23 02:56:59.996 I kube-apiserver received an error while watching events: Internal error occurred: etcdserver: no leader
Jul 23 02:56:59.998 I kube-apiserver received an error while watching events: Internal error occurred: etcdserver: no leader
Jul 23 02:57:00.000 I ns/openshift-apiserver pod/apiserver-79f46c4b48-tkr2q node/apiserver-79f46c4b48-tkr2q reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Jul 23 02:57:00.000 I ns/openshift-apiserver pod/apiserver-79f46c4b48-tkr2q node/apiserver-79f46c4b48-tkr2q reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Jul 23 02:57:00.000 I ns/openshift-apiserver pod/apiserver-79f46c4b48-tkr2q node/apiserver-79f46c4b48-tkr2q reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Jul 23 02:57:00.000 W ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-648qb node/ip-10-0-185-79.ec2.internal reason/Unhealthy Readiness probe failed:
Jul 23 02:57:00.000 W ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-648qb node/ip-10-0-185-79.ec2.internal reason/Unhealthy Readiness probe failed:
Jul 23 02:57:00.000 W ns/openshift-etcd pod/etcd-quorum-guard-654cb75ccd-648qb node/ip-10-0-185-79.ec2.internal reason/Unhealthy Readiness probe failed:
Jul 23 02:57:00.000 I kube-apiserver received an error while watching events: Internal error occurred: etcdserver: no leader
Jul 23 02:57:00.003 I kube-apiserver received an error while watching events: Internal error occurred: etcdserver: no leader
periodic-ci-openshift-release-master-ci-4.9-upgrade-from-stable-4.8-e2e-aws-ovn-upgrade (all) - 4 runs, 100% failed, 25% of failures match = 25% impact
#1815483626666594304 build-log.txt.gz (4 days ago)
Jul 22 21:17:38.682 I ns/openshift-marketplace pod/redhat-marketplace-x8l74 node/ip-10-0-132-57.us-east-2.compute.internal container/registry-server reason/Ready
Jul 22 21:17:38.685 I ns/openshift-marketplace pod/redhat-marketplace-x8l74 node/ip-10-0-132-57.us-east-2.compute.internal reason/GracefulDelete duration/1s
Jul 22 21:17:39.000 I ns/openshift-marketplace pod/redhat-marketplace-x8l74 node/ip-10-0-132-57.us-east-2.compute.internal container/registry-server reason/Killing
Jul 22 21:17:40.669 I ns/openshift-marketplace pod/redhat-marketplace-x8l74 node/ip-10-0-132-57.us-east-2.compute.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Jul 22 21:17:52.646 I ns/openshift-marketplace pod/redhat-marketplace-x8l74 node/ip-10-0-132-57.us-east-2.compute.internal reason/Deleted
Jul 22 21:18:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-249-33.us-east-2.compute.internal node/ip-10-0-249-33 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 22 21:18:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-249-33.us-east-2.compute.internal node/ip-10-0-249-33 reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:19:08.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-249-33.us-east-2.compute.internal node/ip-10-0-249-33.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Created
Jul 22 21:19:08.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-249-33.us-east-2.compute.internal node/ip-10-0-249-33.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Pulled image/quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe82481094d454493a7738877f477b790d683b41edb1124fe4c4a53c51ae1252
Jul 22 21:19:08.000 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-249-33.us-east-2.compute.internal node/ip-10-0-249-33.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/Started
Jul 22 21:19:08.134 I ns/openshift-kube-controller-manager pod/kube-controller-manager-ip-10-0-249-33.us-east-2.compute.internal node/ip-10-0-249-33.us-east-2.compute.internal container/kube-controller-manager-recovery-controller reason/ContainerExit code/0 cause/Completed
#1815483626666594304 build-log.txt.gz (4 days ago)
Jul 22 21:24:02.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (19 times)
Jul 22 21:24:06.000 W ns/openshift-network-diagnostics node/ip-10-0-144-57.us-east-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-249-33: failed to establish a TCP connection to 10.0.249.33:6443: dial tcp 10.0.249.33:6443: connect: connection refused
Jul 22 21:24:06.000 W ns/openshift-network-diagnostics node/ip-10-0-144-57.us-east-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-249-33: failed to establish a TCP connection to 10.0.249.33:6443: dial tcp 10.0.249.33:6443: connect: connection refused
Jul 22 21:24:06.000 I ns/openshift-network-diagnostics node/ip-10-0-144-57.us-east-2.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000060398s: kubernetes-apiserver-endpoint-ip-10-0-249-33: tcp connection to 10.0.249.33:6443 succeeded
Jul 22 21:24:06.000 I ns/openshift-network-diagnostics node/ip-10-0-144-57.us-east-2.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000060398s: kubernetes-apiserver-endpoint-ip-10-0-249-33: tcp connection to 10.0.249.33:6443 succeeded
Jul 22 21:24:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-241.us-east-2.compute.internal node/ip-10-0-187-241 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 22 21:24:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-241.us-east-2.compute.internal node/ip-10-0-187-241 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 22 21:24:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-241.us-east-2.compute.internal node/ip-10-0-187-241 reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:24:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-187-241.us-east-2.compute.internal node/ip-10-0-187-241 reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:24:57.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (20 times)
Jul 22 21:24:57.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (20 times)
Jul 22 21:24:58.000 W ns/openshift-machine-api machineset/ci-op-7g3c17d1-978ed-ckpjh-worker-us-east-2b reason/FailedUpdate Failed to set autoscaling from zero annotations, instance type unknown (7 times)
#1815483626666594304 build-log.txt.gz (4 days ago)
Jul 22 21:30:01.000 I ns/openshift-multus job/ip-reconciler-28694730 reason/Completed Job completed
Jul 22 21:30:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SawCompletedJob Saw completed job: ip-reconciler-28694730, status: Complete
Jul 22 21:30:01.000 I ns/openshift-multus cronjob/ip-reconciler reason/SuccessfulDelete Deleted job ip-reconciler-28694730
Jul 22 21:30:01.156 I ns/openshift-multus pod/ip-reconciler-28694730-6pc7m node/ip-10-0-132-57.us-east-2.compute.internal container/whereabouts reason/ContainerExit code/0 cause/Completed
Jul 22 21:30:01.233 I ns/openshift-multus pod/ip-reconciler-28694730-6pc7m node/ip-10-0-132-57.us-east-2.compute.internal reason/DeletedAfterCompletion
Jul 22 21:30:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-217.us-east-2.compute.internal node/ip-10-0-173-217 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 3m30s finished
Jul 22 21:30:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-173-217.us-east-2.compute.internal node/ip-10-0-173-217 reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:30:55.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1c05098097979f753c3ce1d97baf2a0b2bac7f8edf0a003bd7e9357d4e88e04c,registry.ci.openshift.org/ocp/4.9-2024-07-22-202121@sha256:528f7053c41381399923f69e230671c2e85d23202b2390877f14e8e0c10dea58 (40 times)
Jul 22 21:31:06.000 W ns/openshift-network-diagnostics node/ip-10-0-144-57.us-east-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-endpoint-ip-10-0-187-241: failed to establish a TCP connection to 10.0.187.241:6443: dial tcp 10.0.187.241:6443: connect: connection refused
Jul 22 21:31:06.000 I ns/openshift-network-diagnostics node/ip-10-0-144-57.us-east-2.compute.internal reason/ConnectivityRestored roles/worker Connectivity restored after 1m0.000176108s: kubernetes-apiserver-endpoint-ip-10-0-187-241: tcp connection to 10.0.187.241:6443 succeeded
Jul 22 21:31:10.000 W ns/openshift-network-diagnostics node/ip-10-0-144-57.us-east-2.compute.internal reason/ConnectivityOutageDetected roles/worker Connectivity outage detected: kubernetes-apiserver-service-cluster: failed to establish a TCP connection to 172.30.224.137:443: dial tcp 172.30.224.137:443: connect: connection refused
#1815483626666594304 build-log.txt.gz (4 days ago)
Jul 22 21:38:37.828 I ns/openshift-apiserver pod/apiserver-7f847f76b5-lvk89 node/ reason/Created
Jul 22 21:38:41.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: observed generation is 6, desired generation is 7." to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation"
Jul 22 21:38:41.382 W ns/openshift-apiserver pod/apiserver-7f847f76b5-lvk89 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 22 21:38:41.382 W ns/openshift-apiserver pod/apiserver-7f847f76b5-lvk89 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 22 21:38:41.385 I ns/openshift-machine-api pod/machine-api-operator-74d47b9559-fsqtv node/ip-10-0-249-33.us-east-2.compute.internal reason/Deleted
Jul 22 21:38:44.000 I ns/openshift-apiserver pod/apiserver-84dfdc6f4f-drmw4 node/apiserver-84dfdc6f4f-drmw4 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 22 21:38:44.000 I ns/openshift-apiserver pod/apiserver-84dfdc6f4f-drmw4 node/apiserver-84dfdc6f4f-drmw4 reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:38:46.000 W ns/openshift-apiserver pod/apiserver-84dfdc6f4f-drmw4 node/ip-10-0-173-217.us-east-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused\nbody: \n
Jul 22 21:38:46.000 W ns/openshift-apiserver pod/apiserver-84dfdc6f4f-drmw4 node/ip-10-0-173-217.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused\nbody: \n
Jul 22 21:38:46.000 W ns/openshift-apiserver pod/apiserver-84dfdc6f4f-drmw4 node/ip-10-0-173-217.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused
Jul 22 21:38:46.000 W ns/openshift-apiserver pod/apiserver-84dfdc6f4f-drmw4 node/ip-10-0-173-217.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.130.0.42:8443/healthz": dial tcp 10.130.0.42:8443: connect: connection refused
#1815483626666594304 build-log.txt.gz (4 days ago)
Jul 22 21:40:09.030 I ns/openshift-apiserver pod/apiserver-7f847f76b5-lvk89 node/ip-10-0-173-217.us-east-2.compute.internal container/openshift-apiserver reason/Ready
Jul 22 21:40:09.074 I ns/openshift-apiserver pod/apiserver-84dfdc6f4f-k6wkg node/ip-10-0-187-241.us-east-2.compute.internal reason/GracefulDelete duration/70s
Jul 22 21:40:09.190 W ns/openshift-apiserver pod/apiserver-7f847f76b5-8b84t reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 22 21:40:09.194 I ns/openshift-apiserver pod/apiserver-7f847f76b5-8b84t node/ reason/Created
Jul 22 21:40:10.000 I ns/openshift-apiserver-operator deployment/openshift-apiserver-operator reason/OperatorStatusChanged Status for clusteroperator/openshift-apiserver changed: Progressing message changed from "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 1/3 pods have been updated to the latest generation" to "APIServerDeploymentProgressing: deployment/apiserver.openshift-apiserver: 2/3 pods have been updated to the latest generation"
Jul 22 21:40:19.000 I ns/openshift-apiserver pod/apiserver-84dfdc6f4f-k6wkg node/apiserver-84dfdc6f4f-k6wkg reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 10s finished
Jul 22 21:40:19.000 I ns/openshift-apiserver pod/apiserver-84dfdc6f4f-k6wkg node/apiserver-84dfdc6f4f-k6wkg reason/TerminationStoppedServing Server has stopped listening
Jul 22 21:40:23.000 W ns/openshift-apiserver pod/apiserver-84dfdc6f4f-k6wkg node/ip-10-0-187-241.us-east-2.compute.internal reason/ProbeError Liveness probe error: Get "https://10.129.0.33:8443/healthz": dial tcp 10.129.0.33:8443: connect: connection refused\nbody: \n
Jul 22 21:40:23.000 W ns/openshift-apiserver pod/apiserver-84dfdc6f4f-k6wkg node/ip-10-0-187-241.us-east-2.compute.internal reason/ProbeError Readiness probe error: Get "https://10.129.0.33:8443/healthz": dial tcp 10.129.0.33:8443: connect: connection refused\nbody: \n
Jul 22 21:40:23.000 W ns/openshift-apiserver pod/apiserver-84dfdc6f4f-k6wkg node/ip-10-0-187-241.us-east-2.compute.internal reason/Unhealthy Liveness probe failed: Get "https://10.129.0.33:8443/healthz": dial tcp 10.129.0.33:8443: connect: connection refused
Jul 22 21:40:23.000 W ns/openshift-apiserver pod/apiserver-84dfdc6f4f-k6wkg node/ip-10-0-187-241.us-east-2.compute.internal reason/Unhealthy Readiness probe failed: Get "https://10.129.0.33:8443/healthz": dial tcp 10.129.0.33:8443: connect: connection refused
periodic-ci-openshift-release-master-ci-4.10-upgrade-from-stable-4.9-e2e-aws-upgrade-workload (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1814950760501219328 build-log (5 days ago)
Jul 21 09:59:42.720 I ns/openshift-etcd pod/etcd-ip-10-0-128-84.ec2.internal node/ip-10-0-128-84.ec2.internal container/etcd-health-monitor reason/ContainerStart duration/1.00s
Jul 21 09:59:42.720 I ns/openshift-etcd pod/etcd-ip-10-0-128-84.ec2.internal node/ip-10-0-128-84.ec2.internal container/etcdctl reason/Ready
Jul 21 09:59:42.720 I ns/openshift-etcd pod/etcd-ip-10-0-128-84.ec2.internal node/ip-10-0-128-84.ec2.internal container/etcd-readyz reason/Ready
Jul 21 09:59:42.720 I ns/openshift-etcd pod/etcd-ip-10-0-128-84.ec2.internal node/ip-10-0-128-84.ec2.internal container/etcd-metrics reason/Ready
Jul 21 09:59:42.720 I ns/openshift-etcd pod/etcd-ip-10-0-128-84.ec2.internal node/ip-10-0-128-84.ec2.internal container/etcd-health-monitor reason/Ready
Jul 21 09:59:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-84.ec2.internal node/ip-10-0-128-84 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Jul 21 09:59:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-84.ec2.internal node/ip-10-0-128-84 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 21 09:59:46.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-128-84.ec2.internal node/ip-10-0-128-84 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 21 09:59:46.000 W ns/openshift-etcd pod/etcd-quorum-guard-75fd8fc7c6-pksmm node/ip-10-0-128-84.ec2.internal reason/Unhealthy Readiness probe failed:  (12 times)
Jul 21 09:59:46.511 - 59s   I alert/KubeContainerWaiting ns/openshift-etcd pod/etcd-ip-10-0-128-84.ec2.internal container/etcd ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="etcd", namespace="openshift-etcd", pod="etcd-ip-10-0-128-84.ec2.internal", prometheus="openshift-monitoring/k8s", severity="warning"}
Jul 21 09:59:46.511 - 59s   I alert/KubeContainerWaiting ns/openshift-etcd pod/etcd-ip-10-0-128-84.ec2.internal container/etcd-health-monitor ALERTS{alertname="KubeContainerWaiting", alertstate="pending", container="etcd-health-monitor", namespace="openshift-etcd", pod="etcd-ip-10-0-128-84.ec2.internal", prometheus="openshift-monitoring/k8s", severity="warning"}
#1814950760501219328 build-log (5 days ago)
Jul 21 10:04:11.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-06-18-021838@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (19 times)
Jul 21 10:04:11.430 W ns/kube-system openshifttest/openshift-api reason/DisruptionBegan disruption/openshift-api connection/new stopped responding to GET requests over new connections: Get "https://api.ci-op-x00jflrr-65545.aws-2.ci.openshift.org:6443/apis/image.openshift.io/v1/namespaces/default/imagestreams": dial tcp: i/o timeout
Jul 21 10:04:11.446 I ns/kube-system openshifttest/openshift-api reason/DisruptionEnded disruption/openshift-api connection/new started responding to GET requests over new connections
Jul 21 10:04:16.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-246-236.ec2.internal node/ip-10-0-246-236.ec2.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-deprecated-api-requests-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (35 times)
Jul 21 10:04:25.511 - 209s  I alert/APIRemovedInNextEUSReleaseInUse ns/openshift-kube-apiserver ALERTS{alertname="APIRemovedInNextEUSReleaseInUse", alertstate="pending", group="policy", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", resource="podsecuritypolicies", severity="info", version="v1beta1"}
Jul 21 10:05:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-246-236.ec2.internal node/ip-10-0-246-236 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Jul 21 10:05:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-246-236.ec2.internal node/ip-10-0-246-236 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 21 10:05:05.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-246-236.ec2.internal node/ip-10-0-246-236 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 21 10:05:05.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-06-18-021838@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (20 times)
Jul 21 10:05:07.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-246-236.ec2.internal node/ip-10-0-246-236 reason/TerminationGracefulTerminationFinished All pending requests processed
Jul 21 10:05:18.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-246-236.ec2.internal node/ip-10-0-246-236.ec2.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-06-18-021838@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8
#1814950760501219328 build-log (5 days ago)
Jul 21 10:09:16.000 I ns/openshift-marketplace pod/certified-operators-vl7ck node/ip-10-0-144-134.ec2.internal container/registry-server reason/Killing
Jul 21 10:09:17.917 I ns/openshift-marketplace pod/certified-operators-vl7ck node/ip-10-0-144-134.ec2.internal container/registry-server reason/ContainerExit code/0 cause/Completed
Jul 21 10:09:17.932 I ns/openshift-marketplace pod/certified-operators-vl7ck node/ip-10-0-144-134.ec2.internal reason/Deleted
Jul 21 10:09:28.000 W ns/openshift-kube-apiserver pod/kube-apiserver-guard-ip-10-0-135-182.ec2.internal node/ip-10-0-135-182.ec2.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]ping ok\n[+]log ok\n[+]etcd ok\n[+]api-openshift-apiserver-available ok\n[+]api-openshift-oauth-apiserver-available ok\n[+]informer-sync ok\n[+]poststarthook/start-kube-apiserver-admission-initializer ok\n[+]poststarthook/quota.openshift.io-clusterquotamapping ok\n[+]poststarthook/openshift.io-deprecated-api-requests-filter ok\n[+]poststarthook/openshift.io-startkubeinformers ok\n[+]poststarthook/openshift.io-openshift-apiserver-reachable ok\n[+]poststarthook/openshift.io-oauth-apiserver-reachable ok\n[+]poststarthook/generic-apiserver-start-informers ok\n[+]poststarthook/priority-and-fairness-config-consumer ok\n[+]poststarthook/priority-and-fairness-filter ok\n[+]poststarthook/start-apiextensions-informers ok\n[+]poststarthook/start-apiextensions-controllers ok\n[+]poststarthook/crd-informer-synced ok\n[+]poststarthook/bootstrap-controller ok\n[+]poststarthook/rbac/bootstrap-roles ok\n[+]poststarthook/scheduling/bootstrap-system-priority-classes ok\n[+]poststarthook/priority-and-fairness-config-producer ok\n[+]poststarthook/start-cluster-authentication-info-controller ok\n[+]poststarthook/aggregator-reload-proxy-client-cert ok\n[+]poststarthook/start-kube-aggregator-informers ok\n[+]poststarthook/apiservice-registration-controller ok\n[+]poststarthook/apiservice-status-available-controller ok\n[+]poststarthook/apiservice-wait-for-first-sync ok\n[+]poststarthook/kube-apiserver-autoregistration ok\n[+]autoregister-completion ok\n[+]poststarthook/apiservice-openapi-controller ok\n[-]shutdown failed: reason withheld\nreadyz check failed\n\n (34 times)
Jul 21 10:10:06.000 I ns/openshift-kube-apiserver-operator deployment/kube-apiserver-operator reason/MultipleVersions multiple versions found, probably in transition: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0cc71f56205cf95d0a6f6215a9296d826cefc70d8faafaf2021d8483770eaa22,registry.ci.openshift.org/ocp/4.10-2024-06-18-021838@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8 (40 times)
Jul 21 10:10:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-182.ec2.internal node/ip-10-0-135-182 reason/AfterShutdownDelayDuration The minimal shutdown duration of 3m30s finished
Jul 21 10:10:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-182.ec2.internal node/ip-10-0-135-182 reason/HTTPServerStoppedListening HTTP Server has stopped listening
Jul 21 10:10:22.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-182.ec2.internal node/ip-10-0-135-182 reason/InFlightRequestsDrained All non long-running request(s) in-flight have drained
Jul 21 10:10:24.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-182.ec2.internal node/ip-10-0-135-182 reason/TerminationGracefulTerminationFinished All pending requests processed
Jul 21 10:10:25.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-182.ec2.internal node/ip-10-0-135-182.ec2.internal container/kube-apiserver reason/Killing
Jul 21 10:10:31.000 I ns/openshift-kube-apiserver pod/kube-apiserver-ip-10-0-135-182.ec2.internal node/ip-10-0-135-182.ec2.internal container/setup reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-06-18-021838@sha256:1a8894ec099b4800586376ffb745e1758b104ef3d736f06f76fe24d899a910f8
#1814950760501219328 build-log (5 days ago)
Jul 21 10:28:47.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-fb4dcc6b9 to 1
Jul 21 10:28:47.000 I ns/openshift-apiserver replicaset/apiserver-646d958968 reason/SuccessfulCreate Created pod: apiserver-646d958968-qr5l7
Jul 21 10:28:47.000 I ns/openshift-oauth-apiserver replicaset/apiserver-fb4dcc6b9 reason/SuccessfulCreate Created pod: apiserver-fb4dcc6b9-tzwpz
Jul 21 10:28:47.000 I ns/openshift-apiserver replicaset/apiserver-577cb77749 reason/SuccessfulDelete Deleted pod: apiserver-577cb77749-pwwnl
Jul 21 10:28:47.000 I ns/openshift-oauth-apiserver replicaset/apiserver-d465bdb4f reason/SuccessfulDelete Deleted pod: apiserver-d465bdb4f-wwxnm
Jul 21 10:28:47.000 I ns/default namespace/kube-system node/apiserver-d465bdb4f-wwxnm reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 0s finished
Jul 21 10:28:47.000 I ns/default namespace/kube-system node/apiserver-d465bdb4f-wwxnm reason/TerminationPreShutdownHooksFinished All pre-shutdown hooks have been finished
Jul 21 10:28:47.000 I ns/default namespace/kube-system node/apiserver-d465bdb4f-wwxnm reason/TerminationStart Received signal to terminate, becoming unready, but keeping serving
Jul 21 10:28:47.000 I ns/default namespace/kube-system node/apiserver-d465bdb4f-wwxnm reason/TerminationStoppedServing Server has stopped listening
Jul 21 10:28:47.060 I ns/openshift-oauth-apiserver pod/apiserver-d465bdb4f-wwxnm node/ip-10-0-135-182.ec2.internal reason/GracefulDelete duration/70s
Jul 21 10:28:47.124 I ns/openshift-apiserver pod/apiserver-577cb77749-pwwnl node/ reason/DeletedBeforeScheduling
#1814950760501219328 build-log (5 days ago)
Jul 21 10:28:57.000 I ns/openshift-ingress-canary pod/ingress-canary-kgzbx reason/AddedInterface Add eth0 [10.131.0.19/23] from openshift-sdn
Jul 21 10:28:57.000 I ns/openshift-ingress pod/router-default-5b9d88f6f6-stq7g reason/AddedInterface Add eth0 [10.131.0.20/23] from openshift-sdn
Jul 21 10:28:57.000 I ns/openshift-oauth-apiserver deployment/apiserver reason/ScalingReplicaSet Scaled up replica set apiserver-65d67857b4 to 1
Jul 21 10:28:57.000 I ns/openshift-oauth-apiserver replicaset/apiserver-65d67857b4 reason/SuccessfulCreate Created pod: apiserver-65d67857b4-nlgz7
Jul 21 10:28:57.000 I ns/openshift-oauth-apiserver replicaset/apiserver-fb4dcc6b9 reason/SuccessfulDelete Deleted pod: apiserver-fb4dcc6b9-tzwpz
Jul 21 10:28:57.000 I ns/openshift-apiserver pod/apiserver-8ddb6bc85-r8gh9 node/apiserver-8ddb6bc85-r8gh9 reason/TerminationMinimalShutdownDurationFinished The minimal shutdown duration of 15s finished
Jul 21 10:28:57.000 I ns/openshift-apiserver pod/apiserver-8ddb6bc85-r8gh9 node/apiserver-8ddb6bc85-r8gh9 reason/TerminationStoppedServing Server has stopped listening
Jul 21 10:28:57.008 I ns/openshift-oauth-apiserver pod/apiserver-fb4dcc6b9-tzwpz node/ reason/DeletedBeforeScheduling
Jul 21 10:28:57.034 W ns/openshift-oauth-apiserver pod/apiserver-65d67857b4-nlgz7 reason/FailedScheduling 0/6 nodes are available: 3 node(s) didn't match Pod's node affinity/selector, 3 node(s) didn't match pod anti-affinity rules.
Jul 21 10:28:57.034 I ns/openshift-oauth-apiserver pod/apiserver-65d67857b4-nlgz7 node/ reason/Created
Jul 21 10:28:57.198 I ns/openshift-cluster-samples-operator pod/cluster-samples-operator-98d44db78-bngwd node/ip-10-0-135-182.ec2.internal container/cluster-samples-operator reason/ContainerExit code/0 cause/Completed
#1814950760501219328 build-log (5 days ago)
Jul 21 10:29:08.000 I ns/openshift-ingress-canary pod/ingress-canary-ng4dk node/ip-10-0-144-134.ec2.internal container/serve-healthcheck-canary reason/Pulled duration/2.630s image/registry.ci.openshift.org/ocp/4.10-2024-06-18-021838@sha256:f502bc03e3350d717ee8a812fc8511e561aaeb9e656080fe3c1f9ff4759b3636
Jul 21 10:29:08.000 I ns/openshift-controller-manager pod/controller-manager-d87ks reason/AddedInterface Add eth0 [10.129.0.73/23] from openshift-sdn
Jul 21 10:29:08.000 W ns/openshift-ingress pod/router-default-5d464bdbf4-f7cqs node/ip-10-0-213-136.ec2.internal reason/ProbeError Readiness probe error: HTTP probe failed with statuscode: 500\nbody: [+]backend-proxy-http ok\n[+]has-synced ok\n[-]process-running failed: reason withheld\nhealthz check failed\n\n (2 times)
Jul 21 10:29:08.000 W ns/openshift-ingress pod/router-default-5d464bdbf4-f7cqs node/ip-10-0-213-136.ec2.internal reason/Unhealthy Readiness probe failed: HTTP probe failed with statuscode: 500 (2 times)
Jul 21 10:29:08.244 I ns/openshift-ingress pod/router-default-5b9d88f6f6-stq7g node/ip-10-0-146-72.ec2.internal container/router reason/Ready
Jul 21 10:29:08.281 E ns/openshift-console-operator pod/console-operator-6f97f996df-ctsqz node/ip-10-0-135-182.ec2.internal container/console-operator reason/ContainerExit code/1 cause/Error , but keeping serving\nI0721 10:29:06.704524       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-ctsqz", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationMinimalShutdownDurationFinished' The minimal shutdown duration of 0s finished\nI0721 10:29:06.704531       1 genericapiserver.go:362] "[graceful-termination] shutdown event" name="AfterShutdownDelayDuration"\nI0721 10:29:06.704543       1 genericapiserver.go:709] Event(v1.ObjectReference{Kind:"Pod", Namespace:"openshift-console-operator", Name:"console-operator-6f97f996df-ctsqz", UID:"", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'TerminationStoppedServing' Server has stopped listening\nI0721 10:29:06.704574       1 genericapiserver.go:387] "[graceful-termination] shutdown event" name="InFlightRequestsDrained"\nI0721 10:29:06.704805       1 base_controller.go:167] Shutting down ConsoleCLIDownloadsController ...\nI0721 10:29:06.704826       1 base_controller.go:167] Shutting down ConsoleDownloadsDeploymentSyncController ...\nI0721 10:29:06.704835       1 base_controller.go:167] Shutting down ConsoleOperator ...\nI0721 10:29:06.704844       1 base_controller.go:167] Shutting down DownloadsRouteController ...\nI0721 10:29:06.704872       1 base_controller.go:167] Shutting down ManagementStateController ...\nI0721 10:29:06.704891       1 base_controller.go:167] Shutting down ConsoleServiceController ...\nI0721 10:29:06.704903       1 base_controller.go:167] Shutting down HealthCheckController ...\nI0721 10:29:06.704919       1 base_controller.go:167] Shutting down StatusSyncer_console ...\nI0721 10:29:06.704929       1 base_controller.go:145] All StatusSyncer_console post start hooks have been terminated\nI0721 10:29:06.704966       1 base_controller.go:167] Shutting down ResourceSyncController ...\nW0721 10:29:06.704998       1 builder.go:101] graceful termination failed, controllers failed with error: stopped\n
Jul 21 10:29:08.281 E ns/openshift-console-operator pod/console-operator-6f97f996df-ctsqz node/ip-10-0-135-182.ec2.internal container/console-operator reason/TerminationStateCleared lastState.terminated was cleared on a pod (bug https://bugzilla.redhat.com/show_bug.cgi?id=1933760 or similar)
Jul 21 10:29:08.346 I ns/openshift-console-operator pod/console-operator-6f97f996df-ctsqz node/ip-10-0-135-182.ec2.internal reason/Deleted
Jul 21 10:29:08.835 I ns/openshift-ingress pod/router-default-5b9d88f6f6-9nhgp node/ip-10-0-213-136.ec2.internal container/router reason/Ready
Jul 21 10:29:09.000 I ns/openshift-machine-api pod/cluster-autoscaler-operator-5568d9c689-bl24d node/ip-10-0-246-236.ec2.internal container/cluster-autoscaler-operator reason/Pulled duration/4.338s image/registry.ci.openshift.org/ocp/4.10-2024-06-18-021838@sha256:f854ca9537338dd6ae441219ce24b38f06578d485d96e604ddcbac3139260d16
Jul 21 10:29:09.000 I ns/openshift-controller-manager pod/controller-manager-d87ks node/ip-10-0-246-236.ec2.internal container/controller-manager reason/Pulling image/registry.ci.openshift.org/ocp/4.10-2024-06-18-021838@sha256:bacd460ec040ff2266125de7956ccf7ad16f284a89575ab3cd2680590fde0609

Found in 6.20% of runs (7.41% of failures) across 129 total runs and 81 jobs (83.72% failed) in 3.032s