Job:
periodic-ci-openshift-release-master-nightly-4.15-upgrade-from-stable-4.14-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 7 runs, 100% failed, 71% of failures match = 71% impact
#1817025416188137472 junit 2 hours ago
V2 alert ClusterOperatorDown fired for 12m58s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="machine-config", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 2m28s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
#1816633453509087232 junit 28 hours ago
V2 alert ClusterOperatorDown fired for 11m58s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="machine-config", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
#1816526862327746560 junit 35 hours ago
V2 alert ClusterOperatorDown fired for 1m28s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 3m28s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
#1816422668631543808 junit 42 hours ago
V2 alert ClusterOperatorDown fired for 58s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
#1816326513075687424 junit 2 days ago
V2 alert ClusterOperatorDown fired for 1m28s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 3m58s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
periodic-ci-openshift-release-master-nightly-4.14-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 8 runs, 88% failed, 14% of failures match = 13% impact
#1816996324671754240 junit 4 hours ago
alert ClusterOperatorDown fired for 300 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 2m28s seconds with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
periodic-ci-openshift-release-master-ci-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1817008789535068160 junit 4 hours ago
alert ClusterOperatorDown fired for 570 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", severity="critical"} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1955300)
pull-ci-openshift-installer-master-e2e-azurestack (all) - 12 runs, 33% failed, 25% of failures match = 8% impact
#1816861259149086720 junit 14 hours ago
V2 alert ClusterOperatorDown fired for 3m34s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="authentication", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="OAuthServerRouteEndpointAccessibleController_EndpointUnavailable", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 5m28s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="console", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="RouteHealth_FailedGet", severity="critical"} result=reject
pull-ci-openshift-cluster-image-registry-operator-master-e2e-aws-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1816761073068412928 junit 21 hours ago
V2 alert ClusterOperatorDown fired for 2s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="critical"} result=reject
periodic-ci-openshift-release-master-nightly-4.8-e2e-aws-upgrade-rollback-oldest-supported (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1816689490681401344 junit 24 hours ago
alert ClusterOperatorDown fired for 210 seconds with labels: {endpoint="metrics", instance="10.0.166.102:9099", job="cluster-version-operator", name="machine-config", namespace="openshift-cluster-version", pod="cluster-version-operator-b87f8465d-dpp7n", service="cluster-version-operator", severity="critical", version="4.8.22"} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1955300)
release-openshift-origin-installer-e2e-azure-upgrade (all) - 16 runs, 50% failed, 25% of failures match = 13% impact
#1816420430009864192 junit 42 hours ago
alert ClusterOperatorDown fired for 3932 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 4172 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 4622 seconds with labels: {name="control-plane-machine-set", namespace="openshift-cluster-version", reason="UnavailableReplicas", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h10m38s seconds with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="machine-config", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h18m8s seconds with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="control-plane-machine-set", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UnavailableReplicas", severity="critical"} result=reject
#1816420420774006784 junit 42 hours ago
alert ClusterOperatorDown fired for 3878 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 4238 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 4868 seconds with labels: {name="control-plane-machine-set", namespace="openshift-cluster-version", reason="UnavailableReplicas", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h11m34s seconds with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h22m4s seconds with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="control-plane-machine-set", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UnavailableReplicas", severity="critical"} result=reject
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-e2e-upgrade-aws-ovn-arm64 (all) - 14 runs, 7% failed, 100% of failures match = 7% impact
#1816423879162204160 junit 43 hours ago
alert ClusterOperatorDown fired for 991 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="UpdatingPrometheusK8SFailed", severity="critical"} result=reject
periodic-ci-openshift-release-master-ci-4.10-e2e-azure-ovn-upgrade (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1816354274959953920 junit 47 hours ago
alert ClusterOperatorDown fired for 540 seconds with labels: {endpoint="metrics", instance="10.0.0.8:9099", job="cluster-version-operator", name="machine-config", namespace="openshift-cluster-version", pod="cluster-version-operator-7ccc4dd7f6-txqrj", service="cluster-version-operator", severity="critical", version="4.10.0-0.ci-2024-06-18-021838"} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1955300)
periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn-upgrade-out-of-change (all) - 7 runs, 14% failed, 100% of failures match = 14% impact
#1816360307895832576 junit 2 days ago
V2 alert ClusterOperatorDown fired for 4s seconds with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="critical"} result=reject

Found in 0.04% of runs (0.23% of failures) across 36482 total runs and 5487 jobs (17.92% failed) in 723ms
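
Note: the per-job "impact" figures in the headers above follow directly from the run counts, and the ALERTS{...} rows are instant-vector samples of the Prometheus ALERTS metric. Below is a minimal Python sketch of that arithmetic using the first job's numbers (7 runs, 100% failed, 71% of failures match); the PromQL selector at the end is an assumption about the shape of query that would return such rows, not something taken from this output.

    # Impact arithmetic for the first job listed above (illustrative only).
    total_runs = 7
    failed_runs = 7                                 # "100% failed"
    matching_failures = 5                           # 5 of 7 failures ~= "71% of failures match"
    match_rate = matching_failures / failed_runs    # ~0.71 -> "71% of failures match"
    impact = matching_failures / total_runs         # ~0.71 -> "71% impact"
    print(f"match_rate={match_rate:.0%} impact={impact:.0%}")

    # Assumed shape of a PromQL query returning rows like the ALERTS{...} samples above.
    query = 'ALERTS{alertname="ClusterOperatorDown", alertstate="firing"}'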