Job:
periodic-ci-openshift-release-master-okd-4.13-e2e-gcp-ovn-upgrade (all) - 1 run, 100% failed, 100% of failures match = 100% impact
#1760640955611877376 (junit), 2 hours ago
alert ClusterOperatorDown fired for 6176 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="UpdatingAlertmanagerFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h43m36s with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingAlertmanagerFailed", severity="critical"} result=reject
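
Note on the per-job stats lines: "impact" is the share of all runs for a job in which a matching failure occurred, i.e. roughly runs × failed% × match%, rounded to whole runs. A minimal sketch of that arithmetic, using the azure-arm64 job's figures further down this listing (the impact() helper is illustrative, not part of the search.ci tooling):

    # Reproduce the "impact" figure from a stats line such as
    # "13 runs, 38% failed, 20% of failures match = 8% impact".
    def impact(runs: int, failed_pct: float, match_pct: float) -> float:
        failed_runs = round(runs * failed_pct / 100)           # 13 * 0.38 ~= 5
        matching_runs = round(failed_runs * match_pct / 100)   # 5 * 0.20 = 1
        return 100 * matching_runs / runs                      # 1 / 13 ~= 7.7 -> 8%

    print(f"{impact(13, 38, 20):.0f}% impact")  # -> 8% impact
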
periodic-ci-openshift-release-master-okd-4.14-e2e-gcp-ovn-upgrade (all) - 1 run, 100% failed, 100% of failures match = 100% impact
#1760640955641237504 (junit), 2 hours ago
alert ClusterOperatorDown fired for 5917 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="UpdatingAlertmanagerFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h38m42s with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingAlertmanagerFailed", severity="critical"} result=reject
periodic-ci-openshift-release-master-nightly-4.15-e2e-metal-ipi-ovn-serial-virtualmedia-bond (all) - 5 runs, 80% failed, 25% of failures match = 20% impact
#1760617635390689280 (junit), 3 hours ago
V2 alert ClusterOperatorDown fired for 15m0s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="machine-config", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 5 runs, 100% failed, 20% of failures match = 20% impact
#1760606216221888512 (junit), 3 hours ago
V2 alert ClusterOperatorDown fired for 2m58s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 4m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
periodic-ci-openshift-release-master-nightly-4.16-upgrade-from-stable-4.15-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 5 runs, 100% failed, 40% of failures match = 40% impact
#1760606242625032192 (junit), 3 hours ago
V2 alert ClusterOperatorDown fired for 2m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
#1760342978821361664 (junit), 21 hours ago
V2 alert ClusterOperatorDown fired for 2m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 58s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
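
The firing durations in this listing come in two renderings: legacy lines report raw seconds ("fired for 5917 seconds"), while V2 lines use a Go-style duration string ("1h38m42s", "15m0s", "58s"). A small normalization sketch for comparing them (to_seconds is a hypothetical helper, not part of the search.ci code):

    import re

    def to_seconds(text: str) -> int:
        # Legacy lines: plain integer seconds, e.g. "5917".
        if text.isdigit():
            return int(text)
        # V2 lines: Go-style duration strings, e.g. "1h38m42s", "15m0s", "58s".
        h, m, s = re.fullmatch(r"(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", text).groups()
        return int(h or 0) * 3600 + int(m or 0) * 60 + int(s or 0)

    assert to_seconds("5917") == 5917
    assert to_seconds("1h38m42s") == 5922
    assert to_seconds("58s") == 58
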
periodic-ci-openshift-release-master-okd-scos-4.14-e2e-aws-ovn-upgrade (all) - 3 runs, 100% failed, 100% of failures match = 100% impact
#1760505809252388864 (junit), 10 hours ago
alert ClusterOperatorDown fired for 3370 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 4810 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="MultipleTasksFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 5620 seconds with labels: {name="control-plane-machine-set", namespace="openshift-cluster-version", reason="UnavailableReplicas", severity="critical"} result=reject
#1760437213687975936 (junit), 14 hours ago
alert ClusterOperatorDown fired for 3383 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 4853 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="MultipleTasksFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 5633 seconds with labels: {name="control-plane-machine-set", namespace="openshift-cluster-version", reason="UnavailableReplicas", severity="critical"} result=reject
#1760069328159379456 (junit), 39 hours ago
alert ClusterOperatorDown fired for 3276 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 4536 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
alert ClusterOperatorDown fired for 5526 seconds with labels: {name="control-plane-machine-set", namespace="openshift-cluster-version", reason="UnavailableReplicas", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h15m38s with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h32m8s with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="control-plane-machine-set", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UnavailableReplicas", severity="critical"} result=reject
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-upgrade-azure-ovn-arm64 (all) - 13 runs, 38% failed, 20% of failures match = 8% impact
#1760440714904211456 (junit), 14 hours ago
V2 alert ClusterOperatorDown fired for 1h45m50s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="control-plane-machine-set", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UnavailableReplicas", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h9m50s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="machine-config", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
periodic-ci-openshift-release-master-nightly-4.14-e2e-aws-ovn-upgrade-rollback-oldest-supported (all) - 7 runs, 100% failed, 14% of failures match = 14% impact
#1760237585713598464 (junit), 27 hours ago
V2 alert ClusterOperatorDown fired for 1h19m26s with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="machine-config", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h45m26s with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingNodeExporterFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h55m56s with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="control-plane-machine-set", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UnavailableReplicas", severity="critical"} result=reject
periodic-ci-openshift-release-master-ci-4.12-e2e-azure-ovn-upgrade (all) - 9 runs, 78% failed, 14% of failures match = 11% impact
#1760175622082007040 (junit), 33 hours ago
alert ClusterOperatorDown fired for 90 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", severity="critical"} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1955300)
periodic-ci-openshift-release-master-ci-4.16-e2e-gcp-ovn-upgrade (all) - 70 runs, 40% failed, 4% of failures match = 1% impact
#1760126891047522304 (junit), 36 hours ago
V2 alert ClusterOperatorDown fired for 1m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="machine-config", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
periodic-ci-openshift-multiarch-master-nightly-4.11-upgrade-from-nightly-4.10-ocp-e2e-aws-arm64 (all) - 1 run, 0% failed, 100% of runs match
#1760108189182857216 (junit), 38 hours ago
alert ClusterOperatorDown fired for 240 seconds with labels: {endpoint="metrics", instance="10.0.131.34:9099", job="cluster-version-operator", name="machine-config", namespace="openshift-cluster-version", pod="cluster-version-operator-7d9bdb89dd-p9knk", service="cluster-version-operator", severity="critical", version="4.10.0-0.nightly-arm64-2023-10-07-045859"} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1955300)
periodic-ci-openshift-release-master-nightly-4.12-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 3 runs, 0% failed, 33% of runs match
#1760023031956115456 (junit), 43 hours ago
alert ClusterOperatorDown fired for 60 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", severity="critical"} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1955300)
periodic-ci-openshift-release-master-nightly-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade (all) - 4 runs, 50% failed, 50% of failures match = 25% impact
#1760023037006057472 (junit), 43 hours ago
alert ClusterOperatorDown fired for 360 seconds with labels: {name="machine-config", namespace="openshift-cluster-version", severity="critical"} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1955300)
periodic-ci-openshift-release-master-okd-4.13-e2e-aws-ovn-upgrade (all) - 1 run, 100% failed, 100% of failures match = 100% impact
#1759964491895803904 (junit), 47 hours ago
alert ClusterOperatorDown fired for 6025 seconds with labels: {name="monitoring", namespace="openshift-cluster-version", reason="UpdatingAlertmanagerFailed", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 1h41m18s with labels: ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="monitoring", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="UpdatingAlertmanagerFailed", severity="critical"} result=reject
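
For reference, the V2 lines in this listing are rendered from the in-cluster Prometheus ALERTS series (note the prometheus="openshift-monitoring/k8s" label). A sketch of pulling the same label sets over the standard Prometheus HTTP API follows; the route URL and bearer token are placeholders that vary per cluster:

    import requests

    PROM = "https://<prometheus-k8s-route>"  # placeholder route
    query = 'ALERTS{alertname="ClusterOperatorDown", alertstate="firing"}'
    resp = requests.get(
        f"{PROM}/api/v1/query",
        params={"query": query},
        headers={"Authorization": "Bearer <token>"},  # placeholder token
        timeout=30,
    )
    resp.raise_for_status()
    # Each result carries the label set seen in the V2 lines above.
    for series in resp.json()["data"]["result"]:
        labels = series["metric"]
        print(labels.get("name"), labels.get("reason"), labels.get("severity"))
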

Found in 0.03% of runs (0.18% of failures) across 51511 total runs and 5489 jobs (18.09% failed) in 827ms