Job:
pull-ci-openshift-installer-master-e2e-azurestack (all) - 4 runs, 50% failed, 50% of failures match = 25% impact
#1816861259149086720 junit - 10 hours ago
V2 alert ClusterNotUpgradeable fired for 15m34s with labels: alertstate/firing severity/info ALERTS{alertname="ClusterNotUpgradeable", alertstate="firing", condition="Upgradeable", endpoint="metrics", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="info"} result=reject
V2 alert ClusterOperatorDown fired for 3m34s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="authentication", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="OAuthServerRouteEndpointAccessibleController_EndpointUnavailable", severity="critical"} result=reject
V2 alert ClusterOperatorDown fired for 5m28s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="console", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="RouteHealth_FailedGet", severity="critical"} result=reject
V2 alert InsightsRecommendationActive fired for 1h14m6s with labels: alertstate/firing severity/info ALERTS{alertname="InsightsRecommendationActive", alertstate="firing", container="insights-operator", description="The control plane nodes' disks of OpenShift cluster on Azure don't provide enough IOPS performance for etcd", endpoint="https", info_link="https://console.redhat.com/openshift/insights/advisor/clusters/e0473a89-cbc5-4ee8-8e36-5f765faa907e?first=ccx_rules_ocp.external.rules.disk_low_throughput_in_azure%7CERROR_DISK_LOW_THROUGHPUT_IN_AZURE", instance="10.129.0.26:8443", job="metrics", namespace="openshift-insights", pod="insights-operator-745865bb4f-69bjr", prometheus="openshift-monitoring/k8s", service="metrics", severity="info", total_risk="Important"} result=reject
pull-ci-openshift-cluster-image-registry-operator-master-e2e-aws-ovn-upgrade (all) - 1 run, 0% failed, 100% of runs match
#1816761073068412928 junit - 17 hours ago
V2 alert ClusterOperatorDown fired for 2s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="version", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", severity="critical"} result=reject
V2 alert PodStartupStorageOperationsFailing fired for 28s with labels: alertstate/firing severity/info ALERTS{alertname="PodStartupStorageOperationsFailing", alertstate="firing", endpoint="https-metrics", instance="10.0.115.231:10250", job="kubelet", metrics_path="/metrics", migrated="false", namespace="kube-system", node="ip-10-0-115-231.ec2.internal", operation_name="volume_mount", prometheus="openshift-monitoring/k8s", service="kubelet", severity="info", status="fail-unknown", volume_plugin="kubernetes.io/secret"} result=reject
periodic-ci-openshift-release-master-nightly-4.8-e2e-aws-upgrade-rollback-oldest-supported (all) - 1 run, 100% failed, 100% of failures match = 100% impact
#1816689490681401344 junit - 20 hours ago
alert ClusterOperatorDown fired for 210 seconds with labels: {endpoint="metrics", instance="10.0.166.102:9099", job="cluster-version-operator", name="machine-config", namespace="openshift-cluster-version", pod="cluster-version-operator-b87f8465d-dpp7n", service="cluster-version-operator", severity="critical", version="4.8.22"} (open bug: https://bugzilla.redhat.com/show_bug.cgi?id=1955300)
alert KubePodCrashLooping pending for 1 second with labels: {__name__="ALERTS", container="cluster-policy-controller", endpoint="https-main", job="kube-state-metrics", namespace="openshift-kube-controller-manager", pod="kube-controller-manager-ip-10-0-172-200.ec2.internal", service="kube-state-metrics", severity="warning"}
periodic-ci-openshift-release-master-nightly-4.15-upgrade-from-stable-4.14-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 3 runs, 100% failed, 33% of failures match = 33% impact
#1816633453509087232 junit - 23 hours ago
V2 alert ClusterOperatorDegraded fired for 34m58s with labels: alertstate/firing severity/warning ALERTS{alertname="ClusterOperatorDegraded", alertstate="firing", name="authentication", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="APIServerDeployment_UnavailablePod::OAuthServerDeployment_UnavailablePod", severity="warning"} result=reject
V2 alert ClusterOperatorDown fired for 11m58s with labels: alertstate/firing severity/critical ALERTS{alertname="ClusterOperatorDown", alertstate="firing", name="machine-config", namespace="openshift-cluster-version", prometheus="openshift-monitoring/k8s", reason="MachineConfigDaemonFailed", severity="critical"} result=reject
V2 alert InsightsDisabled fired for 1h2m8s with labels: alertstate/firing severity/info ALERTS{alertname="InsightsDisabled", alertstate="firing", condition="Disabled", endpoint="metrics", name="insights", namespace="openshift-insights", prometheus="openshift-monitoring/k8s", reason="NoToken", severity="info"} result=reject

Found in 0.03% of runs (0.15% of failures) across 14927 total runs and 3217 jobs (17.83% failed) in 482ms
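
The label sets above are the synthetic ALERTS series that the in-cluster Prometheus (openshift-monitoring/k8s) exports for every active alert. As a minimal sketch, assuming direct query access to that Prometheus and using only its built-in ALERTS / ALERTS_FOR_STATE series (the exact windowing the CI tooling applies is not shown here), queries like the following reproduce the same view in PromQL:

    # every currently firing ClusterOperatorDown alert, with its full label set
    ALERTS{alertname="ClusterOperatorDown", alertstate="firing"}

    # approximate seconds each such alert has been active: the sample value of
    # ALERTS_FOR_STATE is the alert's activeAt timestamp
    time() - ALERTS_FOR_STATE{alertname="ClusterOperatorDown"}

Substituting ClusterNotUpgradeable, ClusterOperatorDegraded, or any of the other alert names listed above for the alertname label yields the corresponding durations.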