Job:
pull-ci-openshift-kubernetes-release-4.17-e2e-agnostic-ovn-cmd (all) - 1 run, 0% failed, 100% of runs match
#1942440827821232128 junit 6 minutes ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m24s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 9m24s, firing for 0s:
Jul 08 06:29:25.788 - 564s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
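Note: the long/short labels on each interval identify which multiwindow burn-rate pair put the alert into pending. As a reference sketch, these are the four window pairs as defined in the upstream kubernetes-mixin KubeAPIErrorBudgetBurn rules (the expressions actually shipped in a given OpenShift release may differ in detail):
    # severity=critical: long=1h,  short=5m,  burn factor 14.40, for: 2m
    # severity=critical: long=6h,  short=30m, burn factor 6.00,  for: 15m
    # severity=warning:  long=1d,  short=2h,  burn factor 3.00,  for: 1h
    # severity=warning:  long=3d,  short=6h,  burn factor 1.00,  for: 3h
    # e.g. the warning pair seen above, with a 1% (99.0% availability) error budget:
    sum(apiserver_request:burnrate3d) > (1.00 * 0.01000)
    and
    sum(apiserver_request:burnrate6h) > (1.00 * 0.01000)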
pull-ci-openshift-kubernetes-release-4.17-e2e-aws-csi (all) - 1 run, 0% failed, 100% of runs match
#1942440827930284032 junit 27 minutes ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m36s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 9m36s, firing for 0s:
Jul 08 06:08:29.948 - 576s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.14-e2e-azure-ovn (all) - 21 runs, 0% failed, 43% of runs match
#1942437327737458688 junit 47 minutes ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 08 05:12:00.951 - 28s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942297907113758720 junit 10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 07 20:02:51.111 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942219976135938048 junit 14 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 11m28s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 11m28s, firing for 0s:
Jul 07 15:33:01.527 - 194s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 15:33:01.527 - 494s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942185162435465216 junit 17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 07 12:36:09.559 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942117465496489984 junit 22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m38s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m38s, firing for 0s:
Jul 07 08:05:59.158 - 278s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941941805121540096 junit 33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 06 20:28:11.854 - 28s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941906824668123136 junit 36 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m20s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 8m20s, firing for 0s:
Jul 06 18:10:09.013 - 130s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 18:10:09.013 - 370s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941871844210511872 junit 38 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 06 15:54:23.463 - 28s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941729654704443392 junit 47 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 06 06:28:36.551 - 28s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
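Note: the apiserver_request:burnrate<window> series referenced above are Prometheus recording rules. A simplified sketch of what the 3d burn rate measures, assuming only the 5xx error-ratio component (the shipped kubernetes-mixin rules also count overly slow read/write requests against the budget):
    # Fraction of kube-apiserver requests that failed over the long (3d) window;
    # the alert fires when this ratio exceeds (burn factor * error budget) on
    # both the long and the short window at once.
    sum(rate(apiserver_request_total{job="apiserver", code=~"5.."}[3d]))
    /
    sum(rate(apiserver_request_total{job="apiserver"}[3d]))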
periodic-ci-openshift-release-master-ci-4.14-upgrade-from-stable-4.13-e2e-azure-sdn-upgrade (all) - 4 runs, 0% failed, 100% of runs match
#1942406447744684032 junit About an hour ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m50s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m50s, firing for 0s:
Jul 08 04:20:09.426 - 110s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942406447744684032 junit About an hour ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 46m18s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 46m18s, firing for 0s:
Jul 08 03:09:07.746 - 238s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 03:14:37.746 - 28s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 03:37:07.746 - 2512s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942309970909335552 junit 8 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 27m58s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 27m58s, firing for 0s:
Jul 07 21:17:12.077 - 1678s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942219404938842112 junit 13 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m32s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 7m32s, firing for 0s:
Jul 07 16:24:22.219 - 452s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942219404938842112 junit 13 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 42m12s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 42m12s, firing for 0s:
Jul 07 15:14:27.003 - 88s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 15:41:57.003 - 1736s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 16:11:25.003 - 708s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942007801224105984 junit 28 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 24m14s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 24m14s, firing for 0s:
Jul 07 01:20:17.419 - 260s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 01:27:39.419 - 718s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 01:41:09.419 - 418s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 01:54:39.419 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.18-e2e-azure-ovn-runc (all) - 2 runs, 0% failed, 100% of runs match
#1942422733161762816 junit About an hour ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 08 04:24:31.251 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942060346751586304 junit 25 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 07 04:25:23.665 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.20-e2e-aws-ovn-upgrade-ipsec (all) - 2 runs, 0% failed, 50% of runs match
#1942424241894854656 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m54s on platformidentification.JobType{Release:"4.20", FromRelease:"4.20", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m54s, firing for 0s:
Jul 08 04:32:11.765 - 354s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.13-upgrade-from-stable-4.12-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 2 runs, 0% failed, 100% of runs match
#1942404540317831168 junit About an hour ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.13", FromRelease:"4.12", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m28s, firing for 0s:
Jul 08 04:39:03.788 - 148s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941948554784280576 junit 32 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.13", FromRelease:"4.12", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 06 22:30:27.635 - 58s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.15-upgrade-from-stable-4.14-e2e-azure-sdn-upgrade (all) - 6 runs, 33% failed, 200% of failures match = 67% impact
#1942404559611629568 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m28s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 7m28s, firing for 0s:
Jul 08 03:04:30.172 - 448s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942256143367671808 junit 11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 07 17:26:24.180 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942193447175720960 junit 15 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 07 13:54:10.351 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942142180927737856 junit 19 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m58s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 3m58s, firing for 0s:
Jul 07 09:45:17.044 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.14-upgrade-from-stable-4.13-e2e-aws-ovn-upgrade (all) - 4 runs, 0% failed, 75% of runs match
#1942406445274238976 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m30s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m30s, firing for 0s:
Jul 08 02:52:14.406 - 210s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942309970859003904 junit 8 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 11m4s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 11m4s, firing for 0s:
Jul 07 20:27:22.079 - 606s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 20:39:00.079 - 58s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942007799558967296 junit 28 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m22s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m22s, firing for 0s:
Jul 07 00:28:36.558 - 262s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-cluster-network-operator-release-4.17-e2e-aws-ovn-subnet-configs (all) - 1 run, 0% failed, 100% of runs match
#1942418547594498048 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m28s, firing for 0s:
Jul 08 04:17:42.229 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.13-upgrade-from-stable-4.12-e2e-gcp-ovn-upgrade (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1942404653568233472 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 16s on platformidentification.JobType{Release:"4.13", FromRelease:"4.12", Platform:"gcp", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 16s, firing for 0s:
Jul 08 02:50:45.537 - 16s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.15-upgrade-from-stable-4.14-e2e-aws-ovn-upgrade (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1942404554603630592 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 11m32s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 11m32s, firing for 0s:
Jul 08 02:48:47.625 - 34s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 02:50:53.625 - 658s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.15-upgrade-from-stable-4.14-e2e-aws-sdn-upgrade (all) - 4 runs, 50% failed, 50% of failures match = 25% impact
#1942404549545299968 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m56s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 4m56s, firing for 0s:
Jul 08 02:50:30.961 - 296s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-upgrade-from-stable-4.16-e2e-azure-ovn-upgrade (all) - 10 runs, 60% failed, 150% of failures match = 90% impact
#1942393709987368960 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m38s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m38s, firing for 0s:
Jul 08 03:47:50.326 - 458s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942393709987368960 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h10m10s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h10m10s, firing for 0s:
Jul 08 02:33:32.060 - 4182s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 02:34:02.060 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1942340282896879616 junit 6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m56s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 9m56s, firing for 0s:
Jul 07 23:58:52.233 - 596s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942340282896879616 junit 6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h9m14s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h9m14s, firing for 0s:
Jul 07 22:46:50.057 - 4066s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 22:48:20.057 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1942286355270733824 junit 9 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 23m22s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 23m22s, firing for 0s:
Jul 07 20:33:55.307 - 1402s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942286355270733824 junit 9 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2h1m2s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2h1m2s, firing for 0s:
Jul 07 19:22:53.552 - 4000s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 19:24:17.552 - 958s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 19:25:07.552 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 19:41:47.552 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 19:49:47.552 - 2158s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1942224988937392128 junit 13 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m2s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 10m2s, firing for 0s:
Jul 07 16:44:50.896 - 602s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942224988937392128 junit 13 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h8m26s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h8m26s, firing for 0s:
Jul 07 15:34:24.727 - 3958s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 16:02:24.727 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1942127044485713920 junit 20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 17m22s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 17m22s, firing for 0s:
Jul 07 09:56:05.378 - 1042s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942127044485713920 junit 20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h33m26s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h33m26s, firing for 0s:
Jul 07 08:44:53.906 - 4s    I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:45:59.906 - 3956s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:46:29.906 - 358s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 09:14:29.906 - 1288s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1942074241566380032 junit 23 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 14m20s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 14m20s, firing for 0s:
Jul 07 06:16:07.588 - 860s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942074241566380032 junit 23 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h1m54s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h1m54s, firing for 0s:
Jul 07 05:10:00.257 - 3714s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942017006064635904 junit 27 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 16m50s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 16m50s, firing for 0s:
Jul 07 02:52:46.142 - 1010s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942017006064635904 junit 27 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h20m48s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h20m48s, firing for 0s:
Jul 07 01:42:38.833 - 3954s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 01:44:38.833 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 01:46:08.833 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 02:11:38.833 - 748s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1941926392442654720 junit 33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m56s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m56s, firing for 0s:
Jul 06 20:32:13.727 - 778s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:46:43.727 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941926392442654720 junit 33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h1m18s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h1m18s, firing for 0s:
#1941835761171042304 junit 39 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m0s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 9m0s, firing for 0s:
Jul 06 14:39:20.203 - 540s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941835761171042304 junit 39 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h1m10s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h1m10s, firing for 0s:
Jul 06 13:33:52.167 - 3670s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-ovn-kubernetes-release-4.16-e2e-ibmcloud-ipi-ovn-periodic (all) - 1 run, 100% failed, 100% of failures match = 100% impact
#1942399578607194112 junit 2 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h57m34s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h40m38s, firing for 16m56s:
Jul 08 03:06:31.228 - 322s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 03:07:01.228 - 292s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 03:07:23.228 - 270s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 08 03:13:51.228 - 512s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 03:13:51.228 - 512s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 03:37:21.228 - 878s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 03:37:21.228 - 3252s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 03:11:53.228 - 118s  E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="5m"}
Jul 08 03:22:23.228 - 898s  E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
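Note: this run is the only entry above where the alert actually reached firing (critical, on the long=1h/short=5m and long=6h/short=30m pairs); every other entry reports "firing for 0s". To check how long each pair has been active, Prometheus exposes the activation timestamp of every pending/firing alert in ALERTS_FOR_STATE; a quick spot-check sketch:
    # Seconds since each KubeAPIErrorBudgetBurn series went active (start of pending),
    # one series per long/short window pair, matching the intervals listed above.
    time() - ALERTS_FOR_STATE{alertname="KubeAPIErrorBudgetBurn", namespace="openshift-kube-apiserver"}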
periodic-ci-openshift-release-master-ci-4.13-e2e-azure-ovn-upgrade (all) - 3 runs, 67% failed, 50% of failures match = 33% impact
#1942404480754520064 junit 2 hours ago
        <*errors.errorString | 0xc001b56290>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"TargetDown\",\n      \"alertstate\": \"firing\",\n      \"job\": \"node-tuning-operator\",\n      \"namespace\": \"openshift-cluster-node-tuning-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"node-tuning-operator\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751948082.048,\n      \"1\"\n    ]\n  }\n]",
        },
#1942404480754520064 junit 2 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
#1942404480754520064 junit 2 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|MCDPivotError|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
      {"metric": {"__name__": "ALERTS", "alertname": "TargetDown", "alertstate": "firing", "job": "node-tuning-operator", "namespace": "openshift-cluster-node-tuning-operator", "prometheus": "openshift-monitoring/k8s", "service": "node-tuning-operator", "severity": "warning"}, "value": [1751950471.81, "1"]}
    ]
#1942404480754520064junit2 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
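Each of these failures comes from the suite's alert invariant: at the end of the run it issues a PromQL instant query for any firing, non-info alert whose name is not on the tolerated list, and fails if the result vector is non-empty. A minimal sketch of that check, assuming a reachable Prometheus query endpoint and a bearer token; the URL, token handling, and abbreviated allow-list below are illustrative, not the suite's actual code:

    import requests

    PROM_URL = "https://prometheus.example:9091/api/v1/query"  # hypothetical endpoint
    ALLOWED = ["Watchdog", "AlertmanagerReceiversNotConfigured", "KubeAPIErrorBudgetBurn"]  # abbreviated

    def unexpected_firing_alerts(token):
        # Exclude tolerated alert names via a negative regex match, mirroring the
        # ALERTS{alertname!~"...",alertstate="firing",severity!="info"} queries above.
        query = 'ALERTS{alertname!~"%s",alertstate="firing",severity!="info"} >= 1' % "|".join(ALLOWED)
        resp = requests.get(PROM_URL, params={"query": query},
                            headers={"Authorization": "Bearer " + token})
        resp.raise_for_status()
        # An instant query answers {"status": "success", "data": {"resultType":
        # "vector", "result": [...]}}; any series left in "result" violates the
        # invariant and is dumped into the failure message.
        return resp.json()["data"]["result"]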
periodic-ci-openshift-release-master-ci-4.13-e2e-aws-sdn-techpreview-serial (all) - 2 runs, 0% failed, 100% of runs match
#1942404531086168064junit3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m54s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 5m54s, firing for 0s:
Jul 08 02:43:22.708 - 354s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941948513461997568junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m38s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 13m38s, firing for 0s:
Jul 06 20:31:57.241 - 818s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
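The long="3d"/short="6h" labels on KubeAPIErrorBudgetBurn mark it as a multiwindow, multi-burn-rate SLO alert: it goes pending only while both the long and the short error-ratio windows consume error budget faster than a fixed factor. A sketch of that condition, assuming a 99% availability objective and a factor of 1.0 for this window pair (the rule's actual target and factors are not visible in this output):

    ERROR_BUDGET = 1 - 0.99  # assumed availability objective

    def burning(err_ratio_3d, err_ratio_6h, factor=1.0):
        # "Pending" starts when both windows burn budget faster than `factor`;
        # "firing" requires the condition to hold for the rule's `for:` duration.
        return (err_ratio_3d > factor * ERROR_BUDGET
                and err_ratio_6h > factor * ERROR_BUDGET)

    # e.g. 1.2% errors over 3d plus 2% over 6h puts the alert into "pending":
    assert burning(0.012, 0.02)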
openshift-cluster-storage-operator-596-ci-4.20-upgrade-from-stable-4.19-e2e-aws-ovn-upgrade (all) - 10 runs, 100% failed, 100% of failures match = 100% impact
#1942376020619300864junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
      {"metric": {"__name__": "ALERTS", "alertname": "KubeDeploymentReplicasMismatch", "alertstate": "firing", "container": "kube-rbac-proxy-main", "deployment": "csi-snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "prometheus": "openshift-monitoring/k8s", "service": "kube-state-metrics", "severity": "warning"}, "value": [1751943440.131, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "cluster-storage-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "cluster-storage-operator-655547458b-w74wn", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "8a6160da-446d-48f5-be4c-8380a4f0a770"}, "value": [1751943440.131, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "csi-snapshot-controller-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-operator-5c795f6fd8-lg8sl", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "494ee0b0-0d9c-4f7c-8110-a7e7d6186a2f"}, "value": [1751943440.131, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-258dc", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "e906f1fa-f825-4a20-95ee-45ce97e65645"}, "value": [1751943440.131, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-5lhdd", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "01b62c13-3bbc-4c23-afc9-c0aed1348d7f"}, "value": [1751943440.131, "1"]}
    ]
#1942376020619300864junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
      {"metric": {"__name__": "ALERTS", "alertname": "KubeDeploymentReplicasMismatch", "alertstate": "firing", "container": "kube-rbac-proxy-main", "deployment": "csi-snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "prometheus": "openshift-monitoring/k8s", "service": "kube-state-metrics", "severity": "warning"}, "value": [1751947226.822, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "cluster-storage-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "cluster-storage-operator-655547458b-w74wn", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "8a6160da-446d-48f5-be4c-8380a4f0a770"}, "value": [1751947226.822, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "csi-snapshot-controller-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-operator-5c795f6fd8-lg8sl", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "494ee0b0-0d9c-4f7c-8110-a7e7d6186a2f"}, "value": [1751947226.822, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-258dc", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "e906f1fa-f825-4a20-95ee-45ce97e65645"}, "value": [1751947226.822, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-5lhdd", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "01b62c13-3bbc-4c23-afc9-c0aed1348d7f"}, "value": [1751947226.822, "1"]}
    ]
#1942376019717525504junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
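The alertname!~ alternations above repeat names such as KubePodNotReady dozens of times, apparently because each test contributes its tolerance to one concatenated list. Duplicates in a regex alternation match exactly what a single copy matches, so an order-preserving dedup shrinks the matcher without changing its behavior; a sketch:

    def alertname_matcher(tolerated):
        # dict.fromkeys keeps first-seen order, so the deduplicated alternation
        # matches exactly the same alert names as the repetitive original.
        unique = list(dict.fromkeys(tolerated))
        return 'alertname!~"%s"' % "|".join(unique)

    print(alertname_matcher(["Watchdog", "KubePodNotReady", "KubePodNotReady"]))
    # -> alertname!~"Watchdog|KubePodNotReady"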
#1942376019717525504junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "cluster-storage-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "cluster-storage-operator-655547458b-k5td7", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "f7a0e5f7-1253-4cb5-ae6d-525a49689968"}, "value": [1751943590.346, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "csi-snapshot-controller-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-operator-5c795f6fd8-rdk5n", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "da384fec-5c79-4a97-a32e-54e5d201b836"}, "value": [1751943590.346, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-96rz9", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "44556430-4c5d-4310-ac3e-6d0cae5d157c"}, "value": [1751943590.346, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-g2cqb", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "8029129c-3618-42c2-bf1d-884780223fb8"}, "value": [1751943590.346, "1"]}
    ]
#1942376019717525504junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942376019717525504junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "cluster-storage-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "cluster-storage-operator-655547458b-k5td7", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "f7a0e5f7-1253-4cb5-ae6d-525a49689968"}, "value": [1751947196.838, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "csi-snapshot-controller-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-operator-5c795f6fd8-rdk5n", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "da384fec-5c79-4a97-a32e-54e5d201b836"}, "value": [1751947196.838, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-96rz9", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "44556430-4c5d-4310-ac3e-6d0cae5d157c"}, "value": [1751947196.838, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-g2cqb", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "8029129c-3618-42c2-bf1d-884780223fb8"}, "value": [1751947196.838, "1"]}
    ]
#1942376022896807936junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
      {"metric": {"__name__": "ALERTS", "alertname": "KubeDeploymentReplicasMismatch", "alertstate": "firing", "container": "kube-rbac-proxy-main", "deployment": "csi-snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "prometheus": "openshift-monitoring/k8s", "service": "kube-state-metrics", "severity": "warning"}, "value": [1751943260.177, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "cluster-storage-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "cluster-storage-operator-655547458b-lwdpp", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "1e300fd2-9349-4a63-88fb-e5f352ad74ec"}, "value": [1751943260.177, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "csi-snapshot-controller-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-operator-5c795f6fd8-4dh89", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "ac620a91-f3bc-4f6c-8b5e-1546ba93a55f"}, "value": [1751943260.177, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-wvgl6", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "341d5864-e6dc-4b04-ac83-73825db79b28"}, "value": [1751943260.177, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-zk5wb", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "d97e91be-8192-42d6-b6cd-ea2f1c8f21b2"}, "value": [1751943260.177, "1"]}
    ]
#1942376022896807936junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
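In each result vector, "value" pairs a Unix evaluation timestamp with the sample rendered as a string; converting the timestamp to UTC places the firing sample on the job timeline. For instance, taking the timestamp from the KubeDeploymentReplicasMismatch result above:

    from datetime import datetime, timezone

    ts, sample = 1751943260.177, "1"  # from the entry above
    print(datetime.fromtimestamp(ts, tz=timezone.utc).isoformat(), sample)
    # -> 2025-07-08T02:54:20.177000+00:00 1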
#1942376022896807936junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "cluster-storage-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "cluster-storage-operator-655547458b-lwdpp", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "1e300fd2-9349-4a63-88fb-e5f352ad74ec"}, "value": [1751946551.896, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "csi-snapshot-controller-operator", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-operator-5c795f6fd8-4dh89", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "ac620a91-f3bc-4f6c-8b5e-1546ba93a55f"}, "value": [1751946551.896, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-wvgl6", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "341d5864-e6dc-4b04-ac83-73825db79b28"}, "value": [1751946551.896, "1"]},
      {"metric": {"__name__": "ALERTS", "alertname": "KubePodCrashLooping", "alertstate": "firing", "container": "snapshot-controller", "endpoint": "https-main", "job": "kube-state-metrics", "namespace": "openshift-cluster-storage-operator", "pod": "csi-snapshot-controller-84f665d4db-zk5wb", "prometheus": "openshift-monitoring/k8s", "reason": "CrashLoopBackOff", "service": "kube-state-metrics", "severity": "warning", "uid": "d97e91be-8192-42d6-b6cd-ea2f1c8f21b2"}, "value": [1751946551.896, "1"]}
    ]
#1942376021982449664junit3 hours ago
        <*errors.errorString | 0xc0023e7a90>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751943153.195,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-655547458b-xcsdg\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"f38380c4-53c7-4099-a64e-2b1f7e4f6ee2\"\n    },\n    \"value\": [\n      1751943153.195,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5c795f6fd8-2mrkf\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"35eee38e-e371-4385-b471-853bd88f91fb\"\n    },\n    \"value\": [\n      1751943153.195,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-ts6fj\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"df854e63-8d6e-4e02-b79a-ae408c846294\"\n    },\n    \"value\": [\n      1751943153.195,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-w8s5t\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"0568ba04-345c-4959-8579-835ab4366a3d\"\n    },\n    \"value\": [\n      1751943153.195,\n      \"1\"\n    ]\n  }\n]",
        },
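Each of the failures above and below is the same e2e invariant: the test evaluates the ALERTS series against an exclusion regex (the repeated alternates in alertname!~ appear to come from per-test exceptions being appended without de-duplication) and fails the run if anything else is firing. De-duplicated purely for readability, the query in these dumps reduces to the following sketch, with the same labels and exclusions as above and each alternate listed once:

    # any firing, non-info alert not on the exclusion list trips the invariant
    ALERTS{
      alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|PodSecurityViolation",
      alertstate="firing",
      severity!="info",
      namespace!="openshift-e2e-loki"
    } >= 1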
#1942376021982449664junit3 hours ago
        <*errors.errorString | 0xc0067a0640>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751946694.944,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-655547458b-xcsdg\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"f38380c4-53c7-4099-a64e-2b1f7e4f6ee2\"\n    },\n    \"value\": [\n      1751946694.944,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5c795f6fd8-2mrkf\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"35eee38e-e371-4385-b471-853bd88f91fb\"\n    },\n    \"value\": [\n      1751946694.944,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-ts6fj\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"df854e63-8d6e-4e02-b79a-ae408c846294\"\n    },\n    \"value\": [\n      1751946694.944,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-w8s5t\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"0568ba04-345c-4959-8579-835ab4366a3d\"\n    },\n    \"value\": [\n      1751946694.944,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376020166316032junit3 hours ago
        <*errors.errorString | 0xc001b59040>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751943013.889,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751943013.889,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-5596bdf57-f9kml\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"e80cea6a-0040-424f-9fe5-a723747c20a6\"\n    },\n    \"value\": [\n      1751943013.889,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5697d9f468-ktf7t\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"d9584ffa-745d-4b25-9087-63ab56476856\"\n    },\n    \"value\": [\n      1751943013.889,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-7qgqg\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"f6a88284-160e-46fe-9373-1cc90ebbd6ac\"\n    },\n    \"value\": [\n      1751943013.889,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-mfxt7\",\n      \"prometheus\": 
\"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"00c169c9-c936-45a1-9b0d-52ebc12b2225\"\n    },\n    \"value\": [\n      1751943013.889,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376020166316032junit3 hours ago
        <*errors.errorString | 0xc001a834b0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751946602.113,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-5596bdf57-f9kml\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"e80cea6a-0040-424f-9fe5-a723747c20a6\"\n    },\n    \"value\": [\n      1751946602.113,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5697d9f468-ktf7t\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"d9584ffa-745d-4b25-9087-63ab56476856\"\n    },\n    \"value\": [\n      1751946602.113,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-7qgqg\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"f6a88284-160e-46fe-9373-1cc90ebbd6ac\"\n    },\n    \"value\": [\n      1751946602.113,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-mfxt7\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"00c169c9-c936-45a1-9b0d-52ebc12b2225\"\n    },\n    \"value\": [\n      1751946602.113,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376023353987072junit3 hours ago
        <*errors.errorString | 0xc00134ac70>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751942939.57,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-5596bdf57-87klg\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"300c4680-9ab6-41f3-b476-1f8a1996c3cc\"\n    },\n    \"value\": [\n      1751942939.57,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5697d9f468-jbpmf\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"438cd9dc-315e-4b50-833b-2c69edaf882b\"\n    },\n    \"value\": [\n      1751942939.57,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-58nkk\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"26e865ac-aef7-46f9-abb1-90b58440d42d\"\n    },\n    \"value\": [\n      1751942939.57,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-jvss4\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"6d6de144-a598-43d2-8818-0c52631b5db4\"\n    },\n    \"value\": [\n      1751942939.57,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376023353987072junit3 hours ago
        <*errors.errorString | 0xc0064ef170>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751946465.387,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-5596bdf57-87klg\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"300c4680-9ab6-41f3-b476-1f8a1996c3cc\"\n    },\n    \"value\": [\n      1751946465.387,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5697d9f468-jbpmf\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"438cd9dc-315e-4b50-833b-2c69edaf882b\"\n    },\n    \"value\": [\n      1751946465.387,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-58nkk\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"26e865ac-aef7-46f9-abb1-90b58440d42d\"\n    },\n    \"value\": [\n      1751946465.387,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-jvss4\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"6d6de144-a598-43d2-8818-0c52631b5db4\"\n    },\n    \"value\": [\n      1751946465.387,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376021533659136junit3 hours ago
        <*errors.errorString | 0xc0065b3780>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751943065.41,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-655547458b-9txqg\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"968526e9-83f8-478e-8f98-e36f2d599371\"\n    },\n    \"value\": [\n      1751943065.41,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5c795f6fd8-s722b\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"f8394925-c2d4-44fd-97e7-355a3419e89d\"\n    },\n    \"value\": [\n      1751943065.41,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-67jxf\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"8dd3c958-c1df-4a95-89de-6417e7cee10f\"\n    },\n    \"value\": [\n      1751943065.41,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-s9t8v\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"cc73da55-0405-4538-86b1-16a869f2003b\"\n    },\n    \"value\": [\n      1751943065.41,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376021533659136junit3 hours ago
        <*errors.errorString | 0xc0066aae80>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751946211.544,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-655547458b-9txqg\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"968526e9-83f8-478e-8f98-e36f2d599371\"\n    },\n    \"value\": [\n      1751946211.544,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5c795f6fd8-s722b\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"f8394925-c2d4-44fd-97e7-355a3419e89d\"\n    },\n    \"value\": [\n      1751946211.544,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-67jxf\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"8dd3c958-c1df-4a95-89de-6417e7cee10f\"\n    },\n    \"value\": [\n      1751946211.544,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-s9t8v\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"cc73da55-0405-4538-86b1-16a869f2003b\"\n    },\n    \"value\": [\n      1751946211.544,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376022439628800junit3 hours ago
        <*errors.errorString | 0xc0026661d0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751943071.325,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-655547458b-h8ctq\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"ac9170be-fec2-4ac3-b637-fd28bb254827\"\n    },\n    \"value\": [\n      1751943071.325,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5c795f6fd8-rkl2h\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"ec4ff89c-2a7c-4dc5-adaf-24a96b1b20bf\"\n    },\n    \"value\": [\n      1751943071.325,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-jc96r\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"115e35b0-9c79-409f-b1bf-06b4e3a0c19d\"\n    },\n    \"value\": [\n      1751943071.325,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-mtk4d\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"23deeda9-292d-4f22-a2ef-afd723cc9a3d\"\n    },\n    \"value\": [\n      1751943071.325,\n      \"1\"\n    ]\n  }\n]",
        },
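The alertname!~ matcher in these failures repeats entries such as KubePodNotReady well over a hundred times, which suggests the allowed-alerts list is concatenated without de-duplication before being joined into the regex. A minimal Go sketch of assembling the same query with duplicates collapsed (a hypothetical helper for illustration, not the origin test's actual code):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // buildAlertsQuery joins an allow-list of alert names into the
    // alertname!~ matcher shown above, de-duplicating the names first so
    // repeated entries like KubePodNotReady appear only once in the regex.
    func buildAlertsQuery(allowed []string) string {
    	seen := make(map[string]bool)
    	uniq := make([]string, 0, len(allowed))
    	for _, name := range allowed {
    		if !seen[name] {
    			seen[name] = true
    			uniq = append(uniq, name)
    		}
    	}
    	return fmt.Sprintf(
    		`ALERTS{alertname!~"%s",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1`,
    		strings.Join(uniq, "|"),
    	)
    }

    func main() {
    	fmt.Println(buildAlertsQuery([]string{
    		"Watchdog", "KubePodNotReady", "KubePodNotReady", "Watchdog", "etcdMembersDown",
    	}))
    }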
#1942376022439628800 junit 3 hours ago
        <*errors.errorString | 0xc006915f20>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751946231.817,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-655547458b-h8ctq\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"ac9170be-fec2-4ac3-b637-fd28bb254827\"\n    },\n    \"value\": [\n      1751946231.817,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5c795f6fd8-rkl2h\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"ec4ff89c-2a7c-4dc5-adaf-24a96b1b20bf\"\n    },\n    \"value\": [\n      1751946231.817,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-jc96r\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"115e35b0-9c79-409f-b1bf-06b4e3a0c19d\"\n    },\n    \"value\": [\n      1751946231.817,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-mtk4d\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"23deeda9-292d-4f22-a2ef-afd723cc9a3d\"\n    },\n    \"value\": [\n      1751946231.817,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376023815360512 junit 3 hours ago
        <*errors.errorString | 0xc00678e8f0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751942949.858,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-5596bdf57-z4ml5\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"57059eb0-fd5e-479d-b278-a7793cd6de48\"\n    },\n    \"value\": [\n      1751942949.858,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5697d9f468-tvt78\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"881019e8-1da4-489b-93f1-02a5a9c5d44b\"\n    },\n    \"value\": [\n      1751942949.858,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-pjqq9\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"6464a791-b6d4-461a-a1af-48598c1f65de\"\n    },\n    \"value\": [\n      1751942949.858,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-rqwjh\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"e95e65b2-0010-4639-b97f-2b06ecbef3f0\"\n    },\n    \"value\": [\n      1751942949.858,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376023815360512 junit 3 hours ago
        <*errors.errorString | 0xc00658d820>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751946077.088,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-5596bdf57-z4ml5\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"57059eb0-fd5e-479d-b278-a7793cd6de48\"\n    },\n    \"value\": [\n      1751946077.088,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5697d9f468-tvt78\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"881019e8-1da4-489b-93f1-02a5a9c5d44b\"\n    },\n    \"value\": [\n      1751946077.088,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-pjqq9\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"6464a791-b6d4-461a-a1af-48598c1f65de\"\n    },\n    \"value\": [\n      1751946077.088,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-598b697b99-rqwjh\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"e95e65b2-0010-4639-b97f-2b06ecbef3f0\"\n    },\n    \"value\": [\n      1751946077.088,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376021080674304 junit 3 hours ago
        <*errors.errorString | 0xc001e4a6d0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751942866.077,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-655547458b-wlnjc\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"1d9adfc9-ffc7-40ca-ba47-39dc56e3234f\"\n    },\n    \"value\": [\n      1751942866.077,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5c795f6fd8-4rskd\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"aec04797-11b5-4e93-ba45-8c99aaf757a9\"\n    },\n    \"value\": [\n      1751942866.077,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-6vdpm\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"af6c39de-7b44-4e05-ad94-6c96479a5cbc\"\n    },\n    \"value\": [\n      1751942866.077,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-rxfnq\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"b4797ed5-2656-4d43-bbda-e2debaf9de92\"\n    },\n    \"value\": [\n      1751942866.077,\n      \"1\"\n    ]\n  }\n]",
        },
#1942376021080674304 junit 3 hours ago
        <*errors.errorString | 0xc006744960>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeDeploymentReplicasMismatch\",\n      \"alertstate\": \"firing\",\n     
 \"container\": \"kube-rbac-proxy-main\",\n      \"deployment\": \"csi-snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751945844.445,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"cluster-storage-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"cluster-storage-operator-655547458b-wlnjc\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"1d9adfc9-ffc7-40ca-ba47-39dc56e3234f\"\n    },\n    \"value\": [\n      1751945844.445,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"csi-snapshot-controller-operator\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-operator-5c795f6fd8-4rskd\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"aec04797-11b5-4e93-ba45-8c99aaf757a9\"\n    },\n    \"value\": [\n      1751945844.445,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-6vdpm\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"af6c39de-7b44-4e05-ad94-6c96479a5cbc\"\n    },\n    \"value\": [\n      1751945844.445,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": \"firing\",\n      \"container\": \"snapshot-controller\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": \"csi-snapshot-controller-84f665d4db-rxfnq\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"b4797ed5-2656-4d43-bbda-e2debaf9de92\"\n    },\n    \"value\": [\n      1751945844.445,\n      \"1\"\n    ]\n  }\n]",
        },
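Each failure above embeds the raw instant-vector result the query returned: an array of objects pairing a "metric" label set with a [timestamp, value] pair. A small, self-contained Go sketch of decoding that shape (the struct is inferred from the JSON embedded above, not taken from the test code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // sample mirrors one element of the embedded result: a label set plus
    // a [unix-timestamp, string-value] pair (a Prometheus instant vector).
    type sample struct {
    	Metric map[string]string `json:"metric"`
    	Value  []interface{}     `json:"value"`
    }

    func main() {
    	raw := `[{"metric":{"__name__":"ALERTS","alertname":"KubePodCrashLooping","alertstate":"firing","namespace":"openshift-cluster-storage-operator","severity":"warning"},"value":[1751943071.325,"1"]}]`
    	var samples []sample
    	if err := json.Unmarshal([]byte(raw), &samples); err != nil {
    		panic(err)
    	}
    	for _, s := range samples {
    		fmt.Printf("%s is %s in %s\n",
    			s.Metric["alertname"], s.Metric["alertstate"], s.Metric["namespace"])
    	}
    }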
periodic-ci-openshift-release-master-ci-4.16-upgrade-from-stable-4.15-e2e-azure-sdn-upgrade (all) - 4 runs, 25% failed, 400% of failures match = 100% impact
#1942386931643977728 junit 3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 24m50s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 24m50s, firing for 0s:
Jul 08 03:09:40.087 - 1404s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 03:33:36.087 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 03:35:36.087 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942386931643977728 junit 3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58m28s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 58m28s, firing for 0s:
#1942296215748087808 junit 9 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 16m10s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 16m10s, firing for 0s:
Jul 07 21:18:59.400 - 970s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942296215748087808 junit 9 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 57m14s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 57m14s, firing for 0s:
Jul 07 20:17:40.478 - 3434s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942190469899358208 junit 16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 50m6s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 50m6s, firing for 0s:
Jul 07 12:56:50.054 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 13:01:20.054 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 13:03:20.054 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 13:11:50.054 - 2772s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941931402723332096 junit 33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 21m6s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 21m6s, firing for 0s:
Jul 06 20:47:07.932 - 1208s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 21:09:17.932 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941931402723332096 junit 33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 55m40s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 55m40s, firing for 0s:
periodic-ci-openshift-ovn-kubernetes-release-4.20-e2e-ibmcloud-ipi-ovn-periodic (all) - 2 runs, 0% failed, 100% of runs match
#1942373384805421056 junit 3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28m26s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28m26s, firing for 0s:
Jul 08 02:08:38.944 - 1618s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 02:09:08.944 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
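The reported pending total is just the sum of the listed interval durations: for this run, 1618s + 88s = 1706s, rendered above as 28m26s. A trivial Go sketch of that bookkeeping (illustrative only, not the monitor's implementation):

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// The two alertstate="pending" intervals for run #1942373384805421056 above.
    	intervals := []time.Duration{1618 * time.Second, 88 * time.Second}
    	var total time.Duration
    	for _, d := range intervals {
    		total += d
    	}
    	fmt.Println(total) // 28m26s, matching the reported pending time
    }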
#1942011013863837696 junit 26 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 24m58s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 24m58s, firing for 0s:
Jul 07 02:27:47.357 - 1498s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.13-e2e-vsphere-ovn-serial (all) - 2 runs, 0% failed, 50% of runs match
#1942404561293545472 junit 3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 15m34s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 15m34s, firing for 0s:
Jul 08 02:53:08.160 - 934s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.13-e2e-aws-sdn-serial (all) - 8 runs, 50% failed, 100% of failures match = 50% impact
#1942396029420703744 junit 3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 53m26s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 53m26s, firing for 0s:
Jul 08 02:20:11.746 - 44s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 08 02:20:11.746 - 292s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 02:20:11.746 - 2842s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 02:25:35.746 - 28s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
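The long/short label pairs on these KubeAPIErrorBudgetBurn entries (6h/30m at critical, 1d/2h and 3d/6h at warning) follow the multi-window, multi-burn-rate alerting pattern: a severity fires only while both its long and its short window burn error budget faster than that window's threshold, so a brief spike clears as soon as the short window recovers. A toy Go illustration of the AND-condition (the factor and budget values are placeholders, not the shipped rule's numbers):

    package main

    import "fmt"

    // burnRateActive reports whether a multi-window burn-rate alert would
    // be active: both the long and the short window must burn budget
    // faster than factor times the allowed rate.
    func burnRateActive(longRate, shortRate, factor, budget float64) bool {
    	return longRate > factor*budget && shortRate > factor*budget
    }

    func main() {
    	fmt.Println(burnRateActive(0.012, 0.015, 1.0, 0.01)) // true: both windows burning
    	fmt.Println(burnRateActive(0.012, 0.004, 1.0, 0.01)) // false: short window recovered
    }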
#1942337774489178112 junit 7 hours ago
2025-07-07T23:52:40Z: Call to sippy finished after: 1.143565919s
response Body: {"ProwJobName":"periodic-ci-openshift-release-master-ci-4.13-e2e-aws-sdn-serial","ProwJobRunID":1942337774489178112,"Release":"4.13","CompareRelease":"4.13","Tests":[{"Name":"[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info","TestID":0,"Risk":{"Level":{"Name":"High","Level":100},"Reasons":["This test has passed 100.00% of 13 runs on jobs [periodic-ci-openshift-release-master-ci-4.13-e2e-aws-sdn-serial] in the last 14 days."],"CurrentRuns":13,"CurrentPasses":13,"CurrentPassPercentage":100},"OpenBugs":[]}],"OverallRisk":{"Level":{"Name":"High","Level":100},"Reasons":["Maximum failed test risk: High"],"JobRunTestCount":823,"JobRunTestFailures":1,"NeverStableJob":false,"HistoricalRunTestCount":552},"OpenBugs":[]}
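The sippy risk-analysis response is plain JSON; a Go struct covering the fields visible in the body above (the shape is inferred from this single response, not a published sippy schema):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    )

    // riskAnalysis mirrors the fields visible in the sippy response above.
    type riskAnalysis struct {
    	ProwJobName  string `json:"ProwJobName"`
    	ProwJobRunID int64  `json:"ProwJobRunID"`
    	Release      string `json:"Release"`
    	Tests        []struct {
    		Name string `json:"Name"`
    		Risk struct {
    			Level struct {
    				Name  string `json:"Name"`
    				Level int    `json:"Level"`
    			} `json:"Level"`
    			Reasons []string `json:"Reasons"`
    		} `json:"Risk"`
    	} `json:"Tests"`
    }

    func main() {
    	// Decode a response body piped in on stdin and print per-test risk.
    	var ra riskAnalysis
    	if err := json.NewDecoder(os.Stdin).Decode(&ra); err != nil {
    		panic(err)
    	}
    	for _, t := range ra.Tests {
    		fmt.Printf("%s -> %s risk\n", t.Name, t.Risk.Level.Name)
    	}
    }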
#1942337774489178112 junit 7 hours ago
Jul 07 22:22:27.244 - 5400s E alert/Watchdog ns/openshift-monitoring ALERTS{alertname="Watchdog", alertstate="firing", namespace="openshift-monitoring", prometheus="openshift-monitoring/k8s", severity="none"}
Jul 07 22:22:29.244 - 268s  E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 22:23:52.396 E ns/openshift-ingress-canary pod/ingress-canary-nxh7q node/ip-10-0-179-137.ec2.internal uid/2021cf33-93e7-47bb-89e7-fb3c4f857def container/serve-healthcheck-canary reason/ContainerExit code/2 cause/Error serving on 8888\nserving on 8080\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\nServing canary healthcheck request\n
#1942337774489178112 junit 7 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h42m38s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1h38m10s, firing for 4m28s:
Jul 07 22:22:27.244 - 2s    I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 22:22:27.244 - 2s    I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 22:26:57.244 - 756s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 22:26:57.244 - 5130s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 22:22:29.244 - 268s  E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1942002410754936832 junit 29 hours ago
Jul 07 00:13:45.295 - 1s    E node/ip-10-0-203-122.us-west-2.compute.internal reason/FailedToDeleteCGroupsPath Jul 07 00:13:45.295738 ip-10-0-203-122 kubenswrapper[1386]: I0707 00:13:45.295701    1386 pod_container_manager_linux.go:191] "Failed to delete cgroup paths" cgroupName=[kubepods podaca59ed5-d74d-4f0c-a1ab-0c7599be629e] err="unable to destroy cgroup paths for cgroup [kubepods podaca59ed5-d74d-4f0c-a1ab-0c7599be629e] : Timed out while waiting for systemd to remove kubepods-podaca59ed5_d74d_4f0c_a1ab_0c7599be629e.slice"
Jul 07 00:27:36.075 E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 00:27:36.075 - 6060s E alert/Watchdog ns/openshift-monitoring ALERTS{alertname="Watchdog", alertstate="firing", namespace="openshift-monitoring", prometheus="openshift-monitoring/k8s", severity="none"}
#1942002410754936832 junit 29 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 42m56s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 42m56s, firing for 0s:
Jul 07 00:27:36.075 - 268s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 00:27:36.075 - 2308s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 00:27:36.075 E alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1941906209980289024 junit 36 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m10s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 5m10s, firing for 0s:
Jul 06 17:40:25.858 - 310s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.13-e2e-vsphere-ovn-upi-serial (all) - 3 runs, 100% failed, 67% of failures match = 67% impact
#1942404526053003264 junit 3 hours ago
        <*errors.errorString | 0xc00870b5e0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"VSphereOpenshiftClusterHealthFail\",\n      \"alertstate\": \"firing\",\n      \"check\": \"CheckAccountPermissions\",\n      \"container\": \"vsphere-problem-detector-operator\",\n      \"endpoint\": \"vsphere-metrics\",\n      \"instance\": \"10.128.0.5:8444\",\n      \"job\": \"vsphere-problem-detector-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": 
\"vsphere-problem-detector-operator-64d747f854-vvgf8\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"vsphere-problem-detector-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751942803.001,\n      \"1\"\n    ]\n  }\n]",
        },
#1942404526053003264 junit 3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
#1941948542830514176junit33 hours ago
        <*errors.errorString | 0xc00384c0b0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"VSphereOpenshiftClusterHealthFail\",\n      \"alertstate\": \"firing\",\n      \"check\": \"CheckAccountPermissions\",\n      \"container\": \"vsphere-problem-detector-operator\",\n      \"endpoint\": \"vsphere-metrics\",\n      \"instance\": \"10.129.0.7:8444\",\n      \"job\": \"vsphere-problem-detector-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": 
\"vsphere-problem-detector-operator-68b758f54-tflmq\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"vsphere-problem-detector-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751834592.86,\n      \"1\"\n    ]\n  }\n]",
        },
#1941948542830514176junit33 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
#1941948542830514176junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m32s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m32s, firing for 0s:
Jul 06 20:42:28.275 - 394s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:50:34.275 - 58s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
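Aside: the alertname!~ matcher quoted in the vSphere snippets above lists KubePodNotReady well over a hundred times and every other allowed alert twice, which suggests the per-test allowances are concatenated into the regex without deduplication. A minimal Go sketch of building the same matcher with duplicates removed; buildAlertNameMatcher is a hypothetical helper for illustration, not the actual code in openshift/origin:

    package main

    import (
        "fmt"
        "sort"
        "strings"
    )

    // buildAlertNameMatcher joins the allowed alert names into the
    // alertname!~"..." alternation used by the queries above, after
    // deduplicating so repeated per-test allowances do not balloon
    // the regex. Hypothetical helper, for illustration only.
    func buildAlertNameMatcher(allowed []string) string {
        seen := map[string]bool{}
        var names []string
        for _, n := range allowed {
            if !seen[n] {
                seen[n] = true
                names = append(names, n)
            }
        }
        sort.Strings(names) // stable order keeps query diffs readable
        return fmt.Sprintf(`ALERTS{alertname!~%q,alertstate="firing",severity!="info"} >= 1`,
            strings.Join(names, "|"))
    }

    func main() {
        // Duplicated input, as in the snippets above.
        fmt.Println(buildAlertNameMatcher([]string{
            "Watchdog", "Watchdog", "KubePodNotReady", "KubePodNotReady",
        }))
    }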
periodic-ci-openshift-release-master-ci-4.18-upgrade-from-stable-4.17-e2e-azure-ovn-upgrade (all) - 3 runs, 33% failed, 200% of failures match = 67% impact
#1942370944240586752junit3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 33m20s on platformidentification.JobType{Release:"4.18", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 33m20s, firing for 0s:
Jul 08 01:15:15.993 - 448s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:15:45.993 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 01:43:15.993 - 478s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:51:45.993 - 508s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 02:04:45.993 - 538s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942220861800976384junit13 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 14m54s on platformidentification.JobType{Release:"4.18", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 14m54s, firing for 0s:
Jul 07 15:39:02.388 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 16:08:02.388 - 418s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 16:18:32.388 - 268s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.18-e2e-aws-ovn-cgroupsv1 (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1942398071182725120junit3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m50s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m50s, firing for 0s:
Jul 08 02:39:51.520 - 230s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-ovn-kubernetes-release-4.19-e2e-ibmcloud-ipi-ovn-periodic (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1942373379772256256junit3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 21m26s on platformidentification.JobType{Release:"4.19", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 21m26s, firing for 0s:
Jul 08 01:46:30.633 - 1258s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:47:30.633 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
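For context on the long=/short= label pairs above: KubeAPIErrorBudgetBurn is a multiwindow, multi-burn-rate SLO alert, so each severity pairs a short and a long lookback (1d/2h and 3d/6h warning here; a 6h/30m critical pair appears in later entries), and a rule only goes pending or firing while both windows burn error budget faster than its threshold. A rough Go sketch of that check; the window pairs match the labels logged above, but the burn factors are the upstream kubernetes-mixin defaults and should be treated as an assumption:

    package main

    import "fmt"

    // burnRule mirrors one KubeAPIErrorBudgetBurn rule: the long= and
    // short= labels seen in the ALERTS series above, plus the burn
    // factor that trips it.
    type burnRule struct {
        long, short string
        factor      float64 // multiple of the allowed error-budget burn rate
        severity    string
    }

    // Assumed thresholds (kubernetes-mixin defaults), not quoted from these jobs.
    var rules = []burnRule{
        {"1h", "5m", 14.4, "critical"},
        {"6h", "30m", 6, "critical"},
        {"1d", "2h", 3, "warning"},
        {"3d", "6h", 1, "warning"},
    }

    // violated reports whether a rule trips: the error ratio over BOTH the
    // long and the short window must exceed factor * (1 - SLO target).
    func violated(r burnRule, longRatio, shortRatio, sloTarget float64) bool {
        budget := 1 - sloTarget // e.g. 0.01 for a 99% availability target
        return longRatio > r.factor*budget && shortRatio > r.factor*budget
    }

    func main() {
        for _, r := range rules {
            fmt.Printf("long=%s short=%s severity=%s violated=%v\n",
                r.long, r.short, r.severity, violated(r, 0.02, 0.02, 0.99))
        }
    }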
periodic-ci-openshift-release-master-nightly-4.13-e2e-vsphere-ovn-upi (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1942404650208595968junit3 hours ago
        <*errors.errorString | 0xc0029923c0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"VSphereOpenshiftClusterHealthFail\",\n      \"alertstate\": \"firing\",\n      \"check\": \"CheckAccountPermissions\",\n      \"container\": \"vsphere-problem-detector-operator\",\n      \"endpoint\": \"vsphere-metrics\",\n      \"instance\": \"10.129.0.6:8444\",\n      \"job\": \"vsphere-problem-detector-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": 
\"vsphere-problem-detector-operator-7b7587787b-spk46\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"vsphere-problem-detector-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751943161.844,\n      \"1\"\n    ]\n  }\n]",
        },
#1942404650208595968junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
#1942404650208595968junit3 hours ago
        <*errors.errorString | 0xc001c349f0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"VSphereOpenshiftClusterHealthFail\",\n      \"alertstate\": \"firing\",\n      \"check\": \"CheckAccountPermissions\",\n      \"container\": \"vsphere-problem-detector-operator\",\n      \"endpoint\": \"vsphere-metrics\",\n      \"instance\": \"10.129.0.6:8444\",\n      \"job\": \"vsphere-problem-detector-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": 
\"vsphere-problem-detector-operator-7b7587787b-spk46\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"vsphere-problem-detector-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751945114.673,\n      \"1\"\n    ]\n  }\n]",
        },
#1942404650208595968junit3 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
#1941948566318616576junit34 hours ago
        <*errors.errorString | 0xc001dffeb0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"VSphereOpenshiftClusterHealthFail\",\n      \"alertstate\": \"firing\",\n      \"check\": \"CheckAccountPermissions\",\n      \"container\": \"vsphere-problem-detector-operator\",\n      \"endpoint\": \"vsphere-metrics\",\n      \"instance\": \"10.130.0.5:8444\",\n      \"job\": \"vsphere-problem-detector-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": 
\"vsphere-problem-detector-operator-64d747f854-j8dxt\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"vsphere-problem-detector-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751834596.187,\n      \"1\"\n    ]\n  }\n]",
        },
#1941948566318616576junit34 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
#1941948566318616576junit34 hours ago
        <*errors.errorString | 0xc002cff000>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"VSphereOpenshiftClusterHealthFail\",\n      \"alertstate\": \"firing\",\n      \"check\": \"CheckAccountPermissions\",\n      \"container\": \"vsphere-problem-detector-operator\",\n      \"endpoint\": \"vsphere-metrics\",\n      \"instance\": \"10.130.0.5:8444\",\n      \"job\": \"vsphere-problem-detector-metrics\",\n      \"namespace\": \"openshift-cluster-storage-operator\",\n      \"pod\": 
\"vsphere-problem-detector-operator-64d747f854-j8dxt\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"vsphere-problem-detector-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751836674.739,\n      \"1\"\n    ]\n  }\n]",
        },
#1941948566318616576junit34 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
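To re-run one of these queries by hand, the standard Prometheus HTTP API endpoint /api/v1/query on the cluster's monitoring route returns the same JSON result vector shown in the dumps above. A minimal sketch, assuming a placeholder route host and a bearer token in $TOKEN (e.g. from oc whoami -t):

    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
        "os"
    )

    func main() {
        // Simplified form of the invariant's query; the real one also
        // excludes the allowed alert names via alertname!~"...".
        query := `ALERTS{alertstate="firing",severity!="info"} >= 1`

        // Placeholder host: substitute your cluster's monitoring route.
        u := "https://prometheus-k8s-openshift-monitoring.apps.example.com" +
            "/api/v1/query?query=" + url.QueryEscape(query)

        req, err := http.NewRequest("GET", u, nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Authorization", "Bearer "+os.Getenv("TOKEN"))

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body)) // JSON result vector, like the dumps above
    }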
periodic-ci-openshift-ovn-kubernetes-sandbox-release-4.17-e2e-ibmcloud-ipi-ovn-periodic (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1942373386625748992junit3 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h20m24s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h20m24s, firing for 0s:
Jul 08 01:37:24.001 - 1588s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 01:37:24.001 - 2818s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:37:48.001 - 418s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
pull-ci-openshift-machine-config-operator-release-4.17-e2e-aws-ovn (all) - 2 runs, 0% failed, 50% of runs match
#1942390147282636800junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m6s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 10m6s, firing for 0s:
Jul 08 02:27:15.905 - 228s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 02:27:15.905 - 378s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-origin-main-e2e-gcp-ovn-etcd-scaling (all) - 23 runs, 65% failed, 20% of failures match = 13% impact
#1942387093057572864junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m44s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m44s, firing for 0s:
Jul 08 03:01:00.624 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 03:01:00.624 - 136s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942146916552806400junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m52s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m52s, firing for 0s:
Jul 07 10:47:20.926 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 10:48:20.926 - 84s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942104715567304704junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 44m48s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 44m48s, firing for 0s:
Jul 07 08:18:56.739 - 1228s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:20:56.739 - 1108s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 08:29:00.739 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 08:34:00.739 - 324s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
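The "pending for at least 44m48s" figure in the entry above matches the sum of the listed interval durations, which can overlap in wall-clock time across the different long/short window series: 1228s + 1108s + 28s + 324s = 2688s = 44m48s. A small Go sketch of that bookkeeping, assuming interval lines shaped like the ones above (illustrative parsing, not the origin test's actual code):

    package main

    import (
        "fmt"
        "regexp"
        "strconv"
        "time"
    )

    // durRe pulls the "<n>s" duration out of lines like
    // "Jul 07 08:18:56.739 - 1228s I namespace/openshift-kube-apiserver ...".
    var durRe = regexp.MustCompile(`- (\d+)s\s`)

    // totalPending sums the interval durations, reproducing the
    // "pending for at least ..." total in the failure message.
    func totalPending(lines []string) time.Duration {
        var total time.Duration
        for _, l := range lines {
            if m := durRe.FindStringSubmatch(l); m != nil {
                n, _ := strconv.Atoi(m[1])
                total += time.Duration(n) * time.Second
            }
        }
        return total
    }

    func main() {
        lines := []string{
            "Jul 07 08:18:56.739 - 1228s I alertstate/pending ...",
            "Jul 07 08:20:56.739 - 1108s I alertstate/pending ...",
            "Jul 07 08:29:00.739 - 28s   I alertstate/pending ...",
            "Jul 07 08:34:00.739 - 324s  I alertstate/pending ...",
        }
        fmt.Println(totalPending(lines)) // 44m48s
    }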
periodic-ci-openshift-release-master-nightly-4.16-e2e-azure-sdn-fips-serial (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361343583588352junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m26s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m26s, firing for 0s:
Jul 08 00:37:48.498 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 00:41:18.498 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.19-ocp-e2e-ovn-powervs-capi-multi-p-p (all) - 4 runs, 100% failed, 25% of failures match = 25% impact
#1942358150090854400junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m28s on platformidentification.JobType{Release:"4.19", FromRelease:"", Platform:"", Architecture:"ppc64le", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m28s, firing for 0s:
Jul 08 01:10:42.643 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-azurestack-csi (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361346095976448junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h10m16s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 55m48s, firing for 14m28s:
Jul 08 00:34:12.354 - 3118s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:40:38.354 - 230s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:26:10.354 - 868s  W namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 14m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 55m48s, firing for 14m28s:
periodic-ci-openshift-ovn-kubernetes-release-4.17-e2e-ibmcloud-ipi-ovn-periodic (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1942373378098728960junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 23m54s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 23m54s, firing for 0s:
Jul 08 01:27:10.873 - 1108s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:27:40.873 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 01:28:04.873 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
periodic-ci-openshift-csi-operator-release-4.17-periodic-e2e-azure-file-csi (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1942384226565361664junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m32s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m32s, firing for 0s:
Jul 08 02:16:58.535 - 332s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942021833398161408junit29 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 07 02:08:05.299 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.20-e2e-osd-ccs-gcp (all) - 6 runs, 67% failed, 25% of failures match = 17% impact
#1942371191645802496junit4 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|KubeDaemonSetMisScheduled",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942371191645802496junit4 hours ago
        <*errors.errorString | 0xc001a1b390>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|KubeDaemonSetMisScheduled\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubePodCrashLooping\",\n      \"alertstate\": 
\"firing\",\n      \"container\": \"osd-cluster-ready\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"osd-cluster-ready-nhwm4\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"reason\": \"CrashLoopBackOff\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\",\n      \"uid\": \"deebb310-5cf2-4c85-89d1-68e75e87467f\"\n    },\n    \"value\": [\n      1751937632.031,\n      \"1\"\n    ]\n  }\n]",
        },
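Each failure of this kind is the generic "no unexpected firing alerts" invariant: an instant ALERTS query, filtered by a regex of known exceptions, that must return no series. The same style of check can be run by hand against a cluster's Prometheus; a minimal sketch in Go (the endpoint, the lack of auth handling, and the shortened exclusion list are illustrative assumptions, not the CI harness's actual code):
    package main

    import (
        "fmt"
        "io"
        "net/http"
        "net/url"
    )

    func main() {
        // Shortened stand-in for the exclusion regex seen in the failures above.
        query := `ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured",` +
            `alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1`

        // Assumed reachable Prometheus endpoint (e.g. a port-forward to
        // prometheus-k8s in openshift-monitoring); substitute your own.
        base := "http://localhost:9090"

        resp, err := http.Get(base + "/api/v1/query?query=" + url.QueryEscape(query))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()

        // An empty data.result means the invariant holds; any series listed is
        // an unexpected firing alert, like KubePodCrashLooping in the dump above.
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(string(body))
    }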
periodic-ci-openshift-release-master-ci-4.16-upgrade-from-stable-4.15-e2e-azure-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1942361110548058112junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 21m26s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 21m26s, firing for 0s:
Jul 08 01:25:20.886 - 1286s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942361110548058112junit4 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 52m56s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 52m56s, firing for 0s:
Jul 08 00:20:48.507 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 00:22:18.507 - 898s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 00:43:48.507 - 2250s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-azure-sdn-fips (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361340211367936junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m28s, firing for 0s:
Jul 08 00:32:43.874 - 328s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-azure-ovn-kube-apiserver-rollout (all) - 2 runs, 100% failed, 100% of failures match = 100% impact
#1942365224472416256junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 11m28s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 11m28s, firing for 0s:
Jul 08 00:40:56.891 - 688s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942002707774574592junit29 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m58s, firing for 0s:
Jul 07 00:50:22.369 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-cilium (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361074556735488junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m58s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"", Topology:"ha"} (maxAllowed=0s): pending for 10m58s, firing for 0s:
Jul 08 00:39:05.524 - 658s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-cloud-credential-operator-release-4.17-periodics-e2e-azure-manual-oidc (all) - 2 runs, 0% failed, 100% of runs match
#1942362835598184448junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 08 00:46:26.360 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942000441982193664junit29 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 07 00:41:10.338 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-ibmcloud-csi (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361368774578176junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h0m30s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 42m56s, firing for 17m34s:
Jul 08 00:42:34.457 - 1012s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 00:42:34.457 - 1012s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:05:54.457 - 552s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:15:06.457 - 666s  W namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 00:59:26.457 - 388s  E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
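The paired long/short labels on the intervals above (3d/6h and 1d/2h at warning, 6h/30m at critical) are the multi-window burn-rate pattern behind KubeAPIErrorBudgetBurn: a severity tier fires only while both its long and its short window burn error budget above that tier's threshold, which is why a run can sit pending on the slow warning pairs while the fast critical pair fires briefly. A minimal sketch of the AND-of-two-windows logic (the burn factors follow the common kubernetes-mixin defaults and are an assumption here, not read from the cluster's rules):
    package main

    import "fmt"

    // burning reports whether one severity tier of the alert would fire:
    // both the long and the short window must exceed the tier's burn factor.
    func burning(longBurn, shortBurn, factor float64) bool {
        return longBurn > factor && shortBurn > factor
    }

    func main() {
        // Illustrative burn rates, as multiples of the allowed error budget.
        long3d, short6h := 1.2, 1.4  // slow warning pair, factor ~1
        long6h, short30m := 2.0, 9.0 // fast critical pair, factor ~6

        fmt.Println(burning(long3d, short6h, 1))  // true: warning tier fires
        fmt.Println(burning(long6h, short30m, 6)) // false: long window still healthy
    }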
periodic-ci-openshift-release-master-nightly-4.16-e2e-azure-ovn-upi (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361338529452032junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m58s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m58s, firing for 0s:
Jul 08 00:25:34.731 - 358s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-sdn (all) - 1 runs, 0% failed, 100% of runs match
#1942361076989431808junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 08 00:37:45.636 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-upgrade-from-stable-4.15-e2e-aws-sdn-upgrade-infra (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361105523281920junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m26s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m26s, firing for 0s:
Jul 08 01:21:17.292 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:26:17.292 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.19-e2e-metal-ovn-two-node-fencing (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1942356544695832576junit5 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942356544695832576junit5 hours ago
        <*errors.errorString | 0xc00690be20>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"SystemMemoryExceedsReservation\",\n      \"alertstate\": \"firing\",\n      
\"namespace\": \"openshift-machine-config-operator\",\n      \"node\": \"master-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751934064.052,\n      \"1\"\n    ]\n  }\n]",
        },
#1942356544695832576junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 25m8s on platformidentification.JobType{Release:"4.19", FromRelease:"", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:""} (maxAllowed=0s): pending for 25m8s, firing for 0s:
Jul 08 01:07:52.253 - 874s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 01:11:52.253 - 634s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-azure-ovn-etcd-scaling (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361336851730432junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 34m34s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 34m34s, firing for 0s:
Jul 08 00:37:52.089 - 2074s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-upgrade-from-stable-4.15-e2e-aws-sdn-upgrade-workload (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361108027281408junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m58s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m58s, firing for 0s:
Jul 08 00:40:11.751 - 298s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-upgrade-from-stable-4.14-e2e-aws-ovn-upgrade-paused (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361385514045440junit5 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h31m2s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h29m52s, firing for 1m10s:
Jul 08 00:07:10.752 - 1422s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 08 00:07:10.752 - 3970s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 08 00:06:00.752 - 70s   E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 1m10s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h29m52s, firing for 1m10s:
periodic-ci-openshift-release-master-ci-4.16-upgrade-from-stable-4.15-e2e-aws-ovn-uwm (all) - 1 runs, 0% failed, 100% of runs match
#1942361102167838720junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m58s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m58s, firing for 0s:
Jul 08 00:11:34.317 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-e2e-aws-ovn-multi-day-0-x-a (all) - 2 runs, 0% failed, 50% of runs match
#1942364345459544064junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m48s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m48s, firing for 0s:
Jul 08 00:16:59.366 - 228s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.14-e2e-aws-ovn-cpu-partitioning (all) - 8 runs, 100% failed, 25% of failures match = 25% impact
#1942361326542131200junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m56s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m56s, firing for 0s:
Jul 07 23:56:10.041 - 176s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942089279601643520junit24 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m6s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 8m6s, firing for 0s:
Jul 07 06:02:22.239 - 486s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-okd-scos-4.16-e2e-vsphere-ovn (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942361581169938432junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 08 00:11:20.786 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-csi-operator-release-4.17-periodic-e2e-smb-csi (all) - 2 runs, 0% failed, 100% of runs match
#1942356543806640128junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m10s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m10s, firing for 0s:
Jul 08 00:36:01.251 - 250s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941994150505222144junit30 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m16s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m16s, firing for 0s:
Jul 07 00:57:41.267 - 390s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 00:57:41.267 - 406s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-vsphere-externallb-ovn (all) - 1 runs, 0% failed, 100% of runs match
#1942361377960103936junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 08 00:01:52.005 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-ovn-kubernetes-release-4.15-4.15-upgrade-from-stable-4.14-e2e-aws-ovn-upgrade (all) - 3 runs, 67% failed, 50% of failures match = 33% impact
#1942334066833494016junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.15", FromRelease:"4.14", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 07 22:40:11.249 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-shiftstack-ci-release-4.16-e2e-openstack-ovn-password (all) - 1 runs, 0% failed, 100% of runs match
#1942348239890026496junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m28s, firing for 0s:
Jul 07 23:27:42.032 - 90s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 23:29:44.032 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.13-e2e-aws-ovn-upgrade (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1942337781204258816junit6 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m58s on platformidentification.JobType{Release:"4.13", FromRelease:"4.13", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m58s, firing for 0s:
Jul 07 22:34:29.901 - 118s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-kubernetes-master-e2e-aws-ovn-techpreview (all) - 4 runs, 50% failed, 50% of failures match = 25% impact
#1942321403621543936junit7 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942321403621543936junit7 hours ago
        <*errors.errorString | 0xc002380a10>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeJobFailed\",\n      
\"alertstate\": \"firing\",\n      \"condition\": \"true\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"job_name\": \"periodic-gathering-5nx76\",\n      \"namespace\": \"openshift-insights\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751927431.183,\n      \"1\"\n    ]\n  }\n]",
        },
#1942321403621543936junit7 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942321403621543936junit7 hours ago
        <*errors.errorString | 0xc00656fc40>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeJobFailed\",\n      
\"alertstate\": \"firing\",\n      \"condition\": \"true\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"job_name\": \"periodic-gathering-5nx76\",\n      \"namespace\": \"openshift-insights\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751930651.117,\n      \"1\"\n    ]\n  }\n]",
        },
periodic-ci-openshift-release-master-ci-4.17-e2e-aws-upgrade-ovn-single-node (all) - 3 runs, 67% failed, 100% of failures match = 67% impact
#1942320520930267136junit7 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"single"} (maxAllowed=0s): pending for 0s, firing for 1m28s:
Jul 07 21:49:05.460 - 88s   E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 1m28s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"single"} (maxAllowed=0s): pending for 0s, firing for 1m28s:
Jul 07 21:49:05.460 - 88s   E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
#1942272138232729600junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"single"} (maxAllowed=0s): pending for 0s, firing for 2m28s:
Jul 07 18:24:50.333 - 148s  E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 2m28s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"single"} (maxAllowed=0s): pending for 0s, firing for 2m28s:
Jul 07 18:24:50.333 - 148s  E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
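The impact figures in the job headers are the two reported fractions multiplied: impact = (failures / runs) x (matching failures / failures), which collapses to matching runs over total runs. A quick check against three headers above (the helper is ours for illustration, not the search tool's code):
    package main

    import "fmt"

    // impact reproduces the header arithmetic: fraction of runs that failed
    // times fraction of those failures that match, as a percentage.
    func impact(runs, failed, matched float64) float64 {
        return failed / runs * matched / failed * 100
    }

    func main() {
        fmt.Printf("%.0f%%\n", impact(3, 2, 2)) // 67% failed, 100% match -> 67% impact
        fmt.Printf("%.0f%%\n", impact(4, 2, 1)) // 50% failed, 50% match  -> 25% impact
        fmt.Printf("%.0f%%\n", impact(6, 4, 1)) // 67% failed, 25% match  -> 17% impact
    }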
periodic-ci-shiftstack-ci-release-4.17-e2e-openstack-ovn-parallel (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942319298525532160junit8 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 17m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 17m38s, firing for 20s:
Jul 07 21:41:03.557 - 184s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 21:41:03.557 - 874s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 21:40:43.557 - 20s   E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 20s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 17m38s, firing for 20s:
periodic-ci-openshift-release-master-ci-4.16-upgrade-from-stable-4.15-e2e-aws-ovn-upgrade (all) - 5 runs, 20% failed, 200% of failures match = 40% impact
#1942296215597092864junit9 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m52s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m52s, firing for 0s:
Jul 07 21:02:14.517 - 472s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942296215597092864junit9 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h31m26s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h31m26s, firing for 0s:
Jul 07 19:52:43.397 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 19:53:07.397 - 1528s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 19:53:07.397 - 3900s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941931402425536512junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 43m56s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 43m56s, firing for 0s:
Jul 06 19:42:12.906 - 1078s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:09:12.906 - 1558s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.19-upgrade-from-stable-4.18-e2e-azure-ovn-upgrade (all) - 5 runs, 20% failed, 200% of failures match = 40% impact
#1942291981342347264junit9 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m28s on platformidentification.JobType{Release:"4.19", FromRelease:"4.18", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m28s, firing for 0s:
Jul 07 19:44:23.870 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942201378461978624junit13 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m58s on platformidentification.JobType{Release:"4.19", FromRelease:"4.18", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m58s, firing for 0s:
Jul 07 14:57:21.971 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-installer-release-4.18-e2e-aws-ovn (all) - 1 runs, 0% failed, 100% of runs match
#1942288281769086976junit9 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 52s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 52s, firing for 0s:
Jul 07 20:28:36.408 - 52s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-e2e-azure-ovn-serial (all) - 2 runs, 0% failed, 100% of runs match
#1942272078669418496junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 37m0s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 37m0s, firing for 0s:
Jul 07 18:35:08.644 - 2164s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:36:44.644 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 19:12:44.644 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119255927427072junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 17m26s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 17m26s, firing for 0s:
Jul 07 08:30:11.252 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 08:30:11.252 - 988s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-ovn-multi (all) - 3 runs, 33% failed, 200% of failures match = 67% impact
#1942301934220218368junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m54s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 13m54s, firing for 0s:
Jul 07 20:09:36.206 - 72s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 20:09:36.206 - 762s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941939540373540864junit34 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m32s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 1m32s, firing for 0s:
Jul 06 20:01:13.674 - 92s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
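(Editor's note: the 4.12 rows above carry `maxAllowed=2s` where newer releases use `maxAllowed=0s`, i.e. the invariant tolerates a brief pending window on older releases. A minimal sketch of the implied comparison follows; the function name is hypothetical, not the origin test's API.)

```go
// Minimal sketch of the allowance check implied by lines such as
// "(maxAllowed=2s): pending for 13m54s, firing for 0s".
// exceedsBudget and its callers are hypothetical names.
package main

import (
	"fmt"
	"time"
)

// exceedsBudget reports whether the observed time at or above a given
// alert state exceeds the per-release allowance.
func exceedsBudget(observed, maxAllowed time.Duration) bool {
	return observed > maxAllowed
}

func main() {
	pending := 13*time.Minute + 54*time.Second       // from the 4.12 row above
	fmt.Println(exceedsBudget(pending, 2*time.Second)) // true -> invariant fails
}
```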
openshift-origin-29962-ci-4.20-e2e-azure-ovn-techpreview-serial (all) - 5 runs, 60% failed, 33% of failures match = 20% impact
#1942240405093355520junit10 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942240405093355520junit10 hours ago
        <*errors.errorString | 0xc0021517d0>: promQL query returned unexpected results
        (the error string repeats, escaped and verbatim, the same ALERTS selector shown
        above; only the returned series is new information). Decoded result:
        [
          {
            "metric": {
              "__name__": "ALERTS",
              "alertname": "KubeJobFailed",
              "alertstate": "firing",
              "condition": "true",
              "container": "kube-rbac-proxy-main",
              "endpoint": "https-main",
              "job": "kube-state-metrics",
              "job_name": "periodic-gathering-9wnbz",
              "namespace": "openshift-insights",
              "prometheus": "openshift-monitoring/k8s",
              "service": "kube-state-metrics",
              "severity": "warning"
            },
            "value": [1751906953.831, "1"]
          }
        ],
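(Editor's note: to reproduce this check interactively, the same ALERTS selector can be run against the cluster Prometheus. The sketch below uses the real prometheus/client_golang query API; the route address and the much shorter exclusion list are placeholders, and in-cluster authentication (bearer token round-tripper) is omitted.)

```go
// Hedged sketch: query ALERTS the way the invariant does, using
// prometheus/client_golang. Address and selector are placeholders.
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	v1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	client, err := api.NewClient(api.Config{
		Address: "https://prometheus-k8s-openshift-monitoring.apps.example.com", // placeholder route
	})
	if err != nil {
		panic(err)
	}
	// Abbreviated exclusion list; the real test concatenates many more alertnames.
	query := `ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured",` +
		`alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1`
	result, warnings, err := v1.NewAPI(client).Query(context.Background(), query, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result) // any series returned here would fail the invariant
}
```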
periodic-ci-openshift-release-master-nightly-4.17-upgrade-from-stable-4.16-e2e-metal-ipi-upgrade-ovn-ipv6 (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1942272156742193152junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 07 19:47:52.731 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
aggregator-periodic-ci-openshift-release-master-ci-4.20-e2e-azure-ovn-techpreview-serial (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942240440312926208junit10 hours ago
# openshift-tests.[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
name: '[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at
  or above pending'
#1942240440312926208junit10 hours ago
# openshift-tests.[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
name: '[bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at
  or above info'
periodic-ci-openshift-release-master-nightly-4.17-upgrade-from-stable-4.16-e2e-metal-ipi-ovn-upgrade (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1942272057697898496junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m54s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 9m54s, firing for 0s:
Jul 07 19:25:42.837 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 19:26:12.837 - 508s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 19:38:12.837 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-e2e-azure-ovn-upgrade-out-of-change (all) - 2 runs, 0% failed, 100% of runs match
#1942272142431227904junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m48s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m48s, firing for 0s:
Jul 07 19:31:30.248 - 228s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942272142431227904junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 56m50s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 56m50s, firing for 0s:
Jul 07 18:23:50.683 - 1048s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:47:50.683 - 2362s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119273539309568junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 48m14s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 48m14s, firing for 0s:
Jul 07 08:37:04.457 - 182s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 08:37:04.457 - 1356s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 08:37:04.457 - 1356s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119273539309568junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m52s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 10m52s, firing for 0s:
periodic-ci-openshift-release-master-ci-4.17-e2e-azure-ovn-upgrade (all) - 3 runs, 33% failed, 300% of failures match = 100% impact
#1942272036759932928junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m16s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m16s, firing for 0s:
Jul 07 19:31:17.055 - 436s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942272036759932928junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 36m52s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 36m52s, firing for 0s:
Jul 07 18:29:35.803 - 268s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:35:35.803 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:54:35.803 - 1558s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 19:21:05.803 - 358s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942163218650632192junit18 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m58s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 10m58s, firing for 0s:
Jul 07 11:13:27.663 - 658s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119282775166976junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m58s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m58s, firing for 0s:
Jul 07 08:29:59.155 - 298s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-upgrade-from-stable-4.16-ocp-e2e-upgrade-azure-ovn-multi-a-a (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1942272735103160320junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m44s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 10m44s, firing for 0s:
Jul 07 19:31:54.101 - 644s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942272735103160320junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h1m8s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h1m8s, firing for 0s:
Jul 07 18:26:36.304 - 3668s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942120391245828096junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m14s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 8m14s, firing for 0s:
Jul 07 09:26:47.558 - 494s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942120391245828096junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h4m22s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h4m22s, firing for 0s:
Jul 07 08:18:33.580 - 3834s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:19:33.580 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
periodic-ci-openshift-release-master-ci-4.17-e2e-azure-ovn-techpreview-serial (all) - 2 runs, 0% failed, 100% of runs match
#1942272004048556032junit10 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 14m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 14m58s, firing for 0s:
Jul 07 18:18:14.043 - 898s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119325557067776junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m58s, firing for 0s:
Jul 07 08:23:45.345 - 358s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-upgrade-from-stable-4.16-e2e-aws-ovn-upgrade (all) - 7 runs, 14% failed, 100% of failures match = 14% impact
#1942272108876795904junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 49m4s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 49m4s, firing for 0s:
Jul 07 18:07:00.234 - 212s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 18:07:00.234 - 2732s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.20-e2e-gcp-ovn-upgrade (all) - 85 runs, 12% failed, 10% of failures match = 1% impact
#1942263690074001408junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m20s on platformidentification.JobType{Release:"4.20", FromRelease:"4.20", Platform:"gcp", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m20s, firing for 0s:
Jul 07 17:46:13.401 - 322s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 17:53:07.401 - 478s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.20-upgrade-from-stable-4.19-e2e-azure-ovn-upgrade (all) - 80 runs, 18% failed, 7% of failures match = 1% impact
#1942263674425053184junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 23m26s on platformidentification.JobType{Release:"4.20", FromRelease:"4.19", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 23m26s, firing for 0s:
Jul 07 18:19:45.582 - 928s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:36:45.582 - 478s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-upgrade-from-stable-4.16-ocp-e2e-upgrade-azure-ovn-multi-x-ax (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1942272711841550336junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h8m10s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h8m10s, firing for 0s:
Jul 07 18:46:32.085 - 4090s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942120397952520192junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 44m46s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 44m46s, firing for 0s:
Jul 07 08:47:43.083 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 09:10:43.083 - 2658s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-upgrade-from-stable-4.16-ocp-e2e-upgrade-gcp-ovn-multi-a-a (all) - 2 runs, 0% failed, 100% of runs match
#1942272745177878528junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m22s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 9m22s, firing for 0s:
Jul 07 19:13:10.925 - 562s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942272745177878528junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h7m10s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h7m10s, firing for 0s:
Jul 07 18:04:37.836 - 176s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 18:04:37.836 - 3854s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942120366071615488junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m54s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m54s, firing for 0s:
Jul 07 09:10:06.360 - 354s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942120366071615488junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 44m2s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 44m2s, firing for 0s:
Jul 07 08:03:31.838 - 358s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:27:01.838 - 1408s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:51:01.838 - 876s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-vsphere-ovn-upi-serial (all) - 2 runs, 0% failed, 100% of runs match
#1942272100488187904junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m58s, firing for 0s:
Jul 07 18:09:57.979 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119194665422848junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 12m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 12m58s, firing for 0s:
Jul 07 07:58:42.964 - 778s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-e2e-azure-ovn (all) - 2 runs, 0% failed, 100% of runs match
#1942272165147578368junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m6s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m6s, firing for 0s:
Jul 07 18:33:40.148 - 246s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119179733700608junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 07 08:30:08.579 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-upgrade-from-stable-4.16-ocp-e2e-aws-ovn-upgrade-multi-x-ax (all) - 2 runs, 0% failed, 50% of runs match
#1942272742657101824junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 30m10s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 30m10s, firing for 0s:
Jul 07 18:25:06.246 - 1722s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:56:20.246 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-e2e-azure-ovn-techpreview (all) - 2 runs, 0% failed, 100% of runs match
#1942272167592857600junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m28s, firing for 0s:
Jul 07 18:31:47.422 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119244154015744junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 07 08:36:05.509 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-shiftstack-ci-release-4.20-e2e-openstack-ccpmso-zone (all) - 3 runs, 100% failed, 67% of failures match = 67% impact
#1942221031246663680junit11 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 29m8s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 0s, firing for 29m8s:
Jul 07 18:46:11.688 - 1748s W namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 29m8s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 0s, firing for 29m8s:
Jul 07 18:46:11.688 - 1748s W namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942129783513026560junit17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 43m26s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 0s, firing for 43m26s:
Jul 07 12:45:24.249 - 828s  W namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 12:45:24.249 - 1778s W namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 43m26s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 0s, firing for 43m26s:
periodic-ci-openshift-release-master-nightly-4.17-e2e-azure-csi (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1942272004111470592junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 21m4s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 21m4s, firing for 0s:
Jul 07 18:30:17.428 - 1112s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:31:21.428 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 18:50:21.428 - 4s    I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-aws-ovn-fips (all) - 3 runs, 33% failed, 100% of failures match = 33% impact
#1942272004316991488junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 23m26s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 23m0s, firing for 26s:
Jul 07 18:18:46.928 - 270s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 18:18:46.928 - 1110s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:18:20.928 - 26s   E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above info
KubeAPIErrorBudgetBurn was at or above info for at least 26s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 23m0s, firing for 26s:
pull-ci-openshift-cluster-version-operator-release-4.18-e2e-agnostic-ovn-upgrade-into-change (all) - 1 runs, 0% failed, 100% of runs match
#1942256914620485632junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m58s on platformidentification.JobType{Release:"4.18", FromRelease:"4.18", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m58s, firing for 0s:
Jul 07 17:58:35.217 - 478s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-e2e-aws-ovn-techpreview (all) - 2 runs, 0% failed, 50% of runs match
#1942272089574608896junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m16s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m16s, firing for 0s:
Jul 07 18:03:40.850 - 256s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-vsphere-static-ovn (all) - 2 runs, 0% failed, 50% of runs match
#1942272093773107200junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 07 18:18:55.491 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-version-operator-release-4.16-e2e-agnostic-ovn-upgrade-out-of-change (all) - 1 runs, 0% failed, 100% of runs match
#1942257022716088320junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 55m38s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 55m38s, firing for 0s:
Jul 07 17:57:39.332 - 3220s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 17:58:09.332 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
pull-ci-openshift-cluster-version-operator-release-4.17-e2e-agnostic-ovn-upgrade-out-of-change (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942256939689840640junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h12m28s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h12m28s, firing for 0s:
Jul 07 17:45:19.418 - 12s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 17:45:19.418 - 4008s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 17:46:33.418 - 328s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
pull-ci-openshift-cluster-version-operator-release-4.16-e2e-agnostic-ovn (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942257022615425024junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m28s, firing for 0s:
Jul 07 17:57:42.491 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-version-operator-release-4.18-e2e-agnostic-ovn (all) - 1 runs, 0% failed, 100% of runs match
#1942256914578542592junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 07 17:48:47.072 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-version-operator-release-4.16-e2e-agnostic-ovn-upgrade-into-change (all) - 1 runs, 0% failed, 100% of runs match
#1942257022669950976junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 50m42s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 50m42s, firing for 0s:
Jul 07 17:55:00.594 - 1348s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:18:30.594 - 1694s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-machine-config-operator-release-4.16-e2e-azure-ovn-upgrade-out-of-change (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942233511335301120junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 16m42s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 16m42s, firing for 0s:
Jul 07 17:44:51.838 - 1002s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942233511335301120junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h1m32s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h1m32s, firing for 0s:
Jul 07 16:38:56.399 - 3692s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-version-operator-release-4.17-e2e-agnostic-ovn (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942256934660870144junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 22m16s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 21m18s, firing for 58s:
Jul 07 17:49:49.295 - 110s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 17:49:49.295 - 110s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 17:52:37.295 - 94s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 17:52:37.295 - 964s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 17:51:39.295 - 58s   E namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/firing severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="firing", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
pull-ci-openshift-cluster-version-operator-release-4.17-e2e-agnostic-ovn-upgrade-into-change (all) - 1 runs, 0% failed, 100% of runs match
#1942256937173258240junit12 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 14m26s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 14m26s, firing for 0s:
Jul 07 17:40:32.430 - 418s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 18:06:02.430 - 448s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-aws-csi (all) - 2 runs, 0% failed, 50% of runs match
#1942272104678297600junit13 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m42s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m42s, firing for 0s:
Jul 07 18:00:42.050 - 462s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.13-upgrade-from-stable-4.12-e2e-aws-ovn-upgrade (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1942241400221339648junit13 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 31m36s on platformidentification.JobType{Release:"4.13", FromRelease:"4.12", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 31m36s, firing for 0s:
Jul 07 15:57:33.389 - 1896s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-origin-main-e2e-vsphere-ovn-etcd-scaling (all) - 23 runs, 70% failed, 13% of failures match = 9% impact
#1942234741038125056junit14 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 22m26s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 22m26s, firing for 0s:
Jul 07 16:36:29.764 - 1318s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 16:36:59.764 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1942104728158605312junit23 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m12s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m12s, firing for 0s:
Jul 07 07:48:40.133 - 126s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 07:50:40.133 - 6s    I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
periodic-ci-openshift-release-master-ci-4.14-e2e-aws-sdn-serial (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1942219404980785152junit14 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 45m24s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 45m24s, firing for 0s:
Jul 07 15:01:01.143 - 492s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 15:01:01.143 - 2232s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-hypershift-main-okd-scos-e2e-aws-ovn (all) - 26 runs, 23% failed, 17% of failures match = 4% impact
#1942227432887029760junit14 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m42s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m42s, firing for 0s:
Jul 07 15:41:23.823 - 14s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 15:42:39.823 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-origin-main-e2e-azure-ovn-etcd-scaling (all) - 23 runs, 61% failed, 7% of failures match = 4% impact
#1942196820767674368junit15 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 07 14:18:02.338 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-ibm-vpc-block-csi-driver-operator-release-4.17-e2e-ibmcloud-csi (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942212355907653632junit16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 40m46s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 40m46s, firing for 0s:
Jul 07 14:53:36.625 - 1040s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 14:54:06.625 - 958s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 14:54:46.625 - 448s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
periodic-ci-openshift-multiarch-master-nightly-4.16-upgrade-from-nightly-4.15-ocp-e2e-upgrade-azure-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1942189440403247104junit16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m50s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m50s, firing for 0s:
Jul 07 14:02:34.946 - 830s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942189440403247104junit16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 49m58s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 49m58s, firing for 0s:
Jul 07 12:58:26.191 - 718s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 13:20:56.191 - 2280s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn-upgrade (all) - 5 runs, 0% failed, 20% of runs match
#1942195707867828224junit16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 6s on platformidentification.JobType{Release:"4.12", FromRelease:"4.12", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 6s, firing for 0s:
Jul 07 12:56:55.801 - 6s    I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.19-upgrade-from-stable-4.18-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 0% failed, 100% of runs match
#1942192824766173184junit16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m6s on platformidentification.JobType{Release:"4.19", FromRelease:"4.18", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"single"} (maxAllowed=0s): pending for 4m6s, firing for 0s:
Jul 07 12:50:14.382 - 38s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 12:57:24.382 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
pull-ci-openshift-cluster-api-provider-aws-main-e2e-aws-ovn-techpreview (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942201867551379456junit16 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942201867551379456junit16 hours ago
        <*errors.errorString | 0xc001e98a30>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeJobFailed\",\n      
\"alertstate\": \"firing\",\n      \"condition\": \"true\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"job_name\": \"periodic-gathering-n9g8d\",\n      \"namespace\": \"openshift-insights\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751896676.763,\n      \"1\"\n    ]\n  }\n]",
        },
#1942201867551379456junit16 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942201867551379456junit16 hours ago
        <*errors.errorString | 0xc0065f4a70>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeJobFailed\",\n      
\"alertstate\": \"firing\",\n      \"condition\": \"true\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"job_name\": \"periodic-gathering-n9g8d\",\n      \"namespace\": \"openshift-insights\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751899973.45,\n      \"1\"\n    ]\n  }\n]",
        },
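Stripped of its redundant duplicate alternation entries (duplicates in a regex alternation change nothing semantically), the query behind the two snippets above reduces to the following restatement; this is a condensed reading of the string in the log, not the exact query the test ran:

    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|KubePodNotReady|etcdMembersDown|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdNoLeader|etcdHighFsyncDurations|etcdHighCommitDurations|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeClientErrors|KubePersistentVolumeErrors|MCDDrainError|KubeMemoryOvercommit|MCDPivotError|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable", alertstate="firing", severity!="info", namespace!="openshift-e2e-loki"} >= 1

Per the result JSON embedded in both error strings, the unexpected match in this run was a firing KubeJobFailed alert from kube-state-metrics in openshift-insights (job_name="periodic-gathering-n9g8d"), not a kube-apiserver alert.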
pull-ci-openshift-oc-release-4.17-e2e-agnostic-ovn-cmd (all) - 1 runs, 0% failed, 100% of runs match
#1942199109649698816junit16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m58s, firing for 0s:
Jul 07 14:30:40.746 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-upgrade-from-nightly-4.15-ocp-e2e-upgrade-azure-ovn-heterogeneous (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942189440440995840junit16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 43m2s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 43m2s, firing for 0s:
Jul 07 13:48:13.419 - 2582s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.12-e2e-aws-sdn-serial (all) - 3 runs, 0% failed, 67% of runs match
#1942195708111097856junit16 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m54s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=2s): pending for 1m54s, firing for 0s:
Jul 07 13:00:40.706 - 114s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942092375866216448junit23 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h11m22s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=2s): pending for 1h11m22s, firing for 0s:
Jul 07 06:09:52.358 - 536s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 06:09:52.358 - 3746s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.12-e2e-aws-ovn (all) - 3 runs, 33% failed, 100% of failures match = 33% impact
#1942200725253656576junit17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m18s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 2m18s, firing for 0s:
Jul 07 13:19:08.854 - 138s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-agent-ha-dualstack-conformance (all) - 5 runs, 20% failed, 100% of failures match = 20% impact
#1942192145406365696junit17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m40s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m40s, firing for 0s:
Jul 07 13:18:19.966 - 280s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-agent-compact-fips (all) - 6 runs, 50% failed, 100% of failures match = 50% impact
#1942192149604864000junit17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 14m42s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 14m42s, firing for 0s:
Jul 07 13:12:24.762 - 882s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942119250881679360junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 57m2s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 57m2s, firing for 0s:
Jul 07 08:29:21.246 - 586s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 08:29:21.246 - 2836s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941829719867527168junit41 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28m10s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28m10s, firing for 0s:
Jul 06 13:25:27.580 - 1690s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-e2e-ovn-remote-libvirt-s390x (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942192095070523392junit17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m56s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"", Architecture:"s390x", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m56s, firing for 0s:
Jul 07 12:54:58.885 - 418s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 13:07:28.885 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-shiftstack-ci-release-4.17-techpreview-e2e-openstack-ovn-techpreview (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942181389256364032junit17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m58s, firing for 0s:
Jul 07 12:47:48.109 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-cloud-credential-operator-release-4.18-periodics-e2e-azure-manual-oidc (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942172327710035968junit17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 07 12:27:47.498 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-shiftstack-ci-release-4.18-e2e-openstack-ovn-parallel (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1942171149282578432junit17 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m28s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m28s, firing for 0s:
Jul 07 12:03:20.749 - 208s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-network-operator-master-e2e-aws-ovn-upgrade-ipsec (all) - 4 runs, 50% failed, 100% of failures match = 50% impact
#1942158217752612864junit18 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 18s on platformidentification.JobType{Release:"4.20", FromRelease:"4.20", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 18s, firing for 0s:
Jul 07 11:53:45.822 - 18s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941770510266273792junit43 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m42s on platformidentification.JobType{Release:"4.20", FromRelease:"4.20", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m42s, firing for 0s:
Jul 06 09:21:10.068 - 104s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 09:24:26.068 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-origin-main-e2e-openstack-ovn (all) - 22 runs, 45% failed, 10% of failures match = 5% impact
#1942146947573878784junit18 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m4s on platformidentification.JobType{Release:"4.20", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m4s, firing for 0s:
Jul 07 11:16:50.421 - 304s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-network-operator-release-4.16-e2e-azure-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1942152018625826816junit18 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m12s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m12s, firing for 0s:
Jul 07 11:32:38.309 - 764s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 11:46:54.309 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942152018625826816junit18 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 55m56s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 55m56s, firing for 0s:
pull-ci-openshift-cluster-network-operator-release-4.16-4.16-upgrade-from-stable-4.15-e2e-aws-ovn-upgrade-ipsec (all) - 1 runs, 0% failed, 100% of runs match
#1942152018407723008junit19 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h19m44s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h19m44s, firing for 0s:
Jul 07 10:49:06.328 - 598s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 10:49:06.328 - 4068s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 11:00:36.328 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
pull-ci-openshift-cluster-network-operator-release-4.16-e2e-azure-ovn (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942152018567106560junit19 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m20s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m20s, firing for 0s:
Jul 07 10:47:07.140 - 260s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-capi-operator-main-e2e-aws-ovn-techpreview (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942147190793179136junit19 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942147190793179136junit19 hours ago
        <*errors.errorString | 0xc0066e4bf0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeJobFailed\",\n      
\"alertstate\": \"firing\",\n      \"condition\": \"true\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"job_name\": \"periodic-gathering-cnfwc\",\n      \"namespace\": \"openshift-insights\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751884113.845,\n      \"1\"\n    ]\n  }\n]",
        },
#1942147190793179136junit19 hours ago
        <*errors.errorString | 0xc00656c070>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeJobFailed\",\n      
\"alertstate\": \"firing\",\n      \"condition\": \"true\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"job_name\": \"periodic-gathering-cnfwc\",\n      \"namespace\": \"openshift-insights\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751887498.662,\n      \"1\"\n    ]\n  }\n]",
        },
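The two failures above are instances of the e2e "no unexpected alerts firing" invariant: the harness substitutes a per-job allow-list into the alertname!~ matcher and fails the run if the query returns any series at all. Stripped of the generated allow-list, the check has this shape (a simplified sketch; the full alternation is job-specific, exactly as rendered above, and the placeholder below is not part of the real query):

    ALERTS{alertname!~"Watchdog|<generated allow-list>",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1

Here the offending series is KubeJobFailed for job_name="periodic-gathering-cnfwc" in openshift-insights, returned with value "1", i.e. one unexpected firing alert.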
periodic-ci-openshift-release-master-nightly-4.12-upgrade-from-stable-4.11-e2e-aws-sdn-upgrade (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1942143572744605696junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m8s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 2m8s, firing for 0s:
Jul 07 09:24:06.341 - 128s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942095557451321344junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m46s on platformidentification.JobType{Release:"4.12", FromRelease:"4.11", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 4m46s, firing for 0s:
Jul 07 06:42:02.601 - 286s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.18-e2e-azure-ovn-techpreview-serial (all) - 1 runs, 0% failed, 100% of runs match
#1942117580382670848junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m26s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m26s, firing for 0s:
Jul 07 08:12:12.094 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:15:12.094 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.18-upgrade-from-stable-4.17-ocp-e2e-upgrade-azure-ovn-multi-a-a (all) - 1 runs, 0% failed, 100% of runs match
#1942118855799214080junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m24s on platformidentification.JobType{Release:"4.18", FromRelease:"4.17", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m24s, firing for 0s:
Jul 07 09:27:21.680 - 144s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942118855799214080junit20 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h13m50s on platformidentification.JobType{Release:"4.18", FromRelease:"4.17", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h13m50s, firing for 0s:
Jul 07 08:12:43.172 - 56s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 08:12:43.172 - 182s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 08:12:43.172 - 182s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:16:17.172 - 4010s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
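The long/short label pairs in the records above ((short=30m, long=6h) at critical; (short=2h, long=1d) and (short=6h, long=3d) at warning) are the multiwindow, multi-burn-rate form of the SLO alert: each severity fires only when both the long and the short error-budget burn-rate windows exceed their factor at the same time. As a rough sketch of the (6h, 3d) warning pair, following the kubernetes-mixin pattern (recording-rule names and the burn factor are quoted from memory and may differ per release):

    sum(apiserver_request:burnrate3d) > (1.00 * 0.01000)
    and
    sum(apiserver_request:burnrate6h) > (1.00 * 0.01000)

Because these rules also carry a for: duration, short-lived burns like the ones logged here sit in alertstate="pending" and never page.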
periodic-ci-openshift-release-master-nightly-4.18-upgrade-from-stable-4.17-e2e-aws-upgrade-ovn-single-node (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942117490985275392junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m28s on platformidentification.JobType{Release:"4.18", FromRelease:"4.17", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"single"} (maxAllowed=0s): pending for 4m28s, firing for 0s:
Jul 07 07:54:50.332 - 268s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
pull-ci-openshift-origin-release-4.17-e2e-aws-ovn-single-node-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1942114708194594816junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m58s on platformidentification.JobType{Release:"4.17", FromRelease:"4.17", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"single"} (maxAllowed=0s): pending for 5m58s, firing for 0s:
Jul 07 08:20:03.713 - 358s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
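Note that nearly every record in this report shows alertstate="pending" with firing for 0s: the burn-rate expression was momentarily true, but the alert's for: duration had not elapsed, so nothing would have paged; the invariant nevertheless counts any time spent at or above pending against maxAllowed. To pull the same series from a cluster's Prometheus, the built-in ALERTS meta-metric can be queried directly (a minimal example):

    ALERTS{alertname="KubeAPIErrorBudgetBurn", namespace="openshift-kube-apiserver", alertstate="pending"}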
periodic-ci-openshift-multiarch-master-nightly-4.18-ocp-e2e-upgrade-azure-ovn-multi-a-a (all) - 1 runs, 0% failed, 100% of runs match
#1942118855883100160junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m46s on platformidentification.JobType{Release:"4.18", FromRelease:"4.18", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m46s, firing for 0s:
Jul 07 08:23:31.981 - 286s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.18-e2e-azure-ovn-upgrade-out-of-change (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942117622329905152junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 12m20s on platformidentification.JobType{Release:"4.18", FromRelease:"4.18", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 12m20s, firing for 0s:
Jul 07 08:31:42.870 - 740s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-heavy-build-ovn-remote-libvirt-s390x (all) - 1 runs, 0% failed, 100% of runs match
#1942146737397305344junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m52s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"s390x", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 10m52s, firing for 0s:
Jul 07 09:40:53.172 - 504s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 09:46:21.172 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-e2e-azure-ovn-multi-x-ax (all) - 3 runs, 0% failed, 33% of runs match
#1942120366461685760junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m40s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m40s, firing for 0s:
Jul 07 08:52:57.266 - 170s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 08:52:57.266 - 650s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.18-e2e-azure-ovn-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1942117511591890944junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.18", FromRelease:"4.18", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 07 08:12:55.929 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-ovn-remote-libvirt-s390x (all) - 1 runs, 0% failed, 100% of runs match
#1942131707360579584junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m26s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"s390x", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m26s, firing for 0s:
Jul 07 08:38:24.946 - 238s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:43:54.946 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-ovn-proxy (all) - 3 runs, 33% failed, 200% of failures match = 67% impact
#1942127007055745024junit21 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m6s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 1m6s, firing for 0s:
Jul 07 08:23:54.297 - 66s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942095524731555840junit24 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 52s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 52s, firing for 0s:
Jul 07 06:28:33.287 - 52s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.18-e2e-azure-ovn-techpreview (all) - 1 runs, 0% failed, 100% of runs match
#1942117577866088448junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m30s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m30s, firing for 0s:
Jul 07 08:18:14.462 - 210s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-e2e-gcp-ovn-multi-x-ax (all) - 3 runs, 0% failed, 33% of runs match
#1942120382022553600junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m44s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m44s, firing for 0s:
Jul 07 08:21:03.604 - 202s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 08:21:03.604 - 622s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-upgrade-from-stable-4.16-e2e-vsphere-ovn-upgrade (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1942119354900418560junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m54s on platformidentification.JobType{Release:"4.17", FromRelease:"4.16", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m54s, firing for 0s:
Jul 07 08:04:31.080 - 298s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:10:01.080 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 08:12:01.080 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.17-e2e-gcp-ovn (all) - 2 runs, 0% failed, 50% of runs match
#1942119205566418944junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 27m22s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 27m22s, firing for 0s:
Jul 07 08:02:19.222 - 200s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 08:02:19.222 - 316s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 08:02:19.222 - 1126s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-vsphere-ovn-upi (all) - 2 runs, 0% failed, 50% of runs match
#1942119302903631872junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m58s, firing for 0s:
Jul 07 08:03:44.061 - 178s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-vsphere-ovn-techpreview (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1942119209760722944junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 07 08:01:00.195 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-vsphere-ovn (all) - 2 runs, 0% failed, 50% of runs match
#1942119211438444544junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m58s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m58s, firing for 0s:
Jul 07 08:04:31.960 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.18-e2e-azure-csi (all) - 1 runs, 0% failed, 100% of runs match
#1942117491031412736junit22 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 11m16s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 11m16s, firing for 0s:
Jul 07 08:09:38.127 - 676s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-vsphere-ovn-csi (all) - 2 runs, 0% failed, 50% of runs match
#1942119352379641856junit23 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m28s, firing for 0s:
Jul 07 07:58:18.685 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-upgrade-from-nightly-4.15-ocp-ovn-remote-libvirt-s390x (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942101458866409472junit23 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 19m10s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"libvirt", Architecture:"s390x", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 19m10s, firing for 0s:
Jul 07 07:38:44.206 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 07:39:14.206 - 1006s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 07:39:18.206 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 07 07:44:14.206 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-ovn-upi (all) - 1 runs, 0% failed, 100% of runs match
#1942095530603581440junit24 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m0s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=2s): pending for 8m0s, firing for 0s:
Jul 07 06:33:04.564 - 30s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 06:33:04.564 - 450s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-ovn-single-node-serial (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1942095497284030464junit24 hours ago
        <*errors.errorString | 0xc00143c9b0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing\",alertstate=\"firing\",severity!=\"info\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"etcdDatabaseHighFragmentationRatio\",\n      \"alertstate\": \"firing\",\n      \"endpoint\": \"etcd-metrics\",\n      \"instance\": \"10.0.178.219:9979\",\n      \"job\": \"etcd\",\n      \"namespace\": \"openshift-etcd\",\n      \"pod\": \"etcd-ip-10-0-178-219.ec2.internal\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"etcd\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n   
   1751869630.87,\n      \"1\"\n    ]\n  }\n]",
        },
#1942095497284030464junit24 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailed|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing",alertstate="firing",severity!="info"} >= 1
    [
periodic-ci-openshift-release-master-ci-4.12-e2e-gcp-sdn-techpreview (all) - 3 runs, 33% failed, 100% of failures match = 33% impact
#1942095521380306944junit24 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 42s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=2s): pending for 42s, firing for 0s:
Jul 07 06:29:23.830 - 42s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-sdn-cgroupsv2 (all) - 1 runs, 0% failed, 100% of runs match
#1942095589332226048junit24 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m10s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=2s): pending for 8m10s, firing for 0s:
Jul 07 06:20:55.921 - 80s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 06:20:55.921 - 410s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.18-e2e-azure-cgroupsv1-upgrade (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1942061848140451840junit24 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m58s on platformidentification.JobType{Release:"4.18", FromRelease:"4.18", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m58s, firing for 0s:
Jul 07 04:28:30.872 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.18-upgrade-from-stable-4.17-e2e-metal-ipi-ovn-upgrade-cgroupsv1 (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1942054051994669056junit25 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.18", FromRelease:"4.17", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 07 05:12:55.881 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-gcp-filestore-csi-driver-operator-release-4.17-periodic-e2e-gcp-filestore-csi (all) - 2 runs, 0% failed, 50% of runs match
#1942043483850149888junit26 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h34m18s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h34m18s, firing for 0s:
Jul 07 03:27:08.227 - 654s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 03:27:08.227 - 5004s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.18-e2e-vsphere-ipi-ovn-runc-techpreview (all) - 2 runs, 0% failed, 50% of runs match
#1942058828614864896junit26 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m50s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m50s, firing for 0s:
Jul 07 04:05:13.559 - 110s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-sdn-upgrade (all) - 3 runs, 67% failed, 150% of failures match = 100% impact
#1942026785323487232junit27 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 9m36s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 9m36s, firing for 0s:
Jul 07 03:02:54.252 - 548s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 07 03:13:34.252 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1942026785323487232junit27 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 53m40s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 53m40s, firing for 0s:
#1941979189875838976junit30 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 27m12s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 27m12s, firing for 0s:
Jul 06 23:48:38.034 - 1632s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941979189875838976junit30 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h19m40s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1h19m40s, firing for 0s:
Jul 06 22:49:51.614 - 3292s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 23:15:51.614 - 388s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 23:22:51.614 - 718s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 23:38:21.614 - 382s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
#1941928829706571776junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 29m52s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 29m52s, firing for 0s:
Jul 06 20:37:14.716 - 1792s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941928829706571776junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h7m48s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1h7m48s, firing for 0s:
Jul 06 19:34:47.720 - 3502s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:18:37.720 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 20:22:37.720 - 418s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
periodic-ci-openshift-release-master-nightly-4.18-e2e-azure-ovn-serial-runc (all) - 2 runs, 0% failed, 50% of runs match
#1942014285815222272junit27 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m32s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 8m32s, firing for 0s:
Jul 07 01:41:32.096 - 512s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.19-e2e-aws-ovn-serial-ipsec (all) - 2 runs, 0% failed, 50% of runs match
#1942038695724978176junit27 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 6s on platformidentification.JobType{Release:"4.19", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 6s, firing for 0s:
Jul 07 03:00:36.926 - 6s    I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.20-e2e-azure-ovn-upgrade (all) - 50 runs, 18% failed, 11% of failures match = 2% impact
#1942002079253925888junit27 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.20", FromRelease:"4.20", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 07 01:03:16.098 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.18-e2e-azure-crun-upgrade (all) - 2 runs, 0% failed, 50% of runs match
#1942007992530505728junit27 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.18", FromRelease:"4.18", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m28s, firing for 0s:
Jul 07 01:04:24.238 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.14-e2e-aws-ovn-cgroupsv1 (all) - 2 runs, 0% failed, 50% of runs match
#1942035675507331072junit28 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m52s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m52s, firing for 0s:
Jul 07 02:18:36.834 - 172s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.20-upgrade-from-stable-4.19-e2e-gcp-ovn-rt-upgrade (all) - 50 runs, 34% failed, 6% of failures match = 2% impact
#1942001955085750272junit28 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942001955085750272junit28 hours ago
        <*errors.errorString | 0xc001f61330>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeJobFailed\",\n      \"alertstate\": \"firing\",\n      \"condition\": 
\"true\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"job_name\": \"collect-profiles-29197515\",\n      \"namespace\": \"openshift-operator-lifecycle-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751854268.242,\n      \"1\"\n    ]\n  }\n]",
        },
#1942001955085750272junit28 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1942001955085750272junit28 hours ago
        <*errors.errorString | 0xc006cb6240>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeJobFailed\",\n      \"alertstate\": \"firing\",\n      \"condition\": 
\"true\",\n      \"container\": \"kube-rbac-proxy-main\",\n      \"endpoint\": \"https-main\",\n      \"job\": \"kube-state-metrics\",\n      \"job_name\": \"collect-profiles-29197515\",\n      \"namespace\": \"openshift-operator-lifecycle-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"kube-state-metrics\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751857257.743,\n      \"1\"\n    ]\n  }\n]",
        },
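
The alertname exclusion regex in these queries is assembled by concatenating per-test allowances, which is why alternatives such as KubePodNotReady repeat many times; duplicate alternatives are redundant in a regex alternation, so the check is equivalent to a deduplicated selector. A trimmed sketch of the same query, with the allow-list elided to its unique names purely to make the shape readable:

    ALERTS{
      alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|...|PodSecurityViolation",
      alertstate="firing",
      severity!="info",
      namespace!="openshift-e2e-loki"
    } >= 1

The query fails the run because KubeJobFailed, which is not on the allow-list, was firing in openshift-operator-lifecycle-manager (job_name collect-profiles-29197515) at both sampled timestamps.
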
periodic-ci-openshift-release-master-nightly-4.18-e2e-azure-ovn-runc-techpreview (all) - 2 runs, 0% failed, 50% of runs match
#1942023367661981696junit28 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 07 01:56:42.817 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.14-upgrade-from-stable-4.13-e2e-aws-sdn-upgrade (all) - 4 runs, 0% failed, 25% of runs match
#1942007797898022912junit28 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m58s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1m58s, firing for 0s:
Jul 07 00:32:38.059 - 118s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.18-e2e-azure-ovn-kube-apiserver-rollout (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1942002967729147904junit29 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 07 00:56:38.493 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.17-e2e-aws-ovn-kube-apiserver-rollout (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1942002707732631552junit29 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 22m10s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 22m10s, firing for 0s:
Jul 07 00:41:29.021 - 1330s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.14-e2e-gcp-sdn (all) - 4 runs, 0% failed, 25% of runs match
#1942007802931187712junit29 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m44s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 13m44s, firing for 0s:
Jul 07 00:34:33.547 - 202s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 00:34:33.547 - 622s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-e2e-aws-upi-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1941999687120719872junit30 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 18m0s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 18m0s, firing for 0s:
Jul 07 00:12:02.796 - 270s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 07 00:12:02.796 - 810s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-e2e-ovn-agent-remote-libvirt-multi-z-z (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941995758588792832junit30 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 20m46s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"", Architecture:"s390x", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 20m46s, firing for 0s:
Jul 06 23:51:32.946 - 218s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 23:51:32.946 - 1028s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-ovn-upgrade (all) - 2 runs, 50% failed, 200% of failures match = 100% impact
#1941951990787477504junit32 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m34s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m34s, firing for 0s:
Jul 06 22:16:41.687 - 454s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941951990787477504junit32 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 49m24s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 49m24s, firing for 0s:
Jul 06 21:15:17.720 - 928s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 21:38:47.720 - 2036s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941928919317876736junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m6s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m6s, firing for 0s:
Jul 06 19:41:25.496 - 98s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 19:42:35.496 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
periodic-ci-openshift-release-master-nightly-4.12-e2e-ibmcloud-csi (all) - 1 runs, 0% failed, 100% of runs match
#1941971753135771648junit32 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 10m24s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 10m24s, firing for 0s:
Jul 06 22:23:58.898 - 124s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 22:23:58.898 - 500s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.13-upgrade-from-stable-4.12-e2e-aws-sdn-upgrade (all) - 2 runs, 0% failed, 50% of runs match
#1941948452204187648junit32 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 23m50s on platformidentification.JobType{Release:"4.13", FromRelease:"4.12", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 23m50s, firing for 0s:
Jul 06 20:36:18.632 - 1430s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.13-e2e-aws-sdn-serial (all) - 2 runs, 0% failed, 50% of runs match
#1941948467324653568junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m44s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 8m44s, firing for 0s:
Jul 06 20:34:17.625 - 524s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.13-e2e-gcp-sdn-techpreview-serial (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1941948497532030976junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m50s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"gcp", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 8m50s, firing for 0s:
Jul 06 20:34:30.791 - 530s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-ovn-techpreview-serial (all) - 1 runs, 0% failed, 100% of runs match
#1941928978885382144junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 19m54s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 19m54s, firing for 0s:
Jul 06 19:37:54.855 - 1108s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:01:24.855 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:02:54.855 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-ovn-serial (all) - 1 runs, 0% failed, 100% of runs match
#1941928846345375744junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m28s, firing for 0s:
Jul 06 19:31:41.701 - 448s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.17-ocp-image-ecosystem-ovn-remote-libvirt-multi-p-p (all) - 1 runs, 0% failed, 100% of runs match
#1941965545754595328junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 18m54s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"", Architecture:"ppc64le", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 18m54s, firing for 0s:
Jul 06 21:43:23.777 - 76s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/critical ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="6h", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="critical", short="30m"}
Jul 06 21:43:23.777 - 444s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 21:43:23.777 - 614s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
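
This is the only run in this set where KubeAPIErrorBudgetBurn reached a critical tier (long="6h"/short="30m", severity="critical") alongside the warning tiers. In the kubernetes-mixin shape that tier trips at a 6x burn factor; a sketch under the same recording-rule assumption as above:

    # Critical tier long="6h"/short="30m": 6x burn factor (sketch).
    sum(apiserver_request:burnrate6h) > (6.00 * 0.01000)
    and
    sum(apiserver_request:burnrate30m) > (6.00 * 0.01000)
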
periodic-ci-openshift-multiarch-master-nightly-4.16-upgrade-from-stable-4.15-ocp-e2e-upgrade-azure-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1941928509475655680junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 21m22s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 21m22s, firing for 0s:
Jul 06 20:45:13.654 - 1282s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941928509475655680junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h2m34s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h2m34s, firing for 0s:
Jul 06 19:38:37.441 - 3754s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-ovn-agent-remote-libvirt-s390x (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941950464371200000junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 6m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"s390x", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 6m28s, firing for 0s:
Jul 06 20:50:30.016 - 388s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-sdn-upgrade-out-of-change (all) - 1 runs, 0% failed, 100% of runs match
#1941928884924583936junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 33m16s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 33m16s, firing for 0s:
Jul 06 20:34:11.814 - 1308s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:57:31.814 - 688s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941928884924583936junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h45m36s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 1h45m36s, firing for 0s:
periodic-ci-openshift-release-master-nightly-4.16-e2e-metal-ipi-ovn-bm-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1941928958740140032junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m20s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"metal", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m20s, firing for 0s:
Jul 06 20:53:35.731 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 21:04:05.731 - 82s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-ovn-upgrade-out-of-change (all) - 1 runs, 0% failed, 100% of runs match
#1941928855560261632junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 15m2s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 15m2s, firing for 0s:
Jul 06 20:41:42.210 - 902s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941928855560261632junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 36m18s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 36m18s, firing for 0s:
Jul 06 20:01:45.709 - 2178s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-vsphere-ovn-techpreview-serial (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941928902544855040junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 5m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 5m28s, firing for 0s:
Jul 06 19:39:24.191 - 328s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-upgrade-azure-ovn-arm64 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941928509433712640junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 15m12s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 15m12s, firing for 0s:
Jul 06 20:31:07.331 - 912s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941928509433712640junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h0m58s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h0m58s, firing for 0s:
Jul 06 19:31:43.059 - 306s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 19:31:43.059 - 3352s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-upgrade-from-stable-4.15-ocp-e2e-upgrade-azure-ovn-heterogeneous (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941929436249067520junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 42m26s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 42m26s, firing for 0s:
Jul 06 20:12:37.554 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:37:07.554 - 2518s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.13-e2e-azure-csi (all) - 2 runs, 0% failed, 50% of runs match
#1941948491647422464junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 24m28s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 24m28s, firing for 0s:
Jul 06 21:02:23.565 - 132s  I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 21:02:23.565 - 1336s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-upgrade-azure-ovn-heterogeneous (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941929436127432704junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 37m20s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 37m20s, firing for 0s:
Jul 06 20:38:17.297 - 2240s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-upgrade-gcp-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1941928509572124672junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 6m16s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 6m16s, firing for 0s:
Jul 06 20:21:06.111 - 288s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:26:26.111 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941928509572124672junit33 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1h0m26s on platformidentification.JobType{Release:"4.16", FromRelease:"4.16", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1h0m26s, firing for 0s:
periodic-ci-openshift-multiarch-master-nightly-4.16-upgrade-from-stable-4.15-ocp-e2e-upgrade-gcp-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1941928510880747520junit34 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 15m26s on platformidentification.JobType{Release:"4.16", FromRelease:"4.15", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 15m26s, firing for 0s:
Jul 06 19:53:10.569 - 478s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 20:08:40.569 - 448s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-azure-sdn (all) - 1 runs, 0% failed, 100% of runs match
#1941928882416390144junit34 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m8s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 7m8s, firing for 0s:
Jul 06 19:32:44.567 - 428s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-ovn-techpreview (all) - 1 runs, 0% failed, 100% of runs match
#1941928829656240128junit34 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m58s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m58s, firing for 0s:
Jul 06 19:31:12.987 - 118s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-azure-ovn (all) - 1 runs, 0% failed, 100% of runs match
#1941928896672829440junit34 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m58s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m58s, firing for 0s:
Jul 06 19:32:30.699 - 178s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-aws-sdn (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1941928920999792640junit34 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m20s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 3m20s, firing for 0s:
Jul 06 19:23:18.818 - 200s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.12-e2e-aws-sdn-crun (all) - 2 runs, 0% failed, 50% of runs match
#1941934507045163008junit34 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 56s on platformidentification.JobType{Release:"4.12", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 56s, firing for 0s:
Jul 06 19:33:29.633 - 56s   I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-ci-4.16-e2e-aws-ovn (all) - 1 runs, 0% failed, 100% of runs match
#1941928889961943040junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m44s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 3m44s, firing for 0s:
Jul 06 19:17:25.932 - 224s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-gcp-ovn-arm64 (all) - 2 runs, 0% failed, 100% of runs match
#1941928509530181632junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m48s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m48s, firing for 0s:
Jul 06 19:13:48.170 - 288s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
#1941814558867853312junit42 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 58s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 58s, firing for 0s:
Jul 06 11:41:31.276 - 58s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-vsphere-ovn-upi (all) - 1 runs, 0% failed, 100% of runs match
#1941928941971312640junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 06 19:28:55.392 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-image-ecosystem-aws-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1941936016390623232junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 14m4s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 14m4s, firing for 0s:
Jul 06 19:50:27.539 - 844s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.16-e2e-azure-csi (all) - 1 runs, 0% failed, 100% of runs match
#1941928949521059840junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 6m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 6m28s, firing for 0s:
Jul 06 19:31:03.088 - 388s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-jenkins-e2e-ovn-remote-libvirt-s390x (all) - 1 runs, 0% failed, 100% of runs match
#1941935348368019456junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"s390x", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m28s, firing for 0s:
Jul 06 19:40:05.876 - 148s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-image-ecosystem-ovn-remote-libvirt-s390x (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941920251813826560junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 8m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"s390x", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 8m28s, firing for 0s:
Jul 06 18:51:18.596 - 508s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-e2e-ovn-serial-aws-arm64 (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941905314634797056junit35 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 32s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 32s, firing for 0s:
Jul 06 17:41:50.855 - 32s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-image-ecosystem-ovn-remote-libvirt-ppc64le (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941920250127716352junit36 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 4m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"", Architecture:"ppc64le", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 4m28s, firing for 0s:
Jul 06 18:51:04.405 - 268s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-e2e-upgrade-azure-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1941905324722098176junit36 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m8s on platformidentification.JobType{Release:"4.14", FromRelease:"4.14", Platform:"azure", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m8s, firing for 0s:
Jul 06 17:57:59.157 - 128s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-e2e-aws-ovn-heterogeneous-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1941907579135332352junit36 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 2m40s on platformidentification.JobType{Release:"4.14", FromRelease:"4.14", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 2m40s, firing for 0s:
Jul 06 17:54:07.371 - 160s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-e2e-aws-sdn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1941905314114703360junit36 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 3m14s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"sdn", Topology:"ha"} (maxAllowed=0s): pending for 3m14s, firing for 0s:
Jul 06 17:38:03.795 - 194s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-cluster-storage-operator-release-4.18-e2e-aws-group-snapshot-tp (all) - 6 runs, 17% failed, 100% of failures match = 17% impact
#1941903553387827200junit37 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 06 17:57:51.766 - 28s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-release-master-nightly-4.14-upgrade-from-stable-4.13-e2e-vsphere-upgrade (all) - 1 runs, 0% failed, 100% of runs match
#1941896254774579200junit36 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 45m22s on platformidentification.JobType{Release:"4.14", FromRelease:"4.13", Platform:"vsphere", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 45m22s, firing for 0s:
Jul 06 17:09:12.215 - 1914s I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
Jul 06 17:42:38.215 - 808s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
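The reported 45m22s total is the sum of the two disjoint pending intervals above:

    \[ 1914\,\mathrm{s} + 808\,\mathrm{s} = 2722\,\mathrm{s} = 45\,\mathrm{min}\;22\,\mathrm{s} \]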
pull-ci-openshift-cluster-monitoring-operator-main-e2e-aws-ovn (all) - 3 runs, 67% failed, 100% of failures match = 67% impact
#1941847844566601728junit40 hours ago
        <*errors.errorString | 0xc00678ca20>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n      \"alertstate\": \"firing\",\n      
\"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751811040.154,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751811040.154,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751811040.154,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751811040.154,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.129.2.18:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751811040.154,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.131.0.22:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751811040.154,\n      \"1\"\n    ]\n  }\n]",
        },
#1941847844566601728junit40 hours ago
        <*errors.errorString | 0xc0066ebe50>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n      \"alertstate\": \"firing\",\n      
\"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751814125.287,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751814125.287,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751814125.287,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751814125.287,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.129.2.18:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751814125.287,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.131.0.22:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751814125.287,\n      \"1\"\n    ]\n  }\n]",
        },
#1941803438270582784junit42 hours ago
        <*errors.errorString | 0xc001e6f690>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n      \"alertstate\": \"firing\",\n      
\"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802589.711,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802589.711,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802589.711,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802589.711,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.128.2.23:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751802589.711,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.131.0.22:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751802589.711,\n      \"1\"\n    ]\n  }\n]",
        },
#1941803438270582784junit42 hours ago
        <*errors.errorString | 0xc0068403c0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n      \"alertstate\": \"firing\",\n      
\"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751805244.561,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751805244.561,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751805244.561,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751805244.561,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.128.2.23:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751805244.561,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.131.0.22:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751805244.561,\n      \"1\"\n    ]\n  }\n]",
        },
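The escaped Go error strings above all embed the same invariant query; unescaped and trimmed to a few representative exclusions, it has this shape (a readability sketch, not a verbatim reproduction of the full regex):

    ALERTS{
      alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|...|KubeAPIErrorBudgetBurn|...|PodSecurityViolation",
      alertstate="firing",
      severity!="info",
      namespace!="openshift-e2e-loki"
    } >= 1

Any series returned means an unexpected firing alert; in these runs the results include ClusterVersionOperatorDown, KubeControllerManagerDown, KubeSchedulerDown, and NoRunningOvnControlPlane at critical, plus PrometheusKubernetesListWatchFailures at warning.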
periodic-ci-shiftstack-ci-release-4.18-e2e-openstack-ccpmso-zone (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941822266622873600junit41 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 22m50s on platformidentification.JobType{Release:"4.18", FromRelease:"", Platform:"openstack", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 22m50s, firing for 0s:
Jul 06 13:46:30.584 - 1370s I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.16-ocp-e2e-azure-ovn-heterogeneous (all) - 2 runs, 0% failed, 50% of runs match
#1941814555613073408junit41 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 1m28s on platformidentification.JobType{Release:"4.16", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 1m28s, firing for 0s:
Jul 06 12:53:58.457 - 88s   I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-e2e-azure-ovn-heterogeneous (all) - 1 runs, 100% failed, 100% of failures match = 100% impact
#1941814541255970816junit41 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 7m10s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"azure", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 7m10s, firing for 0s:
Jul 06 12:25:50.243 - 430s  I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-monitoring-operator-main-e2e-aws-ovn-single-node (all) - 2 runs, 50% failed, 100% of failures match = 50% impact
#1941803438308331520junit42 hours ago
    promQL query returned unexpected results:
    ALERTS{alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation",alertstate="firing",severity!="info",namespace!="openshift-e2e-loki"} >= 1
    [
#1941803438308331520junit42 hours ago
        <*errors.errorString | 0xc001def3b0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n      \"alertstate\": \"firing\",\n      
\"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802283.742,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802283.742,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802283.742,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802283.742,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.128.0.131:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751802283.742,\n      \"1\"\n    ]\n  }\n]",
        },
#1941803438308331520junit42 hours ago
        <*errors.errorString | 0xc001a497b0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n      \"alertstate\": \"firing\",\n      
\"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751806719.543,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751806719.543,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751806719.543,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751806719.543,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.128.0.131:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751806719.543,\n      \"1\"\n    ]\n  }\n]",
        },
pull-ci-openshift-cluster-monitoring-operator-main-e2e-aws-ovn-techpreview (all) - 6 runs, 100% failed, 17% of failures match = 17% impact
#1941803438367051776junit42 hours ago
        <*errors.errorString | 0xc004a8ab40>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n 
     \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802040.613,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802040.613,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802040.613,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751802040.613,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.128.2.20:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751802040.613,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.129.2.13:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751802040.613,\n      \"1\"\n    ]\n  }\n]",
        },
#1941803438367051776junit42 hours ago
        <*errors.errorString | 0xc0067c6040>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|TechPreviewNoUpgrade|ClusterNotUpgradeable\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n 
     \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751805441.487,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751805441.487,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751805441.487,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751805441.487,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.128.2.20:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751805441.487,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.129.2.13:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751805441.487,\n      \"1\"\n    ]\n  }\n]",
        },
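The two errorString payloads above come from the e2e alert invariant, which issues a single instant query against the in-cluster Prometheus at teardown and fails if any series matches. Stripped of the (heavily duplicated) alertname allowlist, the query has the following shape; this is a readability sketch with the allowlist truncated to three names, not the exact expression the test constructs:

    # Sketch of the invariant's query; the real allowlist regex is far
    # longer (see the escaped strings above) and varies per job.
    ALERTS{
      alertname!~"Watchdog|AlertmanagerReceiversNotConfigured|KubePodNotReady",
      alertstate="firing",
      severity!="info",
      namespace!="openshift-e2e-loki"
    } >= 1

Every returned row fails the test. Note that the quartet above (ClusterVersionOperatorDown, KubeControllerManagerDown, KubeSchedulerDown, NoRunningOvnControlPlane, all critical at the same evaluation timestamp) suggests one control-plane or monitoring-path outage rather than four independent regressions.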
periodic-ci-openshift-multiarch-master-nightly-4.14-ocp-e2e-gcp-ovn-arm64 (all) - 1 runs, 0% failed, 100% of runs match
#1941814543780941824junit42 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 28s on platformidentification.JobType{Release:"4.14", FromRelease:"", Platform:"gcp", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 28s, firing for 0s:
Jul 06 11:44:06.044 - 28s   I alert/KubeAPIErrorBudgetBurn namespace/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
pull-ci-openshift-cluster-monitoring-operator-main-okd-scos-e2e-aws-ovn (all) - 2 runs, 100% failed, 50% of failures match = 50% impact
#1941803454670311424junit42 hours ago
        <*errors.errorString | 0xc0066a7500>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|OperatorHubSourceError\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n      
\"alertstate\": \"firing\",\n      \"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751801950.383,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751801950.383,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751801950.383,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751801950.383,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.128.2.18:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751801950.383,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.129.2.18:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751801950.383,\n      \"1\"\n    ]\n  }\n]",
        },
#1941803454670311424junit42 hours ago
        <*errors.errorString | 0xc0025c83c0>{
            s: "promQL query returned unexpected results:\nALERTS{alertname!~\"Watchdog|AlertmanagerReceiversNotConfigured|PrometheusRemoteWriteDesiredShards|KubeJobFailingSRE|TelemeterClientFailures|CDIDefaultStorageClassDegraded|VirtHandlerRESTErrorsHigh|VirtControllerRESTErrorsHigh|Watchdog|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|KubePodNotReady|etcdMembersDown|etcdMembersDown|etcdGRPCRequestsSlow|etcdGRPCRequestsSlow|etcdHighNumberOfFailedGRPCRequests|etcdHighNumberOfFailedGRPCRequests|etcdMemberCommunicationSlow|etcdMemberCommunicationSlow|etcdNoLeader|etcdNoLeader|etcdHighFsyncDurations|etcdHighFsyncDurations|etcdHighCommitDurations|etcdHighCommitDurations|etcdInsufficientMembers|etcdInsufficientMembers|TargetDown|etcdHighNumberOfLeaderChanges|etcdHighNumberOfLeaderChanges|KubeAPIErrorBudgetBurn|KubeAPIErrorBudgetBurn|KubeClientErrors|KubeClientErrors|KubePersistentVolumeErrors|KubePersistentVolumeErrors|MCDDrainError|MCDDrainError|KubeMemoryOvercommit|KubeMemoryOvercommit|MCDPivotError|MCDPivotError|PrometheusOperatorWatchErrors|PrometheusOperatorWatchErrors|OVNKubernetesResourceRetryFailure|OVNKubernetesResourceRetryFailure|RedhatOperatorsCatalogError|RedhatOperatorsCatalogError|VSphereOpenshiftNodeHealthFail|VSphereOpenshiftNodeHealthFail|SamplesImagestreamImportFailing|SamplesImagestreamImportFailing|PodSecurityViolation|OperatorHubSourceError\",alertstate=\"firing\",severity!=\"info\",namespace!=\"openshift-e2e-loki\"} >= 1\n[\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"ClusterVersionOperatorDown\",\n      
\"alertstate\": \"firing\",\n      \"namespace\": \"openshift-cluster-version\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751804676.72,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeControllerManagerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-controller-manager\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751804676.72,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"KubeSchedulerDown\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-kube-scheduler\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751804676.72,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"NoRunningOvnControlPlane\",\n      \"alertstate\": \"firing\",\n      \"namespace\": \"openshift-ovn-kubernetes\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"severity\": \"critical\"\n    },\n    \"value\": [\n      1751804676.72,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.128.2.18:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-1\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751804676.72,\n      \"1\"\n    ]\n  },\n  {\n    \"metric\": {\n      \"__name__\": \"ALERTS\",\n      \"alertname\": \"PrometheusKubernetesListWatchFailures\",\n      \"alertstate\": \"firing\",\n      \"container\": \"kube-rbac-proxy\",\n      \"endpoint\": \"metrics\",\n      \"instance\": \"10.129.2.18:9092\",\n      \"job\": \"prometheus-k8s\",\n      \"namespace\": \"openshift-monitoring\",\n      \"pod\": \"prometheus-k8s-0\",\n      \"prometheus\": \"openshift-monitoring/k8s\",\n      \"service\": \"prometheus-k8s\",\n      \"severity\": \"warning\"\n    },\n    \"value\": [\n      1751804676.72,\n      \"1\"\n    ]\n  }\n]",
        },
periodic-ci-openshift-multiarch-master-nightly-4.13-ocp-e2e-aws-ovn-arm64-techpreview-serial (all) - 1 runs, 0% failed, 100% of runs match
#1941778478206554112junit44 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 19m26s on platformidentification.JobType{Release:"4.13", FromRelease:"", Platform:"aws", Architecture:"arm64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 19m26s, firing for 0s:
Jul 06 09:15:20.004 - 1166s I alert/KubeAPIErrorBudgetBurn ns/openshift-kube-apiserver ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
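The long=/short= labels on these KubeAPIErrorBudgetBurn entries reflect the multiwindow, multi-burn-rate SLO alerting pattern: the alert only enters pending when the API error budget is burning too fast over both a long and a short window. A minimal sketch of the warning-severity expression in the upstream kubernetes-mixin style, assuming recording rules named apiserver_request:burnrate3d / apiserver_request:burnrate6h exist; exact factors and window names can differ across OpenShift releases:

    # Warning burn-rate rule (sketch): budget burning too fast over both
    # the 3d and 6h windows. 0.01 is the assumed error budget (99% SLO);
    # the leading factor scales the allowed burn rate for this window pair.
    sum(apiserver_request:burnrate3d) > (1.00 * 0.01)
    and
    sum(apiserver_request:burnrate6h) > (1.00 * 0.01)

The rule also carries a for: delay (on the order of hours in the mixin), so short bursts surface as pending without ever firing — which is exactly what all of these entries record (firing for 0s).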
periodic-ci-openshift-cluster-storage-operator-release-4.17-e2e-aws-selinux-alpha (all) - 2 runs, 0% failed, 50% of runs match
#1941789046334296064junit44 hours ago
# [bz-kube-apiserver][invariant] alert/KubeAPIErrorBudgetBurn should not be at or above pending
KubeAPIErrorBudgetBurn was at or above pending for at least 13m44s on platformidentification.JobType{Release:"4.17", FromRelease:"", Platform:"aws", Architecture:"amd64", Network:"ovn", Topology:"ha"} (maxAllowed=0s): pending for 13m44s, firing for 0s:
Jul 06 10:13:59.863 - 204s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="1d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="2h"}
Jul 06 10:13:59.863 - 620s  I namespace/openshift-kube-apiserver alert/KubeAPIErrorBudgetBurn alertstate/pending severity/warning ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending", long="3d", namespace="openshift-kube-apiserver", prometheus="openshift-monitoring/k8s", severity="warning", short="6h"}
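To reproduce a "pending for Xs" figure by hand, the time a series spent in a state can be approximated from its sample count. A sketch, assuming the openshift-monitoring/k8s Prometheus evaluates on the default 30s interval (an assumption, not taken from the output above):

    # Approximate seconds KubeAPIErrorBudgetBurn spent pending over the
    # last 4h: one ALERTS sample per evaluation, ~30s apart (assumed).
    count_over_time(
      ALERTS{alertname="KubeAPIErrorBudgetBurn", alertstate="pending"}[4h]
    ) * 30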

Found in 0.97% of runs (5.38% of failures) across 30688 total runs and 6889 jobs (18.07% failed) in 594ms